Why are swap partitions discouraged on SSD drives, are they harmful?
I often read that one should not place swap partitions on an SSD, as this may harm the device. Is this true? Can you explain the reason to me? Otherwise I would have thought that placing swap on an SSD is the best choice: it's much faster than an HDD, so swapping RAM contents to the SSD is not as slow as it would be with an HDD.
Why hasn’t anyone suggested adding noatime to /etc/fstab options to decrease writes? Why does everyone focus on swap?
7 Answers
Flash memory cells in SSDs have a limited lifespan. Every write cycle (or, more accurately, every erasure) wears a memory cell, and at some point it will stop working; reads cause no such wear.
The number of erase cycles a cell can survive is highly variable, and flash in modern SSDs will survive many more of them than flash in SSDs made several years ago. Additionally, the SSD's firmware ensures that erasures are distributed evenly across all cells (wear leveling). In most drives, spare areas are also available to replace damaged cells and delay aging.
To have a value we can use to compare the endurance of SSDs, we can use lifespan measures such as the published JEDEC standards. A widely available endurance value is TBW (terabytes written, or alternatively total bytes written), the amount of data that can be written before the drive is expected to fail. Modern SSDs can be rated as low as 20 TB for a cheap consumer product and at over 20,000 TB for an enterprise-level SSD.
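As a rough back-of-the-envelope example (the figures below are illustrative, not taken from any specific drive): a drive rated at 300 TBW that absorbs 20 GB of writes per day would last about 300,000 GB / 20 GB per day ≈ 15,000 days, i.e. roughly 40 years. Many drives expose a running write total via SMART, so you can measure your real write rate; a sketch using smartmontools (attribute names vary by vendor and model):

    # Dump SMART attributes and look for a total-writes counter
    # (often "Total_LBAs_Written"; multiply by the sector size,
    # commonly 512 bytes, to get bytes written - vendor-dependent)
    sudo smartctl -A /dev/sda | grep -i -E 'written|wear'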
Having said that, both the lifespan of an SSD and its suitability for swapping depend on several factors.
Systems with plenty of RAM
On a system with plenty of RAM and few memory-consuming applications, we will almost never swap. Swap is merely a safety measure to prevent data loss in case an application eats up all our RAM. In this case, SSD wear from swapping will not be an issue. However, having this mostly-unused swap partition on a conventional hard drive will not cause any performance drop either, so we can safely put our swap partition (or file) on that significantly cheaper hard drive and use the space on our SSD for something more useful.
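As a minimal sketch of this setup, assuming /dev/sdb2 is a swap partition on the cheaper hard drive (the device name and UUID are illustrative):

    # /etc/fstab: keep swap on the conventional hard drive, not the SSD
    # (find the real UUID with: sudo blkid /dev/sdb2)
    UUID=0123abcd-0123-abcd-0123-0123456789ab  none  swap  sw  0  0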
Systems with little RAM
Things are different on a system where RAM is scarce and cannot be upgraded. In this case, swapping may indeed occur more often, especially when we run memory-intensive applications. In these systems, a swap partition or file on an SSD may lead to a dramatic performance improvement at the cost of a somewhat shorter SSD lifespan. This decreased lifespan may, however, still not be short enough to warrant concern. In all likelihood, the SSD will be replaced long before it dies, because by then several times the storage will be available at a fraction of today's prices.
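Before committing either way, it is worth measuring how much swapping actually happens; a quick check with the standard vmstat tool:

    # Print memory and swap activity every 5 seconds; watch the
    # si/so columns (swap-in/swap-out). Persistently non-zero
    # values indicate real swap traffic.
    vmstat 5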
Hibernating our system
Waking from hibernation is indeed very fast from a SSD. If we’re lucky and our system survives a hibernation without issues, we can consider using an SSD for that. It will wear the SSD more than just booting from it would, but we may feel it’s worth it.
But booting from an SSD may not take much longer than waking from hibernation from an SSD, and it will wear the SSD far less. Personally, I don’t hibernate my system at all — I suspend to RAM or quickly boot from my SSD.
The SSD is the only drive we have
We don’t really have a choice in this case. We don’t want to run without a swap, so we have to put it on the SSD. We may, however, want to have a smaller swap file or partition if we don’t plan to hibernate our system at any point.
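For reference, a smaller swap file can be created with standard tools; a minimal sketch, with the 2 GiB size purely illustrative (on some filesystems, dd may be needed instead of fallocate):

    # Create and enable a 2 GiB swap file
    sudo fallocate -l 2G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile
    # To make it permanent, add to /etc/fstab:
    # /swapfile  none  swap  sw  0  0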
Note on speed
SSDs are best at quickly accessing and reading many small files and are superior to conventional hard drives for transferring data from sequentially-read small or medium-sized files. A fast conventional hard drive may still perform better than an SSD at writing (and to a lesser extent reading) large audio or video streams or other long unfragmented files. Older SSDs may have their performance decline over time or after they are fairly full.
So we can conclude that we should preferably use SSDs to store data that is written once (or rarely) and read often, like system files, program files, or the home folder's data directories (Music, Videos, etc.).

Is there a rough number for how many write/erase cycles an average modern SSD cell should survive? 1,000? 10,000? And yes, I understood that the controller tries to distribute usage evenly among all cells to extend the lifespan.
I think the note on speed is misleading: SSDs will outperform hard drives. The fastest (SAS, 15,000 rpm) drives provide a sequential access speed of about 250 MB/s, while an SSD provides almost double that, and a regular hard drive pales in comparison at about 110 MB/s. SSDs will outperform hard drives in sequential reads; the question is one of cost.
@davidgo It is true that in general an SSD will perform much better than a hard drive, especially when the SSD is new. There are, however, reports of quite significant performance drops, not only from an ageing SSD but also from sequential reading over a longer time span (such as video streaming). See e.g. this post on SU and the in-depth explanation from Seagate. So we should not rely on the initial great values lasting forever.
Interesting post from Seagate; however, it was written circa 2010, and SSDs came a long way between then and 2015, particularly with respect to background garbage collection and wear leveling, which drastically alter the landscape. See techreport.com/review/27909/… from as far back as 2013. Also, back in 2010, SSD controllers were a lot buggier.
Early SSDs had a reputation for failing after fewer writes than HDDs. If the swap was used often, then the SSD may fail sooner. This might be why you heard it could be bad to use an SSD for swap.
Modern SSDs don’t have this issue, and they should not fail any faster than a comparable HDD. Placing swap on an SSD will result in better performance than placing it on an HDD due to its faster speeds.
Additionally, if your system has enough RAM (likely, if the system is high-end enough to have an SSD), the swap may be used only rarely anyway.
I am tempted to believe this, but I want to wait for further reactions and would also appreciate any references as proof. I will accept an answer when there is valid proof or a clear majority for one point.
While not statistically valid for the life of your particular SSD, the final 2015 SSD endurance test (techreport.com/review/27909/…) shows that life is very long, even for the drives that failed first. I have had hard drives fail within a year, but that is not normal. My 4 GB RAM system almost never used swap anyway.
My question here is: if I have an SSD and a good amount of RAM, should I still try to constrain my application(s) within RAM, as in the old HDD days, or can I let them run wild on the SSD? It seems like constraining is still worth it, but maybe not.
It is absolutely untrue that modern SSDs "don't have this issue." It is intrinsic to NAND flash technology that cells tolerate a limited number of writes (from 600 to 2,000 program/erase cycles for non-SLC drives). An active swap partition can consume this in a few hours. Even with wear leveling, the lifetime of an SSD can be depleted in less than a day of active swapping.
Even if you have enough RAM, you might still want to prevent file copies or searches from swapping applications out of RAM. This might be the case on file servers (NAS, Samba, FTP) that are involved in large file operations.
To do that, it's best to set the following in /etc/sysctl.conf:

    vm.swappiness=1
    vm.vfs_cache_pressure=50
The first setting discourages the disk cache (e.g. during a cp) from swapping existing apps out of RAM. The default is 60. Note that a value of 0, although more aggressive, has sometimes been reported to cause out-of-memory errors.
The second setting discourages file searches (e.g. a find run) from swapping existing apps out of RAM. The default is 100.
Although the referenced author does not refer explicitly to SSDs, this approach also reduces SSD wear by reducing swapping, and he provides an example of how to test it.
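To try these values without rebooting, standard sysctl usage applies; a quick sketch:

    # Apply immediately (lost on reboot unless also set in /etc/sysctl.conf)
    sudo sysctl vm.swappiness=1 vm.vfs_cache_pressure=50
    # Verify the running values
    sysctl vm.swappiness vm.vfs_cache_pressure
    # Or reload everything from /etc/sysctl.conf
    sudo sysctl -p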
After setting vm.vfs_cache_pressure=50, the memory reported by free became less and less. Running echo 3 | sudo tee /proc/sys/vm/drop_caches didn't change much, until I remembered that I had set this.
HDD technology uses a magnetic process for storing and manipulating data. This process is non-destructive, meaning you can rewrite data on a disk drive more or less indefinitely, at least until the mechanics start to fail. In contrast, SSD technology does not run the risk of mechanical failure, but how it stores its data is a concern. SSDs store data using controlled bursts of electrical energy, and the semiconductor cells hit by this current slowly wear out as they are used over time.
This process has been improved upon through software and hardware updates. Early adopters found that operating systems were not programmed to store data the way an SSD does, which put SSDs through an unnecessarily large number of write cycles. Most older BIOSes also did not properly recognize an SSD, and this caused issues as well.
The introduction of UEFI and OS updates corrected most of the issues that early SSD owners had. Also, as with any production process, SSDs themselves have gotten better at managing NAND flash degradation.
However, it is still true that your SSD has a limited number of write/erase cycles before it can no longer store data, although that concern is about as marginal as the risk of your HDD failing.
There's a very in-depth podcast about the subject here if you'd like to explore the topic further.
Reading between the lines: an older BIOS (non-UEFI) system might then not interact as efficiently with the SSD as a newer UEFI-based system?
Life vs. Performance Balance
You bought an SSD for its performance advantages, not simply for increasing battery life, right? So use your SSD for that very purpose: to make your system quicker.
If you can afford to add more RAM to reduce *swap I/O, then this will clearly increase the lifespan of your SSD, as I/O cycles to swap space on a filesystem are another obvious performance drain.
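Before spending on RAM, it helps to see how much swap the system actually touches; standard commands (swapon --show needs a reasonably recent util-linux):

    # How much swap is configured and how much is in use right now
    swapon --show
    free -h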
Again, like so many aspects of system configuration, it's often not a case of one single rule fitting all. User needs differ, and so system requirements, and thus configuration, must differ in order to meet those needs. Put simply, it boils down to how you configure your system.
If you have the space to hold an SSD in addition to your non-SSD drive, then write files that will rarely change to the non-SSD drive and keep often-accessed files on your SSD.
This will ensure that …
[1] - The TRIM feature will have the resources to perform the necessary steps to use all of the drive evenly. [Benefit = Life]
[2] - Your I/O latency will be reduced, with the high-speed SSD device serving an often-accessed filesystem. [Benefit = Performance]
Configure your temp filesystem to utilise space as required for your particular system's needs, and if you have enough RAM, consider setting your swappiness level to be less aggressive (see the sketch after this list). This will ensure that…
[1] - SSD I/O is reduced, yet your system will still meet the demands of its user(s). [Benefit = Life]
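One common way to handle the temp filesystem part, assuming you have RAM to spare (the size below is illustrative), is to mount /tmp as tmpfs so temporary files never touch the SSD:

    # /etc/fstab: keep temporary files in RAM instead of on the SSD
    tmpfs  /tmp  tmpfs  defaults,noatime,size=2G  0  0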
Do you really need all of those logs? Consider what your system is logging, and where (a sketch follows this item).
[1] - SSD I/O is reduced as log file access is reduced. [Benefit = Life & Performance]
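On systemd-based systems (an assumption; your distribution may log differently), one way to keep routine logs off the SSD is to hold the journal in RAM:

    # /etc/systemd/journald.conf: keep the journal in RAM only
    # (logs are lost on reboot; RuntimeMaxUse caps their size)
    [Journal]
    Storage=volatile
    RuntimeMaxUse=50M

After editing, restart the journal service with sudo systemctl restart systemd-journald.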
There are a heap of other aspects of your system configuration that can make a non-SSD system perform faster. Default system builds have a tough metric to fulfil: pure performance, keeping data safe and secure, or a balanced mixture of them all. If you apply the same mentality to what you write, and to which device, you can drastically increase performance and at the same time increase the lifespan of your SSD.
*swap - Remember this isn't just used when resources are low; the swappiness configurable, set out of the box by default in many Linux distros, will park long-running processes of low priority further down the performance ladder into swap space.