I first started working with SSDs about twelve years ago (gee…now that I write that down, it is a long time!). From their outset, SSDs brought unique benefits into client and enterprise storage – high speed, low power and low latency to drive emerging (and standard) applications toward new performance thresholds. Back then SSDs were small, so we often used them as caches for HDD arrays (read and write caches).
SSDs also introduced a new concept into the storage market: they were storage devices that wore as they were written. In the early days of SSD adoption (around 2007), hard disk drives (‘HDDs’) didn’t do this; they had no warranted endurance rating. As HDD capacities have grown, warranted endurance ratings for HDDs have become much more broadly published.
In this post I’m focused only on warranty statements related to endurance. These statements are often found in product manuals or datasheets. I’m not talking about what you might do when those values are reached - that’s your call.
Although current at the time of publication, values used in this post are subject to change. Your results may differ from those stated herein. I’ll use the terms “warranted endurance” and “endurance” interchangeably.
SSDs Wear Differently
The NAND media in SSDs is different from the magnetic media in HDDs. On an HDD, if there is already (old) data in the physical location to be written - data that can be overwritten - new data directly overwrites it. It is a single-step process.
On an SSD when there is (old) data present (again, data that we can overwrite), we must erase the NAND before we can write it (program it).
This two-step process is called a “program/erase” cycle (‘P/E cycle’). SSD endurance depends on the number of P/E cycles of the NAND and how the SSD’s firmware handles that wear.
NAND wears when it is written (this becomes important in the following sections); reading NAND incurs negligible wear.
Endurance Requirements are Changing for Storage
We usually discuss SSD wear as Drive Writes Per Day, or ‘DWPD’ (the number of times the drive can be completely overwritten each day during its warranty period).
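DWPD relates directly to the total-bytes-written (TBW) figures also found on datasheets. As a minimal sketch of that arithmetic (the 3.84TB capacity, 1 DWPD rating and 5-year warranty below are illustrative assumptions, not values from any specific datasheet):

```python
# Sketch: converting between a DWPD rating and a total-bytes-written (TBW)
# figure. All values here are hypothetical, for illustration only.

def dwpd_to_tbw(dwpd: float, capacity_tb: float, warranty_years: float = 5.0) -> float:
    """Total TB that may be written over the warranty at the rated DWPD."""
    return dwpd * capacity_tb * 365 * warranty_years

def tbw_to_dwpd(tbw: float, capacity_tb: float, warranty_years: float = 5.0) -> float:
    """DWPD implied by a TBW figure over the warranty period."""
    return tbw / (capacity_tb * 365 * warranty_years)

# A hypothetical 3.84TB SSD rated at 1 DWPD over a 5-year warranty:
print(dwpd_to_tbw(1.0, 3.84))  # ≈ 7008 TB over the warranty
```

The same drive could therefore be marketed with either number; they describe the same write allowance.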
In the early days of SSDs (I built iSCSI disk arrays with enterprise SSD accelerators), requirements of 10, 20 or more DWPD were common. That’s changed. Figure 1 shows how.
See the blue line growth? Those are SSDs with less than 1 DWPD.
Different SSDs are made from different NAND. The most fundamental difference among NAND types is the number of bits stored in each cell (one, two, then three and now four).
Figure 2 gives an idea of how write endurance changes with the number of bits per cell (the different types of NAND). SLC, MLC, TLC and QLC are typically rated for different numbers of P/E cycles.
Generally speaking: As the number of bits per cell increases, the number of P/E cycles for which the NAND is rated decreases.
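A rough sketch of why this matters for the drive-level rating: total NAND writes over the drive’s life are capacity × rated P/E cycles, host writes are that figure divided by the write amplification factor (WAF), and DWPD spreads those host writes over the warranty period (capacity cancels out of the ratio). The P/E figures and the WAF of 2 below are commonly cited orders of magnitude, assumed for illustration, not specifications for any particular NAND part.

```python
# Sketch: approximate DWPD implied by a NAND P/E-cycle rating.
# DWPD = pe_cycles / (waf * warranty_days), since drive capacity cancels.
# P/E figures and WAF are illustrative assumptions only.

def rated_dwpd(pe_cycles: int, waf: float = 2.0, warranty_years: float = 5.0) -> float:
    """DWPD implied by a P/E rating and write amplification factor (WAF)."""
    return pe_cycles / (waf * 365 * warranty_years)

for nand, pe in [("SLC", 100_000), ("MLC", 10_000), ("TLC", 3_000), ("QLC", 1_000)]:
    print(f"{nand}: ~{rated_dwpd(pe):.2f} DWPD")
```

The order-of-magnitude drop in P/E cycles from SLC to QLC flows straight through to the DWPD rating, which is exactly the trend in Figure 2.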
Advances in NAND technology spurred the enterprise SSD market to broader adoption while at the same time helping users rethink endurance requirements and re-analyze workloads based on how much data they write.
Combined with reduced $/GB and increased per-drive capacity, more workloads could take advantage of SSDs.
Workloads that wrote less became prime candidates for denser NAND.
This culminated in Quad Level Cell (QLC) NAND, which stores four bits per cell, and in Micron releasing the first QLC SSD to the market in 2018: our 5210 ION.
Welcome Workload Limit Ratings (for HDDs)
Many HDDs now have workload limit ratings, but they are very different from SSD DWPD ratings. While SSD warranties typically state endurance as the amount of data written, HDD warranties typically state workload limits in terms of bytes written and/or read (The reasons for HDDs adopting specific workload limit ratings are beyond the scope of this post).
This means that different IO types wear SSDs and HDDs differently as shown below.
HDD Workload Limits
To give an idea of HDD endurance, we’ll look at two enterprise-class 7200 RPM HDDs. (Note that other HDD types may have different endurance ratings. These HDDs are only examples; exact workload limits may vary by manufacturer, model, capacity, generation or many other factors, and some models don’t have a specified workload limit. Your results may vary.)
Table 2 shows relative values and datasheet workload limit ratings for these example HDDs (workload limit ratings are typically expressed in TB/year). Since HDD workload limit ratings include both read and write IOs, we need a new abbreviation to express HDD workload limits in SSD-familiar terms. We’ll use DRWPD (Drive Reads or Writes Per Day).
The DRWPD value in column 3 was derived from each drive’s datasheet workload rating using simple math.
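That simple math is just the yearly limit divided by 365 days and the drive’s capacity. A minimal sketch, using an assumed 8TB drive with an assumed 550 TB/year workload limit (illustrative numbers, not taken from Table 2):

```python
# Sketch: deriving DRWPD from a datasheet workload limit stated in TB/year.
# The 8TB capacity and 550 TB/year limit are hypothetical examples.

def drwpd(workload_tb_per_year: float, capacity_tb: float) -> float:
    """Drive Reads or Writes Per Day implied by a TB/year workload limit."""
    return workload_tb_per_year / (365 * capacity_tb)

print(f"{drwpd(550, 8.0):.3f} DRWPD")  # well under one full-drive pass per day
```

Note how the result shrinks as capacity grows: a fixed TB/year limit translates into a lower DRWPD on a bigger drive.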
DRWPD is not a frequently-used industry term, but bear with me! In the age of QLC SSDs, it likely will become one. I use the term here to make more direct comparisons with SSD DWPD ratings.
We can graph HDD workload limits as DRWPD in Figure 4.
See how the DRWPD is constant for these drives? Note too that Figure 4 assumes that the HDD can generate sufficient IO to realize these DRWPD values. This may not be the case with all HDDs.
Starting the Great Endurance Race: Comparing SSD DWPD and HDD DRWPD
Remember how SSDs wear when written, not when read? (Yes, read disturb in SSDs incurs a very slight amount of wear, but relative to application use it is negligible, so we can safely ignore it.)
SSDs incur more wear when IOs are small and randomly placed. The opposite is also true – they incur less wear when the IOs are large and sequentially placed (see this Micron brief for details).
Prior to the introduction of QLC NAND technology, SSDs were rated at a fixed DWPD value. That means that their DWPD ratings did not change with applied workload.
QLC technology first shipped into the enterprise SSD market with our 5210 ION. By then, QLC NAND wear characteristics and workload understanding had matured, so expressing QLC-based SSD endurance as a workload-specific value made more sense. This means the rated DWPD of the Micron 5210 ION SSD varies based on the type of write IO.
Figure 5 shows how. The DWPD of a 7.68TB Micron 5200 ECO SSD (DWPD is workload independent in this TLC SSD datasheet), a 7.68TB Micron 5210 ION SSD (DWPD varies with workload), and the HDD DRWPD from Figure 4 are all shown.
OK, now I have to introduce a new term for comparisons to make sense. Note that Figure 5 uses “DxPD” to indicate that it shows both DWPD and DRWPD. When DxPD references an HDD, I mean DRWPD. When DxPD references an SSD, I mean DWPD.
Figure 5 compares only enterprise SATA drives (SSD and HDD). This is to ensure a fair comparison.
See how the Micron 5210 SSD DWPD ratings trend up and to the right? That indicates that as the data write pattern changes from small random IO (at left) to larger, sequential IO (at right), DWPD increases on the 5210 SSD. The Micron 5200 ECO SSD DWPD and HDD DRWPD datasheet ratings do not follow this trend.
- The Micron 5210 SSD and archive-class HDD have similar DxPD if all write traffic is small (4K) and random. When the IO size reaches 8KB, the 5210 has higher DxPD (trend continues for all additional write patterns)
- The Micron 5210 SSD and enterprise-class HDD have similar DxPD when write traffic is 90% 128K sequential; the 5210 has higher DxPD for 100% 128K sequential writes.
- The Micron 5200 ECO SSD DxPD exceeds both HDDs for any write pattern shown.
Wow! That’s Cool! (but how do I use DxPD to help find the right drive, SSD or HDD?)
I’m glad you asked!
DWPD and DRWPD are ratios: they express data moved per day relative to drive capacity. This is especially relevant when evaluating a QLC SSD, since QLC technology packs 33% more bits into every cell and QLC SSDs are typically only available in higher capacities (which, for the same bytes written per day, inherently means lower DWPD ratings).
For example, a 960GB TLC SSD with a 1 DWPD rating delivers a similar daily write allowance to a 1.92TB QLC SSD with a 0.5 DWPD rating for a given workload. While the QLC SSD’s DWPD specification appears lower, the amount of GB written per day is basically the same. Pretty cool, right?
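The example above can be checked with one multiplication (capacities and ratings are the hypothetical ones from the text):

```python
# Sketch: DWPD is a ratio, so compare GB written per day, not DWPD alone.
# Drive capacities and ratings match the hypothetical example in the text.

def gb_per_day(dwpd: float, capacity_gb: float) -> float:
    """Daily write allowance implied by a DWPD rating."""
    return dwpd * capacity_gb

tlc = gb_per_day(1.0, 960)    # 960GB TLC SSD rated at 1 DWPD
qlc = gb_per_day(0.5, 1920)   # 1.92TB QLC SSD rated at 0.5 DWPD
print(tlc, qlc)  # both come out to 960.0 GB/day
```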
It gets better!
There is a strong trend in high-growth applications to read far more data than they write. Many industry analyst firms have indicated very high growth rates for read-centric enterprise workloads ranging from Artificial Intelligence (AI), Machine Learning (ML), Big Data Analytics, Low-Ingest Ceph Block/Object Storage, some NoSQL workloads (profile caching, read latest, etc), Deep Learning and Business Intelligence.
These read-centric applications can be a very good fit for SSDs (which incur very little wear when read). However, if one replaces drives based on warranted endurance values, one may have to be very selective when using HDDs for these same applications (as many HDDs incur wear when read and written). While some HDDs have no workload limit rating, many high-capacity enterprise HDDs do, so care must be taken with HDD selection.
Get Started Today
SSDs and HDDs (with workload limit ratings) incur wear differently. SSDs wear when written – their wear tolerance is expressed as DWPD. HDDs are different. HDDs with a workload limit rating incur wear when read or written. In this post I expressed their wear tolerance as Drive Reads or Writes Per Day, DRWPD.
Understanding your workload by using analysis tools built into many operating systems can also help. These tools can show you what IOs are being sent to storage, providing a deeper understanding of how your applications are using storage and whether the applications are more write-intensive or read-intensive.
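As a rough illustration of what such tools report: on Linux, cumulative per-device read and write counters are exposed in /proc/diskstats (sectors read and written are fields 6 and 10), and the read/write mix is just the ratio of the deltas between two snapshots. The sketch below uses hypothetical snapshot deltas rather than live counters so it stays self-contained:

```python
# Sketch: estimating a workload's read/write mix from block-device counters.
# On Linux these counters come from /proc/diskstats; the deltas below are
# hypothetical values standing in for two real snapshots.

def rw_mix(read_sectors_delta: int, write_sectors_delta: int) -> tuple[float, float]:
    """Fraction of traffic that was reads vs. writes between two snapshots."""
    total = read_sectors_delta + write_sectors_delta
    return read_sectors_delta / total, write_sectors_delta / total

# Hypothetical deltas over a sampling interval:
reads, writes = rw_mix(9_000_000, 1_000_000)
print(f"reads: {reads:.0%}, writes: {writes:.0%}")  # reads: 90%, writes: 10%
```

A mix like this 90/10 split is exactly the read-centric profile where an SSD’s write-only wear model becomes an advantage.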
If you make drive replacement decisions based on rated endurance, we showed that for many workloads, SSD DWPD ratings equal or exceed the DRWPD of some capacity-focused enterprise HDDs, making SSDs a great fit for read-focused, emerging and traditional enterprise workloads.
Here are some additional content pieces if you’re interested in learning more:
Comparing SSD and HDD Endurance in the Age of QLC SSDs – White Paper
Micron 5210 ION Enterprise SATA QLC SSD – Product Brief
IDC: How New QLC SSDs Will Change the Storage Landscape – White Paper
Five reasons QLC belongs in your data center - Infographic
Where are you on your QLC journey? Let me know on Twitter @GreyHairStorage.