Micron® S650DC Enterprise SAS SSD and PostgreSQL OLTP: HDD-Crushing Performance With Compelling Economics

IT budgets shrink year on year, but demand continues to rise. As we generate more data, users and applications demand better access. These unrelenting challenges drive new tactics, new solutions and new answers. Legacy platforms are no longer capable of meeting the demands placed on them, so IT needs a new approach that is cost-effective and minimally disruptive.

S650DC vs. HDDs Running PostgreSQL OLTP Workloads
Serial Attached SCSI (SAS) has been the go-to interface for enterprise-class storage for generations, and with good reason. The SAS interface offers exceptional speed (currently up to 12 Gb/s), granular manageability and dynamic configuration options that are not available with more limited interfaces like Serial ATA (SATA).

Traditional SAS hard disk drives (HDDs) have been the enterprise standard for years, whether midrange 10,000 RPM or higher-performance 15,000 RPM. With the introduction of SAS SSDs like the Micron® S650DC, that standard is changing—rapidly.

Nowhere is the need for performance more evident than in high-throughput databases with random workloads like PostgreSQL running Online Transaction Processing (OLTP).

In this technical brief, we compare a state-of-the-art Micron S650DC RAID 10 array (four 800GB Micron S650DC SAS SSDs) to the legacy stalwart—an HDD RAID 10 array (sixteen 15,000 RPM 300GB SAS HDDs) in a PostgreSQL OLTP workload. We measure and compare performance, latency and overall value using a specific platform configuration with relevant test details noted. While your exact results may vary, the S650DC clearly offers significant improvements over an HDD array.

PostgreSQL and OLTP

PostgreSQL is an enterprise-class, open-source, object-relational database management system that offers outstanding reliability and robust, consistent data integrity. It supports a variety of workloads in industries ranging from bio/pharmaceutical, healthcare and education to finance, e-commerce, media, manufacturing and telecommunications.

One of the most demanding workloads PostgreSQL supports is OLTP, which is often used to manage transaction-based applications like order entry and fulfillment, real-time data acquisition, management, analysis and large-scale commercial processes—all of which require immediate access to business-critical data. To support these real-time OLTP applications, platforms require extremely fast transaction processing and ultra-low latency.

Accelerating transaction processing yields better outcomes:

  • Get more done: By executing each transaction more quickly, additional transactions can be processed and managed (within the same timeframe).
  • Get work done faster: Reducing storage platform response times and making them more consistent enables the relational database management system to respond better to user requests—users and/or automation systems aren’t idle as they wait for storage I/O processes to complete.

S650DC: From 22X (with 8 Users) to 29X (with 48 Users)
More Orders per Minute

Orders per minute is a good metric for OLTP database performance and is used to measure ‘business throughput’ for a given database.

Figure 1 shows the measured orders per minute of the S650DC (four-drive) array and the legacy HDD (16-drive) array. The horizontal axis is the total number of simultaneous users accessing the database (scaled from 8 on the far left to 64¹ on the far right²); the vertical axis is measured orders per minute. S650DC performance is in green; the legacy HDD array is in blue. In Figure 1, taller is better.


Figure 1: Orders per Minute by Drive and User Count

1 64 users is the maximum number supported by PostgreSQL without additional software
2 Best performance and lowest latency were observed with 48 users; results depend on system variables and may differ

The S650DC array shows a significant performance advantage over the HDD array across all tested user counts. When the system is lightly loaded (8 users), the S650DC array shows 22X the HDD array performance; when the system is heavily loaded (64 users), the S650DC array shows 27X the HDD array performance, with the peak difference (29X) seen at 48 users. The performance differences between the S650DC and HDD arrays across all measured user counts are shown in Table 1, expressed as the S650DC array orders per minute divided by HDD array orders per minute for each user count.

User Count    S650DC Array Performance Advantage
8             22X
16            27X
24            28X
32            28X
40            29X
48            29X
56            27X
64            27X

Table 1: S650DC Relative Performance

S650DC: From 95% (With 8 Users) to 97% (With 48 Users)
More Responsive

While orders per minute expresses database performance, average response time shows how quickly users and applications can interact with the database. One key reason to migrate from legacy HDD arrays to SSDs like the S650DC is to substantially lower the database response time. By reducing response time, both users and applications that rely on database processing can get their queries satisfied sooner and make use of the data faster.

These response time differences are illustrated in Figure 2, where the average response time is on the vertical axis and the number of users accessing the database is on the horizontal axis. S650DC latency is in green; legacy HDD array latency is in blue. In Figure 2, lower is better.


Figure 2: Average Response Time

When the system is lightly loaded (8 users), the S650DC array shows 95% better average response time than the HDD array. When the system is heavily loaded (64 users), the S650DC array shows 96% better average response time, with the peak difference (97%) seen at 48 users. The response time advantage of the S650DC array over the HDD array across all measured user counts is shown in Table 2, expressed as the percentage reduction in the S650DC array's average response time relative to the HDD array's for each user count.

User Count    S650DC Array Average Response Time Advantage
8             95%
16            96%
24            96%
32            96%
40            96%
48            97%
56            96%
64            96%

Table 2: S650DC Average Latency Improvement

S650DC: From 49% (56 Users) to 70% (64 Users) Better Consistency

Smooth application performance relies on consistent latency from the database itself. For many OLTP use models, consistent latency can be as important as low average latency.

We characterized latency consistency by measuring the 99.9th percentile response time for each array type. Latency consistency behaved somewhat differently from the other measurements in that the values neither consistently increased nor decreased with increasing user count. This section and Table 3 detail the latency consistency observations at each user count.

Figure 3 shows the 99.9th percentile latency of both array types—with user count increasing along the horizontal axis (from 8 up to 64, as before) and 99.9th percentile latency along the vertical axis (measured in ms).


Figure 3: 99.9th Percentile Response Time

The differences range from 49% (56 users) to 70% (8 and 64 users), but as the results are non-linear with user count, all the results are shown in Table 3. Regardless of the user count, the S650DC array also shows a smoother, generally flat 99.9th percentile trend as the user count increases, while the HDD array shows a spike when the user count reaches 64.

User Count    S650DC Array 99.9th Percentile Response Time Advantage
8             70%
16            69%
24            66%
32            62%
40            56%
48            52%
56            49%
64            70%

Table 3: S650DC Latency Consistency Improvement

S650DC: Compelling Economics

The performance that the S650DC demonstrates is not surprising; database workloads like OLTP typically flourish with high-performance SSDs. Where the S650DC really shines is in its cost to perform useful work. For these calculations, "useful work" is defined as executing the measured workload for a fixed amount of time: the more orders per minute, the more useful work done per unit of time. If that useful work costs less per transaction, the array type shows a better value.

Since the goal of OLTP is new order processing/transactions, we can evaluate the overall value of either array type by comparing the useful work done to the cost of purchasing the array:

Array Value = (Measured Orders per Minute) / (Array Purchase Cost)

The host platforms we used to test each array type were identical in every way except storage—one platform contained four S650DC SSDs and the other platform contained 16 HDDs, as noted earlier. This enables us to normalize their portion of the overall system cost.

Array Cost

At the time of publication, one S650DC 800GB SSD sells for about $1262* and each 300GB SAS HDD sells for about $215*. Because the host platforms are the same, the only variable cost of primary interest is the cost of the storage devices, as noted below.

Storage Device             Unit Cost*   Number of Devices   Array Cost
SAS 15K RPM HDD 300GB      $215*        16                  $3440
S650DC SSD 800GB           $1262*       4                   $5048

Table 4: Storage Array Cost

*Average of three single unit prices from www.google.com (shopping link) as of time of publication

Storage Economics

Now that we have the cost of each array, we can evaluate the best-case and worst-case economic advantages of the S650DC (measured in cost per new order). Using the above array cost data and best and worst measured orders per minute performance, we can examine both “ends” of the value spectrum by comparing the cost per new order for each array.

S650DC: 15X Lower Array Cost per New Order (Worst-Case Economics)

We saw earlier that the worst-case performance advantage of the S650DC array was measured with 8 users, where the S650DC array was 22X better. For reference, 8-user performance of both the S650DC and HDD arrays is shown in Table 5. Fixing a one-minute time interval gives the number of new orders completed in one minute. We then apply array cost data to determine the array cost per new order:

Array Cost per New Order = Array Cost / Orders per Minute

Storage Device               Array Cost   Orders per Minute (8 Users)   Cost/New Order (8 Users)
S650DC SSD 800GB (4)         $5048        30,267                        $0.17
SAS 15K RPM HDD 300GB (16)   $3440        1,365                         $2.52

Table 5: Worst-Case Economics

With the S650DC, each new order has an array purchase cost of about 17 cents, whereas the HDD array costs about $2.52, making the S650DC array purchase 15X less expensive per new order.

S650DC: 19X Lower Array Cost per New Order (Best-Case Economics)

Looking now at the best-case economic difference, we use the exact same calculations and see that the best case occurs with 48 users, where the S650DC array performance is 29X better. The 48-user performance of both the S650DC and HDD arrays is shown in Table 6, along with the array cost per new order, calculated with the same formula over a fixed one-minute interval.

Storage Device               Array Cost   Orders per Minute (48 Users)   Cost/New Order (48 Users)
S650DC SSD 800GB (4)         $5048        86,654                         $0.06
SAS 15K RPM HDD 300GB (16)   $3440        2,988                          $1.15

Table 6: Best-Case Economics

With the S650DC, each new order has an array purchase cost of about 6 cents, whereas the HDD array costs about $1.15, making the S650DC array purchase 19X less expensive per new order.
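The cost-per-order arithmetic in Tables 5 and 6 can be reproduced with a short script (a sketch; the array costs and orders-per-minute figures are taken directly from the tables above):

```shell
# Recompute Array Cost per New Order = Array Cost / Orders per Minute
# using the array costs and measured orders per minute from Tables 5 and 6.
awk 'BEGIN {
  printf "worst case (8 users):  S650DC $%.2f, HDD $%.2f per new order\n", 5048/30267, 3440/1365
  printf "best case (48 users):  S650DC $%.2f, HDD $%.2f per new order\n", 5048/86654, 3440/2988
}'
```

The rounded results ($0.17 vs. $2.52 and $0.06 vs. $1.15) yield the 15X and 19X advantages cited above.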

An Easy Upgrade

Because the S650DC is a fully validated 12 Gb/s SAS SSD, upgrades are easy. The S650DC uses the same interface, same form factor and the same host connections as legacy HDDs commonly used today.

The Bottom Line

This technical brief shows clear differences in both raw performance and performance consistency between an enterprise SSD array with four Micron S650DC SSDs and a legacy RAID 10 array built with sixteen 15,000 RPM SAS HDDs. Although results can be dependent on software and platform configuration, as well as a litany of other variables, a small array of S650DC SSDs is extremely capable and can offer significantly better performance, responsiveness and value than a much larger legacy HDD array.

Using a PostgreSQL database and an OLTP workload with the tested configurations and user counts, the S650DC array delivered:

  • 29X more orders per minute when configured with 48 users
  • 97% better average response time and 52% better response time consistency, both when configured with 48 users
  • Array costs at least 15X, and at best 19X, lower per new order (evaluated and calculated as shown)

The Micron S650DC SAS SSD is fundamentally recharting the value proposition of high-performance storage in highly active database applications like OLTP. The S650DC SAS SSD moves PostgreSQL OLTP workloads forward with HDD-crushing performance and latency plus compelling economics. The S650DC handles the most demanding workloads with ease and ushers them into next-generation data management.

How We Tested

Test Database, Schema, Benchmark Tool

To ensure consistent maximum performance from the PostgreSQL database, some PostgreSQL parameters were modified from their default values (using PgTune), as shown below.

Parameter                      Value
default_statistics_target      100
maintenance_work_mem           2GB
effective_cache_size           72GB
work_mem                       768MB
shared_buffers                 30GB
checkpoint_segments            4096
checkpoint_timeout             5 min
checkpoint_completion_target   0.8
seq_page_cost                  0.5
random_page_cost               2.0 (S650DC); 4.0 (HDD)
bgwriter_delay                 15
bgwriter_lru_maxpages          1000
effective_io_concurrency       1000 (S650DC); 16 (HDD)

Table 7: PostgreSQL Parameters in Testing
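For reference, the Table 7 settings for the S650DC configuration would look like the following in postgresql.conf (a sketch, not the exact file produced by PgTune during testing):

```
# Sketch of Table 7 as postgresql.conf entries (S650DC values shown;
# HDD-array values noted in comments where they differ)
default_statistics_target = 100
maintenance_work_mem = 2GB
effective_cache_size = 72GB
work_mem = 768MB
shared_buffers = 30GB
checkpoint_segments = 4096        # valid for the PostgreSQL 9.4 used here; removed in 9.5
checkpoint_timeout = 5min
checkpoint_completion_target = 0.8
seq_page_cost = 0.5
random_page_cost = 2.0            # 4.0 for the HDD array
bgwriter_delay = 15ms             # Table 7 value 15; this parameter is in milliseconds
bgwriter_lru_maxpages = 1000
effective_io_concurrency = 1000   # 16 for the HDD array
```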

For testing, a warehouse/district/customer/item model was used, composed of an approximately 800GB database (larger than the available memory), resulting in a read-heavy storage I/O workload. For each test, the schema was identical to the tables and row counts shown below.

Table Name   Rows
NEW_ORDER    67,500,000
ORDER_LINE   2,250,000,000
CUSTOMER     225,000,000
HISTORY      225,000,000
ITEM         750,000
ORDER        2,250,000,000
STOCK        750,000,000

Table 8: Test Schema

The workload was executed by HammerDB, with the user load script modified so that the dataset size was dramatically different from that used by the default script. This reflects common PostgreSQL deployments.

Platform Configuration

To ensure consistent and comparable results, identical hardware and software platforms and databases were tested.

Note: Database settings and file locations changed as part of the testing. For each configuration option, the test sequence was identical.

Database Server

  • Dell™ PowerEdge™ R730xd
  • 2x Intel® Xeon® E5-2690 v3 processors (12 cores each)
  • 128GB RAM
  • 16x 300GB SAS HDDs (15,000 RPM): Linux® MDADM RAID 10
  • 4x 800GB Micron S650DC SSD (FW M012)
  • CentOS 7
  • PostgreSQL 9.4

Test Sequence

The test sequence for the S650DC and HDD RAID 10 arrays was identical. All drives were securely erased (S650DC) or formatted (HDDs). For the HDDs, the RAID 10 array was created using the default options, the XFS file system was created (also with the default options), the device was mounted, and permissions were set for the postgres user. The PostgreSQL metadata was then moved to the new volume and PostgreSQL was started.
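The sequence above can be sketched as follows. This is a dry-run illustration only: the device names, mount point and service name are hypothetical (the brief states only that mdadm and mkfs.xfs defaults were used), and each command is printed rather than executed.

```shell
# Dry-run sketch of the HDD-array preparation steps described above.
# Device names and paths are hypothetical; "run" prints instead of executing.
run() { echo "+ $*"; }

run mdadm --create /dev/md0 --level=10 --raid-devices=16 /dev/sd[b-q]  # RAID 10, default options
run mkfs.xfs /dev/md0                                                  # XFS with default options
run mount /dev/md0 /var/lib/pgsql/9.4/data                             # mount the new volume
run chown -R postgres:postgres /var/lib/pgsql/9.4/data                 # permissions for postgres
run systemctl start postgresql-9.4                                     # start PostgreSQL
```

Replacing the `run` wrapper with direct execution (as root) would perform the actual setup.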

Any previous database was dropped, the test database loaded, and the test run started. For each test, there was a ramp-up time of 15 minutes followed by performance measurement for 15 minutes, after which the data was collected and plotted.

For test runs with varying user counts, the database was dropped and reloaded to ensure each individual run started from the same point (using the PostgreSQL command):


This command copied the existing template with no changes to the test location.
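The exact command is not preserved above, but a template-based drop-and-reload of this kind can be sketched as follows (a dry-run illustration; the database and template names are hypothetical):

```shell
# Dry-run sketch of resetting the test database from a template between runs.
# Database/template names are hypothetical; "run" prints instead of executing.
run() { echo "+ $*"; }

run dropdb --if-exists tpcc                  # drop the previous test database
run createdb --template=tpcc_template tpcc   # copy the template, unchanged, to the test database
```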



