Greetings to all – hope you are well – thanks for reading what is my final post from the home lab on this topic. In my last post, I described my findings running 32KB I/Os using the Micron P420m PCIe SSD. Now, to complete the series of posts, I’d like to show you my experience with an application – Microsoft Exchange – as benchmarked using the popular and well-known Jetstress tool.
My lab setup is outlined in part 1 of this blog series and described further in part 2. The first results I’ll show here are my findings running Jetstress on the P420M; after that, I’ll show the same workload running on another SSD in the same server. The purpose of my exercise here is not to knock the competition, but to establish to what degree a PCIe SSD is superior to a SATA SSD for enterprise workloads, such as Exchange as embodied by Jetstress. Even though the PCIe SSD is more expensive than the SATA SSD on a $/GB basis, the performance advantage gained far outweighs the cost – after all, it’s really price/performance that counts in enterprise applications, not raw price or raw performance by themselves.
Note, as with my other tests, the adapter queue depth was 255 and the P420M was fully preconditioned (24 hours’ worth) to ensure steady state behavior. Also, as in previous tests, I issued the following command to set the device queue depth to 255:
esxcli storage core device set -m 255 -O 255 -d <P420m device name>
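As a quick sketch, the setting can be confirmed after the fact by listing the device's properties. The device name below is a placeholder, as in the command above:

```shell
# Verify the queue-depth settings took effect (substitute the actual
# device name from `esxcli storage core device list`).
esxcli storage core device list -d <P420m device name>
# Look for "Device Max Queue Depth" and
# "No of outstanding IOs with competing worlds" in the output.
```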
If you’d like more information about Jetstress, here’s a handy link for you, direct to the Microsoft download area and reference documents.
For those of you who are familiar with Jetstress, I ran a very straightforward test – four databases with four logs, with the four databases on one letter drive and the four logs on another. The entire virtual Windows 2008 R2 system was persisted on the P420M. This means every I/O the OS generates, whether on behalf of itself or on behalf of Jetstress, is handled by the P420M. The test consumed 120GB of the 700GB available on the drive, and ran for 2.5 hours.

Jetstress was configured to use auto-tuning in an effort to find the optimal I/O loading for these databases and their logs. As Jetstress begins to execute, it tries to hit a target IOPS figure at a threshold latency, constructing trials that each run for roughly 2.5 minutes with a certain number of sessions. As long as the device keeps latency below the threshold, Jetstress increases the number of sessions it executes; if the latency threshold is exceeded, Jetstress reduces the number of sessions. When it finds the ‘sweet spot’, Jetstress then executes that number of sessions, using the given action profile, for two hours, and then reports on its performance characteristics. The action profile used was the Jetstress default – 40% insert, 20% delete, 5% replace, and 35% read for the transactions, with 70% lazy commit (implying 30% non-lazy commit), background database maintenance running, and 1 copy per database.
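The auto-tuning behavior described above can be sketched as a simple control loop. The latency curve, threshold, and step size below are illustrative assumptions, not actual Jetstress internals:

```python
# Toy model of the Jetstress session auto-tuning loop: ramp sessions up
# while latency stays under the threshold, back off when it is exceeded,
# and settle once trials bounce between two adjacent values.

def device_latency_ms(sessions):
    """Hypothetical device: flat latency up to a knee at 150 sessions,
    climbing sharply beyond it."""
    return 5.0 + max(0, sessions - 150) * 2.0

def autotune(latency_fn, threshold_ms, start=1, max_trials=500):
    """Find the largest session count whose latency stays under threshold."""
    sessions, last = start, None
    for _ in range(max_trials):
        if latency_fn(sessions) <= threshold_ms:
            nxt = sessions + 1          # device keeping up: add a session
        else:
            nxt = sessions - 1          # threshold exceeded: back off
        if nxt == last:                 # bouncing: settle on the lower value
            return min(sessions, nxt)
        last, sessions = sessions, nxt
    return sessions

print(autotune(device_latency_ms, threshold_ms=6.0))  # settles at 150
```

With this toy latency curve, the loop settles at the knee of 150 sessions, mirroring the kind of 150-vs-149 bouncing seen in the real run below.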
Here’s a table, showing my findings as Jetstress executed on the P420M to find the optimal number of sessions at a target IOPS.
[Table: Jetstress auto-tuning trials on the P420M, including average latency (ms) per trial]
After Jetstress hit 150 sessions, it bounced between 150 and 149 for four more trials, then settled on 150 sessions for the two-hour run, using a target IOPS of 14,700. When it finished, per database, it recorded 1,951 database reads/sec @ 33KB average size, 1,463 database writes/sec @ 34KB, 1 log read/sec @ 4KB (2.5ms latency), and 275 log writes/sec @ 11KB, for a combined IOPS workload of 14,760 – slightly exceeding the target of 14,700.
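The combined figure is easy to sanity-check: sum the per-database rates reported above, then multiply by the four databases:

```python
# Per-database rates from the two-hour P420M run.
db_reads, db_writes = 1951, 1463   # database reads/writes per second
log_reads, log_writes = 1, 275     # log reads/writes per second

per_database = db_reads + db_writes + log_reads + log_writes   # 3,690
combined_iops = per_database * 4                               # four databases
print(combined_iops)  # 14760 -- slightly above the 14,700 target
```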
OK – with that trial completed, I used vCenter to migrate the VM (a storage-only move; no compute move) from the P420M to another SSD in my host – a SanDisk SDSSDHII240G. This is a popular SATA SSD of 240GB capacity. I ran the exact same workload – the VM itself running in its entirety on the SATA SSD, including the 120GB used by Jetstress for the four databases and logs on the two letter drives.
Here’s the table, showing my findings as Jetstress executed on the SATA SSD to find the optimal number of sessions at a target IOPS. Compare the findings to the previous table.
[Table: Jetstress auto-tuning trials on the SATA SSD, including average latency (ms) per trial]
After Jetstress ran the six trials above, it chose 1 session for the two-hour run, using a target IOPS of 810. When it finished, per database, it recorded 104 database reads/sec @ 33KB average size, 74 database writes/sec @ 38KB, 0.1 log reads/sec @ 4KB, and 57 log writes/sec @ 5.7KB, for a combined IOPS workload of 940 – exceeding the target of 810.
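The same arithmetic applies to the SATA SSD run, and putting the two totals side by side quantifies the gap:

```python
# Per-database rates from the two-hour SATA SSD run:
# database reads, database writes, log reads, log writes (per second).
per_database = 104 + 74 + 0.1 + 57
combined_iops = per_database * 4          # four databases

print(round(combined_iops))               # 940 -- above the 810 target
print(round(14760 / combined_iops, 1))    # ~15.7x: the P420M's IOPS advantage
```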
The findings, when compared, show the enormous difference between running complete workloads – OS and application together in a VM – on the P420M PCI-E SSD versus a SATA SSD, at constant latency. This means, to the individual using their mailbox, they’d see similar response time using either device – but the PCI-E device can sustain 150x the mail database sessions! Much more efficient, much more ‘bang for the buck’. This gives you an idea of the power – running efficient, virtualized, enterprise workloads – in selecting the Micron P420M PCI-E SSD.
To summarize, at the threshold latency, the PCI-E drive executed Jetstress with 150 sessions @ 14,760 IOPS while the SATA SSD only sustained 1 session @ 940 IOPS. This is the effect of device selection. As they say, choose wisely!
Finally, as usual, let us know what you think. Send us a tweet @MicronStorage or me directly @peglarr.