Understanding Effective Workloads

By John Terpstra - 2015-06-30

In April 2015 Steve Moyer, Vice President of Storage Software Engineering for Micron, announced the establishment of Micron’s new Storage Software Design Center (SSDC) in Austin, Texas. The center’s goal is to accelerate real-world business applications on Micron enterprise-class SSDs by characterizing workloads in greater depth, so that design engineering teams can develop firmware and controllers that deliver even greater performance benefits to Micron’s customers.

The workloads generated by critical business applications in modern data centers are more complex than many realize. As a result, replacing slow storage devices such as traditional hard disks with solid state drives (SSDs) may not yield the expected performance increase unless the total system is considered holistically.

The SSDC’s Application Acceleration Laboratory takes a system-level approach in developing:

  • Solution-level reference architectures
  • SSD deployment and tuning guides
  • SSD performance characterizations that facilitate storage solution selection and sizing
  • Optimized storage subsystem designs that make use of high-performance SSDs

To this end, repeated complex test cycles are run using industry-standard business applications in appropriate test harnesses. Measuring and tuning workload performance requires a systematic approach to the design of the test harnesses and test methodologies used. Obvious considerations for designing test harnesses include:

  • Elimination of unnecessary variables that may impact the measurement process
  • Use of industry-standard hardware and software configurations
  • Use of standardized and reproducible workload generation tools and methods

The interaction of workload generation, applications under test, and the operating system pathways that route input/output (I/O) requests to storage systems can be complex. What follows is a brief example that highlights the degree to which a simple increase in system memory can radically change the effective workload presented to the storage system.

The TPC Benchmark C (TPC-C) is an online transaction processing (OLTP) benchmark standard that has long been used to compare the performance of various hardware and software configurations. We used the HammerDB tool to drive a TPC-C-like OLTP workload against PostgreSQL 9.4 running on CentOS 7.0. The system calls issued by PostgreSQL over the duration of the OLTP test run, measured using strace, broke down to approximately 79.1% read, 16.4% write, and 4.5% synchronous write.
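As a minimal sketch of how such a breakdown can be derived, the snippet below reduces an `strace -c` summary to a read/write mix. The embedded sample report is illustrative only (its call counts were chosen to reproduce the percentages quoted above), and the bucket names are our own grouping, not strace output:

```python
# Sketch: reduce an `strace -c` summary to a read/write/sync-write mix.
# SAMPLE is illustrative, not the actual data from the PostgreSQL run.
SAMPLE = """\
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 60.00    0.120000           8     15820           read
 25.00    0.050000           6      3280           write
 10.00    0.020000          22       900           fdatasync
  5.00    0.010000           5       400           lseek
"""

def syscall_mix(report, buckets):
    """Count calls per bucket in an `strace -c` report; return percentages."""
    counts = {name: 0 for name in buckets}
    total = 0
    for line in report.splitlines():
        fields = line.split()
        # Skip header/separator lines: data lines start with a numeric % time.
        if len(fields) < 5 or not fields[0].replace('.', '').isdigit():
            continue
        calls, syscall = int(fields[3]), fields[-1]
        for name, members in buckets.items():
            if syscall in members:
                counts[name] += calls
                total += calls
    # Percentages are computed over bucketed (I/O) calls only.
    return {name: 100.0 * n / total for name, n in counts.items()}

mix = syscall_mix(SAMPLE, {
    "read": {"read", "pread64"},
    "write": {"write", "pwrite64"},
    "sync write": {"fsync", "fdatasync"},
})
print(mix)  # → {'read': 79.1, 'write': 16.4, 'sync write': 4.5}
```

In practice the report would come from something like `strace -c -f -p <postgres pid>` captured over the test run.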

The PostgreSQL database used 110 GB of SSD storage capacity. The following table shows the impact at the SSD of changing system memory, as measured using blktrace:

[Table not reproduced: I/O observed at the SSD for each System Memory configuration.]
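The blktrace data itself is processed with `blkparse`, which renders each block-layer event as a text line. As a hedged sketch, the snippet below tallies read and write sectors from blkparse-style completion events; the sample lines are fabricated for illustration and do not reflect the measured run:

```python
# Sketch: tally sectors read/written at the device from blkparse output
# (the text form of blktrace data). SAMPLE lines are illustrative only.
SAMPLE = """\
  8,0    1       11     0.001200000  4321  C   R 2048 + 8 [postgres]
  8,0    1       12     0.001900000  4321  C   W 4096 + 16 [postgres]
  8,0    2       13     0.002500000  4321  C   R 6144 + 8 [postgres]
  8,0    2       14     0.003100000  4321  C  WS 8192 + 8 [postgres]
"""

def device_io_mix(trace):
    """Count sectors read/written in completion ('C') events."""
    sectors = {"read": 0, "write": 0}
    for line in trace.splitlines():
        fields = line.split()
        # Field 6 is the action code; only count I/O completions ('C').
        if len(fields) < 10 or fields[5] != "C":
            continue
        rwbs, nsectors = fields[6], int(fields[9])
        if "R" in rwbs:
            sectors["read"] += nsectors
        elif "W" in rwbs:  # includes 'WS' (synchronous write)
            sectors["write"] += nsectors
    return sectors

print(device_io_mix(SAMPLE))  # → {'read': 16, 'write': 24}
```

Comparing such tallies across memory configurations shows how much of the application's read traffic is absorbed by the page cache rather than reaching the SSD.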
This test demonstrates the effects of file I/O buffering within the operating system and the considerable impact it can have on storage subsystem and disk device operations. Optimization of the overall application and server configuration is clearly important; otherwise, the adoption of faster storage technology may not yield the expected return on investment.

Micron is interested in finding signature data center workloads to include in our performance optimization studies. If you have an interesting workload that you would like to share, please connect with us on LinkedIn or send us a tweet @MicronStorage. I look forward to hearing about your workload challenges.

John Terpstra is Director of Storage Solutions Engineering for Micron Technology. He leads the Application Acceleration Engineering Team, which enables customers to obtain the best value from software solutions by leveraging Micron storage products to address the needs of virtualization, big data, database management, and cloud-based IT. You can follow John on Twitter @JohnHTerpstra and find him on LinkedIn.

Follow us on Twitter @MicronStorage where we share insights and news related to the data storage industry. 
