
Optimizing Database Workloads – an SSD I/O Perspective

As a follow-on to my June post about Understanding Effective Workloads, I wanted to spend some time focusing on the critical design and operational characteristics specific to database workloads. When I/O workload characteristics are well understood, it is easier to choose the right solution for delivering outstanding overall database performance.

In my previous article, I explained how the read/write ratio of a client request workload changes as the workload passes from the SQL server through the OS to the block device driver within the Linux kernel, and how the resulting effective workload on an SSD can be counter-intuitive. Unraveling this mystery requires exploring the database design, how the application uses it, and the I/O data flow within the OS kernel. It’s a bit like peeling an onion, and it’s critical to maximizing IOPS in the data center.
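
As an aside, the effective read/write ratio at the block device is straightforward to observe on a Linux host. Below is a minimal sketch in Python that samples the per-device counters in /proc/diskstats over an interval; the device name "sda" and the 10-second interval are assumptions for illustration, not part of our measurement setup.

    # Minimal sketch: sample /proc/diskstats to observe the effective
    # read/write ratio at the block device. "sda" is a hypothetical device.
    import time

    def io_counts(device):
        """Return (reads completed, writes completed) for a block device."""
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                if fields[2] == device:
                    # field 4 of diskstats is reads completed; field 8 is
                    # writes completed (1-indexed, after major/minor/name)
                    return int(fields[3]), int(fields[7])
        raise ValueError("device %r not found" % device)

    r0, w0 = io_counts("sda")
    time.sleep(10)  # sample interval while the workload runs
    r1, w1 = io_counts("sda")
    reads, writes = r1 - r0, w1 - w0
    total = (reads + writes) or 1
    print("effective R/W at the block device: %.0f%% read / %.0f%% write"
          % (100.0 * reads / total, 100.0 * writes / total))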

Database workload control can be tricky. With every design, configuration, and operational change, the dynamics of in-kernel caching can radically shift the read/write ratio of I/O operations at the disk drive interface. In my June post we saw how the read/write ratio of storage subsystem transactions generated by a TPC-C workload depends on how the workload is processed within the SQL server, how it egresses into system calls, and ultimately how it emerges from the block driver. A prototypical 70% read/30% write workload may actually become a 10% read/90% write I/O load at the storage block driver interface.

In an SQL server, each incoming transaction (or operation) involves many actions. Understanding these actions gives insight into the I/O operations that take place as the SQL server makes system calls and the resulting transactions flow into the storage subsystem (a sketch of how a single transaction fans out follows the list below):

Database workload operations generate:

  • Transaction logs
  • Index updates
  • Data table transactions
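
To make this fan-out concrete, here is a minimal sketch using Python and the psycopg2 driver; the new_order table, its columns, and the connection string are hypothetical stand-ins for a TPC-C-style schema, not part of our test configuration.

    # Minimal sketch: one logical insert fans out into several storage
    # operations. Table, columns, and connection string are hypothetical.
    import psycopg2

    conn = psycopg2.connect("dbname=tpcc user=postgres")
    try:
        with conn:  # commits on success, rolls back on error
            with conn.cursor() as cur:
                # This single statement ultimately produces:
                #   1. a transaction log (WAL) append, forced to media at COMMIT
                #   2. an update to every index defined on new_order
                #   3. a dirtied data table (heap) page, written back later
                cur.execute(
                    "INSERT INTO new_order (o_id, d_id, w_id) VALUES (%s, %s, %s)",
                    (1, 1, 1),
                )
    finally:
        conn.close()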

Database administrators (DBAs) make a fine science of placing transaction logs, and of separating, aggregating, and locating data tables on particular data volumes within the fabric of the storage subsystem.

A single logical unit of work is called a transaction. Database systems incorporate specific transaction-handling mechanisms to assure the integrity and reliability of each transaction. The acronym ACID describes the protective measures that govern that reliability:

  • Atomicity: if any part of a transaction fails, the state of the entire data system is left unchanged.
  • Consistency: all data written to the database must conform with all rules, constraints, cascades, and triggers that apply to the transaction.
  • Isolation: concurrent execution of transactions results in a state identical to the one obtained had the transactions been executed serially.
  • Durability: a transaction that has been committed remains committed, even in the event of a power loss, system crash, or other error during processing.
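
Atomicity is the easiest of the four to see in practice. The sketch below again assumes psycopg2 and a hypothetical accounts table: if the second statement fails for any reason, the first is rolled back and the database state is unchanged.

    # Minimal sketch of atomicity; table and connection string are hypothetical.
    import psycopg2

    conn = psycopg2.connect("dbname=tpcc user=postgres")
    try:
        with conn:  # opens a transaction; rolls back if the block raises
            with conn.cursor() as cur:
                cur.execute("UPDATE accounts SET balance = balance - 100 WHERE id = 1")
                cur.execute("UPDATE accounts SET balance = balance + 100 WHERE id = 2")
    except psycopg2.Error:
        pass  # neither UPDATE is visible: the transfer happens entirely or not at all
    finally:
        conn.close()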

When selecting media for the storage subsystem, such as an SSD, it is helpful to consider the differing characteristics of the various data stores an SQL server uses. Transaction logs, index tables, and data tables behave quite differently, and they have the most impact on read/write operations in the storage fabric.

To analyze these data store characteristics, we measured a TPC-C OLTP workload against a PostgreSQL server running on a CentOS 7 platform. The read/write block sizes shown below represent the median sizes of block operations. (Detailed consideration of the distribution of block operation sizes is a complex undertaking and beyond the scope of this blog post.)
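
For a rough look at I/O sizes without a full block trace, /proc/diskstats also exposes sector counts. The sketch below computes the average (not the median discussed above) request size over an interval; as before, "sda" and the 10-second interval are assumptions for illustration.

    # Minimal sketch: average request sizes from /proc/diskstats sector
    # counters (512-byte sectors). Yields averages, not medians.
    import time

    def io_stats(device):
        """Return (reads, sectors read, writes, sectors written)."""
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                if fields[2] == device:
                    return (int(fields[3]), int(fields[5]),
                            int(fields[7]), int(fields[9]))
        raise ValueError("device %r not found" % device)

    r0, rs0, w0, ws0 = io_stats("sda")
    time.sleep(10)  # sample interval while the workload runs
    r1, rs1, w1, ws1 = io_stats("sda")
    if r1 > r0:
        print("avg read size:  %d bytes" % (512 * (rs1 - rs0) // (r1 - r0)))
    if w1 > w0:
        print("avg write size: %d bytes" % (512 * (ws1 - ws0) // (w1 - w0)))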

In-memory caching of read operations within the OS can significantly change the read/write ratio. However, the extent to which commit-to-storage (synchronous write) operations are executed also impacts the resulting ratio; adding cache memory is therefore not guaranteed to reduce read operations. Consequently, for each workload there is an optimum system memory configuration and an optimum storage system configuration. It has been our experience that finding that optimum for each workload takes great patience and vigilance in tuning the SQL server. This is how read/write operations impact storage subsystems, and it can determine what type of SSD is best for the SQL data store. Optimizing the overall application and server configuration is clearly important to assure the maximum return on an SSD-based storage system investment.
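
One way to watch the caching effect from the database side is PostgreSQL’s pg_stat_database view, which separates reads satisfied by shared buffers (blks_hit) from reads passed down to the OS (blks_read); note that a blks_read may still be absorbed by the OS page cache before it ever reaches the SSD. A minimal sketch, again assuming psycopg2 and a hypothetical "tpcc" database:

    # Minimal sketch: shared-buffer hit ratio from pg_stat_database.
    # Connection string is hypothetical; blks_read may still be served
    # from the OS page cache rather than the SSD.
    import psycopg2

    conn = psycopg2.connect("dbname=tpcc user=postgres")
    with conn, conn.cursor() as cur:
        cur.execute(
            "SELECT blks_hit, blks_read FROM pg_stat_database"
            " WHERE datname = current_database()"
        )
        hit, read = cur.fetchone()
        if hit + read:
            print("shared-buffer hit ratio: %.1f%%" % (100.0 * hit / (hit + read)))
    conn.close()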

Micron is interested in finding signature data center workloads to include in our performance optimization studies, and I’m interested in hearing your workload experiences. If you have an interesting workload that you would like to share, connect with us on LinkedIn or Twitter @MicronStorage. I look forward to hearing about your workload challenges.

About Our Blogger


John is Director of Storage Solutions Engineering.
