
Boost Ceph Block Performance with RHEL 7.5, Ceph Luminous 12.2.5, and the Micron 9200 MAX NVMe SSD

Hi everybody,

Usually a point release in an OS or storage solution is no big deal, but this time is different. I tested Red Hat Enterprise Linux 7.5 with Ceph Luminous 12.2.5, point releases up from the RHEL 7.4 and Ceph 12.2.4 used in my previous blog on Bluestore vs. Filestore performance, and found a surprising improvement in block performance.

4KB random write IOPS performance increases by 12%, average latency decreases by 10%, and 99.99% tail latency decreases by 24%.

4KB random read IOPS and average latency are similar, and 99.99% tail latency decreases by 20% to 43%.


This solution is optimized for block performance. Random small-block testing using the RADOS Block Device (RBD) driver in Linux saturates the Intel Xeon Platinum 8168 (Purley) processors in a 2-socket storage node.

With 4 storage nodes and 10 drives per storage node, this architecture has a usable storage capacity of 232TB and can be scaled out by adding 1U storage nodes.

Reference Design – Hardware

[Figure: reference design hardware configuration]

Test Results and Analysis

Ceph Test Methodology

Ceph Luminous (12.2.4 and 12.2.5) is configured with BlueStore and 2 OSDs per Micron 9200 MAX NVMe SSD. RocksDB and the write-ahead log (WAL) are stored on the same partition as the data.

With 10 drives per storage node and 2 OSDs per drive, the 4-node cluster has 80 total OSDs and 232TB of usable capacity.
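To make that layout concrete, here is a minimal provisioning sketch for one drive, assuming Luminous-era ceph-volume and a hypothetical device name. Splitting the drive into two LVM logical volumes yields the two OSDs, and omitting separate --block.db and --block.wal devices keeps RocksDB and the WAL co-located with the data:

    # Split one 9200 MAX (hypothetical device /dev/nvme0n1) into two logical volumes
    pvcreate /dev/nvme0n1
    vgcreate ceph-nvme0 /dev/nvme0n1
    lvcreate -l 50%VG -n osd0 ceph-nvme0
    lvcreate -l 100%FREE -n osd1 ceph-nvme0

    # Create a BlueStore OSD on each half; with no separate DB/WAL device,
    # RocksDB and the WAL land on the data volume
    ceph-volume lvm create --bluestore --data ceph-nvme0/osd0
    ceph-volume lvm create --bluestore --data ceph-nvme0/osd1

Repeating this per drive and per node arrives at the 80-OSD cluster.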

The Ceph storage pool tested was created with 8192 placement groups and 2x replication. Performance was tested with 100 RBD images of 75GB each, providing 7.5TB of data that becomes 15TB of total data on the 2x replicated pool.
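A minimal sketch of creating such a pool and the test images might look like the following; the pool and image names are hypothetical, not the ones used in our testing:

    # 2x-replicated pool with 8192 placement groups
    ceph osd pool create rbdbench 8192 8192 replicated
    ceph osd pool set rbdbench size 2
    ceph osd pool application enable rbdbench rbd

    # 100 RBD images of 75GB each
    for i in $(seq -w 1 100); do
        rbd create rbdbench/image$i --size 75G
    done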

4KB random block performance was measured using FIO against the RADOS Block Device (RBD) driver. We used 10 load generation servers (dual-socket Intel Xeon servers with 50GbE networking) and ran multiple FIO processes per load generation server. Each FIO process accessed a unique RBD image, and FIO processes were distributed evenly across the 10 load generation servers. For example, the 100 FIO client test used 10 FIO processes per load generation server.

We are CPU-limited in all tests, even with two Intel Xeon Platinum 8168 CPUs per storage node. Each test was run three times for 10 minutes, with a 5-minute ramp-up per run.
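For reference, a fio job along these lines can drive one RBD image through librbd; the rbd engine options are standard fio parameters, while the pool/image names and the queue depth shown are illustrative assumptions:

    # 4KB random write against one RBD image, 10-minute timed run, 5-minute ramp
    [global]
    ioengine=rbd
    clientname=admin
    pool=rbdbench
    rw=randwrite
    bs=4k
    iodepth=32
    time_based=1
    runtime=600
    ramp_time=300

    [image001]
    rbdname=image001

One such job per image can then be fanned out across the load generation servers, for example with fio's client/server mode (fio --server on each load generator, fio --client=<host> <jobfile> from a controller).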

RBD FIO 4KB Random Write Performance: RHEL 7.4 + Ceph 12.2.4 vs. RHEL 7.5 + Ceph 12.2.5

[Figure: 4KB random write IOPS and average latency vs. number of FIO clients]

RHEL 7.5 + Ceph Luminous 12.2.5 provides a 12% increase in IOPS and a 10% decrease in average latency.

[Figure: 4KB random write 99.99% tail latency vs. number of FIO clients]

99.99% tail latency improves with RHEL 7.5 and Ceph Luminous 12.2.5, decreasing by 25% at 100 FIO clients.

RBD FIO 4KB Random Read Performance: RHEL 7.4 + Ceph 12.2.4 vs. RHEL 7.5 + Ceph 12.2.5

[Figure: 4KB random read IOPS and average latency vs. queue depth]

4KB random read performance is similar between RHEL 7.4 + Ceph Luminous 12.2.4 and RHEL 7.5 + Ceph Luminous 12.2.5. There's a slight increase in IOPS, with a maximum of 2.23 million IOPS.

[Figure: 4KB random read 99.99% tail latency vs. queue depth]

99.99% tail latency also improves with RHEL 7.5 and Ceph Luminous 12.2.5, decreasing by 43% at queue depth 16 and 23% at queue depth 32.

Would You Like to Know More?

Ceph + the Micron 9200 MAX NVMe SSD on the Intel Purley platform is super fast. The latest reference architecture for Micron Accelerated Ceph Storage Solutions is available now. I presented details about the reference architecture and other Ceph tuning and performance topics during my session at OpenStack Summit 2018. A recording of my session is available here.

If you like Ceph, check out my other articles on the Micron Storage Blog.

Have additional questions about our testing or methodology? Leave a comment below or email us at ssd@micron.com.

About Our Blogger

Ryan Meredith

Ryan Meredith is a Principal Storage Solutions Engineer at Micron. He's worked in enterprise storage since 2007 for US Bank, IBM, and Gemalto. His current focus is architecting Ceph storage solutions using Micron's DRAM and NVMe / SSD / 3D XPoint technologies. He likes dogs, games, travel, and scuba diving.

Ryan has a Master of Science degree in Management Information Systems from the University of South Florida.
