How to give your big data the big performance it requires

We have a problem. A data problem. We have too much of it. Estimates put data creation at 163 zettabytes by 2025. Zettabytes. Whoa. Data is not going anywhere, so now we face the challenge of rethinking how we store it, manage it and deliver it.

Between private clouds, big data, real-time sensors and self-monitoring, self-reporting devices, we're (figuratively) drowning in data. Add ever-changing archive and retrieval requirements and we have a problem. A real problem. We are generating, capturing and managing new data from new sources, in immense volumes, at unprecedented rates. Our virtualized environments, media streaming services, cloud-based infrastructures and distributed workforce want more from that data. Now.

Start With Red Hat Ceph Storage

Red Hat Ceph Storage Solutions can help lower acquisition costs and drive better results for massive-scale active archives, content repositories, OpenStack™ cloud storage and content distribution platforms.

Add Our 9100 MAX NVMe SSDs

With the speed and endurance required for today's massive-scale computing environments, our 9100 MAX NVMe™ SSD brings data closer to processing, delivering minimal latency and consistently fast throughput.

Get Amazing Results: Ceph Plus 9100 MAX

We built a Ceph cluster using our 9100 MAX NVMe SSDs and standard, off-the-shelf 1U servers. We used four OSD nodes and three monitor nodes (no NVMe in the monitor nodes!) and unleashed some tough workloads and benchmarks on configurations of two, four and 10 9100 MAX SSDs per OSD node.
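To give a flavor of what that looks like from the client side, here is a minimal sketch that sanity-checks the monitor quorum and OSD count before a benchmark run, using the librados Python bindings (python3-rados). It assumes a readable /etc/ceph/ceph.conf and client keyring on the node running it; this is illustrative only, not part of our published test procedure.

import json

import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()

# Monitor quorum: we used three monitor nodes (no NVMe in those).
ret, out, err = cluster.mon_command(
    json.dumps({"prefix": "quorum_status", "format": "json"}), b"")
print("monitors in quorum:", json.loads(out)["quorum_names"])

# OSD count: two, four or 10 9100 MAX SSDs per node across four OSD nodes.
ret, out, err = cluster.mon_command(
    json.dumps({"prefix": "osd ls", "format": "json"}), b"")
print("OSDs in the cluster:", len(json.loads(out)))

cluster.shutdown()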

We tested 4KB random IO and 4MB object IO as we scaled each node from two to four to 10 9100 MAX SSDs per node. Were we pleased with the results? Well, yes. Yes, we were.
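For illustration, here is a rough sketch of those two IO patterns expressed with the librados Python bindings. The pool name "benchpool", object count and run time are assumptions made for the example, and a single-threaded script like this will not come anywhere near the published numbers; our results came from standard benchmarking tools driving many parallel clients.

import os
import random
import time

import rados

OBJECT_SIZE = 4 * 1024 * 1024   # 4MB objects, as in the object IO tests
READ_SIZE = 4 * 1024            # 4KB reads, as in the random IO tests
NUM_OBJECTS = 16
RUN_SECONDS = 10

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("benchpool")  # assumed pool name

# Lay down some 4MB objects (the 4MB object IO pattern).
for i in range(NUM_OBJECTS):
    ioctx.write_full("obj-%d" % i, os.urandom(OBJECT_SIZE))

# Issue 4KB reads at random offsets within random objects
# (the 4KB random IO pattern), and count completions.
ops = 0
deadline = time.time() + RUN_SECONDS
while time.time() < deadline:
    name = "obj-%d" % random.randrange(NUM_OBJECTS)
    offset = random.randrange(OBJECT_SIZE - READ_SIZE)
    ioctx.read(name, READ_SIZE, offset)
    ops += 1
print("single-threaded 4KB random reads: %.0f IOPS" % (ops / RUN_SECONDS))

ioctx.close()
cluster.shutdown()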

More than 1.1 million read IOPS

Virtualized environments are very demanding, and their highly random, small-I/O storage profile is difficult to serve well; legacy storage platforms have a hard time keeping up. When we combined our 9100 MAX with Ceph, we got more than a million 4KB read IOPS with 10 SSDs per node (4 OSD nodes).

More than 21 GB/s 4MB object read

For some Ceph implementations, 4MB object IO is more important. We used RADOS bench to push our 9100 MAX-based Ceph configuration…hard. With 10 SSDs per node (again, only 4 OSD nodes) we measured just over 21 GB/s of 4MB object read throughput (at only 36ms average latency).
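As a sketch of how such a run can be driven, the snippet below wraps the standard rados bench tool: it writes 4MB objects into a pool (keeping them with --no-cleanup), reads them back sequentially, then removes them. The pool name, 16 concurrent operations and 60-second duration are illustrative assumptions, not the exact parameters behind the numbers above.

import subprocess

POOL = "benchpool"       # assumed pool name
SECONDS = "60"           # run length per phase
OBJECT_SIZE = "4194304"  # 4MB objects
THREADS = "16"           # concurrent operations

# Write 4MB objects and keep them (--no-cleanup) so they can be read back.
subprocess.run(
    ["rados", "bench", "-p", POOL, SECONDS, "write",
     "-b", OBJECT_SIZE, "-t", THREADS, "--no-cleanup"],
    check=True)

# Read the objects back sequentially; rados bench reports bandwidth and latency.
subprocess.run(
    ["rados", "bench", "-p", POOL, SECONDS, "seq", "-t", THREADS],
    check=True)

# Remove the benchmark objects when finished.
subprocess.run(["rados", "-p", POOL, "cleanup"], check=True)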

Build for tomorrow with the 9100 MAX and Ceph

Data problems are getting worse, not better. If today is tough, tomorrow will probably be tougher.

NVMe SSDs like our 9100 MAX, combined with Ceph, may help. An all-NVMe Ceph configuration with our 9100 MAX U.2 NVMe SSDs enables phenomenal IOPS and throughput as well as granular scale out/scale up to help you deploy a Ceph cluster that meets your needs.

For all the details and calculations, I'd encourage you to read the full Technical Brief. Also, take a look at our 9100 NVMe family. Have you found new ways to use NVMe? I'd like to hear about it. Tweet me @GreyHairStorage or our main storage handle @MicronStorage. You can also email us at SSD@micron.com.

About Our Blogger

Doug Rollins is a Senior Technical Marketing Engineer for Micron's Storage Business Unit, with a focus on enterprise solid state drives.