Very Large Data Should Be More Than An Archive
When we scale a data set – whether locally or in the cloud, through expanded sources, deeper analytics, online/real-time results, medical trials and research, and a host of other drivers – performance is imperative. Without massive performance, a massive-scale data set is little more than an archive: the data just sits there, in storage, doing nothing useful.
Small-scale data sets are easy. If the data is small enough to fit in available memory, performance is straightforward – load it into memory and go – and storage system capability matters far less.
Immense data sets are harder. As our data sets grow, a dwindling percentage of it affordably fits into memory, generating a troubling trend (Figure 1).
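To make that trend concrete, here is a minimal sketch using hypothetical growth rates (the starting sizes and annual rates below are illustrative assumptions, not measured figures): when a data set grows faster than affordable memory, the fraction of it that fits in memory shrinks every year.

```python
# Hypothetical illustration of the trend: the data set grows 40% per
# year while affordable memory grows 20% per year, so the in-memory
# fraction shrinks steadily. All numbers are assumed for illustration.
data_tb = 10.0    # starting data set size, TB (assumed)
memory_tb = 2.0   # affordable memory in the cluster, TB (assumed)

for year in range(5):
    fraction = min(memory_tb / data_tb, 1.0)
    print(f"year {year}: {fraction:.1%} of the data fits in memory")
    data_tb *= 1.4     # assumed data growth rate
    memory_tb *= 1.2   # assumed memory growth rate
```

Under these assumptions the in-memory share falls from 20% to under 12% in just five years – the gap that fast storage has to cover.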
Combining immense data with our constant demand for faster and more detailed analytics drives us to do more. We need to add fast, high-capacity storage to the mix. Apache Cassandra™ combined with NVMe SSDs can help. When our data set is too large to fit into memory, fast storage is imperative.
Cassandra’s ability to support massive scale, combined with multi-terabyte, high-IOPS NVMe SSDs, enables high-capacity NoSQL platforms that offer extreme agility and extreme capability.
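One common way to put that NVMe capacity to work is to point Cassandra's data and commit log directories at NVMe-backed file systems in `cassandra.yaml`. The sketch below uses real `cassandra.yaml` settings, but the mount points are placeholders you would replace with your own:

```yaml
# cassandra.yaml fragment (sketch) -- mount points are assumptions,
# substitute the paths where your NVMe SSDs are actually mounted.
data_file_directories:
    - /mnt/nvme0/cassandra/data   # placeholder NVMe mount
    - /mnt/nvme1/cassandra/data   # placeholder NVMe mount
commitlog_directory: /mnt/nvme0/cassandra/commitlog
```

Keeping both the SSTable data directories and the commit log on fast storage matters because Cassandra's write path appends to the commit log on every write, and its read path falls back to the data directories whenever the working set exceeds memory.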
Transform Data Into A Strategic Asset
High-capacity, high-performance NVMe SSDs can produce amazing results with Cassandra. When you scale your local or cloud-based Apache Cassandra deployment, NVMe SSDs help you get more out of it.
Want to learn more?
Take a look at our complete lineup of SSDs with NVMe (be sure to look at the PRO and MAX families) and download the 9200 SSD Product Brief. Already convinced? Need to convince others? Download “The Business Case for NVMe SSDs” to help make your case.
If you have questions about the testing we ran, tweet me @GreyHairStorage or connect with Micron on Twitter @MicronStorage and on LinkedIn.
About Our Blogger
Doug is a Senior Technical Marketing Engineer for Micron's Storage Business Unit, with a focus on enterprise solid state drives.