Small or large, on-prem or off-prem, cloud deployments are proof that a technology advancement can revolutionize how business is done. Just as reliable, affordable, high-bandwidth internet enabled classic enterprise services like remote replication, flash memory and fast storage are now enabling many of the benefits of the cloud. Flexibility in all IT operations? Pay-as-you-go platforms with as-a-service deployments? Yes, and yes.
Many enterprises choose public cloud services to avoid absorbing the costs of storage hardware, data centers and the staff to manage it all. But on-premises and co-located clouds (remember, cloud is an operating model, not a deployment location) are still a big part of the mix, and there the timing of flash integration is under your control. Whether you’re kicking off a PaaS project on your bare-metal co-located cloud, offering virtual machines in an IaaS model, setting up a sandbox in your private cloud for DevOps, or modernizing legacy systems within a hybrid environment, you want the best infrastructure and the right services.
What’s a good time to move from hard disk drives (HDDs) to solid-state drives (SSDs) and persistent flash memory for cloud computing?
When You Need to Scale Up, Out and Back Down Again
Massive enterprise data sets are already in play. IDC forecasts that worldwide data creation may reach 163 zettabytes a year by 2025, and your organization’s IT infrastructure will almost certainly have more data to deal with, if it doesn’t already.
Flash storage makes it easier and more cost-effective to scale without impacting service. Scaling a legacy data center is mostly about horizontal scaling (scaling out): adding more machines to your pool of resources. Flash storage can do this, and it also enables efficient vertical scaling (scaling up): adding more power to existing servers. While scaling up by adding more or faster storage to an existing deployment may be simple, traffic might become bottlenecked somewhere else. Scaling out allows the simultaneous addition of more resources (compute, network and storage) and therefore more performance capability. Current scale-out flash architectures add capacity and performance together as nodes are added, and the system can often rebalance data onto new nodes automatically to take full advantage of the new resources.
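To make that concrete, here’s a back-of-the-envelope sketch of how capacity, performance and rebalancing scale with node count. The per-node capacity and IOPS figures are illustrative assumptions, not measured values for any particular product.

```python
# Back-of-the-envelope sketch (not a benchmark): how aggregate capacity and
# IOPS grow as nodes join a scale-out flash cluster, and how much data an
# even rebalance moves to a new node. Per-node figures are illustrative
# assumptions, not measured values for any particular product.

NODE_CAPACITY_TB = 50      # assumed usable flash capacity per node
NODE_READ_IOPS = 500_000   # assumed 4KiB random read IOPS per node

def cluster_totals(nodes: int) -> tuple[int, int]:
    """Aggregate usable capacity (TB) and read IOPS for an N-node cluster."""
    return nodes * NODE_CAPACITY_TB, nodes * NODE_READ_IOPS

def rebalance_moved_tb(used_tb: float, old_nodes: int, new_nodes: int) -> float:
    """Data moved when an even rebalance gives each new node its share."""
    return used_tb * (new_nodes - old_nodes) / new_nodes

for n in (3, 4, 8):
    cap, iops = cluster_totals(n)
    print(f"{n} nodes: {cap} TB usable, {iops:,} read IOPS")

# Growing from 3 to 4 nodes with 90 TB in use moves ~22.5 TB to the new node.
print(f"rebalance moves: {rebalance_moved_tb(90, 3, 4):.1f} TB")
```

The takeaway: in a scale-out design, performance grows alongside capacity, so adding nodes doesn’t starve the cluster of IOPS the way adding shelves behind a single controller can.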
Of course, we focus more on starting small with cloud servers and then adding capacity and performance as needed. But scaling down, often rapidly, is also part of cloud deployment lifecycles: when a DevOps testing project is complete, say, or the peak selling season is over. Enterprises are also investigating containers to better build, scale and manage cloud-native applications, and flash storage makes that more efficient. For example, it’s easier to migrate and redeploy servers for new workloads without worrying about having enough storage performance. Compare that with the complex process of hunting for input/output operations per second (IOPS) with disk-based solutions.
When You Need Fast, Available Storage
Whether you’re in business as a cloud services provider, or you just want to emulate one on your organization’s private or hybrid cloud platform, there’s an increased demand to make large data sets easily accessible at a moment’s notice for analysis and action. New architectures are needed to support fast-growing cloud analytics initiatives. If you want to build out systems that meet performance and scalability requirements now and position data centers for coming generations, look to flash.
Traditional data management approaches restrict storage flexibility through bottlenecks and a lack of easy scalability, often resulting in low data utilization rates, even when high-performance SSDs are integrated. All-flash storage, by contrast, enables management of many workloads on a single medium. Storage silos can be eliminated because your cloud infrastructure shares resources with consistent, predictable, high performance, making your IT simpler and more efficient.
How fast is fast storage? A 2017 Micron blog and tech brief compare 4KiB random read IOPS among three high-performance Micron SSDs (NVMe™, SATA and SAS interfaces) against a performance SAS HDD (15,000 RPM). On that chart, you need a magnifying glass to even see the SAS HDD’s IOPS.
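The size of that gap follows directly from the mechanics. Here’s the arithmetic as a minimal sketch; the seek time and the NVMe IOPS figure are typical, representative values, not numbers from the Micron brief.

```python
# Why the HDD bar all but disappears: a 15K RPM drive's random-read rate is
# bounded by mechanical latency. Seek time and the NVMe figure below are
# typical/representative values, not measurements from the Micron brief.

rpm = 15_000
avg_rotational_latency_ms = (60_000 / rpm) / 2  # half a revolution = 2.0 ms
avg_seek_ms = 3.5                               # typical 15K SAS average seek

service_time_ms = avg_rotational_latency_ms + avg_seek_ms
hdd_iops = 1_000 / service_time_ms
print(f"~{hdd_iops:.0f} random read IOPS per 15K RPM HDD")  # ~180

# An NVMe SSD delivering, say, 700,000 4KiB random read IOPS would take
# thousands of such drives to match on this workload.
print(f"HDDs needed to match 700k IOPS: {700_000 / hdd_iops:,.0f}")
```

No amount of spindle tuning closes a gap that is set by physics: every random read on a disk waits for a head to move and a platter to rotate.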
When You Need to Be Profitable
The cloud service provider space is highly competitive, so profitability is often the overriding goal. But even if you’re managing the on-prem part of your own hybrid cloud deployment, you still pursue cost control: your showback or chargeback rates must stay competitive with public cloud offerings. Flash storage’s consistent performance makes it easier to manage multiple dimensions of your infrastructure costs. Innovative memory technologies like NVDIMMs, which can each provide up to 32GB of DRAM-speed storage, can be leveraged for even higher performance when applications need it. Centralizing data center management and high-bandwidth virtualized enterprise IT on fast storage and flexible memory helps you fully utilize your hardware investment.
New developments in density per SSD, most recently 3D NAND flash technology, enable an enormous reduction in the number of racks required to store the same amount of data compared with HDDs. Data architects are leveraging this high-density media to reduce footprint, power and cooling costs, as well as to maximize IOPS per watt. A Micron brief shows a 13x reduction in the power cost (measured in kW) of storing 50PB of data in 2U servers when moving from 2011-2013-era HDDs to 2017 high-capacity SSDs. Near-term innovations like QLC flash are sure to make this delta even more dramatic.
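For a feel of where such savings come from, here’s a minimal sketch of the density-and-power arithmetic. The drive capacities and wattages are illustrative assumptions; the brief’s 13x figure reflects its own measured server configurations.

```python
# Minimal sketch of the density/power arithmetic behind the footprint claim.
# Drive capacities and wattages are illustrative assumptions only; the 13x
# figure in the Micron brief comes from its own configurations and data.

import math

TARGET_TB = 50 * 1000  # 50 PB expressed in TB

def drives_and_kw(drive_tb: float, watts_per_drive: float) -> tuple[int, float]:
    drives = math.ceil(TARGET_TB / drive_tb)
    return drives, drives * watts_per_drive / 1000

hdd_drives, hdd_kw = drives_and_kw(drive_tb=3, watts_per_drive=11)  # era HDD
ssd_drives, ssd_kw = drives_and_kw(drive_tb=8, watts_per_drive=5)   # high-cap SSD

print(f"HDD: {hdd_drives:,} drives drawing ~{hdd_kw:.0f} kW")
print(f"SSD: {ssd_drives:,} drives drawing ~{ssd_kw:.0f} kW")
print(f"drive-level power reduction: ~{hdd_kw / ssd_kw:.1f}x")
```

Fewer, denser, cooler drives also mean fewer servers and racks, which is where the rest of the brief’s savings come from.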
One common budget mistake is focusing on the purchase price of SSDs versus HDDs. When indirect costs like power, cooling and software licenses per node are factored in, the total cost of ownership (TCO) in many cases favors SSDs, as in the three examples in the Micron brief “What’s Your Data Storage Challenge.” In the bigger picture, flash storage can deliver significantly higher performance than traditional storage with lower power consumption. Combined with consistent performance and fewer “noisy neighbor” problems, the TCO of flash tells a very strong story.
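A minimal TCO sketch makes the point concrete. Every input below is an assumption chosen to illustrate the calculation, not pricing from the brief; substitute your own quotes, power rates and license terms.

```python
# Minimal TCO sketch: purchase price vs. total cost of ownership.
# All inputs are illustrative assumptions; plug in your own quotes,
# power rates and per-node license terms.

import math

def five_year_tco(drive_price: float, drives: int, drives_per_node: int,
                  watts_per_drive: float, license_per_node_yr: float,
                  years: int = 5, kwh_price: float = 0.10,
                  pue: float = 1.6) -> float:
    nodes = math.ceil(drives / drives_per_node)
    capex = drive_price * drives
    kwh = drives * watts_per_drive / 1000 * 24 * 365 * years * pue
    opex = kwh * kwh_price + license_per_node_yr * nodes * years
    return capex + opex

# 1 PB usable: 250 x 4TB HDDs vs. 125 x 8TB SSDs (assumed specs and prices)
hdd = five_year_tco(drive_price=150, drives=250, drives_per_node=12,
                    watts_per_drive=11, license_per_node_yr=5_000)
ssd = five_year_tco(drive_price=900, drives=125, drives_per_node=24,
                    watts_per_drive=5, license_per_node_yr=5_000)
print(f"HDD 5-year TCO: ${hdd:,.0f}")  # higher, driven mainly by per-node licensing
print(f"SSD 5-year TCO: ${ssd:,.0f}")
```

Notice how per-node software licensing, not drive price, dominates once denser SSD nodes shrink the node count; that’s exactly the kind of indirect cost a purchase-price comparison misses.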
When You Need to Balance Innovation and Product Longevity
“Design your cloud center of excellence for constant evolution,” stated a recent CloudRumblings feature. With IT accelerating at supersonic speeds, cloud programs must keep up. Rapid technology changes complicate purchasing enterprise or cloud storage hardware. Rotating media is winding down, so more and more enterprises are investing in SSDs.
Choose well. Workload characteristics, IOPS needed, bandwidth, and read/write ratio can direct you to the right flash storage for your use case. And you’ll want to purchase from a trusted vendor whose products have a proven track record in the cloud ecosystem.
We focus on adaptive IT structures. Micron’s consistent flash memory and storage have been proven to help boost workload and application performance for some of the world’s largest and most innovative cloud companies. We draw on deep relationships with many of the top cloud service companies to stay in sync with the latest cloud developments, knowledge we share through collaboration on solutions. We’re also sensitive to roadmap stability and extended product lifecycles. Micron offers cloud customers a rich portfolio of flash storage solutions for their most demanding deployments, always aligned with the environment, application and workload.
Today’s cloud computing environments demand performance, high bandwidth and cost-effective scalability, and flash storage satisfies those demands well, whether in a cloud architecture or an enterprise data center. If you’d like to see more, visit micron.com/solid-state-drives.
Andrew Braverman is worldwide director of field systems engineering for Micron Technology, a global memory and storage company based in Boise, Idaho.