
Why the performance of your storage system matters for AI workloads

Currie Munce | June 2024

A guide to understanding some key factors that influence the speed and efficiency of your data storage

Data is the lifeblood of any modern business, and how you store, access and manage it can make a dramatic difference in your productivity, profitability and competitiveness. The emergence of artificial intelligence (AI) is transforming every industry and forcing businesses to re-evaluate how they can use data to accelerate innovation and growth. However, AI training and inferencing pose unique challenges for data management and storage, as they require massive amounts of data, high performance, scalability and availability.

Not all storage systems are created equal, and there are many factors that can affect their performance. In this blog post, we will explore some of the main factors that influence storage system performance for AI and, importantly, how your choice of underlying storage media affects them.

Key attributes of AI workloads

AI workloads are data-intensive and compute-intensive, meaning that they need to process large volumes of data at high speed and with low latency. Storage plays a vital role in enabling AI workloads to access, ingest, process and store data efficiently and effectively. Some key attributes of typical AI workloads that affect storage requirements are:

  • Data variety: AI workloads need to access data from multiple sources and formats, such as structured, unstructured or semi-structured data, and from various locations, such as on-premises, cloud or edge. Storage solutions need to provide fast and reliable data access and movement across different environments and platforms.
  • Data velocity: AI workloads need to process data in real-time or near-real-time. Storage solutions need to deliver high throughput, low latency and consistent performance for data ingestion, processing and analysis.
  • Data volume: As AI models grow in complexity and accuracy and GPU clusters grow in compute power, storage solutions need to provide flexible and scalable capacity and performance.
  • Data reliability and availability: AI workloads need to ensure data integrity, security and extremely high availability, particularly when connected to large GPU clusters that are intolerant of interruptions in data access. 

Factors that affect storage system performance

Storage system performance is not a single metric but a combination of several factors that depend on the characteristics and requirements of your data, applications and data center infrastructure. Some of the most crucial factors are:

  • Throughput: The rate at which your storage system can transfer data to and from the network or the host. Higher throughput can improve performance by increasing the bandwidth and reducing the congestion and bottlenecks of your data flow. The throughput is usually limited by either the network bandwidth or the speed of the storage media.
  • Latency: The time it takes for your storage system to respond to a read or write request. Lower latency can improve performance by reducing GPU idle time and improving the system’s responsiveness to user inputs. The latency of mechanical devices such as HDDs is inherently much higher than that of solid-state devices (SSDs).
  • Scalability: The ability of your storage system to adapt to changes in data volume, velocity and variety. High scalability is key to enabling your storage system to grow and evolve with your business needs and goals. The biggest challenge in growing the amount of data your system can store and manage is maintaining performance scaling without hitting bottlenecks or storage device limitations.
  • Resiliency: The ability of your storage system to maintain data integrity and availability in the event of failures, errors or disasters. Higher resiliency can improve performance by reducing the frequency and impact of data corruption, loss and recovery.
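
To make the throughput and latency factors concrete, here is a minimal Python sketch that times sequential 1 MiB reads against a file on the device under test. The test.dat path and the block size are stand-ins you would replace for your own setup, and because it goes through the normal file API, it measures the full OS-plus-device path an application actually sees, not the raw media speed.

```python
import time

PATH = "test.dat"          # hypothetical test file on the target device
BLOCK_SIZE = 1024 * 1024   # read in 1 MiB requests

latencies = []
total_bytes = 0

# Unbuffered binary reads so each loop iteration issues a real request.
with open(PATH, "rb", buffering=0) as f:
    start = time.perf_counter()
    while True:
        t0 = time.perf_counter()
        block = f.read(BLOCK_SIZE)
        dt = time.perf_counter() - t0
        if not block:
            break
        latencies.append(dt)
        total_bytes += len(block)
    elapsed = time.perf_counter() - start

# Throughput is bytes moved per unit time; latency is per-request delay.
throughput_mib_s = total_bytes / elapsed / (1024 * 1024)
avg_latency_ms = 1000 * sum(latencies) / len(latencies)
p99_latency_ms = 1000 * sorted(latencies)[int(0.99 * len(latencies))]

print(f"throughput:  {throughput_mib_s:.1f} MiB/s")
print(f"avg latency: {avg_latency_ms:.3f} ms, p99: {p99_latency_ms:.3f} ms")
```

Running this against the same file on an HDD-backed and an SSD-backed volume shows the latency gap directly, and the p99 figure matters as much as the average for GPU-feeding workloads that stall on outliers.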

Storage media alternatives

Hard disk drives (HDDs) and solid-state drives (SSDs) are the two main types of devices employed for persistent storage in data center applications. HDDs are mechanical devices that use rotating disk platters with a magnetic coating to store data, while SSDs store data in solid-state flash memory chips. HDDs have been the dominant storage devices for decades: they offer the lowest cost per bit and long-term, power-off durability, but they are slower and less reliable than SSDs. SSDs offer higher throughput, lower latency, higher reliability and denser packaging options.

As technology advances and computing demands increase, the mechanical nature of the HDD may not allow it to keep pace in performance. There are a few options that system designers can deploy to extend the effective performance of HDD-based storage systems, such as mixing hot and cold data (hot data borrowing performance from the colder data), striping data across many HDD spindles in parallel (increasing throughput but not improving latency), overprovisioning HDD capacity (in essence, provisioning for IO and not capacity), and adding SSD caching layers for latency outliers (see the recent Micron blog by Steve Wells, "HDDs and SSDs. What are the right questions?"). These system-level solutions have limited scalability before their cost becomes prohibitive, and how far they can be extended depends on the level of performance an application requires. For many of today’s AI workloads, HDD-based systems are falling short on scalability of performance and power efficiency.
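
The overprovisioning option lends itself to back-of-the-envelope arithmetic. The sketch below uses assumed, illustrative drive figures (a 24TB HDD sustaining roughly 250 MB/s) against a hypothetical 1PB, 50 GB/s design target to show how the throughput requirement, rather than the capacity requirement, ends up dictating the number of spindles purchased.

```python
import math

# Back-of-the-envelope: "provisioning for IO, not capacity".
# All figures below are illustrative assumptions, not vendor specs.

CAPACITY_TARGET_TB = 1_000       # 1 PB usable capacity target
THROUGHPUT_TARGET_GBS = 50       # aggregate read throughput target (GB/s)

HDD_CAPACITY_TB = 24             # assumed capacity per HDD
HDD_THROUGHPUT_GBS = 0.25        # assumed sustained throughput per HDD

drives_for_capacity = math.ceil(CAPACITY_TARGET_TB / HDD_CAPACITY_TB)
drives_for_throughput = math.ceil(THROUGHPUT_TARGET_GBS / HDD_THROUGHPUT_GBS)
drives_needed = max(drives_for_capacity, drives_for_throughput)

print(f"drives for capacity alone:   {drives_for_capacity}")
print(f"drives for throughput alone: {drives_for_throughput}")
print(f"drives you must provision:   {drives_needed} "
      f"({drives_needed / drives_for_capacity:.1f}x the capacity-only count)")
```

With these assumptions, meeting the throughput target takes roughly five times the spindles that capacity alone would require, leaving about four-fifths of the purchased capacity stranded, which is why these workarounds stop scaling economically.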

High-capacity, SSD-based storage systems, though, can provide a less complex and more extendable solution, and they are rapidly becoming the storage media of choice for high-performance AI data lakes at many large GPU-centric data centers. At the drive level, on a cost-per-bit basis, these SSDs are more expensive than HDDs. At the system level, however, systems built with these SSDs can have better operating costs than HDD-based systems when you consider these improvements (a rough sizing sketch follows the list):

  • Much higher throughput
  • Greater than 100 times lower latency
  • Fewer servers and racks per petabyte needed
  • Better reliability with longer useful lifetimes
  • Better energy efficiency for a given level of performance 
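
To illustrate the servers, racks and power items above, the sketch below sizes a hypothetical 100PB data lake with made-up drive and enclosure figures (24TB HDDs versus 61TB SSDs, 24 drives per 2U server, 20 servers per rack). Every number is an assumption chosen for the arithmetic, not a measured or vendor-quoted spec, so read the output as a scaling illustration rather than a TCO analysis.

```python
import math

# Illustrative scaling sketch: drives, servers, racks and media power
# for a 100 PB store. All capacities, densities and wattages below are
# hypothetical assumptions, not vendor specifications.

CAPACITY_TB = 100_000            # 100 PB target, in TB
SERVERS_PER_RACK = 20            # assumed 2U servers per rack

media = {
    "HDD": {"tb_per_drive": 24, "drives_per_server": 24, "watts_per_drive": 10},
    "SSD": {"tb_per_drive": 61, "drives_per_server": 24, "watts_per_drive": 20},
}

for name, m in media.items():
    drives = math.ceil(CAPACITY_TB / m["tb_per_drive"])
    servers = math.ceil(drives / m["drives_per_server"])
    racks = math.ceil(servers / SERVERS_PER_RACK)
    power_kw = drives * m["watts_per_drive"] / 1000
    print(f"{name}: {drives:>5} drives, {servers:>3} servers, "
          f"{racks:>2} racks, ~{power_kw:.0f} kW media power")
```

Under these assumptions the SSD build needs less than half the servers and racks, and because each SSD also delivers far more throughput than an HDD, the performance delivered per watt and per rack is where the energy-efficiency advantage in the list comes from.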

The capacity of SSDs is expected to grow beyond 120TB in the next few years. As capacities grow and the pricing gap between SSDs and HDDs narrows, these SSDs can become attractive alternatives for other workloads that demand higher-than-average performance or need much lower latency on large data sets, such as video editing and medical imaging diagnostics.

Conclusion

Storage performance is an important design criterion for systems running AI workloads. It affects system performance, scalability, data availability and overall system cost and power requirements. Therefore, it’s important that you understand the features and benefits of different storage options and select the best storage solution for your AI needs. By choosing the right storage solution, you can optimize your AI workloads and achieve your AI goals.

Currie Munce is a Senior Technology Advisor and Strategist for Micron’s Storage Business helping to define storage architecture and technology directions for the company.
