What it really took to compute 314 trillion digits of π
StorageReview’s successful computation of π to 314 trillion digits set a new world record, but the goal was never symbolic. It was a deliberately extreme workload meant to push modern server storage to its limits and answer a practical question: can a single system sustain petabyte‑scale I/O continuously for months?
After more than 110 days of nonstop computation on a single Dell™ PowerEdge™ R7725, the answer was affirmative, provided the storage architecture was built for sustained performance and consistency, not just short bursts.
At a glance
- An I/O marathon: sustained mixed read/write pressure for more than three months
- More than 2.1 PB of usable flash capacity inside a single server
- Takeaway applies to long-running HPC and AI jobs: consistency protects time-to-results
To build the required storage architecture, StorageReview equipped the system with 40 Micron 6550 ION SSDs, each offering 60 TB of usable capacity in the E3.S form factor. Understanding the "why" behind the scale of the storage—both in drive count and total capacity—is essential to understanding what this record actually demonstrates.
Why this job needed over two petabytes of flash
Computing π at this scale is not about storing the final answer. The output itself is small relative to the working data required to get there.
At 314 trillion digits, y-cruncher—the application used for the record—requires enormous scratch space to support:
- Large temporary arrays for FFT-intensive math operations
- Frequent full state checkpoints to protect weeks of progress
- Validation data to ensure correctness during a monthslong run
- Multi-precision intermediate values used during computation
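A back-of-envelope estimate (my own arithmetic, not figures from the run) shows why the final output is small relative to the scratch space. A decimal digit carries about log₂(10) ≈ 3.32 bits of information, so even the full 314-trillion-digit result fits in a few hundred terabytes:

```python
import math

DIGITS = 314_000_000_000_000  # 314 trillion decimal digits

# A tightly packed binary representation needs ~log2(10) bits per digit.
bits_per_digit = math.log2(10)
packed_bytes = DIGITS * bits_per_digit / 8

# Plain ASCII text (one byte per digit) is the upper bound.
ascii_bytes = DIGITS

TB = 10**12
print(f"packed binary: ~{packed_bytes / TB:.0f} TB")  # ~130 TB
print(f"ASCII text:    ~{ascii_bytes / TB:.0f} TB")   # ~314 TB
```

Either way, the result itself is roughly an order of magnitude smaller than the multi-petabyte working set the computation needed along the way.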
To meet these requirements, StorageReview provisioned more than 2.1 PB of usable flash capacity in the system.
- 34 of the 40 Micron SSDs were allocated to scratch space for y-cruncher, forming the high-bandwidth working tier
- The remaining six SSDs stored the final π result in a RAID 10 configuration
At peak, the workload consumed up to 1.43 PiB of storage simultaneously, with individual checkpoints reaching hundreds of terabytes. This capacity was not overprovisioned; it was required to complete the computation safely and efficiently.
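The headline capacity figures can be sanity-checked with simple arithmetic. This sketch uses the nominal 60 TB per-drive number from the article; real formatted capacities and overheads account for the gap between the >2.4 PB raw and ~2.1 PB usable totals:

```python
# Sanity-check of the article's capacity figures (nominal 60 TB per drive;
# actual formatted capacities differ slightly).
DRIVE_TB = 60
TOTAL_DRIVES = 40
SCRATCH_DRIVES = 34
RESULT_DRIVES = 6

total_tb = TOTAL_DRIVES * DRIVE_TB          # 2400 TB ≈ 2.4 PB across the array
scratch_tb = SCRATCH_DRIVES * DRIVE_TB      # 2040 TB y-cruncher working tier
result_tb = RESULT_DRIVES * DRIVE_TB // 2   # RAID 10 mirrors halve capacity → 180 TB

# The 1.43 PiB peak working set, expressed in decimal petabytes:
peak_pb = 1.43 * 2**50 / 10**15             # ≈ 1.61 PB

print(total_tb, scratch_tb, result_tb, round(peak_pb, 2))
```

Note that binary units matter at this scale: 1.43 PiB is about 1.61 PB, leaving only modest headroom within the ~2 PB scratch tier.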
Sustained I/O characteristics of the π computation
This was not a short benchmark run intended to showcase peak performance. The π computation applied continuous pressure on storage for more than three months, with no practical opportunity for downtime or recovery.
The workload exhibited characteristics common to advanced HPC and AI environments:
- Sustained high bandwidth read and write operations
- Continuous heavy write activity over long durations
- Predictable performance requirements with minimal tolerance for latency spikes
- Operational risk, where a storage failure could invalidate weeks of work
Over the course of the run, the system remained online continuously and never had to resume from any failure.
This matters because many production workloads fail not due to lack of peak performance, but due to instability or inconsistency over time. Long-running jobs amplify small weaknesses in the storage stack.
High-density NVMe™ in a single server architecture
Historically, workloads with these characteristics would have pushed teams toward distributed storage systems or multi-node clusters to achieve sufficient capacity and aggregate I/O.
Instead, StorageReview completed the entire computation within a single server chassis.
By deploying 40 high-capacity NVMe SSDs in one Dell™ PowerEdge™ R7725, the system delivered:
- Petabyte-scale capacity without external storage arrays
- Aggregate bandwidth to sustain compute for months
- A simplified operational model with fewer components and failure domains
| Component | Specification |
| --- | --- |
| Server platform | Dell™ PowerEdge™ R7725 |
| Processor | Dual AMD EPYC™ processors |
| System memory | High‑capacity DDR5 memory (multi‑terabyte class) |
| Storage drives | 40 × Micron 6550 ION NVMe SSDs |
| Total raw flash capacity | >2.4 PB |
| Usable flash capacity | ~2.1 PB |
| Scratch storage allocation | 34 SSDs dedicated to y‑cruncher working data |
| Result storage | 6 SSDs in RAID 10 for final π output |
| Storage interface | PCIe® Gen5 NVMe |
| Operating system | Linux® |
| Application | y‑cruncher (high‑precision mathematics) |
The takeaway is not that every workload requires dozens of drives in one server. Rather, the result highlights how modern high-density NVMe storage changes the architectural trade-offs. Workloads that once demanded scale-out complexity can now, in some cases, be addressed with scale-up designs.
Relevance to modern HPC and AI workloads
While the workload was unusual, the storage behavior observed during the run closely mirrors demands seen in production environments, including:
- Large-scale AI training, where terabyte-scale checkpointing is frequent, and storage performance directly impacts training time
- Inference pipelines and feature stores, where predictable latency matters more than peak throughput
- Scientific simulations and modeling, where jobs run for weeks or months and restart costs are prohibitive
- Advanced analytics pipelines, where large working datasets must stay close to compute
In each of these cases, storage consistency and endurance over time directly impact job completion, system utilization, and operational risk.
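To make the checkpointing point concrete: the run's checkpoints reached hundreds of terabytes, and the time each one takes scales inversely with sustained (not peak) write bandwidth. The bandwidth figures in this sketch are illustrative assumptions, not measurements from the run:

```python
# Illustrative only: the checkpoint size reflects the article's "hundreds of
# terabytes"; the aggregate bandwidth figures are assumptions, not measurements.
CHECKPOINT_TB = 300

for agg_write_gbs in (25, 50, 100):  # assumed sustained GB/s across the array
    seconds = CHECKPOINT_TB * 1000 / agg_write_gbs
    print(f"{agg_write_gbs:>3} GB/s -> {seconds / 60:.0f} min per checkpoint")
```

At an assumed 25 GB/s a 300 TB checkpoint takes over three hours; at 100 GB/s it drops under an hour. Over months of repeated checkpoints, that difference compounds directly into compute idle time, which is why sustained bandwidth and consistency matter more than burst numbers.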
Key technical takeaways from the record
This record was not about setting a mathematical milestone alone. It demonstrated several practical realities of modern storage-centric compute:
- Petabyte-scale scratch workloads can be supported entirely on NVMe
- High-capacity SSDs can sustain extreme I/O pressure without performance collapse
- Single-node architectures can now handle workloads once reserved for clusters
- Performance consistency and endurance are as important as raw bandwidth
These findings reflect how storage increasingly determines the feasibility and efficiency of advanced compute workloads.
Implications for data center strategy and infrastructure planning
Beyond setting a technical milestone, this work highlights how storage increasingly shapes both operational outcomes and architectural choices in modern data centers.
For business and IT leaders, the most important takeaway is not peak throughput, but predictable performance at scale. Long-running workloads — whether in AI training, large-scale analytics or scientific computing — amplify inefficiencies and failures. When storage becomes a bottleneck, expensive compute resources sit idle, costs escalate and delivery timelines extend.
This record illustrates that high-capacity NVMe can shift that balance by keeping data consistently available to compute over extended periods, reducing variability and operational risk.
Considerations when planning infrastructure upgrades
As teams plan upgrades for AI and other data-intensive workloads, several evaluation criteria become increasingly important:
- Sustained throughput rather than burst performance: Short benchmarks rarely reflect real workloads. Months-long consistency under mixed read/write pressure matters more than peak numbers achieved in minutes.
- Performance density per server: The ability to consolidate petabyte-scale capacity and I/O into a single system has implications for power, space, networking and management overhead.
- Latency predictability and tail behavior: Average performance tells only part of the story. Latency outliers can stall pipelines, delay checkpoints and cascade into job failures.
- Endurance and reliability under steady load: Long-running jobs expose weaknesses that do not appear during short tests. Storage must maintain performance and data integrity as utilization approaches steady state.
- Operational simplicity: Reducing dependence on external storage fabrics or large clusters can shrink the blast radius of failures and simplify deployment and scaling.
Aligning storage choices with data center strategy
One of the broader lessons from this record is how modern NVMe storage enables rethinking where complexity belongs. In some scenarios, scaling up with higher storage density in fewer nodes can replace the need to scale out. This can lead to:
- Fewer servers and interconnects
- Lower power and cooling demands per unit of work
- Simplified automation and lifecycle management
- Faster deployment and recovery times
This does not eliminate the need for distributed architecture, but it expands the set of practical design options available to infrastructure teams.
As AI and analytics workloads continue to grow in size and duration, storage decisions will increasingly influence not just performance, but cost efficiency, resilience and organizational velocity.
The bottom line
Computing 314 trillion digits of π left no margin for error. The system operated under constant load for more than 110 days, which should have exposed any weakness in performance, endurance, or reliability.
None surfaced.
Instead, the result demonstrated that high-capacity Micron NVMe SSDs can deliver sustained performance, operational stability and performance density at a level that meaningfully changes infrastructure design choices.
The lesson is not about π. It is about what is possible when storage is designed to support very large, long-running, data-intensive workloads without surprises. To learn more from our storage experts, please visit our Data Center Insights page.