- StorageReview’s physical server calculated 314 trillion digits without a distributed cloud infrastructure
- The entire calculation ran continuously for 110 days without interruption
- Power consumption dropped dramatically compared to previous cluster-based Pi records
A new benchmark in large-scale numerical computation has been set with the calculation of 314 trillion digits of pi on a single local system.
The run was completed by StorageReview, whose result surpassed previous cloud-based efforts, including Google Cloud’s 100 trillion digit calculation in 2022.
Unlike hyperscale approaches that relied on massive distributed resources, this record was achieved on one physical server using tightly controlled hardware and software choices.
Runtime and system stability
The calculation ran continuously for 110 days, significantly shorter than the roughly 225 days required by the previous large-scale record, although the previous effort yielded fewer digits.
The uninterrupted execution was attributed to the stability of the operating system and limited background activity, as well as a balanced NUMA topology and careful memory and storage tuning designed to match the behavior of the y-cruncher application.
The workload was treated less as a demonstration and more as a prolonged stress test of production-quality systems.
At the center of the effort was a Dell PowerEdge R7725 system equipped with two AMD EPYC 9965 processors, providing 384 CPU cores along with 1.5 TB of DDR5 memory.
The storage consisted of forty 61.44 TB Micron 6550 Ion NVMe drives, providing roughly 2.1 PB of raw capacity.
Thirty-four of these drives were allocated to y-cruncher scratch space in a JBOD layout, while the remaining drives formed a software RAID volume to protect the final output.
This configuration prioritized throughput and power efficiency over full data resiliency during computation.
The numerical workload generated significant disk activity, including approximately 132 PB of logical reads and 112 PB of logical writes over the course of the run.
Maximum logical disk usage reached around 1.43 PiB, while the largest checkpoint exceeded 774 TiB.
SSD wear metrics reported about 7.3 PB written per drive, totaling about 249 PB across the swap drives.
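The total wear figure follows directly from the per-drive number. A quick back-of-envelope check, using only the drive count and per-drive wear quoted above:

```python
# Sanity-check of the reported SSD wear totals, using figures from the article.
swap_drives = 34          # drives allocated to y-cruncher scratch space
wear_per_drive_pb = 7.3   # reported PB written per drive

total_wear_pb = swap_drives * wear_per_drive_pb
print(f"Total wear across swap drives: {total_wear_pb:.1f} PB")
# → Total wear across swap drives: 248.2 PB
```

That lands at roughly 248 PB, consistent with the ~249 PB total reported once per-drive rounding is taken into account.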
Internal benchmarks showed sequential read and write performance more than doubled compared to the previous 202 trillion digit platform.
For this setup, power consumption was reported at around 1,600 watts, with total energy consumption of approximately 4,305 kWh, or 13.70 kWh per trillion digits calculated.
This number is far lower than estimates for the previous 300 trillion digit cluster-based record, which reportedly consumed over 33,000 kWh.
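Normalizing both records by digits produced makes the efficiency gap concrete. A minimal sketch using the article's quoted totals (the 33,000 kWh figure is the reported estimate for the earlier cluster-based run):

```python
# Energy efficiency per trillion digits, from the totals quoted in the article.
new_kwh, new_digits_tn = 4305, 314     # this single-server run
old_kwh, old_digits_tn = 33000, 300    # previous cluster-based record (estimate)

new_eff = new_kwh / new_digits_tn      # kWh per trillion digits
old_eff = old_kwh / old_digits_tn
print(f"New: {new_eff:.1f} kWh/trillion digits, old: {old_eff:.0f} kWh/trillion digits")
# → New: 13.7 kWh/trillion digits, old: 110 kWh/trillion digits
```

On these numbers, the single-server run used roughly one eighth of the energy per trillion digits of the earlier cluster-based effort.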
The result suggests that for certain workloads, carefully tuned servers and workstations can outperform cloud infrastructure in efficiency.
However, this assessment applies narrowly to this class of calculations and does not automatically include all scientific or commercial use cases.



