- Distributed microdata centers turn stranded electricity into usable AI compute
- The network targets 400,000 GPUs deployed across 1,000 modular sites globally
- Energy-first deployment avoids delays caused by slow grid connection approvals
AI infrastructure is running into a hard limit that has little to do with chips and everything to do with power. New data centers are often ready to build but wait years for permission to connect to already overcrowded power grids.
That delay has created interest in building data centers where power is available rather than expanding the grid to reach them.
French AI infrastructure company Antimatter is rolling out a network of 1,000 modular microdata centers located directly next to energy sources across the US, Europe and the GCC region.
1GW of capacity secured through grid connection agreements
These smaller facilities run AI workloads on site using electricity that congested grids can't deliver to customers, rather than waiting years for new transmission lines to be built.
Each facility is built from container-style modules that hold up to 400 GPUs and can be deployed in about five months.
Traditional hyperscale builds often require more than two years to reach equivalent readiness.
Wind, solar, hydro and biogas plants are the main targets because many already generate electricity that can't always reach customers when transmission capacity is limited.
Siting data centers next to these plants means power that would otherwise be curtailed is used for compute instead.
Antimatter says more than 1 GW of capacity has been secured through grid connection agreements and reserved locations, with over 160 MW already operating in Texas and Oregon.
Ten units across eight locations form the early footprint, with hundreds more installations in development.
The first major build phase centers on 100 deployments planned for 2027, supporting more than 40,000 GPUs and about 3.6 exaFLOPS of computing capacity.
Long-term plans extend to 1,000 sites by the end of 2030, delivering more than 400,000 GPUs and around 36 exaFLOPS across dozens of countries.
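Those totals scale directly from the stated module size: at 400 GPUs per container, 100 fully populated sites account for the 40,000-GPU figure, and the exaFLOPS targets work out to roughly 90 TFLOPS per GPU (the article doesn't say which numeric precision those exaFLOPS figures assume). A minimal sketch of that back-of-envelope arithmetic, using only the numbers quoted above:

```python
# Back-of-envelope check of the scaling figures cited in the article.
# All inputs come from the article; the per-GPU throughput is an implied
# value, and the precision behind the exaFLOPS figures is not specified,
# so treat the result as illustrative only.

PHASES = {
    "2027 phase": {"gpus": 40_000, "exaflops": 3.6},
    "2030 target": {"gpus": 400_000, "exaflops": 36.0},
}

GPUS_PER_MODULE = 400  # maximum per container-style module, per the article

for name, p in PHASES.items():
    implied_modules = p["gpus"] / GPUS_PER_MODULE              # fully populated modules
    tflops_per_gpu = p["exaflops"] * 1e18 / p["gpus"] / 1e12   # implied per-GPU throughput
    print(f"{name}: {implied_modules:.0f} full modules, ~{tflops_per_gpu:.0f} TFLOPS per GPU")
```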
“In the age of artificial intelligence, intelligence is not the bottleneck – it is energy,” said David Gurlé, co-founder, executive chairman and CEO of Antimatter.
“The infrastructure built for the first era of cloud and AI was designed around centralized scale. But the era of inference requires a different model: more distributed, faster to deploy, and sovereign by design. That’s the infrastructure Antimatter is building.”
Much of the demand comes from inference workloads, where trained models run continuously inside copilots, automated services and real-time decision systems.
Smaller distributed facilities linked by a shared software layer let these systems operate as one network while keeping processing physically closer to users.