How the currency of the AI economy actually works

The AI companies locked in a blistering competition for dominance are running into a roadblock that's threatening to stunt their meteoric rise: scarce "compute."
Why it matters: Rivals are making unprecedented deals and forming unlikely alliances to solve this issue, injecting fresh intrigue into the AI race that could define the coming years.
What is compute capacity?
Compute capacity refers to the hardware processing power, networking and storage needed to process vast amounts of data and train or serve AI models, largely through graphics processing units, or GPUs.
- Accelerator chips like Nvidia GPUs sit at the center of AI processing, and heightened demand during the industry's rapid buildout has left many companies with limited compute access.
What AI companies buy with compute
AI labs are operating at a scale where compute procurement increasingly resembles industrial infrastructure, as they buy the hardware, energy and processing time required to train, run and scale AI models.
- Some AI companies like Anthropic have faced compute shortages, which can degrade the experience for customers.
Why is AI production so costly?
It's not just chips that are needed to drive AI.
Zoom in: AI production requires high-speed networking, storage, power delivery infrastructure and cloud-platform access, along with the specialized equipment makers and lasers used to fabricate chips, professor Daswin De Silva, deputy director of the Centre for Data Analytics and Cognition at La Trobe University in Australia, tells Axios.
- A big problem AI firms face is the limited options for semiconductors, with Taiwan Semiconductor Manufacturing Co. (TSMC) having "almost a monopoly," De Silva says Thursday in a phone interview.
- "That's the hardware. But if you look at the consumables, energy and water, that is also a significant limiting factor for new data center projects," he adds.
Meanwhile, AI companies must secure processing time and sufficient data storage, AI professor Ali Knott of Victoria University in Wellington, New Zealand, tells Axios.
Why data centers are factories for AI
A data center is built to train, host or run inference on models, De Silva says.
Reality check: When companies like Anthropic and OpenAI sign partnership deals, they're not buying data center real estate directly. They're buying key components like reserved GPU capacity, networking bandwidth and storage.
- Companies running heavy AI workloads increasingly rely on data center providers that let them rent space for their own hardware, known as colocation providers.
- Demand for high-density power and liquid cooling is pushing some firms away from building their own data centers, and colocation providers house GPU-dense infrastructure.
How AI companies procure chips
Some companies buy large quantities of chips directly.
Case in point: Meta is among Nvidia's largest customers for chips, enabling the parent of Facebook and Instagram to increase computing power.
- The tech giant owns most of its data centers, which house networked computers, servers and storage systems, though it's moving toward a hybrid model that heavily uses leasing to fund AI expansion.
Zoom out: Firms like Meta, Microsoft, Alphabet (Google) and Amazon (AWS) preorder Nvidia chips years ahead with an eye on future supply.
- Knott says in a Thursday phone interview that Nvidia has carved out a nice niche, but "other companies are all in a more risky situation because it could be that they get surpassed."
- He adds that while AI firms may lose market share, the chip manufacturers should benefit "no matter who wins the AI race to build the best model."
The bottom line: In the AI race, compute capacity is becoming as strategically important as the models themselves.
Go deeper: Why an AI productivity boom could justify higher rates
