Concept
What is AI compute?
The basic resource behind training and running AI models.
Learn
AI runs on a physical stack: chips, power, data centers, cloud capacity, and the prices buyers pay to access them. Start here to learn what AI compute is, why it matters, and how it is becoming a market of its own.
The capacity behind training and inference — chips, time, and access.
Compute shapes how fast AI scales, what it costs, and who can compete.
Chip supply, cloud capacity, power, rentals, spot markets, and forward pricing all move the supply and price of compute.
Start here
Four lessons that set up everything else on ComputeTape.
Concept
The basic resource behind training and running AI models.
Concept
Why chips, power, and capacity are becoming economic constraints.
Compare
How accelerator generations affect performance, supply, and cost.
Unit
The basic unit behind compute pricing.
Learning paths
Once you know the basics, explore the infrastructure, market structure, and emerging topics that shape compute supply and price.
Concept
The basic resource behind training and running AI models.
Unit
The basic unit behind compute pricing.
Compare
How accelerator generations affect performance, supply, and cost.
Power
Why electricity and site capacity shape AI compute markets.
Data centers
The physical site where chips, power, cooling, networking, and operations come together.
Cooling
Why heat limits how densely AI chips can be deployed and operated.
Networking
Why fast interconnects turn individual chips into useful AI clusters.
Memory
Why high-bandwidth memory can constrain accelerator supply and model performance.
Operator
Compute-first cloud operators and why they matter.
Rentals
How buyers rent accelerator capacity and what rental signals can reveal.
Spot
How short-term, interruptible capacity is priced and why it can become a market signal.
Futures
Forward-looking pricing for compute capacity.
Curve
How the shape of the forward curve becomes a market signal.
Project
A proposed chip and compute-capacity project.
Project
A mega-scale AI infrastructure buildout and what it says about future compute supply.
Project
xAI’s large-scale compute buildout and why power can become the bottleneck after GPUs arrive.
Project
AWS’s custom-silicon AI cluster and why proprietary chips matter to compute supply.
Project
Meta’s multi-gigawatt AI campuses and what industrial-scale compute really means.
Why now