AI compute market signals

Learn the market behind AI

AI runs on a physical stack: chips, power, data centers, cloud capacity, and the prices buyers pay to access them. Start here to learn what AI compute is, why it matters, and how it is becoming a market of its own.

Basics

What is AI compute?

The capacity behind training and inference — chips, time, and access.

Economics

Why it matters

Compute shapes how fast AI scales, what it costs, and who can compete.

Market

How the market works

Chips, cloud, power, rentals, spot, and forward pricing all move supply and price.

Start here

New to the market? Begin with these.

Four lessons that set up everything else on ComputeTape.

Concept

What is AI compute?

The basic resource behind training and running AI models.

Concept

Why compute matters

Why chips, power, and capacity are becoming economic constraints.

Compare

H100 vs H200 vs B200

How accelerator generations affect performance, supply, and cost.

Unit

What is a GPU-hour?

The basic unit behind compute pricing.

Learning paths

Explore by topic

Once you know the basics, explore the infrastructure, market structure, and emerging topics that shape compute supply and price.

Compute basics

Concept

What is AI compute?

The basic resource behind training and running AI models.

Unit

What is a GPU-hour?

The basic unit behind compute pricing.

Compare

H100 vs H200 vs B200

How accelerator generations affect performance, supply, and cost.

Infrastructure

Power

Why power matters

Why electricity and site capacity shape AI compute markets.

Data centers

What is a data center?

The physical site where chips, power, cooling, networking, and operations come together.

Cooling

Why cooling matters

Why heat limits how densely AI chips can be deployed and operated.

Networking

Why networking matters

Why fast interconnects turn individual chips into useful AI clusters.

Memory

Why memory matters

Why high-bandwidth memory can constrain accelerator supply and model performance.

Market structure

Operator

What is a neocloud?

Compute-first cloud operators and why they matter.

Rentals

What are GPU rentals?

How buyers rent accelerator capacity and what rental signals can reveal.

Spot

What are spot prices?

How short-term, interruptible capacity can become a pricing signal.

Futures

What are compute futures?

Forward-looking pricing for compute capacity.

Curve

How to read a forward curve

How curve shape becomes a market signal.
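The curve lesson above turns on one idea: whether forward prices rise or fall with tenor is itself a signal. A minimal sketch of that reading, using hypothetical GPU-hour quotes (all tenors and prices below are illustrative, not real market data):

```python
# Classify the shape of a hypothetical GPU-hour forward curve.
# Tenors are months ahead; prices are illustrative $/GPU-hour quotes.

def curve_shape(points):
    """points: list of (months_ahead, price_per_gpu_hour), sorted by tenor."""
    prices = [price for _, price in points]
    if all(later > earlier for earlier, later in zip(prices, prices[1:])):
        return "contango"        # rising curve: future capacity priced above spot
    if all(later < earlier for earlier, later in zip(prices, prices[1:])):
        return "backwardation"   # falling curve: near-term capacity is the scarce leg
    return "mixed"

# Hypothetical quotes: spot, 3-month, and 6-month prices.
curve = [(0, 2.10), (3, 2.25), (6, 2.40)]
print(curve_shape(curve))  # → contango
```

A rising (contango) curve suggests buyers expect capacity to stay tight or tighten; a falling (backwardated) curve suggests near-term scarcity that the market expects to ease.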

Emerging topics

Project

What is Terafab?

A proposed chip and compute-capacity project.

Project

What is Stargate?

A mega-scale AI infrastructure buildout and what it says about future compute supply.

Project

What is Colossus?

xAI’s large-scale compute buildout and why power can become the bottleneck after GPUs arrive.

Project

What is Project Rainier?

AWS’s custom-silicon AI cluster and why proprietary chips matter to compute supply.

Project

What are Prometheus and Hyperion?

Meta’s multi-gigawatt AI campuses and what industrial-scale compute really means.

Why now

Why compute matters now

  • AI demand is turning chips, power, and data-center capacity into scarce economic resources.
  • The cost of compute affects model training, inference, cloud margins, and who can scale.
  • As compute becomes more measurable, tradable, and constrained, it starts behaving more like a market.
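The second bullet, that compute cost shapes who can train and scale, comes down to simple GPU-hour arithmetic. A back-of-envelope sketch, where the GPU count, run length, and rental price are all illustrative assumptions rather than quoted figures:

```python
# Back-of-envelope training cost from GPU-hours.
# All numbers below are illustrative assumptions, not real prices.

def training_cost(gpus: int, hours: float, price_per_gpu_hour: float) -> float:
    """Total cost = GPUs x wall-clock hours x rental price per GPU-hour."""
    return gpus * hours * price_per_gpu_hour

# Hypothetical run: 1,024 GPUs for two weeks at $2.00 per GPU-hour.
cost = training_cost(gpus=1024, hours=14 * 24, price_per_gpu_hour=2.00)
print(f"${cost:,.0f}")  # → $688,128
```

The same three inputs explain why price moves matter: a $0.50 shift in the per-GPU-hour rate changes this hypothetical run's cost by over $170,000.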