Learn
How accelerator generations change performance, supply, and cost.
H100, H200, and B200 are NVIDIA data-center accelerators used in advanced AI workloads. They sit at different points in NVIDIA's product cycle, and the differences between them matter because chip generation affects memory, performance, workload fit, pricing, and availability.
All three are high-end AI accelerators, but they do not deliver the same capacity.
A more expensive GPU-hour can still be cheaper per completed workload.
At a glance
H100 (Hopper)
The baseline high-end accelerator that became a core reference point for AI compute pricing. Key idea: strong general-purpose AI capacity.
H200 (Hopper)
A Hopper-generation step-up with much larger and faster memory for memory-heavy AI workloads. Key idea: better fit for larger models and memory-sensitive workloads.
B200 (Blackwell)
The next-generation Blackwell accelerator, pushing the performance and memory frontier higher again. Key idea: a new generation that can shift workload economics and market expectations.
Example
A GPU that costs more per hour can still be cheaper per completed workload if it finishes the job faster, supports a larger model more efficiently, or reduces the number of chips required.
Takeaway
Cost per useful work is the better comparison.
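To make the example above concrete, here is a minimal sketch of the arithmetic in Python. The prices and runtimes are hypothetical, chosen only to illustrate cost per completed workload versus cost per hour, not taken from any real H100, H200, or B200 pricing.

```python
# Illustrative sketch only: the prices and runtimes below are hypothetical,
# not actual figures for any real accelerator.

def cost_per_workload(hourly_price: float, hours_to_finish: float) -> float:
    """Total cost to complete one workload on a given accelerator."""
    return hourly_price * hours_to_finish

# Hypothetical example: chip B rents for more per hour but finishes faster.
chip_a = cost_per_workload(hourly_price=2.00, hours_to_finish=10.0)  # $20.00
chip_b = cost_per_workload(hourly_price=3.00, hours_to_finish=5.0)   # $15.00

print(f"Chip A: ${chip_a:.2f} per completed workload")
print(f"Chip B: ${chip_b:.2f} per completed workload")
# The pricier GPU-hour (chip B) is still cheaper per completed workload.
```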
Chip generations
Why it matters
Common mistake
A lower hourly rate does not automatically mean lower compute cost. The right comparison is whether a chip can complete the required workload at the needed speed and scale, and at what total cost.
Price
What access costs per unit of time.
Performance
How much useful work the chip can complete.
Fit
Whether the chip is well suited to the model and task.
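Taken together, these three dimensions can be folded into a rough comparison: price sets the hourly rate, performance sets how long the job takes, and fit decides whether the chip can run the job at all. The sketch below reduces fit to a single memory check and uses entirely hypothetical chip names and numbers.

```python
# Illustrative sketch only: the chip names, prices, throughputs, and memory
# sizes below are hypothetical placeholders, not published specifications.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Chip:
    name: str
    hourly_price: float    # price: dollars per GPU-hour
    work_per_hour: float   # performance: workload units finished per GPU-hour
    memory_gb: int         # used here as a stand-in for fit

def total_cost(chip: Chip, workload_units: float, required_memory_gb: int) -> Optional[float]:
    """Total cost to finish the workload, or None if the chip doesn't fit it."""
    if chip.memory_gb < required_memory_gb:  # fit check, simplified to memory alone
        return None
    hours = workload_units / chip.work_per_hour
    return hours * chip.hourly_price

chips = [
    Chip("chip_x", hourly_price=2.00, work_per_hour=1.0, memory_gb=80),
    Chip("chip_y", hourly_price=3.50, work_per_hour=2.2, memory_gb=140),
]

for chip in chips:
    cost = total_cost(chip, workload_units=100.0, required_memory_gb=120)
    print(chip.name, "does not fit" if cost is None else f"costs ${cost:.2f} in total")
```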
Keep learning
Concept
The basic resource behind training and running AI models.
Unit
The basic unit behind compute pricing.
Market
How the market price of AI compute capacity is expressed and compared.