OpenAI is trying to sound unstoppable by comparing its future AI compute buildout with Anthropic’s, but the pitch mostly underlines how dependent the AI boom still is on concrete, transformers, and money rather than some magical leap in intelligence. In a memo reported by Bloomberg, the company says it wants 30 gigawatts of compute by 2030, while putting Anthropic at seven to eight gigawatts by the end of 2027.

That is a giant number, and also a deeply unglamorous one. OpenAI says it had 1.9 gigawatts in 2025, versus Anthropic’s 1.4 gigawatts, so the company is not bragging about elegance or efficiency; it is bragging about scale. If the industry’s best sales pitch is “we can pour more power into the furnace than our rival,” that is a sign the model race has become a utility race.

OpenAI’s AI compute arms race

The memo’s logic is blunt: more compute means more capable models, and OpenAI says its capacity is increasing “rapidly and consistently.” It also argues that “compute is now a product constraint,” which is another way of saying the company thinks raw infrastructure is the bottleneck between today’s chatbots and whatever comes next.

That claim lands differently when you look at the broader buildout. Bloomberg and Ed Zitron have reported that roughly half the U.S. data centers slated to open are delayed or canceled, with shortages of electrical components and rising costs slowing the entire boom. So while the top labs keep talking like a power surplus is around the corner, the hardware supply chain keeps acting like the adult in the room.

  • OpenAI target: 30 gigawatts by 2030
  • Anthropic target: 7 to 8 gigawatts by the end of 2027
  • OpenAI capacity in 2025: 1.9 gigawatts
  • Anthropic capacity in 2025: 1.4 gigawatts
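
Taken at face value, those targets imply an aggressive compounding curve. Here is a minimal back-of-the-envelope sketch, in Python, of the annual growth each target requires. The gigawatt figures come straight from the memo as reported; the five-year span for OpenAI and the roughly two-year span for Anthropic are my assumptions about how the endpoints line up, so treat the output as illustrative rather than authoritative.

  # Back-of-the-envelope compound growth implied by the reported figures.
  # The gigawatt numbers are from the memo as reported; the year spans are
  # assumptions, so the results are illustrative only.

  def implied_annual_growth(start_gw: float, end_gw: float, years: float) -> float:
      """Compound annual growth rate needed to go from start_gw to end_gw."""
      return (end_gw / start_gw) ** (1 / years) - 1

  # OpenAI: 1.9 GW in 2025 -> 30 GW by 2030, assuming a ~5-year span
  openai_growth = implied_annual_growth(1.9, 30, 5)

  # Anthropic: 1.4 GW in 2025 -> ~7.5 GW by end of 2027, assuming a ~2-year span
  anthropic_growth = implied_annual_growth(1.4, 7.5, 2)

  print(f"OpenAI implied growth: ~{openai_growth:.0%} per year")       # roughly 74%
  print(f"Anthropic implied growth: ~{anthropic_growth:.0%} per year")  # roughly 131%

Compounding at 70-plus percent a year for half a decade is the kind of curve grid operators rarely see, which is exactly why the construction delays described above matter so much.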

Anthropic is selling discipline, not swagger

The timing of OpenAI’s memo looks pointed: it arrived after Anthropic showed off Claude Mythos, which staffers reportedly said was too capable and too risky for full release. Anthropic responded by leaning into a different identity, highlighting its recent Broadcom and Google deal and describing its approach as disciplined scaling. That is a neat contrast, and probably not an accidental one.

OpenAI, meanwhile, has made the opposite choice. It has said it plans to spend $600 billion on AI infrastructure through 2030, though that figure is less than half of what it had originally promised. That gap matters because the company is trying to reassure investors while also preparing for a rumored blockbuster IPO, and those audiences tend to dislike promises that keep shrinking when the bill comes due.

What the giant AI compute numbers actually mean

OpenAI’s memo also reveals how little these companies want to bet on algorithmic efficiency alone. The most ambitious AI firms are still behaving as if the winning formula is simple: train bigger models, feed them more infrastructure, then hope cost per token keeps dropping fast enough to justify the whole circus. That is not innovation theater. It is industrial-scale escalation.

The uncomfortable question is whether the market can keep financing this pace if the buildout keeps colliding with power constraints, component shortages, and delayed construction. OpenAI may be ahead on paper, but the real race is whether anyone can turn all that planned capacity into working, revenue-producing systems before the hype cycle runs out of patience.
