OpenAI is telling investors that its most expensive habit – building far more computing capacity than many rivals – is now a competitive weapon against Anthropic. The pitch lands at an awkward moment for the Claude maker: demand for AI products is still rising fast, Anthropic is broadening its infrastructure plans, and the race is shifting from model quality alone to who can actually serve users at scale.

In its latest investor memo, OpenAI says its compute buildout is pulling ahead of Anthropic’s, with 1.9 gigawatts of capacity available in 2025 and a path toward roughly 30 gigawatts by 2030. Anthropic, meanwhile, is also expanding quickly, but OpenAI is framing the gap as a real advantage in the AI infrastructure race.

That argument is as much about bragging rights as it is about servers. OpenAI says compute has become a product constraint, which is a polite way of saying the company with more chips can ship more features, reach more customers, and avoid the sort of service bottlenecks that make AI look less magical and more like a traffic jam.

OpenAI’s compute buildout is moving fast

In a memo sent to some investors, OpenAI said it had 1.9 gigawatts of computing capacity available in 2025, up from the year before. It expects that figure to rise to the low-double-digit gigawatt range next year and reach roughly 30 gigawatts by 2030. A gigawatt is enough to power roughly 750,000 US homes, which gives you a sense of the scale: this is less “startup” and more “utility company with an AI problem.”

The company also framed Anthropic as trailing, estimating that its rival ended 2025 with 1.4 gigawatts and will have between 7 and 8 gigawatts next year. OpenAI said its own ramp is “materially ahead and widening,” a line that reads like investor reassurance and competitive trash talk rolled into one.

Anthropic is spending to catch up

Anthropic has not been standing still. It has committed to spend $50 billion on data centers in the US and recently expanded a strategic collaboration with Broadcom and Alphabet’s Google to access about 3.5 gigawatts of computing power beginning in 2027. Beyond Google, the company also works with Microsoft and Amazon, giving it a broader supplier mix than OpenAI’s memo suggests.

That said, Anthropic has generally been the more cautious spender of the two. OpenAI, by contrast, plans to spend about $600 billion on data centers and chips by 2030 and recently raised $122 billion in funding to help cover those commitments. The upside is obvious: more compute, more scale, more room to grow. The downside is equally obvious: the bill arrives long before the victory lap.

The real fight is service reliability

This is not just a theoretical tug-of-war over model training. Anthropic has had trouble keeping services online during demand spikes, and OpenAI pointed to analyst commentary suggesting compute limits may have influenced Anthropic’s decision to restrict Mythos to select partners. OpenAI’s underlying message is simple: in AI, scarcity is a business flaw, not a badge of discipline.

There is a catch, though. OpenAI itself is under pressure to keep spending from outrunning reality, and on Thursday it said it was pausing a planned infrastructure project in the UK because of energy costs. So the company is selling investors on a future where bigger is better while quietly admitting that bigger can still get expensive enough to blink at.

What happens if demand keeps outrunning supply

Anthropic’s next move will probably be more of the same: more contracts, more capacity, more reassurance that it can scale without burning cash just for sport. But OpenAI has the stronger near-term story if demand keeps climbing as fast as it says, because the company that can deliver products without throttling them gets the market advantage, even if the balance sheet groans a little on the way there.

Source: Bloomberg
