OpenAI has rolled out GPT-5.5, codenamed "Spud," and the timing is hard to miss: it lands just a week after Anthropic's latest model. The pitch is familiar but sharper this time – more capability, less hand-holding, and lower friction for businesses that want AI to do real work instead of just chatting back.

OpenAI says GPT-5.5 is strongest in coding, computer use, office work, and early scientific research. That matters because those are exactly the categories where AI vendors are trying to move beyond novelty and into repeatable enterprise value. The company says GPT-5.5 can handle messy, multi-part jobs, plan its own steps, use tools, and check its own work with less user input than before.

GPT-5.5 availability and access

The model is available Thursday in ChatGPT and Codex for paid subscribers. API access is coming soon, but OpenAI says it first needs to finish adding cybersecurity guardrails. That delay is pretty revealing: the industry loves talking up agentic AI, then immediately remembers that handing software a longer leash can create messier failure modes.

OpenAI co-founder Greg Brockman described GPT-5.5 as "a new class of intelligence" and said it is a "faster, sharper thinker for fewer tokens" than GPT-5. The company also says GPT-5.5 matches GPT-5's response speed in real-world use, which is the sort of claim that will only matter if users actually feel the difference while pushing the model through real workloads.

Why enterprises care about token costs

For companies, the bigger story is not just capability. Nvidia says its new chips cut the cost of running advanced AI by up to 35x per token, and that math is what turns AI from a demo into a budget line item. OpenAI, which trained the model on Nvidia GPUs, is clearly aiming at customers who want more automation without watching their IT spend balloon.
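To see why a per-token cost reduction dominates the budget conversation, here is a minimal back-of-the-envelope sketch. The prices and token volumes below are illustrative assumptions, not figures from Nvidia or OpenAI; only the "up to 35x" multiplier comes from the claim above.

```python
# Illustrative only: the per-token price and monthly volume below are
# assumed numbers, chosen to show how a per-token cost cut compounds
# at enterprise scale. The 35x factor is Nvidia's claimed upper bound.

def monthly_cost(tokens_per_month: float, price_per_million: float) -> float:
    """Dollar cost for a month of token usage at a given per-1M-token price."""
    return tokens_per_month / 1_000_000 * price_per_million

baseline_price = 10.00               # assumed $ per 1M tokens before the hardware gains
reduced_price = baseline_price / 35  # applying the claimed up-to-35x reduction
tokens = 5_000_000_000               # assumed 5B tokens/month across an enterprise

before = monthly_cost(tokens, baseline_price)
after = monthly_cost(tokens, reduced_price)
print(f"before: ${before:,.0f}/mo, after: ${after:,.0f}/mo")
# before: $50,000/mo, after: $1,429/mo
```

At these assumed volumes, the same workload drops from a line item a CFO notices to rounding error, which is the shift that makes always-on agentic workloads plausible to fund.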

  • Available in ChatGPT and Codex for paid subscribers
  • API access coming soon
  • Best gains are in coding, computer use, office work, and early scientific research
  • Designed to tackle multi-step workflows with less prompting

OpenAI says early access teams used the model to review large document sets and save up to 10 hours a week. That kind of claim is classic vendor material, but the direction is clear: the value proposition is shifting from "ask me anything" to "delegate something."

OpenAI’s enterprise push gets more obvious

The release also fits OpenAI's recent strategic mood. Its executives reportedly treated Anthropic's rise as a wake-up call, and the answer now looks like a harder push into business adoption rather than a race for the flashiest chatbot headline. Nvidia vice president of enterprise computing Justin Boitano said the companies worked on "a blueprint" to make deployment easier for enterprises, which is exactly the sort of plumbing work that decides whether AI spreads or stalls.

Brockman framed the shift even more broadly, arguing that the world is moving toward a "compute-powered economy." That is bold, but not absurd: as models take on longer tasks and more of the execution layer, the economics of running them may end up mattering as much as raw benchmark gains. The next question is whether enterprises buy that pitch at scale – or keep waiting until the guardrails feel less like an apology and more like a feature.

Source: Axios
