Some of Anthropic’s most vocal users are convinced Claude’s default experience has gotten worse, not better. Across X, GitHub, and Reddit, power users say the chatbot now feels less precise and less useful for demanding coding and research work, just as Anthropic pushes ahead with newer frontier models and higher-end offerings.
The complaint is simple enough: the default experience seems to have lost some of its spark. That can happen after a configuration tweak, or after users adjust to a system that was never as magical as it first appeared. In this case, the backlash is landing in a very specific place: among people who notice subtle regressions fast, and who tend to be the loudest when a tool starts missing the mark.
Users say Claude lost precision
The online evidence is messy but persistent. People are posting side-by-side outputs, benchmarks, and prompt tests in an attempt to prove that Claude now gives flatter, less accurate answers than before. One widely shared complaint came from an AMD senior director on GitHub, who said Claude had “regressed to the point it cannot be trusted to perform complex engineering.”
That kind of reaction is not just internet drama. For developers, a small drop in reasoning quality can turn into hours of rework, and for researchers, it can mean missing nuance in exactly the places where the model used to shine. The problem is that once a tool becomes part of a workflow, “good enough” is judged against yesterday’s version, not a generic benchmark.
Anthropic says it changed Claude Code settings
Anthropic says the online backlash is tied to changes in the default level of reasoning in Claude Code. The company denies that the shift was caused by compute shortages or by any effort to divert resources toward Mythos, the more powerful model it is testing.
In a post on X from March 6, Boris Cherny, who leads Claude Code, said users can change the setting at any time in the model selector, choosing lower effort for speed or higher effort for more intelligence. That explanation may be technically neat, but it also confirms the core grievance: many users do not want to babysit the model just to get the version they thought they were already paying for.
The real fight is over default access
There is a bigger pattern here, and it is not unique to Anthropic. As AI systems improve, the best behavior is increasingly being packaged behind paywalls, usage caps, and special programs, while the everyday default experience becomes more tightly managed. OpenAI, Google, and others have all been segmenting access in similar ways, because frontier inference is expensive and nobody is eager to hand out unlimited intelligence for free.
That is why the Claude backlash stings. Anthropic is reportedly close to upgrading its high-end Opus model to version 4.7, and it recently moved large enterprise customers to usage-based token pricing, tying capability more directly to spend. The message is clear enough even if the marketing is not: the best AI is still getting better, but the best version may be reserved for the people willing to pay for it.
- Power users say Claude’s default responses now feel less reliable for coding and research.
- Anthropic says the issue is a reasoning setting in Claude Code, not a compute crunch.
- The broader trend is AI stratification: premium models for heavy users, downgraded defaults for everyone else.
Claude users worry about the next default setting
The awkward part for Anthropic is that perception can harden faster than a model can be retrained. If enough developers decide Claude is inconsistent, they will not wait around for a better explanation, especially when rivals are one tab away. The company may be right that this is a settings dispute, but the market often punishes products for what users feel, not what the release notes say.
What happens next is straightforward to watch: whether Anthropic keeps giving users more explicit control over effort levels, or whether the default experience quietly continues to drift downward while the headline models get smarter. If the gap between “best available” and “what most people actually get” keeps widening, the backlash around Claude may turn out to be a preview, not an outlier.