
Anthropic explains Claude Code’s recent performance decline after weeks of user backlash



The latest admission has done little to calm Anthropic's customers, some of whom say they have already cancelled their subscriptions. It came after weeks in which Anthropic had initially implied in its communications that nothing was wrong and that users were largely to blame for any performance problems, and later said some of the changes had been made for users' benefit.

The feeling among some users that Anthropic had been gaslighting them potentially undercuts the company's attempts to market itself as more transparent and aligned with its users than rival OpenAI. Nor has the admission that there were performance problems done much to quell rampant speculation that the company is running short of computing resources, and that its efforts to ration precious computing power were the real reason for the performance issues.

“Demand for Claude has grown at an unprecedented rate, and our infrastructure has been stretched to meet it, particularly at peak hours,” Anthropic said in a statement to Fortune. “We are doing everything we can to address this and we are deeply grateful for our users’ patience.”

The statement went on to say that “compute is a constraint across the entire industry, and we are scaling our compute rapidly and responsibly—including through a recently announced expansion of our partnership with Amazon and Google, which will bring significant new capacity online in the coming months. Our priority is getting that capacity into our users’ hands as quickly as possible.”

The company also pushed back on any characterization that it had not been transparent with its users about the issues impacting Claude Code. “The Claude Code issues had specific technical causes that we documented in full in our postmortem, and the fixes are now shipped,” the statement said.

Anthropic has built much of its recent success on the loyalty of developers. Its Claude Code tool, launched in early 2025, has been popular with solo developers and enterprise engineering teams. The runaway success of the tool has helped send the company's annualized revenue run rate to $30 billion—more than triple its figure at the end of last year. However, the weeks-long performance decline, the lab's slow response to user complaints, and several changes that users argue amount to stealth price hikes are testing that loyalty.

The controversy could dent Anthropic’s bottom line amid an increasingly bitter race with rival OpenAI. The issues also come at a critical time, with both companies reportedly gearing up for initial public stock offerings later this year. 

Following widespread complaints about Claude Code’s performance, executives representing the AI lab initially said the performance issues were the result of changes it had made to improve latency and in response to user feedback about token use. It said both changes were communicated via its public changelog—a running list of updates available to users. On Thursday, however, Anthropic went further, publishing a detailed engineering post acknowledging that three separate engineering missteps were behind the performance issue. In an effort to respond to some of the user complaints, the lab also said it would reset usage limits for all subscribers.

Anthropic's newest admission is likely to increase already widespread speculation that the lab may be suffering from compute strains after use of its products soared in the past few months.

Beyond the performance issues with Claude Code, the AI lab has also suffered a series of outages as usage has surged, introduced usage caps during peak hours, and is limiting the roll-out of its newest, larger, and more expensive model, Mythos, to a select group of large firms. (Anthropic has said that the model's cautious roll-out is due to the security risks posed by the model's unprecedented cyber capabilities.)

The company’s rivals have also furthered rumors that the lab may be lacking the compute needed to maintain its recent customer surge. In an internal memo first reported by CNBC, OpenAI’s revenue chief claimed Anthropic had made a “strategic misstep” by failing to secure sufficient compute, and was “operating on a meaningfully smaller curve” than its competitors. Anthropic has also notably announced fewer multibillion-dollar deals for data center capacity than some of its rivals like OpenAI. While other AI companies are also facing compute constraints, Anthropic appears to be in the most difficult position, having grown far faster than it likely anticipated.

Anthropic declined to answer CNBC’s questions about the memo. Anthropic has also publicly stated it does not purposely degrade the performance of its Claude models.

The company also appears to be testing potential ways to limit new access to Claude Code. Earlier this week, Anthropic updated its pricing page for some users to show Claude Code as unavailable on the company’s $20-a-month Pro plan. Anthropic’s head of growth later said the change had been a test on around 2% of new sign-ups, adding that usage patterns had “changed fundamentally” since the plans were designed. Separately, The Information reported that Anthropic had shifted its enterprise pricing to a consumption-based model, a move one analyst estimated could potentially triple costs for heavy users.

User backlash

Anthropic has been dealing with significant backlash from some of its power users over Claude Code’s recent performance issues. Several have said they have cancelled subscriptions, cybersecurity professionals have warned of potentially dangerously degraded code quality, and a senior AI executive at AMD has called the tool “unusable for complex engineering tasks.” 

Users have complained of feeling “gaslit” by the company’s response to their ongoing complaints about the coding tool’s performance. One X user said in response to Anthropic’s recent post: “After they gaslit users and pretended nothing was wrong, countless complaining from tonnes of people here and elsewhere, cancellations, Anthropic finally admit on the day GPT-5.5 releases there is a problem with Claude.”

“I appreciate the post-mortem, but I don’t trust that all issues have been resolved. Claude Code, in general, has been barely usable for me in the past couple of days,” another said.

In a post published to its engineering blog on Thursday, Anthropic said it had traced the problems to three distinct changes. The first, rolled out on March 4, reduced Claude Code’s default reasoning effort from “high” to “medium” to cut latency—a tradeoff the company said in the blogpost was the wrong one. The second change, shipped on March 26, contained a bug that caused the model to continuously discard its own reasoning history mid-session, making it appear forgetful and erratic, and draining users’ usage limits faster than expected. The third, introduced on April 16, added a system prompt instruction capping the model’s responses at 25 words between tool calls—a change Anthropic said measurably hurt coding quality before it was reverted four days later. 

Anthropic noted that all three issues were resolved as of April 20, with the API unaffected throughout. On April 23, the company reset usage limits for all subscribers.

The company acknowledged users' frustration with the tool, saying: "This isn't the experience users should expect from Claude Code." The lab has also promised greater transparency around changes to Claude Code in the future.

Despite Anthropic's public acknowledgement, some users have taken to social media to express their frustrations with the lab's initial response to users' concerns about Claude's performance.

“The frustrating part is that the Claude Code team, along with people deep in AI psychosis, have been gaslighting anyone who raises concerns about Claude Code’s recent issues,” Muratcan Koylan, a member of technical staff at Sully.ai, said in a post on X. “When you’re paying a lot of money for a product and it actually makes your job harder, to the point where people make you start questioning the quality of your own work, it really becomes a problem.”

The backlash risks pushing some of Anthropic’s power users toward rival OpenAI, whose recent Codex models have also been popular with developers. On Thursday, OpenAI also launched GPT-5.5, its newest AI model, to paid subscribers. The company said it now had 4 million active Codex users, 9 million paying business customers, 900 million weekly active users on ChatGPT, and more than 50 million subscribers. Anthropic has not published comparable user figures. The company has disclosed business metrics, including more than 300,000 enterprise customers, but has not released subscriber or active user numbers. Independent app and web traffic site SimilarWeb has reported that active monthly users of Anthropic’s Claude app hit 20 million by the end of February and that user growth had more than doubled month-over-month in March.

The issues with Claude Code appear to have significantly affected the quality of code produced by Anthropic’s tools in the last month or so, especially when compared to OpenAI’s offerings.

Analyses from coding security company Veracode found that Claude Opus 4.7, Anthropic’s newest Claude model, which launched on April 16, introduced a vulnerability in 52% of coding tasks tested—up from 51% for Opus 4.1 and 50% for the lower-cost Claude Sonnet 4.5. Veracode found OpenAI’s models performed notably better, introducing vulnerabilities in around 30% of tasks.

Dave Kennedy, CEO of cybersecurity firm TrustedSec and a former U.S. Marine Corps intelligence officer, told Forbes his team had measured a 47% drop in Claude’s code quality, tracking defects, security issues, and task completion rates. The risk, Kennedy warned, is that novice developers using Claude won’t catch the flaws, “introducing serious defects” into production code.

In response to the newest post from Anthropic, Kennedy said: “I’m glad they are trying to address this, but a month to get this out is crummy.”


