There is a version of the Capybara story that reflects well on Anthropic.
Last week, details of an unreleased AI model, along with internal documents, surfaced in a publicly accessible data cache. It was embarrassing on the surface.
But look at it from the perspective of an investor relations team preparing a company for one of the most anticipated technology IPOs in years, and the picture shifts.
A well-timed glimpse of what is coming, dropped into the market at a moment when Anthropic needs institutional investors to believe the product pipeline is deep, is not obviously a bad thing. Accidental or not, the Capybara leak said: There is more where Claude came from.
The Claude Code leak says something else entirely.
What the source code exposure actually means
Source code is not a press release, nor a convenient ruse for adding another hundred billion to a valuation. It is the architectural blueprint of a product.
So when Anthropic confirmed on Tuesday that internal source code for Claude Code had been exposed through what it described as a release packaging error, the result of human oversight, the implications were of a different order.
Put simply, the company had handed rivals, and potentially bad actors, a detailed look at how one of the fastest-growing AI developer tools in the world actually works.
Claude Code reached a run-rate revenue of more than $2.5 billion as of February. OpenAI, Google and xAI have all moved to build competing products directly in response to its success.
Those competitors now have access to information Anthropic spent considerable engineering effort producing. The company's spokesperson was quick to note that no customer data or credentials were exposed. That is true, and it matters. But it is not the point.
The question of institutional discipline
What the Claude Code leak raises, in a way the Capybara incident did not, is a straightforward question of basic competence. A release packaging error is not a sophisticated attack.
It is not a zero-day exploit or a social engineering campaign. It is an elementary process failure, the kind that mature technology companies with serious data governance frameworks are supposed to catch before code reaches a public repository.
Anthropic has grown at warp speed. It was founded in 2021 by a group of former OpenAI researchers and executives, has attracted billions in investment from Google and Amazon, and is now widely expected to pursue a public listing.
That growth trajectory creates pressure on every part of the organisation: engineering, legal, compliance, and the unglamorous but essential work of release management.
Two significant data incidents in under a week suggest the internal infrastructure has not expanded at the same rate as the valuation.
The bad actor problem
Beyond the competitive implications, there is a more troubling dimension. A post on X linking to the leaked code drew more than 21 million views within hours of going live.
The audience for that link was not limited to developers curious about how Claude Code was built. Security researchers have long warned that source code exposure creates attack surfaces.
Anyone looking for weaknesses in a system starts with the code. Anthropic's product sits inside the development workflows of a growing number of companies. Those companies now face a period of heightened uncertainty about what, exactly, the exposure means for their own security posture.
A footnote that could become a chapter
Individually, either incident might be absorbed without lasting damage. Leaks happen. Companies recover. The Capybara story carried enough ambiguity to be managed.
But two incidents in seven days create a pattern, and patterns attract attention, particularly from the institutional investors Anthropic is working to court and the enterprise clients it increasingly depends on for revenue.
The question Anthropic's leadership, led by Dario Amodei, now has to answer is not whether this was a security breach in the technical sense. Its own statement addresses that.
The question is whether a company preparing to enter public markets has the internal controls and operational culture that the moment requires. On the evidence of the past week, that case is harder to make than it was a fortnight ago.