
Claude Code's source code appears to have leaked: here's what we know | VentureBeat



Overview


Anthropic appears to have accidentally revealed the inner workings of one of its most popular and lucrative AI products, the agentic AI harness Claude Code, to the public.

Details

A 59.8 MB JavaScript source map file (.map), intended for internal debugging, was inadvertently included in version 2.1.88 of the @anthropic-ai/claude-code package, pushed live on the public npm registry earlier this morning.
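
Source maps ship right alongside a package's bundled JavaScript, so a leak like this can be spotted with a simple file search over the installed package. A minimal sketch (a throwaway directory stands in for the package here; to audit a real install, point PKG_DIR at `$(npm root -g)/@anthropic-ai/claude-code`):

```shell
# Sketch: check whether an installed npm package ships .map debug files.
# The temp directory and file names below are simulated for illustration.
PKG_DIR=$(mktemp -d)
mkdir -p "$PKG_DIR/cli"
touch "$PKG_DIR/cli/cli.js" "$PKG_DIR/cli/cli.js.map"  # simulated bundle + stray map
find "$PKG_DIR" -name '*.map'                          # any hit means debug maps shipped
```

Because a .map file maps minified output back to original source, shipping one effectively publishes the pre-bundled codebase.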

By 4:23 am ET, Chaofan Shou (@Fried_rice), an intern at Solayer Labs, had broadcast the discovery on X (formerly Twitter). The post, which included a direct download link to a hosted archive, acted as a digital flare. Within hours, the ~512,000-line TypeScript codebase was mirrored across GitHub and analyzed by thousands of developers.

For Anthropic, a company currently riding a meteoric rise with a reported $19 billion annualized revenue run-rate as of March 2026, the leak is more than a security lapse; it is a strategic hemorrhage of intellectual property. The timing is particularly critical given the commercial velocity of the product.

Market data indicates that Claude Code alone has achieved an annualized recurring revenue (ARR) of $2.5 billion, a figure that has more than doubled since the beginning of the year.

With enterprise adoption accounting for 80% of its revenue, the leak provides competitors—from established giants to nimble rivals like Cursor—a literal blueprint for how to build a high-agency, reliable, and commercially viable AI agent.

We've reached out to Anthropic for an official statement on the leak and will update when we hear back.

The most significant takeaway for competitors lies in how Anthropic solved "context entropy"—the tendency for AI agents to become confused or prone to hallucination as long-running sessions grow in complexity.

The leaked source reveals a sophisticated, three-layer memory architecture that moves away from traditional "store-everything" retrieval.

As analyzed by developers like @himanshustwts, the architecture utilizes a "Self-Healing Memory" system.

At its core is MEMORY.md, a lightweight index of pointers (~150 characters per line) that is perpetually loaded into the context. This index does not store data; it stores locations.

Actual project knowledge is distributed across "topic files" fetched on-demand, while raw transcripts are never fully read back into the context, but merely "grep’d" for specific identifiers.
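
A toy sketch of that pointer-index pattern (file names and contents are illustrative, not Anthropic's actual layout): the small index stays resident in context, while the raw transcript is only ever grep'd for one specific identifier at a time.

```shell
WORK=$(mktemp -d)
# Pointer index: short lines naming where knowledge lives, not the knowledge itself.
printf 'auth-retry-bug -> topics/auth.md\nparser-refactor -> topics/parser.md\n' > "$WORK/MEMORY.md"
# Raw transcript: never loaded wholesale back into the context window.
printf 'user: why is login flaky?\nagent: fixed auth-retry-bug by adding exponential backoff\n' > "$WORK/transcript.log"
cat "$WORK/MEMORY.md"                          # always in context (~150 chars/line)
grep 'auth-retry-bug' "$WORK/transcript.log"   # fetched on demand, one identifier at a time
```

The payoff is that context cost stays proportional to the index, not to the session history.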

This "Strict Write Discipline"—where the agent must update its index only after a successful file write—prevents the model from polluting its context with failed attempts.
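
In shell terms, that discipline reduces to an ordering guarantee. A sketch (not the leaked implementation): the index append is gated on the topic-file write actually succeeding.

```shell
MEM=$(mktemp -d); mkdir "$MEM/topics"; : > "$MEM/MEMORY.md"
# Write the topic file first; only a successful write earns an index entry.
if printf 'JWT refresh lives in src/auth.ts\n' > "$MEM/topics/auth.md"; then
  printf 'auth -> topics/auth.md\n' >> "$MEM/MEMORY.md"
fi
# A failed write (target directory does not exist) leaves the index untouched.
if printf 'bad data\n' > "$MEM/missing-dir/auth.md" 2>/dev/null; then
  printf 'auth-bad -> topics/auth.md\n' >> "$MEM/MEMORY.md"
fi
cat "$MEM/MEMORY.md"   # contains only the entry whose write succeeded
```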

For competitors, the "blueprint" is clear: build a skeptical memory. The code confirms that Anthropic’s agents are instructed to treat their own memory as a "hint," requiring the model to verify facts against the actual codebase before proceeding.

The leak also pulls back the curtain on "KAIROS" (named for the Ancient Greek concept of the opportune moment), a feature flag mentioned over 150 times in the source. KAIROS represents a fundamental shift in user experience: an autonomous daemon mode.

While current AI tools are largely reactive, KAIROS allows Claude Code to operate as an always-on background agent. It handles background sessions and employs a process called autoDream.

In this mode, the agent performs "memory consolidation" while the user is idle. The autoDream logic merges disparate observations, removes logical contradictions, and converts vague insights into settled facts.
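
As a rough mechanical illustration of what such a consolidation pass involves (this is a stand-in, not the leaked logic), the simplest step is merging an observations log and dropping duplicates:

```shell
OBS=$(mktemp)  # stand-in for an agent's accumulated observation log
printf 'auth uses JWT\nparser lives in src/parse.ts\nauth uses JWT\n' > "$OBS"
# Merge and dedup; resolving actual contradictions would need model reasoning.
sort -u "$OBS" > "${OBS}.consolidated"
cat "${OBS}.consolidated"
```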

This background maintenance ensures that when the user returns, the agent’s context is clean and highly relevant.

The implementation of a forked subagent to run these tasks reveals a mature engineering approach to preventing the main agent’s "train of thought" from being corrupted by its own maintenance routines.

Unreleased internal models and performance metrics

The source code provides a rare look at Anthropic’s internal model roadmap and the struggles of frontier development.

The leak confirms that Capybara is the internal codename for a Claude 4.6 variant, with Fennec mapping to Opus 4.6 and the unreleased Numbat still in testing.

Internal comments reveal that Anthropic is already iterating on Capybara v8, yet the model still faces significant hurdles. The code notes a 29-30% false-claims rate in v8, an actual regression compared to the 16.7% rate seen in v4.

Developers also noted an "assertiveness counterweight" designed to prevent the model from becoming too aggressive in its refactors.

For competitors, these metrics are invaluable; they provide a benchmark of the "ceiling" for current agentic performance and highlight the specific weaknesses (over-commenting, false claims) that Anthropic is still struggling to solve.

Perhaps the most discussed technical detail is the "Undercover Mode." This feature reveals that Anthropic uses Claude Code for "stealth" contributions to public open-source repositories.

The system prompt discovered in the leak explicitly warns the model: "You are operating UNDERCOVER... Your commit messages... MUST NOT contain ANY Anthropic-internal information. Do not blow your cover."

While Anthropic may use this for internal "dog-fooding," it provides a technical framework for any organization wishing to use AI agents for public-facing work without disclosure.

The logic ensures that no model names (like "Tengu" or "Capybara") or AI attributions leak into public git logs—a capability that enterprise competitors will likely view as a mandatory feature for their own corporate clients who value anonymity in AI-assisted development.

The "blueprint" is now out, and it reveals that Claude Code is not just a wrapper around a Large Language Model, but a complex, multi-threaded operating system for software engineering.

Even the hidden "Buddy" system—a Tamagotchi-style terminal pet with stats like CHAOS and SNARK—shows that Anthropic is building "personality" into the product to increase user stickiness.

For the wider AI market, the leak effectively levels the playing field for agentic orchestration.

Competitors can now study Anthropic’s 2,500+ lines of bash validation logic and its tiered memory structures to build "Claude-like" agents with a fraction of the R&D budget.
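
To give a flavor of what command-validation logic looks like at its simplest (the leaked layer is reportedly thousands of lines; this allowlist is purely illustrative and not Anthropic's code):

```shell
# Refuse any shell command whose first word is not on a small allowlist.
is_allowed() {
  case "$1" in
    ls|cat|grep|git) return 0 ;;
    *) return 1 ;;
  esac
}
check() {
  set -- $1   # split the command string into words; $1 is now the program name
  if is_allowed "$1"; then echo "ALLOW: $*"; else echo "BLOCK: $*"; fi
}
check "git status"
check "rm -rf /"
```

Real validation must also handle pipes, subshells, and flag-level risks, which is why the production version is so much larger.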

As the "Capybara" has left the lab, the race to build the next generation of autonomous agents has just received an unplanned, $2.5 billion boost in collective intelligence.

What Claude Code users and enterprise customers should do now about the alleged leak

While the source code leak itself is a major blow to Anthropic’s intellectual property, it poses a specific, heightened security risk for you as a user.

By exposing the "blueprints" of Claude Code, Anthropic has handed a roadmap to researchers and bad actors who are now actively looking for ways to bypass security guardrails and permission prompts.

Because the leak revealed the exact orchestration logic for Hooks and MCP servers, attackers can now design malicious repositories specifically tailored to "trick" Claude Code into running background commands or exfiltrating data before you ever see a trust prompt.

The most immediate danger, however, is a concurrent, separate supply-chain attack on the axios npm package, which occurred hours before the leak.

If you installed or updated Claude Code via npm on March 31, 2026, between 00:21 and 03:29 UTC, you may have inadvertently pulled in a malicious version of axios (1.14.1 or 0.30.4) that contains a Remote Access Trojan (RAT). You should immediately search your project lockfiles (package-lock.json, yarn.lock, or bun.lockb) for these specific versions or the dependency plain-crypto-js. If found, treat the host machine as fully compromised, rotate all secrets, and perform a clean OS reinstallation.
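
Checking for those indicators is one grep away. Run this from your project root; a simulated lockfile is used here so the sketch is self-contained, and any hit should be confirmed manually against the axios entry:

```shell
LOCK=$(mktemp)   # stand-in for package-lock.json / yarn.lock / bun.lockb
printf '"axios": {\n  "version": "1.14.1"\n}\n' > "$LOCK"
# Flag the compromised versions and the malicious helper dependency.
if grep -nE '1\.14\.1|0\.30\.4|plain-crypto-js' "$LOCK"; then
  echo "POSSIBLE COMPROMISE: verify the match, then rotate all secrets"
fi
```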

To mitigate future risks, you should migrate away from the npm-based installation entirely. Anthropic has designated the native installer (curl -fsSL https://claude.ai/install.sh | bash) as the recommended method because it uses a standalone binary that does not rely on the volatile npm dependency chain.

The native version also supports background auto-updates, ensuring you receive security patches (likely version 2.1.89 or higher) the moment they are released. If you must remain on npm, ensure you have uninstalled the leaked version 2.1.88 and pinned your installation to a verified safe version like 2.1.86.

Finally, adopt a zero trust posture when using Claude Code in unfamiliar environments. Avoid running the agent inside freshly cloned or untrusted repositories until you have manually inspected the .claude/config.json and any custom hooks.
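
That pre-trust audit can be scripted. A sketch (the cloned repo and hook file name are simulated; the .claude paths follow the article):

```shell
REPO=$(mktemp -d)   # stand-in for a freshly cloned, untrusted repository
mkdir -p "$REPO/.claude/hooks"
printf '{ "hooks": {} }\n' > "$REPO/.claude/config.json"
printf '#!/bin/sh\necho hello-from-hook\n' > "$REPO/.claude/hooks/post-edit.sh"
# List every agent-facing file so each can be read before the agent runs.
find "$REPO/.claude" -type f
```

Reading each listed file takes seconds and closes off the repo-borne hook attack described above.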

As a defense-in-depth measure, rotate your Anthropic API keys via the developer console and monitor your usage for any anomalies. While your cloud-stored data remains secure, the vulnerability of your local environment has increased now that the agent's internal defenses are public knowledge; staying on the official, native-installed update track is your best defense.


Key Takeaways

  • Anthropic appears to have accidentally revealed the inner workings of one of its most popular and lucrative AI products, the agentic AI harness Claude Code, to the public

  • By 4:23 am ET, Chaofan Shou (@Fried_rice), an intern at Solayer Labs, broadcasted the discovery on X (formerly Twitter)

  • For Anthropic, a company currently riding a meteoric rise with a reported $19 billion annualized revenue run-rate as of March 2026, the leak is more than a security lapse; it is a strategic hemorrhage of intellectual property
