Sometimes Silicon Valley’s biggest secrets aren’t leaked by hackers. All it takes is one debug file included by mistake in an npm package. That’s exactly how Anthropic lost control of its most closely guarded secrets. The developer tried to pull the package quickly, but, as so often happens on the Internet, the Streisand effect kicked in and Anthropic’s secrets ended up… on GitHub!
512,000 lines of code. One night. No retreat.
At 4 a.m. someone noticed an anomaly in a freshly published npm package – a library intended for the Node.js environment. Inside sat a debug file containing 512,000 lines of code spread across 1,900 files – the complete internal architecture of Claude Code. A post about the find hit X, racked up 16 million views, and was archived before Anthropic could do anything.
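Leaks like this usually happen because npm packs everything in the project directory by default unless told otherwise. A minimal sketch of the standard safeguard – the `files` allowlist in `package.json` – is shown below; the package name and paths are hypothetical, not taken from the leaked package:

```json
{
  "name": "example-node-library",
  "version": "1.0.0",
  "main": "dist/index.js",
  "files": [
    "dist/",
    "README.md"
  ]
}
```

With a `files` allowlist (or an `.npmignore` file), stray artifacts like debug dumps never make it into the published tarball. Running `npm pack --dry-run` before publishing prints exactly which files would be shipped, which is how an accidental 512,000-line debug file would have been caught.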
And this was no trivial disclosure. The leak exposed the full language model orchestration logic, agent coordination, the permission system, and 44 hidden feature flags not yet publicly released. Among them are some real gems.
Kairos, Buddy and Undercover Mode – what did the Claude leak reveal?
The package accidentally published by an Anthropic employee contains some very interesting features that have yet to reach Claude, such as:
“Kairos” – an always-active background daemon that stores memory and runs a nightly process the developers have dubbed “dreaming” – knowledge consolidation during system downtime.
“Buddy” – a Tamagotchi-style virtual pet with 18 species and rarity levels, reportedly scheduled for release… on April 1. What timing.
But the absolute highlight is “UndercoverMode” – a special subsystem designed to prevent Claude from accidentally revealing the internal names of Anthropic projects. Its system prompt literally contains the instruction: “don’t blow your cover.” The anti-leak system itself leaked. The irony writes itself.
Extinguishing a fire with a teaspoon of water
There is also a serious legal question – is code co-created by AI subject to copyright at all? No definitive answer exists yet, though the prevailing legal view in many jurisdictions is that purely AI-generated works are not eligible for copyright protection.
Meanwhile, another drama is taking place in the background. In February 2026, the Pentagon announced that Anthropic would be considered a national security supply chain risk after the company refused to remove security restrictions from its use policy. On Polymarket, the chance of Anthropic settling with the Pentagon is currently 18%.