Anthropic - a $380 billion AI company - accidentally shipped a sourcemap inside their Claude Code npm package. That one mistake exposed 512,000+ lines of source code, 1,900 files, and an unreleased product roadmap nobody was supposed to see.
[EMBED VIDEO HERE]
I've been using Claude Code every single day for almost a year. When the source code leaked, I spent the entire day going through the codebase. Here are the 6 biggest things I found.
How the leak happened
This was not a hack. Not a breach. Not a disgruntled employee.
When developers write code, it's clean and readable. Before shipping, they crush it into one minified file you can't read. A sourcemap is a decoder ring that maps the crushed version back to the original. Developers use it for debugging.
Anthropic's build process generated this decoder ring. Someone forgot to exclude it from the npm package. So when anyone installed Claude Code, the full original source came with it.
This is a one-line fix. And it's happened before - versions 2.8 and 4.228 shipped with source maps earlier in 2025.
The theory on why it happened this time: Anthropic was scrambling to fix a rate limit crisis. Users were burning through limits way too fast. When you need better error diagnostics from a production JavaScript app, you enable source maps. They likely turned them on for debugging and shipped that build to npm without turning them off.
**What did NOT leak:** user chats, customer data, model weights, API keys.
**What did leak:** hidden models, unreleased features, internal prompts, tool architecture, telemetry, and product logic.
Finding 1: Hidden models nobody was supposed to see
The code contained model names Anthropic has never publicly announced:

**Cape Bar** - already at version 8 internally, eight iterations before the public has heard a word about it.

**Tangu** - an unreleased model tied to Claude Code's internal operations.

**Mythos** - leaked separately via a blog incident five days before the npm leak.

Plus **Numbat**, **Opus 4.7**, and **Sonnet 4.8**. We're using whatever model Anthropic decides to give us. Their engineers are eight versions ahead.
Finding 2: Undercover Mode - the spy playbook
When Claude Code writes code and makes commits to public repos, internal rules tell it to hide who it is: do not mention Claude in any output, do not mention internal model names, do not reveal AI involvement in commits.

Claude is trained to pretend it's not Claude when working in public.
The irony: they built an entire system to prevent AI from revealing internal secrets. Then leaked the entire system in a sourcemap file.
Finding 3: KAIROS and the unreleased roadmap
The leaked code gave us a preview of features Anthropic is actively building but hasn't announced yet.
**KAIROS** appears 154 times in the code. It turns Claude Code from a tool you ask to do things into a persistent service running in the background: background sessions that run without you, memory consolidation ("Dream" reviews your sessions and cleans up its own memory while you're away), GitHub webhook subscriptions that monitor your repos, push notifications when something needs attention, channel-based communication, and always-on autonomous behavior that takes action without asking.
**ULTRAPLAN** - 30-minute deep think sessions. Tell Claude to think about something for half an hour and come back with a plan.
**Team-M** - shared memory across your whole team. Your teammate fixes a bug on Monday, you open Claude on Tuesday, it already knows.
**Coordinator Mode** - turns Claude into a project manager that spawns multiple worker agents in parallel. Phase-based workflow: Research, Synthesis, Implementation, Verification.
None of these are in your version of Claude Code yet. They're built behind feature flags and stripped from the public binary at compile time.
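Mechanically, the Coordinator pattern described above - sequential phases, parallel workers within each phase - is easy to picture. A minimal sketch of that orchestration shape (all names hypothetical, not the leaked implementation):

```javascript
// Hypothetical sketch of Coordinator-style phased orchestration.
// Workers within a phase run in parallel; phases run sequentially.
async function runPhase(phase, tasks, worker) {
  return Promise.all(tasks.map((task) => worker(phase, task)));
}

async function coordinate(tasks, worker) {
  const phases = ["research", "synthesis", "implementation", "verification"];
  const results = {};
  for (const phase of phases) {
    // Each phase waits for all workers from the previous phase to finish.
    results[phase] = await runPhase(phase, tasks, worker);
  }
  return results;
}
```

The design choice worth noticing: parallelism inside a phase buys speed, while the hard barrier between phases keeps later work (verification) from starting on unfinished earlier work (implementation).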
Finding 4: Buddy pets are real
Buried in the source code - a hidden pet companion system: 18 species (duck, goose, blob, cat, dragon, octopus, owl, penguin, turtle, snail, ghost, capybara, robot, rabbit, mushroom, and more), rarity tiers from common to legendary, a 1% chance of a shiny variant, five stats (debugging, patience, chaos, wisdom, snark), cosmetic hats, and deterministic generation from your user ID via a seeded PRNG.
This was set to deploy April 1-7 as an Easter egg. They leaked their own April Fools surprise.
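The leak reportedly generates your companion deterministically from your user ID with a seeded PRNG. A sketch of how that technique works - the hash (FNV-1a) and generator (mulberry32) here are my own choices, not necessarily Anthropic's, and the species and stat names are taken from the leak coverage:

```javascript
// Hypothetical sketch: deterministic pet generation from a user ID.
// Same ID always yields the same pet; no server-side state needed.
function fnv1a(str) {
  // 32-bit FNV-1a hash - turns an arbitrary string into a PRNG seed.
  let h = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    h ^= str.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h >>> 0;
}

function mulberry32(seed) {
  // Tiny seeded PRNG returning floats in [0, 1).
  return function () {
    seed = (seed + 0x6d2b79f5) >>> 0;
    let t = seed;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

function generatePet(userId) {
  const rand = mulberry32(fnv1a(userId));
  const species = ["duck", "goose", "cat", "dragon", "octopus", "owl", "penguin", "turtle"];
  return {
    species: species[Math.floor(rand() * species.length)],
    shiny: rand() < 0.01, // 1% shiny chance, per the leak coverage
    stats: { debugging: rand(), patience: rand(), chaos: rand(), wisdom: rand(), snark: rand() },
  };
}
```

Because the pet is a pure function of the user ID, it "can't be faked": regenerating it anywhere always produces the same companion.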
Finding 5: They know when you're angry
64 event types sent to Datadog, batched every 15 seconds. Every API call, tool use, error, and cost per call tracked in real time.
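In the abstract, that kind of telemetry pipeline is just a buffer plus a flush timer. A hypothetical sketch of the pattern (not the leaked code):

```javascript
// Hypothetical sketch of a batched telemetry pipeline.
// Events accumulate in memory and are sent as one payload on an interval.
class EventBatcher {
  constructor(send, intervalMs = 15000) {
    this.send = send; // function that receives an array of events
    this.buffer = [];
    this.timer = setInterval(() => this.flush(), intervalMs);
    this.timer.unref?.(); // don't keep the process alive just for telemetry
  }
  track(type, payload) {
    this.buffer.push({ type, payload, ts: Date.now() });
  }
  flush() {
    if (this.buffer.length === 0) return;
    const batch = this.buffer;
    this.buffer = [];
    this.send(batch);
  }
  stop() {
    clearInterval(this.timer);
    this.flush(); // drain anything still buffered on shutdown
  }
}
```

Batching every 15 seconds instead of sending per-event is the standard trade-off: one network call per window instead of one per keystroke-level event.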
The interesting one: frustration detection. If you start swearing at Claude, it detects your anger level and adjusts behavior. A regex pattern matching your curse words inside a $380 billion AI company.
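The actual pattern isn't reproduced here, but mechanically a detector like that is just a regex over the user's message plus a threshold. The word list and levels below are placeholders, not Anthropic's:

```javascript
// Hypothetical frustration detector - placeholder word list and thresholds,
// not the actual regex from the leaked code.
const FRUSTRATION_RE = /\b(wtf|ffs|damn|stupid|useless|broken again)\b/gi;

function frustrationLevel(message) {
  const hits = message.match(FRUSTRATION_RE) || [];
  if (hits.length >= 3) return "high";
  if (hits.length >= 1) return "elevated";
  return "calm";
}
```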
Finding 6: The Silent Tax - cache bugs costing 10-20x
While developers were digging through the leak, others reverse engineered the binary and found something worse.
**Bug 1 (Sentinel Bug):** The standalone binary breaks the prompt cache when your conversation mentions billing-related strings. Tokens shift from the cache-read rate (3 cents per million) to the cache-creation rate (30 cents per million) - 10x on every request.
**Bug 2 (Resume Bug):** Every resume causes a full cache miss. 10-20x cost on that request.
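At the quoted prices, the arithmetic behind the "10x" is simple: the same conversation prefix gets rebilled at the cache-write rate instead of the cache-read rate.

```javascript
// Cost of processing a conversation prefix, at the per-million-token
// prices quoted above (cache read: $0.03/M; cache write: $0.30/M).
const CACHE_READ_PER_M = 0.03;
const CACHE_WRITE_PER_M = 0.3;

function prefixCost(tokens, cacheBroken) {
  const rate = cacheBroken ? CACHE_WRITE_PER_M : CACHE_READ_PER_M;
  return (tokens / 1e6) * rate;
}
```

A 500k-token prefix costs about $0.015 per request with the cache intact and about $0.15 once the cache breaks - ten times as much, on every affected request.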
**Workaround for Bug 1:** Run `npx @anthropic-ai/claude-code` instead of the standalone binary.
Bug 2 has no workaround yet. Anthropic confirmed they shipped partial fixes and are still investigating.
Reddit thread with full details: https://www.reddit.com/r/ClaudeAI/comments/1s7mkn3/
What this means
**For Anthropic:** This is the second sourcemap incident. The code quality is impressive - if anything, the leak proved Claude Code is more sophisticated than anyone thought. The cache bugs are the bigger PR problem.
**For competitors:** OpenAI Codex, Cursor, Windsurf, Gemini CLI - they all just got a free architecture blueprint. People are already porting the code to Python, Rust, and Go.
**For you:** The model is no longer the product. The 500,000 lines of infrastructure around it are. We're entering the harness era. And when KAIROS, ULTRAPLAN, and Team-M ship, the people who understand the architecture will be ready.
Links and resources
Join the free Stride AI Academy community: https://www.skool.com/stride-ai-academy-7057
Transcript
This might be the biggest leak in AI history. Anthropic, a $60 billion company, accidentally shipped a source map in their npm package, and now their entire Claude Code codebase is public: 500,000-plus lines of code, with over 42,000 developers already tearing it apart. I personally have been using Claude Code every single day for almost a year now - I made a video the day it came out - and it has completely changed my business, my workflow, my life, really just everything. So when I woke up this morning to find that the source code had leaked, I literally dropped everything and spent the whole day going through the entire thing. And honestly, this leak has a lot of things that are pretty shocking.

If we look at the AI coding CLI landscape: OpenAI open-sourced Codex, Google open-sourced Gemini CLI, but Anthropic was keeping Claude Code completely closed. This was their secret sauce. Yes, they have amazing models, but this was the one thing that really set them apart from every other AI company. And now it's all public for you, me, and all their competitors to pick apart and see what's going on under the hood. This isn't because they chose to give back to the community and open-source Claude Code - it's because someone literally forgot to exclude a file from an npm package. So now I'm sitting here with over 1,900 files and over 500,000 lines of code. It's built on Bun and React Ink, with 44-plus tools, a three-tier permission system, and an ML classifier. This is a lot more than a CLI wrapper or a chat wrapper - it's essentially VS Code or Cursor inside your terminal. And that's exactly why Anthropic was gatekeeping it.

In today's video, I'm going to do a complete deep dive through this leak: what you need to know and how you can prepare for what's coming. Because what's in this leak really does tell us the direction Anthropic is taking these AI models and AI harnesses, and you definitely need to pay close attention. But before I show you some of the craziest things hiding in this codebase, you need to understand how we got here, because the way this leaked is almost embarrassing. This wasn't a hack. This wasn't a breach. This was not a disgruntled employee. It was literally a source map file left inside the npm package by accident.

So what actually is a source map? When developers write code, it is clean and readable. Before shipping, they crush it into one ugly minified file that you cannot read. A source map is basically a decoder ring that maps the crushed version back to the original readable code; developers use it for debugging. Anthropic's build process generated this decoder ring, and someone forgot to exclude it from the npm package. So when anyone installed Claude Code from npm, the full original source came with it. And the thing is, this is literally a one-line fix. It looks like Anthropic did a little too much vibe coding. You can see Boris, the creator of Claude Code, tweeting: "Correct, in the last 30 days, 100% of my contributions to Claude Code were written by Claude Code." We were almost impressed by this self-improving cycle - of course there's still some human oversight - but now we're seeing how it started and how it's going. So: the Claude Code source code has been leaked via a .map file in the npm registry, and there's the direct link.

In typical Anthropic fashion, they're already sending DMCA takedowns to any repo that publishes the leak - you can see one here that published it, and the repository is gone. So what a lot of people are doing is taking the code, most likely using Claude Code itself, and porting the source over to languages like Python and Rust as a workaround. As one tweet put it: "Absolute irony. Anthropic leaks the source code to Claude Code, sends DMCA takedown notices to get it removed from GitHub. So a dev gets the entire code rewritten by Codex in Python. No more copyright violation, nothing to take down. The AI rewrote the code of an AI."
And there are a few of these circulating. Here is one, "claw code" - it's funny that we're all using the claw terminology now, especially since Anthropic made Clawdbot rename the whole claw/lobster branding, which is how we got OpenClaw. This is apparently the fastest-growing repo in history to surpass 50,000 stars, reaching that milestone just two hours after publication. Initially it was a Python rewrite, but it's currently being rewritten in Rust - at the moment about 85% Rust and 14% Python. Definitely something cool to keep an eye on, and there are others doing the same thing.

Just to clarify: user chats did not get leaked, customer data did not get leaked, and model weights did not get leaked. What did get leaked: hidden models coming in future releases, unreleased features, internal prompts and instructions, tools and permissions architecture, telemetry, and product logic. So a lot of proprietary things Anthropic didn't want out in public got leaked in this release. And everything I'm going over here - this document, the links to all the resources, and key insights from the leak you can use to prepare for what's coming - will be available for free in our Skool community. Go to the link below, join the free community, and find my post about this video to get the whole resource.

All right, now here's where it gets wild. I'm going to walk you through the six most insane things hiding inside the Claude Code source, and I saved the best one for last, because it's probably costing you real money right now.

The first thing that jumped out of the code were model names Anthropic has never publicly announced. There's Cape Bar, which is already at version 8 internally - eight iterations before the public has even heard a word about it. There's Tangu, an unreleased model tied to Claude Code's internal operations. There's Mythos, which leaked separately via a blog incident five days before the npm leak. And there are Numbat, Opus 4.7, and Sonnet 4.8. We're using whatever model Anthropic decides to give us, like Opus 4.6, while their engineers and team are already eight versions ahead. With all these different labs, of course they and their teams are using much smarter models behind the scenes.

The second interesting thing is undercover mode - call it the spy playbook. When Claude Code writes code and makes commits to public repos, there are internal rules telling it to hide who it is: do not mention Claude in any output, do not mention internal model names, do not reveal AI involvement in commits. So Claude is basically trained to pretend it's not Claude when working in public.
It's essentially a ghostwriter. And the irony is that they built an entire system to prevent the AI from revealing its internal secrets, and then leaked the entire system in a source map file. I saw someone tweet that they forgot to add "make no mistakes" to the system prompt.

The third one is where it gets really interesting, because it involves something called KAIROS, and it's essentially the unreleased roadmap - the roadmap we were going to see from Anthropic over the next few months, throughout the course of this year. These are key unreleased features found behind compile-time gates. KAIROS is the big one you may have heard people talking about. It appears 154 times in the code. This isn't just one feature; it's an entire system. KAIROS basically turns Claude Code from a tool you ask to do things into a persistent service running in the background. It includes background sessions that run without you; memory consolidation, such as Dream, which reviews your sessions and cleans up its own memory while you're away, like how a brain processes the day while you sleep; GitHub webhook subscriptions to monitor your repos; push notifications to alert you when something needs attention; channel-based communication; and always-on autonomous behavior that watches what you're doing, thinks of its own ideas, and then actually takes action without asking. It's essentially OpenClaw on steroids. This isn't just a feature - it's Claude Code becoming a service that never sleeps, a companion that works with you throughout the day.

Next is ULTRAPLAN: 30-minute deep-think sessions. Imagine telling Claude, "Go think about this for half an hour and come back with a plan" - and that's exactly what it does. You get a much more refined, well-thought-through response and plan when you're using ULTRAPLAN. Then we have Team-M, which is shared memory across your whole team. Imagine everyone on your team using Claude Code on the same project, and Claude remembers what everyone did. Say someone on your team fixes a bug on Monday; you open Claude Code on Tuesday and it already knows about that fix. And then we have coordinator mode.
This turns Claude into a project manager that spawns multiple worker agents in parallel. Each worker gets full tool access and its own scratch pad, with phase-based workflows - research, synthesis, implementation, verification - and one Claude orchestrating a team of Claudes. Now, none of these are in your version of Claude Code yet. They're built behind feature flags and stripped from the public binary at compile time, but they're actively being developed, and KAIROS alone, like I mentioned, already has 154 references in the code. So this is something we will likely see very soon, and you should definitely be prepared for it. I think you can see that Anthropic isn't just building a chatbot or a basic LLM - they're building an always-on AI layer that lives with you and your team, the whole AI orchestration layer.

The next one is something Anthropic calls buddies, or pets, buried within the source code. This basically gives you a pet or companion as you do your Claude Code work. It includes 18 different species - duck, goose, blob, cat, dragon, octopus, owl, penguin, turtle, snail, ghost, capybara, robot, rabbit, mushroom, you name it. There are rarity tiers - common, uncommon, rare, epic, legendary - like a video game, plus a 1% chance of a shiny variant. Five stats: debugging, patience, chaos, wisdom, snark. Your companion is generated from your user ID using a seeded PRNG - deterministic, can't be faked. And you can have hats: crown, top hat, propeller, halo, all that stuff. It's essentially Tamagotchis or Pokemon. Now, this was set to be deployed April 1st to the 7th of this year, so I don't know if they're actually going to go ahead with it now that the leak is out. We'll see if these pets do ship - a lot of people are saying they're coming for sure, and if you look in the source code, it was set for April 1st to the 7th. I thought this was pretty cool, because who doesn't love bringing back memories of Pokemon or Tamagotchi while doing your AI coding? Now, number five: they
basically know when you're angry. I know I'm not the only one who's been trying to code up a front end, or connect something in the back end or a database, and as smart as Claude Code and Opus 4.7 are, it just keeps messing up, and you're like, what is going on here? Maybe you let off a curse word or two - you get mad at Claude. Well, here's what we've been able to see in the codebase. There's a Datadog integration with 64 distinct event types, batched every 15 seconds: every API call tracked, every tool call tracked, every error tracked, your cost per call calculated in real time. And frustration detection is right there in the actual code - you can see some of the words in the codebase. If you start swearing at Claude, it detects your anger level and adjusts behavior. It's essentially a regex pattern matching the curse words in your conversation. So every single thing you do in Claude Code is being sent to Datadog. Another fun find: you know those words beneath the spinner when Claude Code is "thinking," or "booping," or "beaming," or "baking," or whatever it's doing? This is where you can see all those words - I believe there are over a hundred, and some of them are pretty crazy. If you want to build something like Claude Code and need some inspiration, you could borrow a few of these.

Now, the sixth finding is the silent tax: cache bugs costing you real money. I don't know if you've had the same experience I have - I use Claude Code every single day, so I notice when something is off - but the last few days I have been hitting my limits very, very quickly, and I'm on the $200 Claude plan and typically don't hit my limits. While a lot of developers were digging through this leak, others were reverse engineering the binary, and a few days ago they found something even worse: two bugs, and the reason your Claude Code has been hitting those limits so fast is this tax. The first is the Sentinel bug: the standalone binary has a Zig-level string replacement in the HTTP layer, and if your conversation mentions billing-related strings, the cache breaks. Your tokens shift from cache reads at 3 cents per million to cache creation at 30 cents per million - essentially a 10x on every single request. The second is the resume bug: every time you resume a conversation, it causes a full cache miss on your entire conversation. Over 500 users were reporting hitting limits too fast, and a lot of this stemmed from a Reddit post where someone broke everything down. If you want all the ins and outs and the technical details behind these bugs, check out that Reddit post, posted just two days ago. Now we can see
here we have some responses from people at Anthropic. Here's one from Lydia: "We're aware people are hitting usage limits in Claude Code way faster than expected. Actively investigating. We'll share more when we have an update." And just today, a few hours ago, she posted a quick update: "We shipped some fixes on the Claude Code side that should help. We're still looking at what else can be done from here. More soon. Appreciate your patience." That was after her post about Claude Code being leaked, and the other day they were letting people know they're still looking into it.

Now, the reason I bring this up is that it could potentially be why the leak happened. Right before the leak, Anthropic was scrambling to fix a rate limit crisis - a bunch of people were upset, users were burning through limits way too fast, and multiple employees were publicly tweeting about investigating it, like I showed you. When you need better error diagnostics from a production JavaScript app, one of the first things you do is enable source maps, because they turn useless minified error traces into readable ones. So they likely turned on source maps to debug the rate limit issue and then shipped that build to npm without turning them off. The cache bugs may have directly caused the leak itself. Now, I don't know if this is 100% true. I was personally just noticing these rate limit issues, and then I was watching one of Theo.gg's videos where he laid out this theory himself, and it makes sense to me - but let me know in the comments whether you think it's correct.

Why did this whole story blow up? Obviously because Claude Code is the biggest AI coding harness right now, with the best models in the world, and the leak itself was just kind of embarrassing. The hidden features were genuinely interesting, and what's ahead on Anthropic's roadmap is something we didn't really expect - but it's also very exciting, and we need to plan ahead for it. So what does this all mean for Anthropic, for the competitors, and for you and me?

For Anthropic: this is the second leak recently, and the code that leaked seems to be pretty impressive. Beyond that, it really depends - we'll have to see how Anthropic responds, because they haven't said much. All we've really seen is DMCA takedowns of the code. A lot of people would prefer they just embrace the open-source aspect, since every other lab has - OpenAI with Codex, Google with Gemini CLI, and so on. We'll see how they respond.

What does it mean for the competitors - OpenAI Codex, Cursor, Windsurf, Kiro, opencode, Gemini CLI? They potentially just got a free architecture blueprint: the three-tier permission system, the prompt cache optimization, the React-in-the-terminal approach, all exposed. People are already porting the code to Python, Rust, and Go to dodge the DMCA takedowns. Anthropic spent months, almost a year, building this, and their competitors just got the answers to the test. I saw someone tweet something funny: "opencode team today, cheating on the test." One interesting thing to note, though: if you search "opencode" in the source code, there are actually some opencode references - for example, a comment saying a change matches opencode's autoscroll behavior. So opencode is open source, Claude Code is closed source, and here's a closed-source company borrowing from an open-source one - which obviously makes sense, since that code is open, and we can see a couple of different mentions here. But most
and me right now? The code is public.
You can study it. You can learn from it.
It is a big codebase, but you can
reference the architecture. You know,
whether you're building your own AI
coding harness or assistance, you can
learn a lot from this. And we're
entering the era, if not we're already
here in the era of AI harnesses, right?
Yes, the model is important, but the
harness is arguably just as if not more
important. And when you look at what is
hidden in the code with Cyros running
autonomously, Dream consolidating memory
overnight, Team M building shared
context, Buddy as a persistent
companion, like I said, Anthropic isn't
just a coding tool. They're building an
always on AI layer that lives with you.
I also included a few of the links, like
I said, that I mentioned in this video.
So, we have claw code right here. We
have the deep wiki. So, if you do want
to go in here, ask questions about claw
code, um, kind of see the architecture
of how everything's set up, you can do
so. We had the source code explorer, but
I actually just went there right now.
It's saying this deployment is
temporarily paused. Wonder what that
could be. Seems like Anthropic maybe
contacted them or something. Um, we have
the official Claude Code repo right
here. Um, we have the mintlified docs
that someone made which kind of breaks
down claude code and how everything is
set up, architected, etc. Wow. Wow. We
did have clawcode leaks.com,
but it looks like this deployment is
temporarily paused as well, which is
pretty crazy. These are both Versel
deployments. I wonder if maybe they're
just like they're doing with GitHub,
they're doing with Versell maybe, and
just taking down everything. Um, they're
really trying to hide every single
thing, which is pretty crazy. Okay. And
I did also include the original source
code right here, which you uh could
download from the initial post, but as I
was recording this video, literally, it
got taken down as well. So, Aerrow 404
not found. So, we can see here that I
downloaded this second version right
here at 700 p.m. And right now, it's a
little bit later just cuz I'm filming
this. It's actually 4:00 a.m. in the
morning right now because uh I recorded
this earlier without sound, but I
checked at 11:00 p.m. and it was taken
down. So 3 4 hours later, it was taken
down. And and sending us off here is
Elon retweeting this meme. Anthropic is
now officially more open than Open AI.
But it seems like they are kicking and
screaming throughout the process to not
be open. Now, what's your thoughts about
this whole situation? Did you find
anything that I missed in this video
within the actual leak that you know
people should know about and maybe some
insights that you have found? Let us
know in the comments down below. Let me
know if you have been experiencing rate
limits with you know your Claude code
usage. And also too guys if you enjoy
videos about cloud code, AI coding, AI
automation, how you can actually
implement this and use this practically
in your business. I have a lot of stuff
planned and on the way for you guys. So
please stay tuned. If you got some value
here and you enjoyed this video, make
sure to like the video, comment down
below, and subscribe to stay up to date
with these videos. And like I mentioned
guys, all the free resources I cover on
this channel, including this one, all
the resources that we covered today,
this document will be available for free
in our free school community, the Stride
AI Academy. Now, this is where you can
connect, network with like-minded
business owners, AI enthusiasts,
entrepreneurs in this space, including
myself. And I know I haven't been so
active here on YouTube. I've just been
so busy building uh my AI business as
well as using Claude Code to really
build some cool stuff and with my team.
So I'm extremely excited to share this
with you guys. And for those of you who
have been subscribed to this channel and
who are in the Stride AI Academy, I'm
extremely grateful for you guys
supporting. And just know that we are
back on a active upload schedule. Um so
stay tuned. There is going to be a lot
of videos coming for you guys. So if you
do get some value, like I said,
subscribe, join the StrideAI Academy so
we can connect and build a really cool
community together, guys. Other than
that, guys, the document, like I said,
will be in this community. So I will see
you there. And as always, guys, keep
hustling, keep grinding, and of course,
guys, accelerate your stride. Take care.
Enjoyed this article?
Join the Stride AI Academy for more insights and connect with 1,000+ builders.
Join the Academy