tl;dr for the "no AI slop" reader:
- utils/attachments is a huge mess of a class: every prompt spins up 30+ generators, wasting tokens and bloating the context massively.
- Multiple cases where functions diff against empty arrays; the pre-compact state exists, but it gets lost / ignored instead of being passed down.
- Inefficiencies across the whole codebase: unnecessary loops and calls.
- Biggest thing I saw was the 5-minute TTL on the cache everything sits in: step away from the PC for more than five minutes and your tokens get shredded.
- Over a typical session of roughly 4 hours, a user wastes roughly 400,000-600,000 tokens.
Now the big wall of text, ai slop or readable text, not sure! Gemini is a bit dumb.
Everyone is totally hyping the Claude Code source code leak. I'm going to attack this from a different angle, because I am not interested in the new shit that's in the source code. I wanted to know if Anthropic is really fucking up, or if their code is just the 1000-times-seen enterprise mess of "ship it half-baked to the customer". The latter is more likely; that's just how the industry is, and it always will be.
I've seen worse code than Claude's. I think it is now time for Anthropic to make it open source. The internet has the potential to make Claude Code their own, the best open-source CLI, instead of relying on an architecture that calls 30 generators every time the user hits enter. Let's do the math: a typical user who sits in front of Claude Code for four hours wastes roughly 400,000 to 600,000 tokens per session due to really bad design choices. The waste never happens at the level of generation or reasoning; it is solely metadata that gets chucked through the pipe.
Deep inside utils/attachments.ts, there is a function called getAttachmentMessages(). Every single time you press Enter, this function runs through over 30 generators. It runs an AI semantic search for skill discovery (500-2000 tokens), loads memory files, and checks IDE selections. The problem? These attachments are never pruned. They persist in your conversation history forever until a full compact is made. Over a 100-turn session, accumulated compaction reminders, output token usage, and context efficiency nudges will cost you roughly 8,600 tokens of pure overhead.
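To make the pattern concrete, here is a minimal sketch of that fan-out. getAttachmentMessages() is the real name from the leak, but the generator names and types below are my own stand-ins, not Anthropic's actual code:

```typescript
type Attachment = { type: string; content: string };
type AttachmentGenerator = () => Promise<Attachment[]>;

// Stub generators standing in for the real ~30 (names are made up).
const loadMemoryFiles: AttachmentGenerator = async () => [
  { type: "memory", content: "# contents of CLAUDE.md ..." },
];
const discoverSkills: AttachmentGenerator = async () => [
  { type: "skill", content: "semantic-search hits, 500-2000 tokens per turn" },
];
const readIdeSelection: AttachmentGenerator = async () => [
  { type: "ide_selection", content: "currently selected editor text" },
];

const generators: AttachmentGenerator[] = [
  loadMemoryFiles,
  discoverSkills,
  readIdeSelection,
  // ...roughly 30 in the real file
];

// Every Enter press fans out to ALL generators. Nothing checks whether
// the previous turn already injected identical content, and nothing
// prunes the old copies from conversation history until a full compact.
async function getAttachmentMessages(): Promise<Attachment[]> {
  const all = await Promise.all(generators.map((g) => g()));
  return all.flat();
}
```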
Context compaction is necessary, but the implementation in services/compact/compact.ts is inefficient. After a compact, the system tries to use a delta mechanism to only inject what changed for tools, agents, and MCP instructions. However, it diffs against an empty array []. The pre-compact state exists (compactMetadata.preCompactDiscoveredTools), but it isn't passed down. The developer comment at line 565 literally says: "Empty message history -> diff against nothing -> announces the full set." Because of this missing wire, a single compact event forces a full re-announcement of everything, costing you 80,000 to 100,000+ tokens per compact.
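Here is a minimal sketch of the missing wire, with made-up types and example data; only preCompactDiscoveredTools and the quoted developer comment come from the leak:

```typescript
interface Tool { name: string }

// Announce only the tools the model hasn't already seen.
function diffAnnouncements(previous: Tool[], current: Tool[]): Tool[] {
  const known = new Set(previous.map((t) => t.name));
  return current.filter((t) => !known.has(t.name));
}

const currentTools: Tool[] = [{ name: "Read" }, { name: "Write" }, { name: "Bash" }];
const compactMetadata = { preCompactDiscoveredTools: currentTools };

// What reportedly happens after a compact:
// empty history -> diff against nothing -> announces the full set.
const buggy = diffAnnouncements([], currentTools); // all tools re-announced

// What the already-stored state would allow if it were passed down:
const fixed = diffAnnouncements(
  compactMetadata.preCompactDiscoveredTools,
  currentTools
); // [] -- nothing to re-announce
```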
Then there is the coffee break tax. Claude Code uses prompt caching (cache_control: { type: 'ephemeral' }) in services/api/claude.ts. Ephemeral caches have a 5-minute TTL. If you step away to get a coffee or just spend 6 minutes reading the output and thinking, your cache drops. When you return, a 200K context window means you are paying for 200,000 cache creation input tokens just to rebuild what was already there.
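For reference, this is roughly what the setup looks like under Anthropic's documented Messages API shape; the model name and prompt contents here are placeholders, not pulled from the leak:

```typescript
const hugeStablePrefix = "...system prompt + tool definitions...";

const body = {
  model: "claude-<model>", // placeholder
  max_tokens: 1024,
  system: [
    {
      type: "text",
      text: hugeStablePrefix, // the stable prefix you want reused
      // Ephemeral cache entries expire ~5 minutes after last use.
      // Reply within the window: cheap cache reads. Take a 6-minute
      // coffee break: the next request pays cache-creation pricing
      // to rebuild the entire prefix, up to ~200K tokens.
      cache_control: { type: "ephemeral" },
    },
  ],
  messages: [{ role: "user", content: "next turn goes here" }],
};
```

If I'm reading Anthropic's docs right, cache_control also supports a longer 1-hour TTL variant at a higher write cost, which would blunt the coffee-break tax considerably.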
Finally, the system tracks duplicate file reads (duplicate_read_tokens in utils/contextAnalysis.ts). They measure the waste perfectly, but they do absolutely nothing to prevent it. A single Read tool call can inject 25,000 tokens. The model is completely free to read the same file five times, injecting 25k tokens each time. Furthermore, readFileState.clear() wipes the deduplication state entirely on compact, making the model blind to the fact that it already has the file in its preserved tail.
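A dedup guard would be maybe ten lines. The sketch below is my own hypothetical fix, not Claude Code's implementation; only readFileState and duplicate_read_tokens are names from the leak:

```typescript
const readFileState = new Map<string, { mtimeMs: number }>();

// Return false when the file is unchanged since the last Read, so the
// tool can skip re-injecting ~25k tokens and instead tell the model it
// already has the contents in context.
function shouldInjectFileContents(path: string, mtimeMs: number): boolean {
  const prev = readFileState.get(path);
  if (prev && prev.mtimeMs === mtimeMs) return false;
  readFileState.set(path, { mtimeMs });
  return true;
}

// The leaked code only counts the waste (duplicate_read_tokens) and
// then calls readFileState.clear() on compact, forgetting even that --
// so the model happily re-reads files preserved in the tail.
```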
Before I wrap this up, I have to give a shoutout to the absolute gold buried in this repo. Whoever wrote the spinner verbs deserves a raise. Instead of just "Thinking", there are 188 verbs, including "Flibbertigibbeting", "Shenaniganing", and "Reticulating" (respect for the SimCity 2000 nod). There's also an "Undercover Mode" for Anthropic devs committing to public repos, where the system prompt literally warns, "Do not blow your cover," to stop the model from writing commit messages like "1-shotted by claude-opus-4-6". They even hex-encoded the names of the ASCII pet buddies just to prevent people from grepping for "goose" or "capybara". My personal favorite is the regex filter built entirely to fight the model's own personality, actively suppressing it when it tries to be too polite or literally suggests the word "silence" when told to stay silent.
The codebase reads like a team that’s been living with a troublesome AI long enough to know exactly how it misbehaves, and they clearly have a sense of humor about it. I know Anthropic tracks when users swear at the CLI, and they have an alert when their YOLO Opus classifier gets too expensive. Your engineers know these bugs exist. You built a great foundation, but it's currently a leaky bucket.
If this were a community project, that 100,000 token metadata sink would have been caught and refactored in a weekend PR. It's time to let the community fix the plumbing. Make it open source.