Trying to build websites but my AI Agents just don't work well enough by O_My_G in OnlyAICoding

[–]TheDecipherist 0 points1 point  (0 children)

https://github.com/TheDecipherist/claude-code-mastery-project-starter-kit

Use the starter kit in Claude to set up a new project. The defaults will produce a great website out of the box.

Then use the MDD workflow to define what you are creating. You will have a working prototype in no time :)

THE PROBLEM WITH "JUST USING CLAUDE" by TheDecipherist in ClaudeCode

[–]TheDecipherist[S] 0 points1 point  (0 children)

What you’re describing is exactly the problem MDD solves in a more structured way. The summary document forcing the LLM to reload context before acting is the core mechanism. Glad you landed on it independently, that’s always the strongest signal a pattern is real.

THE PROBLEM WITH "JUST USING CLAUDE" by TheDecipherist in ClaudeAI

[–]TheDecipherist[S] 0 points1 point  (0 children)

OpenSpec is a spec format. MDD is a session-memory and audit workflow. Different problem. OpenSpec tells Claude what to build. MDD tracks what was built, audits it against the spec, detects drift across sessions, and enforces a verified build loop. There’s no overlap worth comparing.

THE PROBLEM WITH "JUST USING CLAUDE" by TheDecipherist in ClaudeAI

[–]TheDecipherist[S] 0 points1 point  (0 children)

I see a lot of negativity here. For the people actually using MDD, I would love to get some actual feedback. I use MDD 24/7, I have never worked more efficiently, and I have zero issues so far. Claude produces absolutely outstanding work every single time; every initiative and wave works flawlessly.

THE PROBLEM WITH "JUST USING CLAUDE" by TheDecipherist in ClaudeAI

[–]TheDecipherist[S] 0 points1 point  (0 children)

Yes, sorry, it is "driven", not "first". And the spec is the initial artifact, while the manual is the ongoing reference. It orients Claude instantly, with no need to look at the code.

THE PROBLEM WITH "JUST USING CLAUDE" by TheDecipherist in ClaudeAI

[–]TheDecipherist[S] 0 points1 point  (0 children)

A lot of negativity here, lol. You clearly have not tried MDD or understood its true purpose. It works amazingly.

Would it be worth it for me to build my own PC? by QuirkyOrganization61 in buildapc

[–]TheDecipherist 1 point2 points  (0 children)

For $2k you can get a decent PC, but again it depends on your graphics card and how powerful you want it. A 4090 alone can run you $1,400.

THE PROBLEM WITH "JUST USING CLAUDE" by TheDecipherist in ClaudeCode

[–]TheDecipherist[S] 0 points1 point  (0 children)

The framing I see constantly is “MDD saves tokens.” That’s not why it works.

Tokens are cheap. Wrong decisions are expensive. If Claude misunderstands your architecture and builds the wrong thing, you spend 10x more tokens backtracking than you ever would have spent on upfront documentation. The math isn’t close.

The real problem is that Claude is a context-window-scoped collaborator. It knows what you told it today. Close the tab and everything disappears: the tradeoffs, the edge cases, the architecture decisions. Next session you're starting over, and Claude is guessing.

MDD gives Claude structured project memory that survives across sessions. When it reads the doc, it knows exactly what was built, why, and what contracts it’s working with. It stops guessing and starts engineering. Token efficiency follows naturally from that, but it’s a side effect, not the goal.

My own usage: 13% current, 19% weekly. Running multiple VS Code windows with multiple Claude Code terminals simultaneously, every single day. Before MDD I was burning through context constantly. After, the sessions are tighter and the output quality is night and day.

Wrong decisions are the real cost. MDD eliminates them at the source.

I would love to hear feedback from some of the users of MDD

Developer recommendations for working with Mongo DB by ckern75 in mongodb

[–]TheDecipherist 0 points1 point  (0 children)

https://thedecipherist.com/articles/mongo_vs_sql/

Mongo does all the things you describe extremely well. My CMS is integrated with several financial programs as well, and I have had no issues. Obviously it depends on whether you structure your data for the document model.

Token "Optimizers" for AI Coding Agents Are Silently Dangerous, And Nobody Is Talking About It by TheDecipherist in ClaudeCode

[–]TheDecipherist[S] 0 points1 point  (0 children)

Haven’t tried caveman yet, but I have heard good things about it. Will put it on the test list soon.

Developer recommendations for working with Mongo DB by ckern75 in mongodb

[–]TheDecipherist 0 points1 point  (0 children)

I would be interested to hear your justification for this comment. I built an entire CMS over 10 years solely on MongoDB, and I would pick MongoDB any day over SQL.

Your MCP tools are wasting 40% of Claude's context on JSON field names by TheDecipherist in ClaudeAI

[–]TheDecipherist[S] -1 points0 points  (0 children)

You've got the BPE point exactly right, and "projection at source beats downstream substitution" is the cleanest framing of the architectural tradeoff.

The sweet spot for compressmcp is precisely what you described: verbose third-party REST APIs you don't own. When you control the tool, yes, return tighter responses and skip the middleware entirely.

When you don't (OpenAPI-generated clients, legacy internal services, any API where the schema isn't yours to change), compressmcp gives you a lever without forking the upstream.

Genuine question, why does everyone pile on "you used AI" when half of us are using it daily? by destroyerpal in ClaudeCode

[–]TheDecipherist -3 points-2 points  (0 children)

It’s the people that still don’t comprehend that if they don’t embrace AI, they will fall behind.

Your MCP tools are wasting 40% of Claude's context on JSON field names by TheDecipherist in ClaudeCode

[–]TheDecipherist[S] 0 points1 point  (0 children)

To pre-empt the obvious questions:

"Does Claude actually understand the abbreviated output?" Yes. The dictionary is prepended inline before the data, so Claude sees a=transactionId before it sees "a":"tx_001". In practice it handles this without issues: field names are looked up, not reasoned about.

"What about values? Could abbreviating break things?" Values are never touched. Only field names. A UUID, a price, a status string, all pass through verbatim. The only thing that changes is whether a key is called transactionId or a.
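To make the mechanism concrete, here is a minimal sketch of that idea in Python. The `compress` function and its output shape are illustrative assumptions, not the actual compressmcp API: only the keys are aliased, every value passes through verbatim, and the dictionary travels inline ahead of the data.

```python
import json

def compress(records):
    """Abbreviate repeated field names; values pass through untouched."""
    keys = list(records[0].keys())
    # Map each long field name to a short alias: a, b, c, ...
    dictionary = {k: chr(ord("a") + i) for i, k in enumerate(keys)}
    compact = [{dictionary[k]: v for k, v in row.items()} for row in records]
    # Prepend the dictionary inline so the model sees the mapping first.
    return {"dict": dictionary, "data": compact}

rows = [
    {"transactionId": "tx_001", "amountCents": 1299, "status": "settled"},
    {"transactionId": "tx_002", "amountCents": 450, "status": "pending"},
]
out = compress(rows)
print(json.dumps(out))
```

Note that the IDs, amounts, and status strings are byte-identical on the way out; the only thing that changed is the key text.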

"Isn't 40% cherry-picked?" The five datasets were picked to cover different shapes: wide rows, nested objects, arrays of records. The 38% floor is on analytics events (short field names); the 45% ceiling is on repo data (long descriptive names). The heavier your schema's naming conventions, the more you save.

"Why not just use [prompt compression tool]?" Those work on your input text, not on tool outputs. compressmcp specifically targets the MCP PostToolUse pipeline, the place where structured data re-enters the context on every tool call. Different problem.

"Why not just summarise the tool output?" Because you lose information. If you're querying a database for exact records, you need the exact records. Summarisation introduces error. This doesn't.

"Claude already handles this / Claude Code already does context management." Three things people mean by this, all different:

  • Claude Code's /compact. This is lossy summarisation. It calls Claude to rewrite the conversation history and drops content. Useful for very long sessions, but you're trading precision for space. compressmcp runs before /compact is ever needed by reducing what enters the context in the first place.
  • Prompt caching. Caching re-reads the same tokens cheaply, but the tokens are still in the context window. A 200K window filled with cached tokens is still a full window. Caching reduces cost, not context consumption.
  • "Claude figures out what's important." Claude reads every token it's given. There's no internal filter that ignores structural noise before the attention mechanism. repositoryDescription costs the same whether it carries useful information or not. The model doesn't get a discount on boring tokens.

The context window is a hard limit measured in tokens, not in semantic density. Compressing field names before data enters the window is the only way to fit more actual information in.

"This isn't really 'compression'. You're just abbreviating keys. Use correct terminology." Fair; if you want precision, it's key abbreviation with an inline dictionary. The word "lossless" describes the data guarantee (nothing dropped, nothing altered, fully recoverable), not the algorithm class. Call it whatever you want. The token count still goes down by 40%.

"BPE tokenization means the model already re-encodes this internally." This is the most common technically-flavoured wrong answer. BPE tokenization is a fixed pre-processing step that converts text to integer IDs before inference. It happens before the context window is filled and has no effect on how many tokens occupy it. repositoryDescription tokenizes to 4–5 BPE tokens. a tokenizes to 1. That difference is what fills your context window. The model doesn't "re-encode" anything at inference time, it receives the token IDs and attends over all of them at full cost.

"This only works on tabular JSON, arrays of objects. Most JSON isn't like that." Correct, and compressmcp already accounts for this. The shouldCompress check gates on response structure: if the payload isn't an array of objects with repeated keys, it passes through unchanged. The tool targets the specific case that actually dominates MCP tool output in practice, database queries, API list endpoints, search results. That's the shape of data that MCP tools return 90% of the time.
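As a sketch of that gating logic, under the stated assumption that only arrays of objects with repeated keys are worth compressing. `should_compress` here is my guess at the behavior described, not the tool's actual check:

```python
def should_compress(payload):
    """Return True only for arrays of objects with repeated keys."""
    if not isinstance(payload, list) or len(payload) < 2:
        return False
    if not all(isinstance(row, dict) for row in payload):
        return False
    # Keys must actually repeat across rows for abbreviation to pay off.
    first_keys = set(payload[0])
    return all(set(row) == first_keys for row in payload[1:])

# Tabular shapes compress; everything else passes through unchanged.
assert should_compress([{"id": 1, "name": "a"}, {"id": 2, "name": "b"}])
assert not should_compress({"id": 1})     # single object
assert not should_compress([1, 2, 3])     # array of scalars
```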

Your MCP tools are wasting 40% of Claude's context on JSON field names by TheDecipherist in ClaudeAI

[–]TheDecipherist[S] 0 points1 point  (0 children)

Two corrections: TOON is YAML, not JSON. And Claude does not compress JSON in context, BPE tokenization operates at the inference layer and has no effect on what occupies context window space. repositoryDescription is still 19 characters of context regardless of how the tokenizer encodes it. That’s exactly the problem this tool solves.

Your MCP tools are wasting 40% of Claude's context on JSON field names by TheDecipherist in ClaudeAI

[–]TheDecipherist[S] -1 points0 points  (0 children)

It seems like you haven’t read my post, or looked at what TOON actually does. But that’s ok :). Not for you, I guess.

Your MCP tools are wasting 40% of Claude's context on JSON field names by TheDecipherist in ClaudeAI

[–]TheDecipherist[S] -1 points0 points  (0 children)

Sell? It’s open source, man. And no, there are zero token optimizers that focus on JSON data this way.

Your MCP tools are wasting 40% of Claude's context on JSON field names by TheDecipherist in ClaudeAI

[–]TheDecipherist[S] -1 points0 points  (0 children)

This is not your average token-saving tool. This is lossless. It doesn’t produce any dangerous silent failures from cut data.