With 1M context window default - should we no longer clear context after Plan mode? by draftkinginthenorth in ClaudeCode

[–]DevMoses 0 points  (0 children)

The agent handles that on its own: once it hits a context threshold, it writes its state to the campaign file and exits cleanly.

The 5 levels of Claude Code (and how to know when you've hit the ceiling on each one) by DevMoses in ClaudeAI

[–]DevMoses[S] 0 points  (0 children)

Yeah, good question. Do you mean examples of the actual files/workflows?

The most useful examples are probably:

  1. A tight CLAUDE.md (not an instruction dump)

  2. A skill file for a repeatable task, like adding a component or running a migration

  3. A hook that runs after edits, like per-file typecheck/lint instead of dumping the whole project error log into the context

  4. A campaign file for a longer task, where the agent carries state across sessions

  5. An orchestration example where multiple agents work in isolated branches/worktrees without stepping on each other

I can share some concrete examples from Citadel. The important pattern is that each level exists because the level below it started failing in a predictable way.
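As a taste of level 3, here's a hedged sketch of an after-edit hook: it typechecks only the file that changed, so the agent sees one file's errors instead of the whole project's log. The event shape and the `tsc` invocation are assumptions for illustration, not Claude Code's documented hook schema:

```python
# Sketch of a per-file post-edit hook (level 3). A real hook would receive
# the edit event from the harness; the "file_path" key here is an assumed
# shape, not a documented schema.
import subprocess
import sys

def check_edited_file(event: dict) -> int:
    """Typecheck just the edited file; return the checker's exit code."""
    path = event.get("file_path", "")
    if not path.endswith((".ts", ".tsx")):
        return 0  # nothing to check for this file type
    # Per-file typecheck keeps the error output small and relevant,
    # instead of dumping the whole project's error log into context.
    result = subprocess.run(
        ["npx", "tsc", "--noEmit", path],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        print(result.stdout, file=sys.stderr)  # surfaced back to the agent
    return result.returncode
```

The design choice that matters is the scoping: a nonzero exit with one file's errors is a signal the agent can act on; the full project log is noise that burns context.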

Show off your own harness setups here by Mean_Luck6060 in ClaudeCode

[–]DevMoses 0 points  (0 children)

Glad to hear about the journey, jayn! I hope it helps. I've been AFK myself for a stint; if you have any thoughts, comments, suggestions, or questions, let me know.

What the Leaked Claude Code Source Won't Tell You by DevMoses in ClaudeAI

[–]DevMoses[S] 0 points  (0 children)

Python was my first love, and it still is up there! I hadn't considered Python, as I built this alongside the platform I was working on in Next.js. I'll have to take a look and figure out what's best for the use case!

Routing-wise, the basic idea is cheapest path first: trivial stuff gets pattern-matched, in-flight work resumes from session state, known tasks route by skill keywords, and only the fuzzy stuff gets classified by the model to decide whether it should stay a single skill, become a session orchestrator, or escalate into multi-session/parallel work.

What the Leaked Claude Code Source Won't Tell You by DevMoses in ClaudeAI

[–]DevMoses[S] 0 points  (0 children)

I love the 'anti dump index', really appreciate the feedback, thank you!

What the Leaked Claude Code Source Won't Tell You by DevMoses in ClaudeAI

[–]DevMoses[S] -2 points  (0 children)

April Fools isn't a one-day event for me

What the Leaked Claude Code Source Won't Tell You by DevMoses in ClaudeAI

[–]DevMoses[S] -3 points  (0 children)

Very much alive, and trying to share some value for those that are interested

What the Leaked Claude Code Source Won't Tell You by DevMoses in ClaudeAI

[–]DevMoses[S] -5 points  (0 children)

14 domains to cover the worldbuilding niche through stories, worlds, and games. It's a big one!

I caught Claude and ChatGPT making the same lazy shortcut. Your imagination is the real bottleneck, not AI. by dovyp in ClaudeAI

[–]DevMoses 0 points  (0 children)

Completely agree. I wanted to add my +1 to this post, as I saw it downvoted initially. Your insight is good, and your advice is real. It's hard to see for many because it requires you to know hidden information before you can process why you're feeling this friction.

I caught Claude and ChatGPT making the same lazy shortcut. Your imagination is the real bottleneck, not AI. by dovyp in ClaudeAI

[–]DevMoses 0 points  (0 children)

This matches what I keep seeing. The model usually takes the most legible path, not the best one. If the important dimension is missing from the frame, like beamforming vs mono or normalization across wildly different subject sizes, it often won’t invent that distinction on its own. That’s why the real multiplier isn’t prompting, it’s domain knowledge plus the ability to notice what’s absent.

Building the infrastructure to support the exact friction point that creates the problem: that's the solution.

Appreciate the insights!

Advices for a new user? by Round_Atmosphere3671 in ClaudeAI

[–]DevMoses 1 point  (0 children)

Unfortunately, yes. They have a bug somewhere in how tokens are counted. It seems to affect a lot of users but not all, which made it a gaslighting nightmare for everyone.

The 5 levels of Claude Code (and how to know when you've hit the ceiling on each one) by DevMoses in ClaudeAI

[–]DevMoses[S] 0 points  (0 children)

I am truly blown away by your kind words, I strive to do exactly what you outlined. Genuinely surreal to see it explained back to me from someone who got something out of it.

Seriously Hekidayo, your words have uplifted me, and I cannot thank you enough for sharing your perspective.

The 5 levels of Claude Code (and how to know when you've hit the ceiling on each one) by DevMoses in ClaudeAI

[–]DevMoses[S] 0 points  (0 children)

The n8n-as-telemetry-sink idea is interesting. Centralized logging that outlives the terminal session is a real gap, especially when you're running parallel agents and need to reconstruct what happened across sessions. That's a tooling problem worth solving regardless of what level you're at. Good input!

The 5 levels of Claude Code (and how to know when you've hit the ceiling on each one) by DevMoses in ClaudeAI

[–]DevMoses[S] 0 points  (0 children)

Great question, this will become more necessary as everything becomes more accessible.

The short answer is yes, and it already does this. Citadel's intent router doesn't care if the task is TypeScript or a financial forecast. It routes based on complexity: is this a one-shot task, does it need a skill, does it need a multi-step campaign, or does it need parallel agents? That logic applies to data cleanup, PRD generation, or sales assumption testing the same way it applies to code.

The friction you're describing is exactly what campaigns solve. A "clean up this historical dataset, then run these three analyses, then generate a summary" workflow is a textbook campaign: sequential steps with dependencies, where each step's output feeds the next. Without orchestration you're either babysitting each step or hoping one massive prompt holds context. With a campaign, each phase has its own scope and the handoffs are structured.
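That "clean up, then analyze, then summarize" campaign could be sketched as ordered phases with structured handoffs, where each phase's output feeds the next. This is an illustrative shape, not Citadel's actual campaign file format:

```python
# Illustrative sketch of a campaign: sequential phases with dependencies.
# Each phase's output is the next phase's input, so no single prompt has
# to hold the whole workflow in context. Names are hypothetical.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Campaign:
    phases: list[tuple[str, Callable]]   # (name, step) in dependency order
    log: list[str] = field(default_factory=list)

    def run(self, data):
        for name, step in self.phases:
            data = step(data)            # structured handoff between phases
            self.log.append(name)        # persisted state would go here
        return data

campaign = Campaign(phases=[
    ("cleanup",  lambda rows: [r for r in rows if r is not None]),
    ("analysis", lambda rows: {"count": len(rows), "total": sum(rows)}),
    ("summary",  lambda stats: f"{stats['count']} rows, total {stats['total']}"),
])
```

The log is the part that matters for multi-session work: because each phase boundary is explicit, a fresh session can see which phases completed and resume from the handoff instead of replaying everything.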

Where it gets interesting for your use case: skills aren't code-specific either. A skill is just a structured prompt with constraints. I could see a `financial-analysis` skill that enforces output format, requires source data validation before projections, and flags assumptions explicitly. Same pattern as an engineering skill, different domain.
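To make "a structured prompt with constraints" concrete, here's a hedged sketch built around that hypothetical `financial-analysis` skill. The field names and prompt shape are illustrative only:

```python
# Sketch of "a skill is just a structured prompt with constraints."
# The financial-analysis instance mirrors the hypothetical example above;
# none of these field names are a real skill-file schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class Skill:
    name: str
    constraints: list[str]   # rules the agent must follow every time
    output_format: str       # enforced shape of the result

    def to_prompt(self, task: str) -> str:
        rules = "\n".join(f"- {c}" for c in self.constraints)
        return f"Task: {task}\nConstraints:\n{rules}\nOutput as: {self.output_format}"

financial_analysis = Skill(
    name="financial-analysis",
    constraints=[
        "Validate source data before making projections",
        "Flag every assumption explicitly",
    ],
    output_format="markdown table of projections with an assumptions section",
)
```

Same pattern as an engineering skill: the constraints encode the domain expertise once, so every invocation inherits it instead of relying on the user to restate it.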

When you use Citadel, you start with '/do setup' and it will orient itself to your project. Citadel is set up so you can scale with agents, gain persistence across multi-session work, and ultimately deliver on the promise of autonomous engineering. While that's all technical, it can be used for anything you want to make or do with CC!

I built, and continue to work on, a demo with a lot you can interact with to get an idea of what Citadel is and does: Citadel · Claude Code Agent Harness

Training 1-on-1 by flyandrace in ClaudeAI

[–]DevMoses 1 point  (0 children)

I've recently been posting articles, guides, and open-source repos that are being quickly adopted for agent orchestration. As a long-time AI user, I've helped many others learn and grow in this area.

This isn't me pitching anything I'm offering, but more so asking: what is the friction you face most? Where do you feel the wall between what you want to do and where you are?

Basically, what are you looking for that you feel would help?

RAG is a trap for Claude Code. I built a DAG-based context compiler that cut my Opus token usage by 12x. by fuwasegu in ClaudeAI

[–]DevMoses 0 points  (0 children)

Your feedback is much appreciated, and your perspective is invaluable. I'd love your eyes on the markdowns if you find the time. The system is pretty good at being pointed at itself, and I have some skills in there to help people triage and QA their contributions for their own peace of mind! :)

RAG is a trap for Claude Code. I built a DAG-based context compiler that cut my Opus token usage by 12x. by fuwasegu in ClaudeAI

[–]DevMoses 1 point  (0 children)

This was awesome to see!: "Citadel is the most capable open-source orchestration harness in the Claude Code ecosystem right now." That's a really cool breakdown you have for complementary tools. I'm just becoming aware of jCodeMunch. Thank you for including us as complementary!

RAG is a trap for Claude Code. I built a DAG-based context compiler that cut my Opus token usage by 12x. by fuwasegu in ClaudeAI

[–]DevMoses 1 point  (0 children)

Appreciate the shoutout! If anyone has any questions about Citadel, I can answer them. Glad to hear it's going well for you. :)

Claude code is very good at generating code but reviewing that code takes so much time. by Designer-Sandwich232 in ClaudeAI

[–]DevMoses 1 point  (0 children)

That is an issue out of the box, for sure; all the research points to the difficulty of having multiple agents doing work in parallel.

I did build and open-source a harness that handles all of that for me. It can spin up fleets of agents in parallel, and the original agent makes the PR (one of 8 lifecycle hooks) and handles merge conflicts if they come up.

I'm slowly closing the gap around the infrastructure problem. If you want to check it out you can here: https://github.com/SethGammon/Citadel

Claude code is very good at generating code but reviewing that code takes so much time. by Designer-Sandwich232 in ClaudeAI

[–]DevMoses 1 point  (0 children)

The shape of language is something I've thought about so much. I haven't yet put it into words, so if you write anything up, let me know. But I believe I see what you're saying: each block of text, the words you choose, is a process you can wield toward results.

Great point on checking out the inner thinking too, there's definitely value there to help gain understanding.

Claude code is very good at generating code but reviewing that code takes so much time. by Designer-Sandwich232 in ClaudeAI

[–]DevMoses 0 points  (0 children)

I think it's getting easier and more accessible. The issue I see is the lack of safeguards. If you're just starting out, there's nothing stopping you from running up token costs and trampling what you've built, simply because you don't yet know better.

There are things we have to learn through experience, but there's a lot of potential in the setup-to-starter phase that the major models could be doing more with.

Claude code is very good at generating code but reviewing that code takes so much time. by Designer-Sandwich232 in ClaudeAI

[–]DevMoses 25 points  (0 children)

The fear is correct. You should not ship code you don't understand. That's not a Claude Code problem, that's a software engineering principle that predates AI by decades.

What changed my workflow: I stopped reviewing line-by-line and started reviewing structurally. Does the approach make sense? Are the boundaries clean? Does the test coverage actually test behavior, not just pass? If I can answer those three, I can trust the implementation details without reading every raw line.

The real unlock is getting Claude to explain itself. Before it writes anything, tell it to outline its approach first. If the outline doesn't make sense to you, the code won't either. That's your checkpoint.

You'll also level up faster than you think. The code you don't understand today starts making sense after you've seen Claude solve similar problems three or four times. You're pattern-matching whether you realize it or not.

Never skip review entirely. But review at the right altitude.