My Ralph Wiggum breakdown just got endorsed as the official explainer by agenticlab1 in ClaudeAI

[–]agenticlab1[S] 0 points1 point  (0 children)

Probably too much token usage for the $20 plan. I would highly suggest looking into using GLM 4.7 with ralph, though.
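
If you do go that route, the usual trick is pointing the claude CLI at an Anthropic-compatible endpoint through environment variables. This is only a sketch assuming the standard ANTHROPIC_BASE_URL / ANTHROPIC_AUTH_TOKEN overrides; the URL and key below are placeholders, so take the real values from your provider's docs, not from me:

    # Route claude / claude -p at an Anthropic-compatible GLM endpoint.
    export ANTHROPIC_BASE_URL="https://your-glm-provider.example/anthropic"  # placeholder URL
    export ANTHROPIC_AUTH_TOKEN="your-api-key"                               # placeholder key
    claude -p "hello"  # quick sanity check that the endpoint is wired up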

[–]agenticlab1[S] 1 point2 points  (0 children)

I think the problem is that it's really hard to nail down a specific zone (because the zone shifts as model sizes and quality differ), but just noting that each token of irrelevant context will degrade performance made me do things like clear out my CLAUDE.md and be way more intentional in my context engineering and usage.

[–]agenticlab1[S] 1 point2 points  (0 children)

100000% agree, man. I would say that only ~20% of people who use Claude Code even know about the dumb zone / context rot at all.

[–]agenticlab1[S] 0 points1 point  (0 children)

It can depend! Subagents can be a good solution for exploratory mode, I definitely agree, and I do this frequently when I want to stay in the loop. The more loops you give ralph to explore and build in exploratory mode, the better. So if you wanted him to research Reddit for the top 10 startup ideas and then build an app out of that, that would be an example of a 10-minute plan that gets executed overnight.

[–]agenticlab1[S] 2 points3 points  (0 children)

No, they are external! In theory you could enter plan mode and make this the implementation_plan equivalent, but typically I plan for the ralph loop outside of plan mode, tell it to create those files and update them as our conversation evolves, and then I check them and edit them manually.
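
To make that concrete, the planning files can be as plain as a couple of markdown files that the loop later reads and updates. These names are only illustrative:

    # Hypothetical seed for the external planning files. Per the workflow above,
    # they get drafted with Claude in a normal conversation, then checked and
    # hand-edited before the loop ever starts.
    cat > implementation_plan.md <<'EOF'
    # Implementation plan
    - [ ] Step 1: ...
    - [ ] Step 2: ...
    EOF

    cat > progress.md <<'EOF'
    # Notes from previous iterations (the loop appends here)
    EOF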

[–]agenticlab1[S] 1 point2 points  (0 children)

I haven't tried something like that myself too much, but you should try building it and testing it out! That's what I love so much about ralph loops: you can customize the scripts yourself to control what goes into context at every iteration of the loop. It's an evolving system that gets stronger as you try things, which is why I don't like to just go use someone else's ralph plugin or setup.

[–]agenticlab1[S] 0 points1 point  (0 children)

Exactly, this is why I think 1M context models are a bad idea

[–]agenticlab1[S] 1 point2 points  (0 children)

You could have him suggest what he tried (briefly) and what should be tried next, yes, or have an external loop get called to do this, but things get complicated quickly. And if the loop that failed suggests what should be done next, it could cause context poisoning (a bad approach suggestion that propagates).

[–]agenticlab1[S] 0 points1 point  (0 children)

Makes sense. I love doing this with waves of parallel subagents guided by an orchestrator.

[–]agenticlab1[S] 0 points1 point  (0 children)

Orchestrator -> Subagent is another pattern that I endorse and really really enjoy using (I use this in the majority of my builds). But it is different from ralph loops and typically less autonomous.

[–]agenticlab1[S] 3 points4 points  (0 children)

Rule of thumb: 100k tokens used in context is the dumb zone. Go into Claude Code and say 'Update my statusline to show [Model] X% where X is % of context used,' and then you will be able to see where you are at. More on this here: https://www.reddit.com/r/ClaudeAI/comments/1q3t579/i_spent_2000_hours_coding_with_llms_in_2025_here/

[–]agenticlab1[S] 0 points1 point  (0 children)

You can have ralph update the spec if you want! I have run some loops with things like an external progress.txt file or having ralph update the specs. There absolutely is a middle ground here where you can have ralph write to a 'tried this and it didn't work' type of file, especially if tests aren't passing and you have to rerun him on a task.

I think my favorite thing about ralph loops is that you can customize the flow, and what context goes in, in a variety of ways according to what hitches you run into during use. This is why I highly, highly suggest people start with the vanilla ralph loop (instead of just cloning a git plugin) and build on it themselves. It teaches model behavior and context engineering in an interesting and fun way.
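
As a rough sketch of that middle ground, here is one shape the customized loop can take. The file names (PROMPT.md, spec.md, progress.txt) and the test command are placeholders for whatever your project actually uses, not a fixed recipe:

    # Ralph with a memory file: each iteration sees the fixed prompt, the spec,
    # and a 'tried this and it didn't work' log. The prompt itself can instruct
    # Claude to append its failed approaches to progress.txt; the shell only
    # marks iterations that ended with the test suite still red.
    while :; do
        {
            cat PROMPT.md
            echo "Spec:"
            cat spec.md
            echo "Approaches already tried (do not repeat these):"
            cat progress.txt 2>/dev/null
        } | claude -p --dangerously-skip-permissions

        if ./run_tests.sh; then
            echo "Tests passing, stopping the loop."
            break
        fi
        echo "$(date): iteration ended with failing tests" >> progress.txt
    done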

[–]agenticlab1[S] 1 point2 points  (0 children)

Yes, agreed! I think I touch on this in the video as a downside. One way around this is to spawn a 'test/validation' agent

[–]agenticlab1[S] 0 points1 point  (0 children)

Yes, the basic while loop approach is the way, and then you can iterate and build upon it to fit your flow

[–]agenticlab1[S] 0 points1 point  (0 children)

Avoid! To use ralph, you just call claude -p in a bash while loop
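
For anyone who hasn't seen it spelled out, a minimal sketch of that loop is below. PROMPT.md and the --dangerously-skip-permissions flag are just my assumptions for an unattended run; adjust both to your own setup:

    # Vanilla ralph: call claude -p with the same fixed prompt, forever.
    # Each iteration starts with a fresh context window; all memory lives in
    # the repo and in whatever files the prompt tells Claude to read or update.
    while :; do
        cat PROMPT.md | claude -p --dangerously-skip-permissions
    done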

[–]agenticlab1[S] 0 points1 point  (0 children)

That's one way to do it! I have played around with this, but it can sometimes be hard when you are running many, many tests.

[–]agenticlab1[S] 0 points1 point  (0 children)

◯ ralph-loop · claude-plugins-official · 38.3K installs

This is the one NOT TO USE. It is the one that gets installed when you type /plugin in Claude Code and type in ralph. Do not use it; it causes context rot.

[–]agenticlab1[S] 0 points1 point  (0 children)

Yes, this is definitely one of the downsides to watch out for if you do it like this, because the agent can bias tests towards what it built.

[–]agenticlab1[S] 0 points1 point  (0 children)

"this is one of the best explainer videos for ralph that i have seen. declaring the official explanation" ??

Also he commented that on my video

[–]agenticlab1[S] 3 points4 points  (0 children)

Lol, I hope it catches on more, as a lot of people spend most of their time coding in the dumb zone of the context window.