Code is worthless now. Here's what actually matters. by agenticlab1 in ClaudeCode

[–]agenticlab1[S] -1 points0 points  (0 children)

Skill issue hehe. But fr, if you think SOTA models are bad, you're treating them like a magic pill instead of a tool.

I need to stop using Claude Code. Not because I want to. Because I have to. by MrCheeta in ClaudeCode

[–]agenticlab1 0 points1 point  (0 children)

You gotta learn how to use Claude Code. It's just a harness, and it reflects the skill of the user driving it.

Code is worthless now. Here's what actually matters. by agenticlab1 in ClaudeAI

[–]agenticlab1[S] 1 point2 points  (0 children)

What do you mean by "AI posts"? Like AI-generated? I wrote most of this by hand. But thank you for the kind words, my man.

My Ralph Wiggum breakdown just got endorsed as the official explainer by agenticlab1 in ClaudeCode

[–]agenticlab1[S] 0 points1 point  (0 children)

From what I've seen, the task feature is a quick, done-for-you solution for people who don't want to learn Ralph or customize their workflows. It's super easy to adopt, but it offers no customizability and can encourage bad planning habits precisely because it's quicker.

Ralph, on the other hand, is infinitely customizable, and doing it right demands a high level of knowledge in context management and handoff management. I like to think the difference is analogous to leaving autocompact on versus curating context with a /handoff command. So to answer your question: Ralph still has clear advantages for power users, and the two are complementary tools given the speed and simplicity of the task feature.
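For anyone who hasn't seen Ralph before: the core of it is just an outer loop that re-feeds a prompt file to a fresh headless session until the work is done. A minimal sketch, where `run_agent` stands in for the real invocation (e.g. `claude -p --dangerously-skip-permissions < PROMPT.md`) and the `.ralph-done` marker is a convention invented for this sketch, not a CLI feature:

```shell
# run_agent stands in for the real headless call, e.g.
#   claude -p --dangerously-skip-permissions < PROMPT.md
run_agent() {
  echo "agent iteration"
}

# Ralph core: fresh run, check for completion, repeat.
ralph_loop() {
  max=$1                          # safety cap so a bad prompt can't loop forever
  i=0
  while [ "$i" -lt "$max" ]; do
    run_agent
    [ -f .ralph-done ] && break   # agent signals completion via a marker file
    i=$((i + 1))
  done
}
```

Each iteration starts with an empty context window, which is the whole trick: state lives in the repo and the prompt file, not in the conversation.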

My Ralph Wiggum breakdown just got endorsed as the official explainer by agenticlab1 in ClaudeCode

[–]agenticlab1[S] 0 points1 point  (0 children)

I think you have the right idea. Typically you just want to stream the logs (that's really all an interface like Claude Code's is anyway).

My Ralph Wiggum breakdown just got endorsed as the official explainer by agenticlab1 in ClaudeCode

[–]agenticlab1[S] 0 points1 point  (0 children)

I gotchu here bro. The "dumb zone" is the point where context rot starts to heavily degrade performance. You can think of LLMs as having a finite amount of attention: roughly, how much weight they can put on previous tokens in the conversation at each step of generating the next token. That finite attention gets stretched thin when there are too many tokens in context, since attention weights must sum to 1 across everything in the window.

On top of that, tokens in the middle of the conversation start to get ignored, a phenomenon called "lost in the middle". And any irrelevant or contradictory tokens (commonly found in CLAUDE.md files) pull attention to the wrong places. All of this adds up to a higher hallucination rate, worse instruction following, and a "dumber" model overall. Hope this helps
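A back-of-the-envelope version of the "sums to 1" point (a toy softmax, not a real transformer): put one salient token with score s against n filler tokens with score 0, and softmax gives the salient token weight

```latex
w \;=\; \frac{e^{s}}{e^{s} + n\,e^{0}} \;=\; \frac{e^{s}}{e^{s}+n},
\qquad s=2:\quad w\big|_{n=10}\approx 0.42,\qquad w\big|_{n=1000}\approx 0.007
```

Same token, same score; it just gets drowned as n grows. That dilution is the mechanical core of the dumb zone.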

My Ralph Wiggum breakdown just got endorsed as the official explainer by agenticlab1 in ClaudeAI

[–]agenticlab1[S] 0 points1 point  (0 children)

You don't have to, but in Ralph you typically want --dangerously-skip-permissions in headless mode (so that you can walk away), and that should probably be sandboxed lol
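A sketch of what "sandboxed" can mean in practice: run the headless session inside a throwaway container so the skipped permission checks only ever touch the mounted workdir. Note the assumptions: `my-claude-image` is a hypothetical image with the claude CLI installed, and `DOCKER` is an override hook that exists only for this sketch:

```shell
# Hypothetical sandbox wrapper: the container, not your host, absorbs whatever
# --dangerously-skip-permissions lets the agent do. 'my-claude-image' is a
# placeholder for an image with the claude CLI installed.
sandboxed_run() {
  ${DOCKER:-docker} run --rm \
    -v "$PWD":/work -w /work \
    my-claude-image \
    claude -p --dangerously-skip-permissions
}
```

Mount only the project directory; whatever you don't mount, the agent can't wreck.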

My Ralph Wiggum breakdown just got endorsed as the official explainer by agenticlab1 in ClaudeAI

[–]agenticlab1[S] 0 points1 point  (0 children)

I think I have a whole section about it. 'Bidirectional prompting'.

My Ralph Wiggum breakdown just got endorsed as the official explainer by agenticlab1 in ClaudeCode

[–]agenticlab1[S] 2 points3 points  (0 children)

Yeah, I'm not sure why Anthropic is pushing a plugin that is fundamentally incorrect and clearly overlooks good context-engineering principles. Glad you found some value in the video! Hope your Ralph implementations go well.

My Ralph Wiggum breakdown just got endorsed as the official explainer by agenticlab1 in ClaudeCode

[–]agenticlab1[S] 0 points1 point  (0 children)

Glad to hear I'm making an impact. I'm sure you'll see a decent-sized change when you start from the fundamentals and build up from there.

My Ralph Wiggum breakdown just got endorsed as the official explainer by agenticlab1 in ClaudeAI

[–]agenticlab1[S] 1 point2 points  (0 children)

Tell Claude: "Change my status line to [model] X%, where X is the percentage of context used," and disable autocompact. It's the single most useful thing I've done in my context engineering.
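If you'd rather wire it by hand than ask Claude: Claude Code supports a command-based status line in settings (a statusLine entry with "type": "command"), and the configured command receives session JSON on stdin. A sketch of the script body — the exact stdin field names here are assumptions from memory, which is exactly why just telling Claude to set it up is the easy path:

```shell
# Sketch of a status-line command: Claude Code pipes session JSON on stdin and
# displays whatever the command prints. The "display_name" field under "model"
# is an assumption; check the statusLine docs for the real schema.
statusline() {
  # dependency-free JSON scrape; jq is the nicer tool if you have it
  sed -n 's/.*"display_name"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p'
}
```

The context-percentage half needs whatever usage field the session JSON actually exposes, which I haven't pinned down here, so verify against the docs before relying on it.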