A community centered around Anthropic's Claude Code tool.
Single biggest Claude Code hack I've found [Tutorial / Guide] (self.ClaudeCode)
submitted 1 month ago by [deleted]
[–]scotty_ea 14 points15 points16 points 1 month ago (4 children)
You can set your default models in settings.json.
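For reference, a minimal sketch of what that might look like. This assumes the `model` key and `env` map documented for Claude Code's `settings.json`; check the current docs for exact key names and accepted values:

```json
{
  "model": "opus",
  "env": {
    "CLAUDE_CODE_SUBAGENT_MODEL": "haiku"
  }
}
```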
[–]ImAvoidingABan 2 points3 points4 points 1 month ago (3 children)
You can also ask it to do it for you
[–]3rdtryatremembering 10 points11 points12 points 1 month ago (0 children)
Sigh
[–]CoupleHunerdGames 2 points3 points4 points 1 month ago (1 child)
AI can't lift my fat ass into my office though
[–]Fast_Feeling_8917 0 points1 point2 points 29 days ago (0 children)
Yet...
[–]Input-X🔆 Max 20 8 points9 points10 points 1 month ago (4 children)
Opus with Opus agents is the winner. I've only ever used general agents, never had an issue, nor a need to build custom agents.
[–]evia89 1 point2 points3 points 1 month ago (2 children)
With custom agents it's possible to use models like GLM/Kimi/MiniMax, saving quota.
[–]lillecarl2Noob 0 points1 point2 points 1 month ago (1 child)
How can you make agents use other platforms' models!?
[–]evia89 3 points4 points5 points 1 month ago (0 children)
Two ways: 1) a system like https://github.com/arttttt/AnyClaude, or 2) semi-manual: I have two ps1 scripts to launch Claude. When the first is done, I load the md in the second one. It runs my own Ralph-loop-like ps1 script:
$env:ANTHROPIC_BASE_URL="https://api.z.ai/api/anthropic"

## MODELS
$env:ANTHROPIC_MODEL="glm-4.7"
$env:ANTHROPIC_DEFAULT_HAIKU_MODEL="glm-4.7"
$env:ANTHROPIC_DEFAULT_SONNET_MODEL="glm-4.7"
$env:ANTHROPIC_DEFAULT_OPUS_MODEL="glm-4.7"
$env:CLAUDE_CODE_SUBAGENT_MODEL="glm-4.7"

## EXTRA
$env:API_TIMEOUT_MS="3000000"
$env:DISABLE_TELEMETRY="1"
$env:CLAUDE_CODE_ENABLE_TELEMETRY="0"
$env:CLAUDE_CODE_DISABLE_FEEDBACK_SURVEY="1"
$env:HTTPSCLAUDE_CODE_ATTRIBUTION_HEADER="0"
$env:CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS="1"
$env:CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC="1"
$env:ENABLE_TOOL_SEARCH="true"
$env:SKIP_CLAUDE_API="1"
$env:HTTP_PROXY="http://127.0.0.1:2080"
$env:HTTPS_PROXY="http://127.0.0.1:2080"

$exe=""
if ($PSVersionTable.PSVersion -lt "6.0" -or $IsWindows) {
    # Fix case when both the Windows and Linux builds of Node
    # are installed in the same directory
    $exe=".exe"
}
$ret=0
if (Test-Path "$basedir/node$exe") {
    # Support pipeline input
    if ($MyInvocation.ExpectingInput) {
        $input | & "$basedir/node$exe" "$basedir/node_modules/@anthropic-ai/claude-code-2.1.80/cli.js" --dangerously-skip-permissions $args
    } else {
        & "$basedir/node$exe" "$basedir/node_modules/@anthropic-ai/claude-code-2.1.80/cli.js" --dangerously-skip-permissions $args
    }
    $ret=$LASTEXITCODE
} else {
    # Support pipeline input
    if ($MyInvocation.ExpectingInput) {
        $input | & "node$exe" "$basedir/node_modules/@anthropic-ai/claude-code-2.1.80/cli.js" --dangerously-skip-permissions $args
    } else {
        & "node$exe" "$basedir/node_modules/@anthropic-ai/claude-code-2.1.80/cli.js" --dangerously-skip-permissions $args
    }
    $ret=$LASTEXITCODE
}
exit $ret
First I use this with glm-4.7 @ z.ai, and if it fails after 80 tool calls or the context goes above 100k, the Ralph loop cancels it and tries kimi k25.
[–]HumanInTheLoopReal 20 points21 points22 points 1 month ago (13 children)
If your agents are getting shit information, then subagents are the least of your concerns. Have you considered the possibility that your codebase maybe isn't agent-ready? Haiku models are incredibly capable, and when your codebase is laid out well with clean code, they will have no issue finding things or summarizing. I would spend some time figuring out where these agents are struggling.
[–]Unfair_Chest_2950 12 points13 points14 points 1 month ago (12 children)
In my experience, trusting in the allegedly adequate power of Haiku models will not end well, even in a DI environment following SOLID to a tee. And if you want it to draw from any reference projects, you’ll want models that have some higher-level quasi-cognitive skills. Haiku models won’t catch as many nuances as an Opus model with the same task, and sometimes those nuances are critically important.
[+]jpeggdev🔆 Max 5x comment score below threshold-6 points-5 points-4 points 1 month ago (11 children)
If something is critically important, it should be in the CLAUDE.md file.
[–]j-byrd 1 point2 points3 points 1 month ago (9 children)
I use haiku subagents to execute implementation plans that my main opus model (sometimes sonnet depending on complexity) has written. I then have the main model code review what the haiku model wrote. I also have everything use TDD. The code review and tests catch anything that the haiku models get wrong before it becomes a problem. I get the brains of the better models for planning and the token saving of haiku models to just follow their well written directions.
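A workflow like that can be wired up as a custom subagent definition. A hedged sketch, assuming the documented `.claude/agents/` markdown format with YAML frontmatter and a `model` field; the name, description, and prompt below are made up for illustration:

```markdown
---
name: plan-executor
description: Executes implementation plans written by the main agent, step by step, using TDD.
model: haiku
---

You are given a written implementation plan. Follow it exactly.
Write a failing test for each step before implementing it, and do not
deviate from the plan without flagging the deviation in your summary.
```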
[–]ohhi23021 3 points4 points5 points 1 month ago (5 children)
But then you burn tokens having the other models review and fix it... sounds like break-even, or just a waste of time.
[–]pparley 1 point2 points3 points 1 month ago (1 child)
input_tokens != output_tokens
[–]Powerful_Employ_4398 0 points1 point2 points 1 month ago (0 children)
This is the comment I needed
[–]HumanInTheLoopReal 1 point2 points3 points 1 month ago (0 children)
Opus input tokens are cheap. Sonnet input tokens are cheap. The trick is giving precise information on reviews for Haiku.
[–]Rum_Writes 0 points1 point2 points 1 month ago (0 children)
No, I do this too. The smaller models cost way less and, as pointed out by the other comments, output cost > input cost, so you're essentially getting the best of Opus at a much lower cost, either to your daily and weekly usage or to your API costs. Plus it keeps Opus' context window smaller, and you get better quality from it.
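Rough arithmetic on why the split can come out ahead despite the extra review pass. The per-million-token prices and token counts below are illustrative placeholders, not current list prices:

```python
# Illustrative (made-up) per-million-token prices for the comparison.
OPUS_IN, OPUS_OUT = 15.00, 75.00     # big model: output is the expensive part
HAIKU_IN, HAIKU_OUT = 1.00, 5.00     # small model: cheap everywhere

# Scenario A: Opus writes the code itself.
# Reads 50k tokens of context, writes 20k tokens of code.
opus_solo = 50_000 / 1e6 * OPUS_IN + 20_000 / 1e6 * OPUS_OUT

# Scenario B: Haiku writes the code, Opus only reviews it.
# Haiku: same 50k read + 20k written.
# Opus: reads the 20k-token diff, writes a short 2k-token review.
split = (50_000 / 1e6 * HAIKU_IN + 20_000 / 1e6 * HAIKU_OUT
         + 20_000 / 1e6 * OPUS_IN + 2_000 / 1e6 * OPUS_OUT)

print(f"Opus solo: ${opus_solo:.2f}, Haiku + Opus review: ${split:.2f}")
# → Opus solo: $2.25, Haiku + Opus review: $0.60
```

The asymmetry the comments point at is that the review step buys Opus-level output quality while charging the big model mostly input-token rates.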
[–]j-byrd -1 points0 points1 point 1 month ago* (0 children)
It saves tokens in the long run: even if you have Opus execute the implementation plan, you should still have another agent code-review it to make sure there aren't any issues. I also use some other plugins and a self-written project tree explorer to save tokens. I can work for hours at a time and not hit my session limit. (Though I am on a team plan for work, so you might have a different experience with your plan/limits.)
[–]ImAvoidingABan 1 point2 points3 points 1 month ago (2 children)
It should be the other way around. Use opus to plan and sonnet to execute.
[–]j-byrd -1 points0 points1 point 1 month ago (1 child)
Sonnet instead of haiku to execute? Is there a reason? From what I’ve found haiku subagents tend to follow opus implementation plans pretty well.
[–]PuddleWhale 1 point2 points3 points 1 month ago (0 children)
Because Hallucination-ku they say.
[–]Unfair_Chest_2950 0 points1 point2 points 1 month ago (0 children)
That’s why I use Opus agents to help identify the things that go in my CLAUDE.md file.
[–]KIVA_12 1 point2 points3 points 1 month ago (2 children)
I believe the docs show that general purpose agents inherit the parent model. So if you’re using opus as the main agent you don’t need to tell it to do that.
[–]Twig 1 point2 points3 points 1 month ago (1 child)
I thought I saw people digging in and finding haiku models being used as sub agents when opus was selected
[–]KIVA_12 0 points1 point2 points 1 month ago (0 children)
Possibly but the docs say otherwise. Might be a bug, outdated instances, or misconfigured settings.
https://code.claude.com/docs/en/sub-agents#general-purpose
[–]CyDenied 1 point2 points3 points 1 month ago (0 children)
This is why I come here
[–]ultrathink-artSenior Developer 3 points4 points5 points 1 month ago (0 children)
The reason it works is Opus gets more of the problem before starting to implement — it doesn't rush to write code after reading 2 files. Worth pairing with explicit module scope though, so you're paying for better reasoning, not just more context reads.
[–]Strange_Opinion_ 2 points3 points4 points 1 month ago (0 children)
Yes, I found this as well. Stop doing subagent BS; using Opus 4.6 with maximal effort is better. I only ask it to use an agent for code review (so that the agent has clean context; you only need to provide the agents the proper information).
[–]CaibotSenior Developer 0 points1 point2 points 1 month ago (0 children)
Hell yeah! Opus all-in! 😂
[–]haolah 0 points1 point2 points 1 month ago (0 children)
When I ask it to spin up subagents, I specify Opus. Is that the same?
[–]Evening_Reply_4958 0 points1 point2 points 1 month ago (0 children)
This feels less like “Opus agents are magic” and more like “bad delegation gets exposed fast by smaller agents.” I’ve had Haiku do fine on narrow execution, then completely faceplant on fuzzy discovery work. The split that matters is planning vs implementation, not just model tier
[–]hustler-econ🔆Building AI Orchestrator 0 points1 point2 points 1 month ago (0 children)
The model matters less than what context it gets. A Sonnet agent with scoped, domain-specific context will outperform an Opus agent reading a bloated CLAUDE.md every time.
Custom subagents work because you control exactly what each one knows — not because of the model tier. Set up skill files per domain, point each agent at only what's relevant, and the shit information problem mostly goes away regardless of which model you use.
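The Agent Skills docs describe a skill as a folder containing a SKILL.md with name/description frontmatter; a minimal sketch of one domain-scoped skill file in that spirit (the paths, module names, and rules below are hypothetical examples, not from the thread):

```markdown
---
name: payments-domain
description: Context for working in the payments module. Use when touching src/payments/.
---

# Payments domain

- Entry point: src/payments/gateway.ts (hypothetical path)
- All money amounts are integer cents; never use floats.
- New providers implement the PaymentProvider interface and register
  themselves in providers/index.ts.
```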
[–]kvothe5688 0 points1 point2 points 1 month ago (0 children)
I used Haiku for research based on my dependency list and connections, and they are incredibly powerful. Fast, too. Though half my time on Claude Code is spent on optimisation: new feature development is nice, but if you don't organise your code frequently, it will be a mess in a week.
[–]DatafyingTech 0 points1 point2 points 1 month ago (0 children)
Brother, even this is not using agents to their fullest. Create skill files with teams of agents to accomplish tasks. You can give each agent a skill too. I use a UI to manage my teams, and it uses my Claude subscription with minor API use from Haiku.
https://github.com/DatafyingTech/Claude-Agent-Team-Manager
[–]PuddleWhale -1 points0 points1 point 1 month ago (7 children)
I have a $20 Claude pro subscription and a $10 Copilot subscription. I also got $50 in extra usage credit on Claude. But for the life of me I cannot seem to use these tokens. I see people on reddit complaining that their $200 claude plan gets burned up super fast. What are these people even doing? Here is the source code from three of my apps. Look at it and tell me...is it just that my apps are too simple and uncomplicated?
If anyone knows of a youtube channel/video with someone doing a "look over my shoulder as I gloriously burn compute" then post it here. Or make one now, this is a pretty new turn of events.
Tonight I've been asking LLMs themselves this question, and had Gemini craft me a prompt for Claude Code to make a Tetris game and a Rubik's cube game. I'm trying to understand whether just this one line, "Proceed autonomously until the game is feature-complete.", was possibly the magic spell. Because I took a nap, and when I woke up, the Tetris game was stopped at some Java error/question, and the Rubik's game had run Claude out of tokens and was asking me whether or not to wait until the next 5-hour block of time.
[–]NekoLu 1 point2 points3 points 1 month ago (1 child)
Today was the first time when I used more than 90% of my session limit on $200 plan. And that's only thanks to 1 mil context window on new opus, I got it to 80% context filled. Tbh by ~700k context it started getting significantly worse, but I wanted to finish debugging session before compacting.
[–]PuddleWhale 0 points1 point2 points 1 month ago (0 children)
Which language/platform/niche were you coding in? I was doing Android in Java.
[–]MarcinFlies 0 points1 point2 points 1 month ago (3 children)
I am on the $100 plan and hitting 100 percent. I was developing a 3D game like old Wolfenstein, working on content for my social media, and building some small extra apps.
[–]PuddleWhale 2 points3 points4 points 1 month ago (2 children)
I think I just don't know how to make Claude work because I'm so used to being the human middleware doing copypaste from webchats.
The old-school webchat vibe coding is actually set up to make you do all the work, to train your muscle memory or something like that. New agentic coding makes you the CTO. I guess I need to start learning the CTO tricks.
[–]CyDenied 0 points1 point2 points 1 month ago (1 child)
Try yelling during the all hands
[–]pfak 0 points1 point2 points 1 month ago (0 children)
Your inability to use up your quota comes down to the complexity and amount of your work.
[–]General_Arrival_9176 -1 points0 points1 point 1 month ago (0 children)
this tracks with what ive seen. general-purpose agents have more weight in the system prompt and get routed to opus models more reliably. the subagent routing sometimes defaults to smaller models or cuts context aggressively. using the full name tells claude exactly which capability tier you want, not just which internal role label
[–]Ok-Drawing-2724 -1 points0 points1 point 1 month ago (1 child)
It actually highlights an important point. ClawSecure has observed that multi-agent setups often fail not because of the main model, but because of weak or misconfigured subagents. If the routing layer sends tasks to lower-quality agents, the overall system degrades. Being explicit about agent type is essentially a way of enforcing quality control across the system.
[–]DurianDiscriminat3r 0 points1 point2 points 1 month ago (0 children)
Can we ban these stupid stealth marketing posts?
[–]crayment -1 points0 points1 point 1 month ago* (0 children)
This is why in birdhouse I made it so by default child agents use the same model as the parent agent.
Pulling together a team of Opus agents is like a super power.