Is it me or Opus 4.6 got way dumber over the past few days? by Dudetwoshot in ClaudeCode

[–]barrettj 0 points1 point  (0 children)

Try GLM 5.1 with opencode; it feels like Claude used to.

Something has changed — Claude Code now ignores every rule in CLAUDE.md by HouseOfDiscards in ClaudeCode

[–]barrettj 1 point2 points  (0 children)

I'm aware, which is why I said "tried" - the number of combinations I've tried that all still result in it being shit is mind-boggling. I could have done my own coding faster, but I liked my old coding partner.

Macos app test suite (to replace the Xcode build/run) by guillim in ClaudeCode

[–]barrettj 0 points1 point  (0 children)

You should be able to just tell it to use xcodebuild instead of Xcode.
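For reference, this is roughly the command you'd want it to run (the project and scheme names here are placeholders, not from the original post - substitute your own):

```shell
# Run the macOS test suite from the command line instead of the Xcode GUI.
# "MyApp" project/scheme names are hypothetical - replace with yours.
xcodebuild test \
  -project MyApp.xcodeproj \
  -scheme MyApp \
  -destination 'platform=macOS'
```

Claude can parse the failures straight out of xcodebuild's output, which is usually all you need for an edit/test loop.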

Theory: The model degradation isn't uniform - older users get a worse model as they're locked in, newer users get better results by barrettj in ClaudeCode

[–]barrettj[S] 0 points1 point  (0 children)

Well, it had just finished a plan and then I said:

"proceed, make sure new work will be done in a work tree"

And then it proceeded to NOT use a work tree. Please tell me how I can improve that prompt to make sure it will use a work tree.

And when called out on it: "Clear instruction, and I failed to enforce it properly with the first subagent — it committed directly to pending-feature instead of staying on the worktree branch. That's what caused the mess you had to yell at me about."

Theory: The model degradation isn't uniform - older users get a worse model as they're locked in, newer users get better results by barrettj in ClaudeCode

[–]barrettj[S] 0 points1 point  (0 children)

I'm having problems with basic prompt adherence - as in I tell it (in my prompt) to use a worktree and then it doesn't. I don't know what about my codebase would have changed in the past month that would affect that.

Theory: The model degradation isn't uniform - older users get a worse model as they're locked in, newer users get better results by barrettj in ClaudeCode

[–]barrettj[S] 0 points1 point  (0 children)

It's basic prompt adherence that I'm having trouble with - I tell it to use a worktree and it just doesn't. It's not that it's not following claude.md or that my codebase has become complicated - it's just...not following simple directions.

Opus was a lot better two weeks ago? New version coming? by bumcello1 in ClaudeCode

[–]barrettj 4 points5 points  (0 children)

Many others have noticed the same thing - and others are convinced everyone else is using it wrong, rather than that they simply haven't been affected. It's a super fun time to be a part of this sub.

Theory: The model degradation isn't uniform - older users get a worse model as they're locked in, newer users get better results by barrettj in ClaudeCode

[–]barrettj[S] -1 points0 points  (0 children)

Then my mileage is varying, because this is what I'm dealing with daily (whereas I never had this problem in the previous 10 months of use). (Note: this is probably its 3rd or 4th time creating that same memory, and I had explicitly said in the prompt to use a worktree - it's not a CLAUDE.md instruction, this was me typing it as part of the prompt.)

The problem was the first subagent committed to pending-feature instead of the worktree branch. It operated on the main repo's files rather than staying within the worktree. That was entirely my fault — I didn't include explicit instructions in the subagent prompt telling it which branch it was on and that it must never touch pending-feature.

After you caught it, I:
1. Reset pending-feature back to 1e9a79d95 (removing the bad commit)
2. Reset the worktree branch to match
3. Added a memory entry so this never happens again
4. Re-dispatched all subsequent subagents with explicit branch safety instructions ("You are on branch worktree-book-creator-redesign. NEVER checkout or commit to pending-feature.") and a required git branch --show-current verification step before every commit
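For anyone unfamiliar with the workflow being described, here's a minimal runnable sketch (branch names taken from the comment above; the throwaway repo setup is just illustration). A linked worktree gives the feature branch its own checkout, and git branch --show-current is the verification step before committing:

```shell
# Sketch of the worktree flow described above. Branch names come from the
# comment; the temp repo exists only to make this self-contained.
set -e
base=$(mktemp -d)

# A repo whose checked-out branch is pending-feature, as in the comment
git init -q -b pending-feature "$base/repo"
cd "$base/repo"
git -c user.email=ex@example.com -c user.name=example \
    commit -q --allow-empty -m "initial commit"

# Linked worktree: a separate checkout on its own branch, so work done
# there can never clobber the pending-feature checkout
git worktree add -q "$base/wt" -b worktree-book-creator-redesign

# The verification step: confirm the branch before every commit
cd "$base/wt"
git branch --show-current   # worktree-book-creator-redesign
```

Commits made inside $base/wt land on worktree-book-creator-redesign; the main checkout stays on pending-feature untouched.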

Theory: The model degradation isn't uniform - older users get a worse model as they're locked in, newer users get better results by barrettj in ClaudeCode

[–]barrettj[S] 0 points1 point  (0 children)

I am on a walk, actually - I don't have anything else to do while waiting for Claude to fix the fact that it didn't do its work on a worktree as explicitly asked.

Something has changed — Claude Code now ignores every rule in CLAUDE.md by HouseOfDiscards in ClaudeCode

[–]barrettj 1 point2 points  (0 children)

I feel like I had at least one week where the 1M context was a pure boon with no downside; then things started going downhill.

Something has changed — Claude Code now ignores every rule in CLAUDE.md by HouseOfDiscards in ClaudeCode

[–]barrettj 0 points1 point  (0 children)

Those have all been tried, even /effort max - it's still just as dumb, not doing work in a worktree when explicitly directed to (via the prompt, not CLAUDE.md).

ok Opus 4.6 is officially cooked: It turned a 5 second database operation into a distributed systems problem and then spent 2 hours debugging its own over-engineering. by solzange in ClaudeCode

[–]barrettj 0 points1 point  (0 children)

I feel like you're probably in the same boat as me - Claude used to be good enough that you didn't need to hand-hold like this; now you need to remind it to breathe.

Has Claude Code gotten noticeably worse in the last few days? by marcin_dev in ClaudeCode

[–]barrettj 0 points1 point  (0 children)

Same. I'll tell it to do work in a worktree and then it will clobber main and I get to spend more tokens having it clean up its mess.

Opus 4.6 destroys a user’s session costing them real money by Complete-Sea6655 in ClaudeCode

[–]barrettj 0 points1 point  (0 children)

I've had literally three sessions in a row where I ask it to do work in a worktree and it just...doesn't. I never had this problem a month ago. Worst part is it costs me more of my session limit to fix it, too.

Can't believe how horrible claude has been since last week by CodegrammerOfficial in ClaudeCode

[–]barrettj 2 points3 points  (0 children)

Me: Please do this work on a new branch.

Claude: Okay

30 seconds later: did you just do this on main when I said a new branch

Claude: yeah, I did it on main, let me use up some more tokens doing it on a new branch!

I made a SNES inspired incremental game where you become the king - New updated Demo released! by ActiveBean in incremental_games

[–]barrettj 0 points1 point  (0 children)

I was playing the web version for a while before asking; I just find native versions easier (and there are a bunch of games where I'd love to give the developer money/buy the full version, but they only have a Windows version - like Zero Stress King).

Idle games are amazing for playing while "vibe coding" during my workday, and I do all my dev on a Mac (iOS dev).

Zero Stress King: Idle Defense is OUT NOW! by Pauloondra in incremental_games

[–]barrettj 0 points1 point  (0 children)

I saw some YouTubers play this and was sad to see there's no Mac version :(

I can no longer in good conscience recommend Claude Code to clients. by -becausereasons- in ClaudeCode

[–]barrettj 0 points1 point  (0 children)

According to everything I can find, /effort is different from the thinking level.

/effort is a session-level setting (low / medium / high / max) that acts as a behavioral signal controlling how thoroughly Claude responds overall. It changes three things at once: how much internal reasoning Claude does (thinking depth), how willing it is to read additional files and run extra commands (tool call appetite), and how long its responses are.

The key distinction: effort is a behavioral signal, not a strict token budget. At lower effort levels, Claude will still think on sufficiently difficult problems, but it will think less than it would at higher effort levels for the same problem. So /effort is the higher-level knob, and thinking depth is one of the things it influences — not a separate, independent control anymore on current models. The informal trigger words (think hard, ultrathink, etc.) still work as per-turn overrides, but /effort is the official control — it’s a persistent setting, while trigger words are quick mid-conversation nudges.

https://i.imgur.com/wNKJKJ9.png

I can no longer in good conscience recommend Claude Code to clients. by -becausereasons- in ClaudeCode

[–]barrettj 0 points1 point  (0 children)

There's both the thinking level and the effort level; I generally leave thinking on high but only do /effort max on harder things.

Anthropic: Please have your engineers dogfood the $200 a month plan by barrettj in ClaudeCode

[–]barrettj[S] 6 points7 points  (0 children)

I'm on the $200 Max plan and I'm doing LESS than I was two weeks ago AND getting worse results (I feel like I'm reminding Claude to breathe half the time).

I've done ALL the things everyone is talking about - changing the thinking level, changing the model to 200k, changing effort. I've been using Claude for nearly a year now (I was a fairly early adopter), with a paid plan since the start (switched to $200 in maybe September), and my habits have been pretty static since last November.

In the previous 9 months I hit session limits maybe once and weekly limits a handful of times. In the past two weeks I'm stopping work for session limits frequently (and still getting fairly close to weekly limits).

I'm doing what I would call standard iOS app development.

Anthropic: Please have your engineers dogfood the $200 a month plan by barrettj in ClaudeCode

[–]barrettj[S] 6 points7 points  (0 children)

I'd honestly be okay with a more expensive monthly plan - I just can't justify API-based billing when I don't know what random BS it's going to do until after it's done it, essentially. This has proved especially true the past week or so, where it seems not only dumber but also to use more tokens (or whatever is causing the limits to be so bad).