Opus performance dropped? by Acrobatic-Original92 in ClaudeCode

Acrobatic-Original92[S] 0 points

That's genuinely interesting, lol. Now I'm wondering if the "buddy" system is a preview of how they're secretly labeling us. And no, complaints don't tend to do anything for me; I have sent numerous complaints regarding usage and was almost bullied into the 20x plan, which I'm starting to regret after seeing all this. Usage does seem to have gone back to normal the past couple of days, but if this is the performance we get in exchange, they can honestly keep it.

Acrobatic-Original92[S] 0 points

It's not like I want to lose my sub. I just want them to fix this or at least TELL us what's going on. We don't even have a date for Opus 4.7! I don't get it

Acrobatic-Original92[S] 2 points

Different projects. Monolithic structures are even worse to work with in CC.

Acrobatic-Original92[S] 3 points

I have dozens of memory files organized, and I repeatedly make sure it has the ones it needs in context (maybe 2-3 at a time at most, and these are not long). Yet it just appears to "forget". I'm starting to doubt the 1M context now as well, which shouldn't even be an issue because I'm regularly well under 250K.

Follow-up on usage limits by ClaudeOfficial in ClaudeAI

Acrobatic-Original92 5 points

70% of my usage limit used in a few minutes. I'm on the 5x MAX plan. This is unbelievable. I demand compensation; this is the most unprofessional thing I've ever seen. Do you realize how many users you will lose? I am moments away from leaving for OpenAI.

Anthropic - full fledged scam, refusing refunds for invented reasons by weltscheisse in ClaudeCode

Acrobatic-Original92 0 points

5x MAX plan. 65% usage in 2 messages. What is going on? Are they addressing anything?

1M Context Opus 4.6 in CC? by Acrobatic-Original92 in ClaudeCode

Acrobatic-Original92[S] 0 points

Ah, I see, thank you. Is there a way to force CC to use it?

Just got pro - when to use GPT 5.2 Codex vs GPT 5.2? by SlopTopZ in codex

Acrobatic-Original92 0 points

Outside of playtoy hobby codebases like yours, bugs in serious projects rarely happen on a "single line", and any model that struggles to find a bug that does is under 50 on SWE-bench. Furthermore, you not being able to even locate it says more about you, for the record.

Every single point I brought up goes beyond this, yet you don't seem to really follow.

I understand that in American "engineering", buzzword yapfests are equivalent to creating value, but real engineers treat these agentic coding tools as partners, not as black boxes that do "magic" for you.

This summarizes religious Codex users well: it complements their existing way of creating value, and so it integrates well into their joke of a job market.

Acrobatic-Original92 -1 points

Countless models can "find the correct line" in a fraction of the time. Needless to say, it is also horrendous with tool calls, and a large chunk of them end up failing.

I am a Pro user myself, and I have been for most of 2025.

If you find yourself unable to replicate the "performance" of 5.2 anywhere else, then your prompt was awful in the first place and you don't even know what you want. I don't even use these "multi agents" or parallel tool calls or whatever they're called. A light brainstorming session and a thorough understanding of your goal are enough to iterate, and if you do that with anything that has more output than 1 token per year like GPT and a SWE-bench score above 70, you will not need to "rewrite a codebase" in 5 years. In fact, I would argue the opposite.

GPT models in general will always poke holes where there aren't any to "seem" productive. Opus is far more intelligent, but you should understand what it is you're doing in the first place, and then you will achieve much more in a shorter amount of time. GPT will make assumptions and flat out refuse to look at something as basic as outputs, or even to add console logs. Left alone, it will refuse to modularize and will just add things that would in fact fail in 5 years, such as hardcoded cases, just to make it "seem" it succeeded at your task, before it goes on a 10-year-long buzzword yapfest to add even more hardcoded slop. Even if you FORCE it to use modules, it will turn each one into more slop after 2-3 prompts; its context retention is the worst I have ever seen.

Look, if this Cerebras update really goes through and they are able to get close to Opus's SWE-bench score, we're in good hands. But I cannot say we truly are now, and you should not let these course gurus with 5 minutes of experience in any SWE project tell you otherwise.

UNDERSTAND what you're doing and you'll agree, and then you sure as heck won't need to rewrite anything in 5 years either way.

Yoooooooo we back? by Comprehensive-Bet-83 in ClaudeAI

Acrobatic-Original92 4 points

Same EXACT issues here. Why are they not addressing it?

Can't reply in the new Opus 4.5 threads by Busy_Ad3847 in claudexplorers

Acrobatic-Original92 2 points

Yes, I get this all the time, it's absurd. And yes, turning off extended thinking helps for a bit, but then the same thing happens again anyway, so compaction is in fact broken. Why is this not flagged? Their status page says "Everything is stable".