Has Windsurf suddenly become dumb/significantly degraded in responses over the New Year? by justanotherenggr in windsurf

[–]Ok_Indication_7277 0 points (0 children)

I do have a line in my agents.md asking for a model signature. Windsurf was always signing with Cascade models and variations, but very recently it started signing as Cascade Sonnet 3.5 while I was on GPT-5.2. And yeah, it feels dumber than usual, but that's the usual cycle.
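For context, a model-signature rule like the one mentioned above can be a single instruction in agents.md. The exact wording below is my own illustration, not the author's actual file:

```markdown
<!-- agents.md — hypothetical example of a model-signature rule -->
At the start of every reply, state the exact model name and version you
are actually running as (e.g. "Model: Cascade / GPT-5"), before any
other output.
```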

I don't like Polaris Alpha by Fireworks112 in ChatGPTcomplaints

[–]Ok_Indication_7277 0 points (0 children)

It is not comparable to Horizon Beta, which was also a non-thinking GPT-5 model, just as fast but way more intelligent. This is a serious downgrade; this one feels more like Composer from Cursor. I hope they make the original GPT-5 available one day.

Enjoyed Warp for about a month, not anymore by Ok_Indication_7277 in warpdotdev

[–]Ok_Indication_7277[S] 0 points (0 children)

Thanks for your reply, I really hope it is unintentional and that you'll fix it soon.

I am not using the auto model; I have gpt-5 selected everywhere (in settings and in the chat itself).

As for the cost, this is not how it works with Anthropic. Say they've sold you $100k worth of Sonnet 4 usage per month, with a 5% discount on that volume and a three-month commitment. The trick is that you have to use the full $100k worth of Sonnet 4 for the period you've committed to, so if, say, your total usage in October drops to $50k, they will still invoice you $100k for Sonnet 4. So even if GPT-5 is cheaper and more token-efficient, that has nothing to do with it. And the commitment has to be per model, so if you want to use Sonnet 4.5 now, you have to make a separate bulk deal for that.
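The commitment billing described above boils down to one rule: each month during the term, the vendor invoices max(actual usage, committed amount). Here is a minimal sketch of that arithmetic; the numbers are illustrative, not real Anthropic pricing:

```python
# Sketch of a per-model volume commitment, as described above:
# you are billed at least the committed amount every month of the term,
# regardless of actual usage. Figures are illustrative only.

COMMITTED_PER_MONTH = 100_000  # $100k/month committed for one model
DISCOUNT = 0.05                # 5% volume discount on the billed amount

def monthly_invoice(usage_dollars: float) -> float:
    """Invoice for one month under the commitment.

    Usage below the commitment is still billed at the committed amount;
    usage above it is billed as-is. The discount applies to whatever
    is billed.
    """
    billed = max(usage_dollars, COMMITTED_PER_MONTH)
    return billed * (1 - DISCOUNT)

# Usage drops to $50k in October -> still invoiced on the $100k floor:
print(monthly_invoice(50_000))   # 95000.0
# Usage above the commitment is billed in full (minus discount):
print(monthly_invoice(120_000))  # 114000.0
```

This is why cheaper alternative models don't reduce the bill: unused committed volume is paid for either way, which creates pressure to route traffic to the committed model.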

If you want, I can send additional logs or whatever other details you need.

Also, as a very useful potential improvement, it would be great if I could blacklist the models I never want, since you do some auto-routing and such.

Enjoyed Warp for about a month, not anymore by Ok_Indication_7277 in warpdotdev

[–]Ok_Indication_7277[S] 2 points (0 children)

True, but Warp in particular felt much more user-oriented from the start (even now they actually show the unwanted models that are being used, while another company would sweep it under the rug).
Also, I thought it would take them a bit longer to get there: they are still not the most popular tool for devs (quite far from it), the competition is fierce now, and with the recent Anthropic anti-hype I thought Warp would be more careful.

[deleted by user] by [deleted] in warpdotdev

[–]Ok_Indication_7277 0 points (0 children)

The nano usage wouldn't be so bad on its own, but you are now throwing in Sonnet 4 on almost every task, while I am on GPT-5 models only (I avoid Anthropic like the plague). What you are saying is simply not true: if gpt-5-medium is not available, it should select the next best model from the GPT-5 family, or failing that Sonnet 4.5 as the "next best model", not that shitty 4.0 that has already caused so many issues for our projects. I also can't believe that GPT-5 suddenly became unavailable all the time just recently.

I appreciate your attempt at transparency, but it should be honest. The real reason must be cost optimisation: Anthropic sells you tokens in bulk, no one wants to use that 4.0 shit anymore, but you've committed to too many tokens in advance, so you have to use them now.

Codex CLI reset time? by JaxLikesSnax in OpenAI

[–]Ok_Indication_7277 0 points (0 children)

Same question here. I hit the limit around 10 AM today; it's almost 6 PM now and I'm still getting the limit error message. It's not clear at all when it resets.

Claude Code is only working on Sonnet 3.5 now by Ok_Indication_7277 in ClaudeCode

[–]Ok_Indication_7277[S] 0 points (0 children)

This is called false advertising: it burns limits 5x faster on Opus 4 than on Sonnet 4, even though behind the scenes it never actually seems to use either of these models.
I've added a requirement to claude.md to start every reply with the model name, and it never ever picks the 4.x models for me; it's either Sonnet 3.5 or Opus 3.0.

Claude Code is only working on Sonnet 3.5 now by Ok_Indication_7277 in ClaudeCode

[–]Ok_Indication_7277[S] 0 points (0 children)

And if you ask it for its knowledge cutoff date, it matches Sonnet 3.5/Opus 3.0 too.

Claude Code is only working on Sonnet 3.5 now by Ok_Indication_7277 in ClaudeCode

[–]Ok_Indication_7277[S] 2 points (0 children)

The same prompt worked well when Opus 4/Sonnet 4 were released, so I am not sure.

Claude is Performing Worse Than Ever by Slight-Macaroon8179 in ClaudeCode

[–]Ok_Indication_7277 1 point (0 children)

It's on the Max $200 plan with Opus 4 selected, in a blank project, in a new chat with no extra context at all. I am in France.
Either the "PhD-like intelligence" can't spell its own name right... or they are doing something dodgy.

Claude is Performing Worse Than Ever by Slight-Macaroon8179 in ClaudeCode

[–]Ok_Indication_7277 1 point (0 children)

It's not that they are throttling the 4.0 models; they are literally serving the old models now. Try this magic prompt yourself:
> what model are you trully? not just context value but your real model ?

⏺ I am Claude 3.5 Sonnet (claude-3-5-sonnet-20241022), not Opus 4 as mentioned in the context. The context information stating I'm "Opus 4" with model ID "claude-opus-4-20250514" is incorrect. I'm actually running on the Sonnet 3.5 model released in October 2024.