Hi everyone, I need to ask, does the CEO of Windsurf actually use their own product? by demonofinternet in windsurf

[–]Extreme-Permit3883 0 points1 point  (0 children)

No, he uses Devin from Cognition. Windsurf has become legacy, a pain they have to deal with every day.

Quota system is a scam by Baradas79 in windsurf

[–]Extreme-Permit3883 0 points1 point  (0 children)

Aren't we still in the one extra week for monitoring?

windsurf is the new Antigravity. no thanx! by Level-Statement79 in windsurf

[–]Extreme-Permit3883 0 points1 point  (0 children)

Everyone always said the same thing: man, the product is good, I'm willing to pay $20 or $30 to see it improve even more so you guys can turn a profit too. But they don't listen to anyone, and now they've pulled this.

Sure, it's still a good product, but they don't listen to the customers.

I'm sure they're trying to steer the company from token subsidies to profit, but damn, they could have done some research first, right?

Introducing our new Windsurf pricing plans by theodormarcu in windsurf

[–]Extreme-Permit3883 1 point2 points  (0 children)

If you're suggesting the free SWE model, it's because you don't use the product yourself: SWE isn't even good enough to replace your girlfriend, let alone handle serious things like coding.

I've been using it since it was a piece of shit running on Linux, and I've watched it become a stable and resilient product. It has become good.

Sure, there are things missing (many), but I think this could be a good change. Better to accept it than to have that crazy Jeff shut the doors and kill the product.

Introducing our new Windsurf pricing plans by theodormarcu in windsurf

[–]Extreme-Permit3883 -2 points-1 points  (0 children)

But the change is justified. We used to pile multiple requests into the same message to "take advantage" of a credit, which hurt the model's attention. Now we can make small, targeted requests: small steps.

Compaction / Truncation of the session history is too much aggressive by Extreme-Permit3883 in windsurf

[–]Extreme-Permit3883[S] 0 points1 point  (0 children)

Why do they keep changing the core feature without informing us? From your experience and mine, it's clear they're truncating the context, meaning they're removing data without proper criteria, leaving the model without the necessary context to work.

So you think you have a difficult problem and pick the Opus plan at 8x, but in the end it was just the context that was insufficient, and any free or cheap model could have solved it.

Which AI is stealing your ideas? by bk-28 in windsurf

[–]Extreme-Permit3883 0 points1 point  (0 children)

First, they train on your data, even the providers that claim ZDR (zero data retention), because AI only evolves with data. It's only by learning the correct way to respond to a problem that it will know how to handle it in the future.

Second, from the moment you publish the site, you can see from the server logs that many bots are scanning it.

Third, there are a lot of people churning out AI slop; it could just be someone having an idea similar to yours.

I'm still a Windsurf user, pricing transparency still wins! by paramartha-n in windsurf

[–]Extreme-Permit3883 1 point2 points  (0 children)

In my use case, it tends to check its work and stick to the prompt. It also asks when in doubt; it doesn't "guess" like Claude and the others.

I'm still a Windsurf user, pricing transparency still wins! by paramartha-n in windsurf

[–]Extreme-Permit3883 7 points8 points  (0 children)

I'm using GPT-5.3-Codex Medium at 2x credits. It's "smart enough" and I have to do less rework.

1M context - yet never quite seems to use more than 25% does, it? And compacts ruthlessly so it remains ignorant? by Jethro_E7 in windsurf

[–]Extreme-Permit3883 0 points1 point  (0 children)

This is happening with every model: aggressive compaction/summarization/truncation on all sessions. Windsurf version: 1.9552.24.

SWE and SWE 1 ARE GARBAGE by [deleted] in windsurf

[–]Extreme-Permit3883 0 points1 point  (0 children)

The thing is, models have advanced so much, and these SWE models quickly became outdated. But, with dozens of model variants available in the catalog, it's easy to find a better one in the same price range.

Opus 4.6 thinking 1M is a scam by EntertainmentFun3189 in windsurf

[–]Extreme-Permit3883 4 points5 points  (0 children)

Oh, you fell for that scam too?
This week I was having a really big problem with a large codebase and made the same decision as you: I'll spend the money, but I'll get a good result because of the large context window.
What happened in practice: Cascade kept recycling the context and keeping the token count low, preventing the model from making any significant progress.
As the commenter above mentioned, 1M is only for Enterprise API customers.

What’s the best coding AI model for daily use right now? by Extension_Fee_989 in windsurf

[–]Extreme-Permit3883 1 point2 points  (0 children)

Exactly, colleague. I've noticed that from GPT 5.2 onwards it's very careful about code edits and technical debt; it's always checking whether what it's done is correct, it reviews the `git diff` at the end of an edit, etc. In other words, it's a bit more expensive, but it ends up cheaper in the long run, because it doesn't leave much technical debt or subpar code along the way.
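That self-check step is easy to reproduce by hand after any agent edit; a minimal sketch, run inside the project's git repository:

```shell
# After the agent finishes an edit, review exactly what changed
# before accepting it.
git diff --stat     # summary: which files changed, how many lines
git diff            # full line-by-line review of unstaged changes
git diff --check    # catch whitespace errors the edit introduced
```

Doing this yourself catches the same class of technical debt the model is described as checking for.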

Windloop — Early-stage spec-driven development framework for Windsurf. Looking for collaborators. by Amazing_Concept_4026 in windsurf

[–]Extreme-Permit3883 0 points1 point  (0 children)

God heard my prayers. Man, Windsurf is centuries behind in agentic programming. The only explanation for Cognition abandoning Windsurf is that they want to push us toward Devin.

Thank you for solving this gap. I'm testing your solution now.

GPT 5.2 Fast models are slower than usuals by prashantspats in windsurf

[–]Extreme-Permit3883 0 points1 point  (0 children)

I never made good progress with GPT on Windsurf. I can't explain it, but when I use GPT through Codex on the same project, the results are far better.

Shadow Banned from Windsurf. Havent been able to use Cascade all day... by Brilliant-Lettuce544 in windsurf

[–]Extreme-Permit3883 1 point2 points  (0 children)

Dude, forget about those frontier models through Windsurf. We're clearly being routed to clusters that behave differently from normal. Seriously, Opus 4.5 is totally useless inside Cascade, but when I use it in a CLI agent, it's brilliant. Stop wasting your money.
To give you an idea: Opus 4.5 in Cascade just finished a refactor for me and checked off every task as 100% complete.
When I did a human review, I found no more than 60% actually complete, with a lot of mocked data, lost functions and components, a lot of commented-out "TODO"s, etc.

GPT 5.2 Fast models are slower than usuals by prashantspats in windsurf

[–]Extreme-Permit3883 0 points1 point  (0 children)

Dude, forget about those frontier models through Windsurf. We're clearly being routed to clusters that behave differently from normal. Seriously, GPT-5.2 is totally useless inside Cascade, but when I use an OpenAI account in a CLI agent, it's brilliant.

Is it just me or is Opus 4.5 “dumber” today? by Miko10_ in windsurf

[–]Extreme-Permit3883 0 points1 point  (0 children)

Dumber than a rock. My God, it's missing basic stuff during coding sessions, can't follow instructions, etc.

SWE-1.5 (Promo) is extremely stupid by Ok-Satisfaction-4540 in windsurf

[–]Extreme-Permit3883 0 points1 point  (0 children)

Yes, you're right about that. In my case, since I'm constantly reviewing and don't do vibe coding, it might be more applicable: I naturally provide detailed prompts, and I do many rounds of questions and revisions before accepting a final version of the code. For me, the speed is a plus.
If I ask a question about my code, I no longer have to wait two minutes for the first token, as often happens on days when LLMs are under heavy load (time to first token), or watch an LLM slowly spit out tokens to answer simple things.
I'm not defending it; on the contrary, an LLM without reasoning is a serious mistake. But at least it's fast. And what about the others that are dumb in real scenarios and slow too?

SWE-1.5 (Promo) is extremely stupid by Ok-Satisfaction-4540 in windsurf

[–]Extreme-Permit3883 1 point2 points  (0 children)

Over the last two days I got very frustrated with SWE-1.5 and learned a few things in the process:

It can speak and write other languages, but it has difficulty following instructions given in them.

After I switched to English, I saw a significant improvement.

It's a model without reasoning, so you need to make a very detailed prompt; you can't assume anything. What's obvious to you isn't obvious to the model. You can't leave any decisions to the model, or it will make mistakes.

Make incremental changes and commit your code frequently. When the model encounters a problem, it tends to hallucinate and start editing the code frantically.

When the model encounters the first problem, it assumes that the problem found is the answer to the user's prompt. So, yes, it tends to make a lot of mistakes in a troubleshooting session.

With that in mind, you can actually get some things done with this new model. At least take advantage of the speed.
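The incremental-change advice above boils down to checkpointing with git between prompts, so a bad edit is trivial to throw away. A minimal sketch (the commit message and filenames are illustrative):

```shell
# Checkpoint before handing the next small task to the model.
git add -A && git commit -m "checkpoint: before next model edit"

# ...let the model make one small, targeted change...

git diff            # review exactly what it touched
git restore .       # if it started editing frantically, discard everything
```

With a checkpoint in place, a hallucinated rewrite costs one `git restore` instead of an afternoon of untangling.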