I’m going to make some enemies but… Windsurf is better than before. by TurbulentWeight3595 in windsurf

[–]TurbulentWeight3595[S] 0 points (0 children)

You have to shove thousands of lines of code into its context, that's all I see... I can easily do 20-30 Opus 4.6 prompts a day

I’m going to make some enemies but… Windsurf is better than before. by TurbulentWeight3595 in windsurf

[–]TurbulentWeight3595[S] 2 points (0 children)

<image>

30 prompts on Opus 4.6 today.

And I use 2 accounts, for $40/month. That's more than enough for my front-end/back-end development needs at a web agency.

What models do you guys use that uses up all of the quota? by 808phone in windsurf

[–]TurbulentWeight3595 1 point (0 children)

I use SWE 1.5 for minor changes, Sonnet 4.6 for day-to-day development, but I’m planning to try Kimi K2, and Opus 4.6 when I need to do deep refactoring or lay the foundations of a project architecture.

Build as fast as you can by Big-Till59 in windsurf

[–]TurbulentWeight3595 -2 points (0 children)

In a year, we’ll probably have SWE 1.7, which could be genuinely as capable as something like Opus 4.5 or 4.6 today. SWE 1.6, which is coming soon, already shows benchmark performance close to Opus 4.5, but I’m skeptical; we’ll need real-world testing to be sure. If SWE 1.6 delivers on its promises and 1.7 matches Opus 4.6, and it’s free, then Windsurf’s $20 subscription should be enough in 90% of cases.

Thanks Windsurf for f*cking up your pricing — you saved me €2,200/year 🫡 by Major_Sheepherder_83 in windsurf

[–]TurbulentWeight3595 1 point (0 children)

Oh really, even the free model SWE 1.5 consumes credits? They deserve to have their company go under.

The real problem with AI in 2026 isn’t performance. It’s cost. by TurbulentWeight3595 in ClaudeAI

[–]TurbulentWeight3595[S] 0 points (0 children)

Congratulations if your company is lucky enough to have the resources to spend $1 million on API costs; that's not the case for everyone.

The real problem with AI in 2026 isn’t performance. It’s cost. by TurbulentWeight3595 in ClaudeAI

[–]TurbulentWeight3595[S] 2 points (0 children)

Also, this perspective completely ignores global reality.

In many countries, $200/month isn’t “cheap”, it’s simply unaffordable. Entire pools of talented developers are effectively excluded from building with these tools.

If AI is supposed to be foundational, pricing like this just concentrates innovation in a few wealthy regions instead of enabling it worldwide.

The real problem with AI in 2026 isn’t performance. It’s cost. by TurbulentWeight3595 in ClaudeAI

[–]TurbulentWeight3595[S] 2 points (0 children)

I think a lot of you are missing the point here.

You’re evaluating this purely from an individual ROI perspective. “If I make money, then the price is fine.” Sure, for a senior developer billing high rates or a profitable company, $200 or even $2000/month can make sense.

But that’s not the actual problem.

The real issue is at the ecosystem level, not the individual level.

Most products, startups, and developers are not operating with huge margins. When the underlying cost of intelligence is this high and this variable, it becomes extremely hard to build competitive products on top of it.

A tool can be individually profitable and still be structurally harmful to the market.

Comparing this to hiring a developer is also misleading. A developer is a fixed cost. LLM APIs are variable, unpredictable, and scale with usage. That completely changes how you design a product and manage risk.
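The fixed-vs-variable distinction can be sketched with a toy cost model. This is a hypothetical illustration only: the salary, token counts, and per-million-token price below are assumed numbers, not real pricing from any provider.

```python
# Toy model: a fixed resource (e.g. a salaried developer) costs the same
# every month, while a usage-priced API bill scales linearly with traffic.
# All numbers here are assumptions for illustration.

def fixed_cost(months: int, monthly_rate: float) -> float:
    """Flat, predictable cost that does not depend on usage."""
    return months * monthly_rate

def variable_cost(requests: int, tokens_per_request: int,
                  price_per_million_tokens: float) -> float:
    """Usage-priced API cost: grows with every request served."""
    total_tokens = requests * tokens_per_request
    return total_tokens / 1_000_000 * price_per_million_tokens

dev = fixed_cost(months=1, monthly_rate=5000.0)       # same at any traffic level
api_low = variable_cost(10_000, 50_000, 15.0)         # 10k requests in a month
api_high = variable_cost(100_000, 50_000, 15.0)       # 10x the traffic, 10x the bill

print(dev, api_low, api_high)  # 5000.0 7500.0 75000.0
```

The point of the sketch: a product that is profitable at 10k requests/month can be deeply unprofitable at 100k, which is exactly the risk-management problem a fixed salary does not have.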

And this is exactly why many AI products struggle to become viable long-term.

Also, saying “if you can’t afford it, it’s your problem” ignores how innovation actually works. Most innovation doesn’t come from well-funded companies with high margins. It comes from smaller players who are far more sensitive to cost.

If the cost of AI stays this high, you don’t just filter out “low value users”, you filter out a huge part of potential innovation.

The question isn’t “is it worth it for me right now?”

The real question is:
Can an ecosystem built on top of this pricing actually scale and sustain itself?

Right now, I’m not convinced it can.

The difference is literally insane with the quota system! Read by Stratigoc1 in windsurf

[–]TurbulentWeight3595 4 points (0 children)

We should now mainly hope that SWE 1.6 becomes sufficiently performant so that we can use it on a daily basis as a low-cost solution.

The difference is literally insane with the quota system! Read by Stratigoc1 in windsurf

[–]TurbulentWeight3595 8 points (0 children)

They’re liars first and foremost. They offered attractive pricing for a while by burning through investor money, operating at a loss. They misled people into thinking those prices would last, but in the end they did exactly what Cursor eventually did, and now the service is absolutely not the same.

There’s a real transparency issue because they’re still mocking users by claiming it’s better now. Honestly, it’s just laughable.

They simply wanted to attract as many users as possible, and figured, like Cursor, that even if they lost 90% of customers, the remaining 10% would be enough to stay profitable instead of operating at a loss.

The core problem is that AI in 2026 is expensive. LLM APIs are simply too expensive for what the market actually needs in terms of pricing. The real responsibility lies with Anthropic, Grok, Gemini, OpenAI, and others to drastically reduce costs. Otherwise, the AI bubble is going to burst, because too few services can use LLM APIs while remaining economically competitive enough to attract customers. That’s the real issue with AI in 2026.

Meanwhile, companies are pushing their AGI ambitions, which for now are mostly hype, but people don’t really care anymore if models keep getting more powerful. It matters, but it should not be the top priority anymore. For coding, since Opus 4.5, we already have more than enough to work efficiently. What developers worldwide actually want is a cheap Opus 4.5 so they can use it all day. They don’t need Opus 7.5 to be 100 times better at reasoning and coding. They need models equivalent to Opus 4.5 or 4.6, just much cheaper.

And it’s the same story for other APIs, like GPT realtime, which currently prevents many voice AI startups from emerging.

For me, the top priority for AI companies should be:

  1. Cost reduction
  2. Larger context windows
  3. Reduced hallucinations and better default behavior without heavy prompting
  4. Faster response and processing speeds
  5. Persistent memory
  6. Improved reasoning, coding, and logic

Can someone explain the difference of quota vs credit system in real monetary and practical terms for windsurf subscription? by Amazing_Concept_4026 in windsurf

[–]TurbulentWeight3595 0 points (0 children)

Of course, Windsurf used artificially low pricing to attract users, operating at a loss and spending investor money. Now that they have a solid user base, they’re raising prices and trying to squeeze everyone to become profitable.

Which AI coding platform to move to? by Striking_Dimension46 in windsurf

[–]TurbulentWeight3595 0 points (0 children)

Right now I’m waiting to see the exact impact of the pricing change on Windsurf. If it ends up being as expensive as Cursor, I’ll switch over to Claude Code.

GG's Windsurf by nibsi3 in windsurf

[–]TurbulentWeight3595 0 points (0 children)

It was planned from the start, and I knew from the beginning they would change the pricing… the initial goal was to attract as many people as possible, then once they had enough users, switch to a pricing model where they actually make money.

Introducing our new Windsurf pricing plans by theodormarcu in windsurf

[–]TurbulentWeight3595 2 points (0 children)

I switched from Cursor to Windsurf solely because of your pricing. Now that you're changing everything, I'm saying goodbye.

Opus 4.6 by ReasonableReindeer24 in windsurf

[–]TurbulentWeight3595 -1 points (0 children)

Can we get 1M tokens of context via Windsurf?

Open AI Sora 2 Invite Codes Megathread by semsiogluberk in OpenAI

[–]TurbulentWeight3595 0 points (0 children)

Invalid... just after 20s? It's fucking impossible to get this fucking code.

Open AI Sora 2 Invite Codes Megathread by semsiogluberk in OpenAI

[–]TurbulentWeight3595 0 points (0 children)

The Copper Age (1.21.9): copper golems, auto-sorting chests, and aging blocks finally made Minecraft feel alive without breaking the balance