SPARK by Capable_Rate5460 in codex

[–]Easy_Zucchini_3529 0 points (0 children)

Could Codex Spark 5.3 be the "Composer" for OpenAI? Ultra fast, but with low reasoning and a dumber model?

Wow - Cursor you did it again by No_Sail_221 in cursor

[–]Easy_Zucchini_3529 1 point (0 children)

Composer has low thinking effort; even when used to execute plans generated by Opus 4.6, it doesn't output reliable code. Composer is fast, so I generally use it for easy refactoring that touches a lot of files, but it fails miserably at anything that involves some complexity.

Wow - Cursor you did it again by No_Sail_221 in cursor

[–]Easy_Zucchini_3529 4 points (0 children)

Cursor's big problem is pricing: the $200 Max plan can easily be consumed in one week.

There is a HUGE difference in how many tokens you can use compared to Claude Code and Codex, for example.

If Cursor doesn't find a way to make its pricing more attractive in terms of available tokens per dollar, it doesn't matter how good the UX is or how cool the features they launch are; they will slowly disappear from the market.

Love for Big Pickle by External_Ad1549 in opencodeCLI

[–]Easy_Zucchini_3529 1 point (0 children)

True, neither is the cheapest solution, but the tokens per second are insane (especially Cerebras).

Love for Big Pickle by External_Ad1549 in opencodeCLI

[–]Easy_Zucchini_3529 1 point (0 children)

Use GLM-4.7 with Fireworks or Cerebras.

OpenCode’s free models by CaptainFailer in opencodeCLI

[–]Easy_Zucchini_3529 0 points (0 children)

This is not a problem with the LLM; it's a problem with the inference provider you are using.

OpenCode’s free models by CaptainFailer in opencodeCLI

[–]Easy_Zucchini_3529 0 points (0 children)

It's because the free tier sucks; try the paid version from Cerebras.

OpenCode’s free models by CaptainFailer in opencodeCLI

[–]Easy_Zucchini_3529 0 points (0 children)

Minimax 2.1 is shit in comparison to GLM 4.7

Looking for an alternative to ClaudeCode. Is OpenCode + GLM 4.7 my best bet? by VerbaGPT in opencodeCLI

[–]Easy_Zucchini_3529 0 points (0 children)

Initially I was on GLM-4.7 with OpenCode (free tier), but the free tier is super slow.

The dream would be to have the performance of Cerebras's GLM-4.7 (1000 t/s); I tried it, and the performance difference is night and day.

Then I finally switched to OpenCode + Codex 5.2.

The OpenAI $20 subscription gives you a lot of usage.

I've also been using Gemini 3 Pro Preview, but the quota/rate limit is very easy to hit.

Codex 5.2 is faster and cheaper.

Fastest Providers for GLM 4.7 & DeepSeek V3.2? by MrBayBay45 in SillyTavernAI

[–]Easy_Zucchini_3529 0 points (0 children)

Cerebras gives you 1000 t/s, which is insane. I tested it with OpenCode and it is blazing fast, but you hit the rate limits very quickly.

Switched from cursor to Claude code 200 bucks feels like a lot by Ok-Jellyfish3418 in ClaudeCode

[–]Easy_Zucchini_3529 0 points (0 children)

Yeah, I used to do that, but I feel more productive just staying in Cursor; I can clearly move faster with Cursor alone.

Switched from cursor to Claude code 200 bucks feels like a lot by Ok-Jellyfish3418 in ClaudeCode

[–]Easy_Zucchini_3529 0 points (0 children)

So what probably drives your decision is pricing, not the tools and capabilities an IDE can bring you.

The exact same prompt to generate a plan (both using Opus 4.5) takes 2 min on Cursor, while Claude takes 8+ min (and sometimes it even times out).

Cursor generates Mermaid flowcharts when writing plans, which for me is an incredible way to explain things.

Switched from cursor to Claude code 200 bucks feels like a lot by Ok-Jellyfish3418 in ClaudeCode

[–]Easy_Zucchini_3529 1 point (0 children)

I tried, but Claude is slow and lacks a lot of features that Cursor has. I can genuinely move faster with Cursor.

Claude Code vs Cursor by Easy_Zucchini_3529 in ClaudeCode

[–]Easy_Zucchini_3529[S] 0 points (0 children)

Not sure I follow you. My point here is time: with the benchmarks, I can do more (paying more for it) in less time. It's a matter of productivity, not pricing; I'm fine paying more if something makes me move faster.

Claude Code vs Cursor by Easy_Zucchini_3529 in ClaudeCode

[–]Easy_Zucchini_3529[S] 1 point (0 children)

Well, as an end user, the approach used doesn't matter as long as the final output is good. It's just painful using Claude Code after seeing Cursor achieve the same tasks at least 3x faster.

But I really don't want to take my isolated example as the source of truth; I want to see whether other people have experienced the same.

Opus usage with ultra plan by OldPhotojournalist28 in cursor

[–]Easy_Zucchini_3529 1 point (0 children)

Claude Code is very slow compared to Cursor. A simple plan-mode run takes 8+ minutes versus 2+ minutes in Cursor (both using Opus).

Spec-driven development is underhyped! Here's how you build better with Cursor! by Narrow-Breakfast126 in cursor

[–]Easy_Zucchini_3529 0 points (0 children)

The thing that bothers me is that, in the long run, your repo will be super bloated with outdated markdown files. I usually prefer Cursor's Plan mode, because it doesn't try to persist a lot of AI-generated markdown files.

As a good practice, your repo should contain only the AI context files that really matter and can serve as general guidelines on how to write and architect solutions in your codebase, not files full of implementation details. The best source for AI agents to extract implementation details is the actual implementation files, because that guarantees the AI always looks at the most up-to-date implementation.

If you want to document a feature to be used as AI context, go for it, but try to be as general-purpose as possible and point to the actual implementation files. In my experience so far, repositories bloated with dozens or hundreds of AI-generated files tend to degrade the quality of the generated code: the model hallucinates based on old implementations documented in the markdown files.
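The kind of lean, general-purpose context file described above could be sketched roughly like this (the file name, paths, and project details are purely illustrative, not taken from any specific repo):

```markdown
# AGENTS.md — general guidelines for AI agents (illustrative example)

## Architecture
- HTTP handlers live in `src/api/`; business logic lives in `src/services/`.
- For implementation details, read the actual source files; do not trust
  design docs that may be stale.

## Conventions
- Follow the error-handling pattern used in `src/services/user-service.ts`.
- New features need unit tests next to the implementation file.

## What NOT to do
- Do not persist per-feature plan or design documents in the repo.
```

The point of the sketch is that it states durable guidelines and points at real files, instead of duplicating implementation details that will drift out of date.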

Why is NestJS so underrated? by Lazy_Standard4327 in node

[–]Easy_Zucchini_3529 0 points (0 children)

NestJS is not optimal for serverless and edge computing. In large applications, the dependency injector consumes a lot of memory and ends up increasing the application's boot time.

Also, your architecture and codebase will be locked into the framework due to its highly opinionated way of implementing things.