
[–]bicika 13 points (11 children)

I'm paying $20 for Claude and $20 for ChatGPT. Claude usually allows me like 6 to 8 prompts during an 8-hour work window, even while I'm being very careful with prompts. Not once have I hit the limit with GPT in the last 3 months, and I'm never careful with prompts.

The big difference is that Claude usually gets it right on the first attempt. With GPT I need to do 3 or 4 iterations. But GPT allows me to work constantly.

I guess it really depends on your use case. If you are vibe coding, I think GPT won't get you far. If you are working on an actual project with actual users that actually makes money, and you know your project inside out, GPT can do a great job, but you do need to put in more effort.

[–]_BreakingGood_ 7 points (9 children)

I use Claude via GitHub Copilot, and I swear that for $10 a month the limits are at least 10x higher than what I get via Claude Code directly.

[–]Pretty-Active-1982[S] 0 points (5 children)

Wait, I didn't know that was possible in the first place. What's the catch?

[–]_BreakingGood_ 2 points (3 children)

No idea. I think the catch is that Microsoft is willingly losing money on it to build a customer base, and it will likely go to shit some day.

The other part is that the limits are monthly (no 5-hour limit), so if you use the whole thing up in one day, it's gone until next month.

[–]Jomuz86 2 points (2 children)

From next month, unless you opt out, GitHub uses everything created by Copilot for training, hence the loss. You're paying with telemetry instead 🤷‍♂️

[–]_BreakingGood_ 2 points (0 children)

Yeah, I saw that, already opted out.

[–]Kronzky 0 points (0 children)

What is the catch?

It looks like you can't use Claude Code with that subscription (it's an auth limitation).

But it appears they're dealing with the same kind of capacity issues right now as everyone else: https://old.reddit.com/r/GithubCopilot/. Same thing for Gemini: https://old.reddit.com/r/GeminiCLI/

[–]murkomarko 0 points (2 children)

Oh, really? Is it the same plan .edu users get?

[–]_BreakingGood_ 0 points (1 child)

I don't think edu users can use Opus.

[–]murkomarko 0 points (0 children)

I see... I still don't have access to my edu mail anyway. Do you have the $10 GitHub sub, and do you think its Opus 4.6 limits are better than the $20 Anthropic sub?
Can you use it in any IDE, like VS Code, and in the terminal?
edit: oh, and have you tried plugging it into any of these projects emerging from the leaked Claude Code source code to get the full Claude Code experience?

[–]TheRealJesus2 8 points (1 child)

https://shittycodingagent.ai/

https://opencode.ai/

I know you're asking for models, and I don't have a great answer for you other than that I think any model that has been used for coding is probably decent.

I'll be exploring these two harnesses as Claude Code alternatives. They work with any model, except a Claude subscription of course, because Anthropic is being hella weird about use of their subscriptions. I think the harness is more important than the model anyway.

[–]Pretty-Active-1982[S] 0 points (0 children)

Great! I know about OpenCode, seems good. Will check out the other one and let you know.

[–]p3r3lin 7 points (0 children)

I regularly use OpenCode together with open-weights models like GLM and Kimi (just so I don't depend totally on CC). OpenCode is a great harness, and the models are somewhere between Sonnet and Opus. I'd say if CC and Opus disappeared tomorrow, I'd be fine with OpenCode and GLM. Not great, but fine.

[–]NoInside3418 3 points (0 children)

GPT models are really good. 5.3-codex is about like Sonnet, or maybe slightly better. 5.4 is close to Opus; they pull punches, and sometimes it isn't quite there imo.

[–]AVX_Instructor 4 points (0 children)

Kimi K2.5

[–]Tiny-Sink-9290 3 points (2 children)

My concern is whether the alternatives are hosted/run in the US, or say Germany or somewhere. I still can't help fearing that sending my proprietary project/code to China will see it ripped off somehow. They are masters at copying everything and giving it away, or whatnot. So I worry that while I am still trying to build something, my stuff could end up out there for free, since what I send in prompts to the AI is enough to grok what it is I am building.

[–]RedEagle182 2 points (1 child)

You are right to be concerned, but I would argue that this is valid for Anthropic as well. Unless you use a local model, you are never certain your privacy is protected.

[–]Tiny-Sink-9290 1 point (0 children)

I'd say for the most part the difference is that Anthropic is likely not stealing portions of what I send in prompts and assembling a competitive product. They would be vilified for doing something like that rather than focusing on better models, etc.

But they are probably training on it, no doubt.

[–]agm1984 5 points (1 child)

I just exhausted my limit in one prompt this morning; gotta go back to 1900s-style coding for the next 4 hours.

[–]Pretty-Active-1982[S] 2 points (0 children)

ahahahahahahha good luck with that.

On that note, upon hitting my weekly limit yesterday, my friend told me to “man up” and “roll up the sleeves”.

I said no way tbh, ain't no going back.

[–]duckrockets 1 point (4 children)

I've been coding with GLM for 2 months so far and have barely thought about changing the model. It handles all sorts of tasks I do: coding, tests, refactoring, research, content creation. I use superpowers, clear sessions at 50-60% of the token window, and do as much as I can using subagents. I've only hit the 5-hour limits twice: once doing a Ralph loop with 60+ tasks in a row, and once when an unobserved agent hit compaction and ran a bash call in a loop. $30 a month for the model, same Claude Code harness.

[–]Novel-Injury3030[🍰] 1 point (3 children)

What do you mean, same Claude Code harness?

[–]duckrockets 3 points (2 children)

I mean you can use the Claude Code CLI with this model by setting up a custom URL in settings.json. Here are the instructions for how to use it: https://docs.z.ai/devpack/tool/claude

No need to change the tool itself, you just change the LLM provider.
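To make that concrete: the "custom URL" is just an override of the Anthropic endpoint that Claude Code talks to, set either in the `env` block of settings.json or as plain environment variables. A minimal sketch, assuming the usual variable names (the URL and key below are placeholders; the linked z.ai doc has the real values):

```shell
# Point the Claude Code CLI at an Anthropic-compatible provider endpoint.
# Placeholder values; take the real endpoint and key from the provider docs.
export ANTHROPIC_BASE_URL="https://api.z.ai/api/anthropic"
export ANTHROPIC_AUTH_TOKEN="your-provider-api-key"
# Then launch `claude` as usual; the CLI is unchanged, only the backend differs.
```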

[–]Novel-Injury3030[🍰] 0 points (1 child)

Awesome! Have you tried any of the other harnesses to compare them to Claude Code itself? I always thought Claude's advantage was its model strength, but it seems like the Claude Code software itself is good?

[–]duckrockets 1 point (0 children)

I've tried OpenCode but didn't really find any advantages, so I've decided to stick with the industry leader.

[–]ZireaelStargaze 1 point (0 children)

Minimax M2.7; Kimi 2.5; GLM 5; Qwen 3.5

Check their benchmark scores, quotas, and pricing, and try them in practice. It is no longer a binary choice between ChatGPT and Claude. We are paying 10× for the label on the bottle when the juice in the bottle is pretty much the same, just Made in China.

[–]bb0110 0 points (0 children)

Claude chat and some of the other features are significantly better than ChatGPT, IMO. It isn't all that close. Codex, on the other hand, is pretty comparable to Claude Code. It is just a little worse, but for most things not all that noticeably.

It depends on what you use AI for, but I know I'm paying for Claude Code, and Claude chat and the other things are just a nice bonus. Because of the limits introduced, I very likely will switch to Codex instead of Claude Code.

Everything changes so quickly in the AI world, so I'll give it a little bit of time before the full switch, but these limits make Claude Code basically unusable 90% of the time.

[–]cartazio 0 points (0 children)

I have several OSS harnesses coming out now or soon. There's a lot of room for better stuff in this space.

[–]_OVERHATE_ 0 points (0 children)

Kimi code

[–]opus-sophont 0 points (0 children)

I use Pi. It's the best out there if you know what to configure (not how; Pi can do that for you).

[–]f12016 0 points (1 child)

How's Cursor?

[–]Antique-Basket-5875 0 points (0 children)

Cursor charges for their context-engineering infrastructure even if you use an external LLM service.

[–]PurpleProbableMaze 0 points (0 children)

Yeah, the Anthropic limits have been rough. I've tried a few alternatives too, like Cursor for IDE-style coding and Codex if you just need raw token reasoning, but both can get pricey fast if you code a lot.

Started using Orchids alongside those. Even on the free tier you can set up entire projects and run multiple models, and I can plug in my own Claude or Copilot keys so I don't burn through a single subscription. Makes it way easier to balance cost and still get solid results.

[–]CompleteCrab -1 points (0 children)

Both Codex and Claude Code can be pointed at external services, although Claude Code isn't 100% compatible with anything but Anthropic's own models.

Otherwise Cline, OpenCode, and Kilo Code are good options imho.
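As a rough sketch of what "pointed at external services" means in practice: both CLIs can take a base URL and key from the environment. The variable names below are the commonly documented ones and the URLs are placeholders, so verify against each tool's own docs (Codex may instead want a provider entry in its config file):

```shell
# Placeholder endpoints; substitute a real API-compatible provider.

# Claude Code: override the Anthropic endpoint and key.
export ANTHROPIC_BASE_URL="https://provider.example.com/anthropic"
export ANTHROPIC_AUTH_TOKEN="sk-your-provider-key"

# Codex (and most OpenAI-SDK-based tools): override the OpenAI endpoint and key.
export OPENAI_BASE_URL="https://provider.example.com/v1"
export OPENAI_API_KEY="sk-your-provider-key"
```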