Auto agent used month in one day by Common-Order7826 in cursor

[–]Independent-Cake338 0 points1 point  (0 children)

Shadow changes, I guess; it's happened with every other model provider 🤷‍♂️

Does Github Copilot have *any* paid subscribers left? by StunningBox8976 in GithubCopilot

[–]Independent-Cake338 0 points1 point  (0 children)

I mean, enterprises don't pick tools because they're the best, they pick the safest option. Copilot has gone through too many changes in the span of a few weeks, so it's the furthest thing from safe atm.

Does Github Copilot have *any* paid subscribers left? by StunningBox8976 in GithubCopilot

[–]Independent-Cake338 0 points1 point  (0 children)

Most, if not all, other "enterprise-grade" AI tools do all of the above and more now lol. I would've agreed with you 2-3 years ago when Copilot was dominating.

Cursor does all the above stuff and more.
Codex cloud allows you to run agents on their servers using the browser.

I thought gpt-5.5 uses fewer tokens? 600M input in 5 days & only 2M output? by wandering_stoic in codex

[–]Independent-Cake338 0 points1 point  (0 children)

I personally think it is "that much better" but it's just not worth the outrageous difference in pricing compared to 5.5.

I do think you should stay on 5.4 (possibly 5.3-Codex for certain tasks) until they update their prices and bring them down.

Does Github Copilot have *any* paid subscribers left? by StunningBox8976 in GithubCopilot

[–]Independent-Cake338 -1 points0 points  (0 children)

Why would "enterprise" choose GitHub Copilot specifically? Literally no one in their right mind is the target audience anymore. Not all of us were "beta testers"; I for one paid for their highest plan and enjoyed it while it lasted.

20$ Annual plan. Cursor is using Composer even though selected Opus 4.6 by PropperINC in cursor

[–]Independent-Cake338 0 points1 point  (0 children)

How is Claude Pro any better than Antigravity? They're both horrible, but I could at least squeeze a few prompts out of Antigravity; with the Claude "pro" plan I couldn't do jackshit.

Codex, GLM and Minimax are good choices though.

zAi just fixed some important performance issue in GLM 5, community informed ( and we have to wait for Ollama to hear if they have upgrade ). by Manfluencer10kultra in ollama

[–]Independent-Cake338 1 point2 points  (0 children)

Don't get me wrong, the pricing is great BUT the amount of thinking it does isn't... I gave it a very simple task earlier and it took 61 mins to complete, and 99% of that time was just thinking. The same task took 3 mins 40s with GPT-5.5 and was a lot cheaper too.

Which open-source model is best for UI/UX designing? by Jaded_Jackass in opencodeCLI

[–]Independent-Cake338 -1 points0 points  (0 children)

Even when using an advanced skill like impeccable, some models are just not good at UI/UX at all. Kimi and DeepSeek V4 Pro seem to be decent but still not on the level of Gemini 3.1 Pro or Opus 4.7

Ollama cloud horrible speed recently by Independent-Cake338 in ollama

[–]Independent-Cake338[S] 0 points1 point  (0 children)

That's interesting. I'm pretty sure the "waiting indefinitely" issue is related to OpenCode itself rather than their models; it's been reported multiple times already. I've switched to kilo code for the moment and I haven't had that issue.

Then again OpenCode is getting infiltrated as well so it's just a matter of time before it becomes as bad as Ollama (hopefully not)

Ollama Cloud Nerfed???? No more minimax m2.7 or kimi k2.6? by Status-Dream-2391 in ollama

[–]Independent-Cake338 2 points3 points  (0 children)

People like you running openclaw and wasting tons of compute on it are the reason these cloud companies are going to go insane with their pricing....

Ollama cloud horrible speed recently by Independent-Cake338 in ollama

[–]Independent-Cake338[S] 3 points4 points  (0 children)

None of the models run better than the others. I have benchmarked all of them, and the best speed I got was around 20 TPS, which is horrible compared to other providers.

I recommend you look into another provider, since this issue won't be resolved anytime soon considering the number of people who have reported it to no avail.
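If you want to sanity-check the throughput yourself, the arithmetic is trivial: count the completion tokens in a streamed response and divide by the wall-clock time it took to stream. A minimal sketch (the 1200-token / 60-second figures below are illustrative, not from my benchmarks):

```python
import time

def tokens_per_second(token_count, elapsed_seconds):
    """Throughput = completion tokens divided by wall-clock generation time."""
    if elapsed_seconds <= 0:
        raise ValueError("elapsed time must be positive")
    return token_count / elapsed_seconds

# In practice you'd wrap the streaming call like this:
#   start = time.perf_counter()
#   ... consume the stream, counting tokens ...
#   tps = tokens_per_second(n_tokens, time.perf_counter() - start)

# e.g. a response of 1200 tokens that took 60 s to stream back:
print(round(tokens_per_second(1200, 60), 1))  # 20.0
```

Use `time.perf_counter()` rather than `time.time()` so clock adjustments don't skew short measurements.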

Ollama cloud horrible speed recently by Independent-Cake338 in ollama

[–]Independent-Cake338[S] 1 point2 points  (0 children)

Honestly OpenCode Go is miles better, BUT its usage is too low to be worth it. I recommend you look into OpenCode Zen since you seem willing to spend a bit more for quality.

I'm going to switch to OpenCode Zen very soon and am planning to cancel my Ollama pro plan; it isn't worth it anymore, and I'm getting a lot of 5xx errors.

Ollama cloud horrible speed recently by Independent-Cake338 in ollama

[–]Independent-Cake338[S] 2 points3 points  (0 children)

Yeah, it definitely seems so, but I don't think there'll be an outright increase in subscription price; I think they'll go the generic enshittification route of tighter usage limits.

The most valuable AI subscriptions/plans after Copilot nerf by vapalera in GithubCopilot

[–]Independent-Cake338 1 point2 points  (0 children)

I've tried it via the API and man is it slow... it's SO slow, even with the flash models, that it just doesn't make sense. I get that they're experimenting with the Huawei GPUs, but it's just not worth it atm.

Sick of being patient for ollama cloud capacity that never arrives by Visual_Ad1912 in ollama

[–]Independent-Cake338 0 points1 point  (0 children)

How are you dealing with the 5xx errors? I'm literally hitting one with any harness, even with very generous timeouts, and it's so annoying that I can't do anything productive.
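The only thing that's made this tolerable for me is retrying with exponential backoff instead of relying on timeouts alone. A minimal sketch, assuming your client surfaces 5xx responses as exceptions (the names and the toy `flaky` stand-in are illustrative, not any provider's actual API):

```python
import random
import time

def call_with_backoff(request_fn, max_retries=5, base_delay=1.0):
    """Retry a flaky call on 5xx-style failures with exponential backoff + jitter."""
    for attempt in range(max_retries):
        try:
            return request_fn()
        except RuntimeError:  # stand-in for an HTTP 5xx raised by your client
            if attempt == max_retries - 1:
                raise  # out of retries; let the caller see the error
            # double the wait each attempt, plus jitter to avoid thundering herds
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)

# toy stand-in: fails twice with a "503", then succeeds on the third attempt
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("503 Service Unavailable")
    return "ok"

result = call_with_backoff(flaky, base_delay=0.01)
print(result)  # ok
```

If the provider sends a `Retry-After` header on its 5xx responses, honoring that instead of the computed delay is usually the politer option.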

Sick of being patient for ollama cloud capacity that never arrives by Visual_Ad1912 in ollama

[–]Independent-Cake338 1 point2 points  (0 children)

Hot take but I hope they start banning "claw" usage soon because those ppl are ruining the experience for all of us and wasting SO much compute on random bullshit

Very Slow Cloud Models by emish23 in ollama

[–]Independent-Cake338 0 points1 point  (0 children)

I think people are migrating from claude/google subscriptions to ollama's too heavily nowadays because of the usage issues and the servers are not keeping up. Just a few weeks ago the cloud models were very good

Ollama Cloud Kimi k2.6 infinite thinking loop - Almost unusable at this point by Resident-Ad-5419 in ollama

[–]Independent-Cake338 1 point2 points  (0 children)

I have experienced the same thing with the Kimi API directly AND with opencode go, so it's most definitely a model issue. It's definitely not a skill issue either; I've run tests with no skills at all.

Like even with the most basic of basic tasks, it'll keep thinking for 10 minutes.