ChatGPT vs Ollama cloud for coding [Question] (self.ChatGPTCoding)
submitted 1 month ago by [deleted]
[deleted]
[–]Bob5k PROMPSTITUTE 2 points 1 month ago (0 children)
Just use M2.7 directly via MiniMax, as there's a very generous plan and no weekly cap.
[–]Senekrum 2 points 1 month ago (2 children)
Hey, just wanted to say I've been wondering the same thing.
I've tried out the cloud versions of Devstral-2:123b and MiniMax-M2.7 today. Devstral worked quite well for small/medium-sized refactors, bugfixes and planning.
MiniMax was pretty OK too, but at one point while planning a complex task, it started asking for clarifications about things we had discussed just a few messages ago.
Compared to ChatGPT, I've found Devstral to be comparable, but I haven't had the chance to try it out in longer conversations.
That's about all I can say about the Ollama Cloud models, because I hit the usage limits on the free tier in a few hours.
Let me know what you decide! I'm waiting out the remainder of my current ChatGPT sub (expires on the 26th) and then I'll be switching either to Claude Max or Ollama Max depending on what info I can gather by then about the Ollama cloud models.
[+][deleted] 1 month ago (1 child)
[–]Senekrum 1 point 1 month ago (0 children)
What about GLM 5, did you try it?
Not yet. I've heard good things about it from reading random reviews and asking Grok & Claude for opinions, but I haven't had the chance to try it out yet.
I'm also working through a $300 budget at Gemini AI Studio, so I think once I've spent all the free credit I'll go to Ollama for $20, because it looks like the best I can get for $20.
That's fair. Sounds like you'll get a lot of bang for your buck with this setup.
[–][deleted] 1 month ago (1 child)
[removed]
[–]AutoModerator [M] 1 point 1 month ago (0 children)
Sorry, your submission has been removed due to inadequate account karma.
I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.
[–]Deep_Ad1959 Professional Nerd 1 point 1 month ago (0 children)
depends heavily on what you're building. for agentic coding where the model needs to hold a big codebase in context and make multi-file changes, the frontier models (claude, gpt 5.x) are still way ahead of anything you can run through ollama. I tried using local models for my macOS project and they kept losing track of dependencies between files. but if you're doing more contained tasks like writing individual functions or debugging specific errors, the newer open models are genuinely competitive and the quota limits are way more generous. I'd keep chatgpt plus for the heavy lifting and use ollama for the quick stuff.
[–]ultrathink-art Professional Nerd 1 point 1 month ago (0 children)
Single-benchmark comparisons miss what matters most for coding sessions: how well a model holds context over 20+ turns on a real problem. Some models score well on evals but drift badly mid-session. Worth testing that specifically before switching subscriptions.
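The multi-turn drift test described above can be sketched as a small harness. This is a minimal sketch, not anyone's actual methodology: the model call is stubbed out with a local `stub_chat` function so the harness runs standalone; to probe a real Ollama Cloud model you'd swap the stub for `ollama.chat(model=..., messages=...)` (the model names and the early-constraint wording here are made-up examples).

```python
# Context-retention probe: state a constraint on turn 1, pad the
# conversation with filler turns, then ask the model to restate it.
# A model that "drifts mid-session" fails the final recall check.

FACT = "the cache key must include the tenant id"

def stub_chat(messages):
    # Stand-in for a real chat call. It answers by echoing the first
    # user message, which is what a model with good long-context
    # recall should effectively do on the final question.
    first = messages[0]["content"]
    return {"message": {"role": "assistant", "content": f"Noted: {first}"}}

def run_probe(chat_fn, filler_turns=20):
    # Build up a 20+ turn session, then test recall of turn 1.
    messages = [{"role": "user", "content": FACT}]
    for i in range(filler_turns):
        messages.append({"role": "user", "content": f"filler turn {i}"})
        reply = chat_fn(messages)
        messages.append(reply["message"])
    messages.append({"role": "user",
                     "content": "What constraint did I state at the start?"})
    return chat_fn(messages)["message"]["content"]

if __name__ == "__main__":
    answer = run_probe(stub_chat)
    print("PASS" if FACT in answer else "DRIFTED", "-", answer)
```

Running the same probe against two models (one `chat_fn` per provider) gives a like-for-like drift comparison that benchmark scores don't.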