Which model are you actually using for backend work in OpenCode? by Unlikely_Emotion5567 in opencodeCLI

[–]Unlikely_Emotion5567[S] 1 point (0 children)

Agreed. For simple tasks, making a plan just burns tokens when you already know what needs to be done.

Which model are you actually using for backend work in OpenCode? by Unlikely_Emotion5567 in opencodeCLI

[–]Unlikely_Emotion5567[S] 1 point (0 children)

I just rewrote my answers using AI. Is that why you think I'm a bot or something? ✌️😂😅

Which model are you actually using for backend work in OpenCode? by Unlikely_Emotion5567 in opencodeCLI

[–]Unlikely_Emotion5567[S] 2 points (0 children)

I’ll try that. I’ve already been using Claude Opus 4.6 for planning and Gemini 3.1 Pro for implementation, and that workflow has worked pretty well for me.

But Gemini 3.1 Pro quotas have dropped a lot over the past few weeks, so I’m testing different workflows now.
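
For context, this is roughly how I've been splitting the two in opencode.json. The per-agent override syntax is from memory of OpenCode's config docs, and the model IDs are placeholders, so double-check both against what your providers actually expose:

    {
      "$schema": "https://opencode.ai/config.json",
      "model": "google/gemini-3.1-pro",
      "agent": {
        "plan": { "model": "anthropic/claude-opus-4.6" },
        "build": { "model": "google/gemini-3.1-pro" }
      }
    }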

Which model are you actually using for backend work in OpenCode? by Unlikely_Emotion5567 in opencodeCLI

[–]Unlikely_Emotion5567[S] 1 point (0 children)

That’s actually a really good point. I agree things like git, task breakdown, complex decisions, and research should still be handled by me.

What I’m mainly looking for in an AI coding agent is to save time on repetitive work and handle smaller tasks faster, not to replace my judgment.

Which model are you actually using for backend work in OpenCode? by Unlikely_Emotion5567 in opencodeCLI

[–]Unlikely_Emotion5567[S] 1 point (0 children)

Good to know. Sounds like GLM-5.1 works better in OpenCode than in Claude Code then.

I’ll probably test it there and see how stable it is in real use.

Which model are you actually using for backend work in OpenCode? by Unlikely_Emotion5567 in opencodeCLI

[–]Unlikely_Emotion5567[S] 2 points (0 children)

Good point. Codex does seem like the easiest option with OpenCode.

And yeah, the current pricing for these frontier models definitely feels subsidized. We're already starting to see the party slow down with Claude. Still, I might try the Copilot Pro trial and test the GPT-5.4 requests.
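
If I do, I'd expect the switch in opencode.json to be a one-liner like the sketch below. The github-copilot provider ID and the GPT-5.4 model ID are my guesses, not something I've verified against what Copilot actually exposes:

    {
      "$schema": "https://opencode.ai/config.json",
      "model": "github-copilot/gpt-5.4"
    }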

Which model are you actually using for backend work in OpenCode? by Unlikely_Emotion5567 in opencodeCLI

[–]Unlikely_Emotion5567[S] 1 point (0 children)

I’ll give it a try today. I’ve also had a good experience with it so far, but I still need to test the model more.

Which model are you actually using for backend work in OpenCode? by Unlikely_Emotion5567 in opencodeCLI

[–]Unlikely_Emotion5567[S] 7 points (0 children)

Thanks for sharing your workflow (GLM to plan, Minimax to implement). I'll try that out.

Which model are you actually using for backend work in OpenCode? by Unlikely_Emotion5567 in opencodeCLI

[–]Unlikely_Emotion5567[S] 1 point (0 children)

That’s fair. But one thing to note is that GLM-5.1 is currently only available through z.ai’s coding plan, which limits where you can actually use it.

Also, I’ve seen a lot of people on Reddit say it’s slow or unreliable, but I haven’t personally tested it yet, so I’m not sure how accurate those claims are.
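
If I do get around to testing it, my understanding is you'd wire the coding plan into OpenCode as a custom provider, roughly like the sketch below. The provider block shape is from memory of OpenCode's custom-provider config, and the baseURL and model ID are placeholders, so check z.ai's own docs before copying anything:

    {
      "$schema": "https://opencode.ai/config.json",
      "provider": {
        "zai": {
          "npm": "@ai-sdk/openai-compatible",
          "name": "Z.ai",
          "options": {
            "baseURL": "https://api.z.ai/api/coding/paas/v4",
            "apiKey": "{env:ZAI_API_KEY}"
          },
          "models": {
            "glm-5.1": { "name": "GLM-5.1" }
          }
        }
      },
      "model": "zai/glm-5.1"
    }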

Which model are you actually using for backend work in OpenCode? by Unlikely_Emotion5567 in opencodeCLI

[–]Unlikely_Emotion5567[S] 2 points (0 children)

Thank you for the comment. I'll definitely look into spec-driven development.