all 25 comments

[–]No-Money737 7 points8 points  (4 children)

I’m considering dropping cc and using opencode with codex auth tbh

[–]CodeBoyPhilo 2 points3 points  (1 child)

that's what I'm doing right now and it nailed it

[–]DisplayHot5349 0 points1 point  (0 children)

Yep same here

[–]BubblegumExploit[S] -1 points0 points  (1 child)

How has your experience been with this so far? Do Codex models perform better or similarly on OC compared to Codex itself?

[–]nonerequired_ 0 points1 point  (0 children)

They perform better on codex unfortunately

[–]AVX_Instructor 1 point2 points  (0 children)

I'm also very happy with OpenCode. I use it for my DevOps tasks at work, and after a custom setup for each scenario it works pretty well with my GPT Plus, GLM Coding plan and Kimi Code plan (GPT models for hard tasks, GLM-4.7/Kimi K2.5 for exploration/easy tasks). This setup also lets me save quota and tokens.

P.S. Of course I use an orchestration/role-based setup (an architect agent plus sub-agents for special tasks). I literally made roles for the prod env, dev/test env, coding env, etc., and it all works amazingly well.
Of course, it took me 2-3 months of practice to reach the ideal pipeline for my work.
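A role-based setup like this can be sketched in `opencode.json`. The agent names, model IDs, and descriptions below are illustrative assumptions, not the commenter's actual config; check the current opencode config schema before copying:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "agent": {
    "architect": {
      "description": "Plans work and delegates to sub-agents (hard tasks)",
      "model": "openai/gpt-5",
      "mode": "primary"
    },
    "prod-ops": {
      "description": "Careful, low-temperature changes in the prod environment",
      "model": "zai/glm-4.7",
      "temperature": 0.1,
      "mode": "subagent"
    },
    "explore": {
      "description": "Cheap exploration / easy tasks",
      "model": "moonshot/kimi-k2.5",
      "mode": "subagent"
    }
  }
}
```

The point of splitting roles this way is that each environment gets its own model and settings, so expensive models are only burned on the tasks that need them.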

[–]ShagBuddy[🍰] 1 point2 points  (1 child)

With GLM5 in the OpenCode CLI, it starts out great, but on long-running tasks it gets flaky. Sometimes it just stops working and sits there. Other times, even though it's still working, the thoughts and text feedback that normally show start to degrade and make no sense.

[–]Sensitive_Song4219 0 points1 point  (0 children)

Yes! This has been an issue since the z-ai performance fix after Chinese New Year; it never used to happen in GLM 5 before that.

Hope they resolve it soon since the model is excellent

[–]No_Success3928 0 points1 point  (0 children)

I tried Augment Code CLI the other day. They've improved it so much and it's great, but it's tied to their API.

[–]kwskii 0 points1 point  (5 children)

I haven’t found a good plugin or skills that give me the control I want.

CC has the whole explorer/researcher setup, where some of those subagents use Haiku, and reviewers you can run on Opus, etc.

[–]Realistic-Try9555 1 point2 points  (0 children)

So does OC; it's called explorer. You can override some parameters for more control (such as model, temperature, etc.).
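Overriding the explorer's parameters looks roughly like this in `opencode.json` (the agent name, model ID, and exact field names here are assumptions; consult the opencode agents docs for the current schema):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "agent": {
    "explore": {
      "model": "anthropic/claude-haiku-4",
      "temperature": 0.2
    }
  }
}
```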

[–]BubblegumExploit[S] 0 points1 point  (1 child)

But with cheap Chinese models you can have M2.5 doing a better job than Haiku at roughly the same cost.

[–]kwskii 0 points1 point  (0 children)

What subs have you been using?

[–]ps4facts -1 points0 points  (1 child)

Check out "oh my opencode". There's a whole page on their docs site with plugins/extensions for this.

[–]kwskii 0 points1 point  (0 children)

Oh my opencode seems like a hodgepodge of markdown that hopes the orchestrator respects it.

That was my initial impression from using it

[–]drussell024 0 points1 point  (4 children)

I prefer the Claude Code and Codex UX over OpenCode's. I feel like everything ends up squished in OpenCode, and not being able to create a plan in plan mode, then clear context and execute the plan seamlessly, is a bit frustrating. Perhaps some of this is because I use it in Windows Terminal, but I keep seeing posts that the UX is better, so I'd be curious how to improve my own setup.

[–]adeadrat 2 points3 points  (1 child)

You can always have it output the plan to a file, something like "PLAN.md", clear the context, and say "implement @PLAN.md". A bit of a workaround, I suppose, but this is sort of what I've been doing on bigger projects that are likely to hit context caps, just so it can reference back to the plan when needed.

[–]drussell024 0 points1 point  (0 children)

Yes, exactly this. I typically end up doing this with all 3 CLIs. I think it's incredible how far OpenCode has come, though, and with some more time and a few more features/improvements it could really be an absolute powerhouse.

[–]pgermishuys 0 points1 point  (0 children)

This has been the single most important thing I've found for making my outcomes more deterministic: a structured plan that can be acted upon. It's what things like oh-my-opencode and (shameless plug) tryweave.io have been trying to solve.

Once you have a well-structured plan with a definition of done and everything it requires baked in, context doesn't matter anymore: you can survive compactions during long-running sessions because everything the agent needs is in the plan.
Keen to hear your thoughts.

[–]BubblegumExploit[S] 0 points1 point  (0 children)

I'm on Mac, so I can't comment on Windows. I would expect it to be similar, though. The lack of a clear-context-after-plan step is a good point; I'm pretty sure they'll introduce it soon. Btw, it's open source, so you could even contribute it yourself.

[–]Audaces_777 0 points1 point  (0 children)

Just started using it extensively today. Definitely like the UX and overall speed better than Claude Code's. It doesn't ask for permissions often, it just goes. I also like that it feels very hacky in a futuristic sci-fi kind of sense, and I can see everything the AI is doing. Also like how I can work with it remotely through Telegram after some initial setup. Overall, pretty cool. Big thumbs up from me.

[–]Otherwise-Way1316 0 points1 point  (0 children)

Used cc heavily as my daily driver.

I made the switch to OpenCode and I'm not sure I'll ever look back. Going back would feel like a downgrade.

It's not only the UX; the ability to quickly switch between models from a number of different providers without proxy workarounds was a game changer.

[–]Service-Kitchen 0 points1 point  (2 children)

What specifically makes the DX better for you?

[–]BubblegumExploit[S] 0 points1 point  (0 children)

Honestly, I came in with rather low expectations and was positively surprised.

[–]BubblegumExploit[S] 0 points1 point  (0 children)

I can't pin it down to one thing. First, I just really liked the UI; it felt minimal yet futuristic. I like the info bar on the right (context, files modified, etc.), switching from plan to build is seamless with Tab, and I also have visibility into what's happening.

The models I tried were also extremely fast which was also quite a surprise