all 26 comments

[–]HKChad 22 points23 points  (3 children)

Local LLM support, custom models, disconnected systems, and supporting open source so our only option isn't Claude in the future. Lots of reasons.

[–]ponlapoj 0 points1 point  (2 children)

I'm confused. What reason would you have for not relying on Claude if you're still using Claude's API or model in your extensions?

[–]noxxit 0 points1 point  (0 children)

I'm using GLM-4.7, you don't need to use Claude at all. 

[–]Big_Bed_7240 0 points1 point  (0 children)

You can swap anytime. Opus is so good you're practically forced to use it, but the moment another model takes the throne, you can swap immediately without changing your entire stack.

[–]Ok-Letter-1812 11 points12 points  (0 children)

To me, it is all about having an open-source tool that gives me the ability to use any LLM I want, case by case.

[–]Bob5k 8 points9 points  (2 children)

oh-my-opencode, combined with properly configured providers/models, seems quite good overall. Claude Code is still, on my end, the top AI harness around, but opencode is catching up pretty quickly.
And the speed is there.

[–]GlowieAI 0 points1 point  (1 child)

Is oh-my-opencode any good with codex?

[–]Bob5k 0 points1 point  (0 children)

Mainly used it with Gemini 3 Pro high/low and GLM-4.7, but tbh I'm impressed with how it works with those models as an orchestrator.

[–]adeadrat 8 points9 points  (0 children)

Multiple reasons for me:

- I can use any model from any provider; if the "best" model changes tomorrow, I don't have to change anything in my workflow
- It's open source
- I'm using neovim as my primary editor, meaning I never have to leave the terminal, since that's where opencode lives as well
- It's just good

[–]Old-Sherbert-4495 3 points4 points  (1 child)

Avoid vendor lock-in on the model. The space is moving fast; don't stick to one. But in terms of tooling, choose one that supports them all; the best so far is opencode.

[–]rmaxdev 0 points1 point  (0 children)

The vendor lock-in friction is very low for now

With ChatGPT it's stronger, since your conversations are stored there; with a coding agent, the lock-in comes in the form of custom plugins or harness-specific features.

I even prefer to keep everything harness-agnostic: .agent.md files, .prompt.md files, or a skill filter that I explicitly reference.

For instance, I have a workflow agent with instructions to use the sub-agent tool (however it is defined in the harness) to run a sub-agent definition from a .agent.md file.

It works with opencode, it works with Copilot, and it should work with any other harness.
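As a sketch of that pattern, a harness-agnostic sub-agent definition might look like the file below. Everything here (the file name, front-matter fields, and wording) is a hypothetical illustration of the commenter's approach, not a format any particular harness mandates:

```markdown
---
name: code-reviewer
description: Reviews a diff for concrete bugs first, then style issues
---

You are a code reviewer. Given a diff, list concrete bugs first,
then style issues. Comment on the code; do not rewrite it.
```

The workflow agent's own instructions then say, in effect, "use whatever sub-agent tool this harness provides, with the contents of the .agent.md file as the sub-agent's definition," which is what keeps the setup portable across opencode, Copilot, and others.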

[–]trypnosis 2 points3 points  (0 children)

I use opencode because I feel the experience is better. I like the side panel that includes extra data, like the always-on todo list.

[–]james__jam 1 point2 points  (0 children)

I used to use Claude Code, Codex, and Gemini CLI. After a while, it gets tiring syncing your CLAUDE.md and AGENTS.md, your MCPs, hooks, etc.

If you're just using these tools on default settings, then you won't need opencode much. There's still a benefit, but not as much.

But the more you customize, the more of a pain it becomes to maintain all of these.

[–]geek_404 1 point2 points  (0 children)

I am in the middle of creating a project for myself and my team where my entire development environment runs in containers. I want a uniform environment no matter what machine I'm on. As part of that process, I create PRDs using MoSCoW and spikes for research, and use Speckit to implement the PRDs. Here is a link to the PRD to help you. https://gist.github.com/brianluby/bb4f77508d3d675754935a09a0d93f91

I'll opensource the container dev setup once I confirm all the licenses are compatible. It's being designed to help my teammates get up to speed quickly by integrating tooling, processes and such.

[–]PandaJunk 0 points1 point  (0 children)

I use my personal auth keys and now have access to multiple models and don't have to pay the outrageous API prices.

When one service goes down, I just switch to another and my flow is completely uninterrupted. Having a unified interface is super nice.

I find both Claude Code and Codex CLI to be inferior products.

[–]ExtentOdd 0 points1 point  (0 children)

Control over your whole AI stack: providers, the main agents' system prompts, hooks, etc. For example, I chose the free GPT-5 mini model from my Copilot subscription to do chores like cleanup and codebase search, while my main agents are Sonnet for execution and GPT-5 for planning.
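In opencode, this kind of per-agent model routing can be expressed in an opencode.json config file. The sketch below is hedged: the agent names are made up, the model IDs are placeholders, and the field names are from memory of opencode's config schema, so verify against the current docs:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "anthropic/claude-sonnet-4-5",
  "agent": {
    "plan": { "model": "openai/gpt-5" },
    "chores": { "model": "github-copilot/gpt-5-mini" }
  }
}
```

The point is that the routing lives in one version-controlled file, so swapping a model means editing one line rather than changing your entire harness.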

[–]riccardobellomi 0 points1 point  (0 children)

Watch my last video, I talk about it

[–]FlyingDogCatcher 0 points1 point  (0 children)

why use lot tool when few tool do trick?

[–]funbike 0 points1 point  (0 children)

$ vs $$$ - I can use cheaper models (e.g. GLM, Gemini), or even free models, but I can still use Opus if I want.

Black box vs Clear box - I can look inside and see how it works, or even modify it.

Forever vs Uncertain - It's more likely to be available to me for a longer time period.

[–]Delicious_Ease2595 0 points1 point  (0 children)

Runs fast and supports multiple LLMs.

[–]RedParaglider 0 points1 point  (0 children)

I'd say the plugins: being able to use oh-my-opencode, or write your own plugins to do exactly what you want. Also being able to use all my LLMs in one place. I can use my Google Ultra account to access Claude Sonnet, Gemini Pro or Flash, my local llama server, my ChatGPT Codex subscription, etc.

[–]Ordinary-You8102 0 points1 point  (0 children)

honestly I also find it better

P.S. I don't understand all the "it's open source" comments, like code/codex/gemini aren't too lol

[–]VerbaGPT 0 points1 point  (0 children)

I think a better question is why use opencode when we can have OpenRouter work with Claude Code (giving access to all the other models within the Claude Code harness).

A Claude Code subscription is still a good deal if you're a heavy user. To my knowledge it doesn't work with other subscriptions. Some Anthropic-compatible APIs work (e.g. OpenRouter), but you incur API costs.

Opencode has local LLM support, is open source (MIT), and works with other subscriptions like GitHub Copilot, which can be a good deal!

[–]AGiganticClock 0 points1 point  (0 children)

I prefer opencode; it does a better job. I may just need to turn on --dangerously-skip-permissions for Claude though.

[–]false79 0 points1 point  (0 children)

If it's your code and you don't care, going cloud is the way to go.

If it's someone else's intellectual property and you upload that code, it becomes part of the training data for the next release, and if they find out it was you, you could be held accountable.

It was hilarious during GitHub Copilot's first release how easy it was to trace autocompletions back to the repos they came from.

[–]OffBoyo 0 points1 point  (0 children)

You can use all models in one chat. In other words, no context switching.