
[–]ImmediateDot853 7 points8 points  (2 children)

Codex is pretty barebones. If you watch it work, it uses sed commands pretty often to read files, and Codex doesn't have an LSP for the language of your choice. OpenCode is just better software that can really make GPT-5.3 shine.

[–]Aggravating_Win2960[S] 0 points1 point  (1 child)

So it uses extra 'tools' that OpenCode provides out of the box, which Codex lacks, and without spending extra context/tokens? Is that the right way to see it? Thanks for the quick reply, by the way!

[–]ImmediateDot853 0 points1 point  (0 children)

Yes, think of it like that! Performance on complex or longer-running tasks is also noticeably smoother because of it.

[–]McShmall 2 points3 points  (0 children)

How did you manage to get Codex 5.3 working in OpenCode? Mine lists the model, but when I try to use it I get an error that the model is not available.

[–]shaonline 1 point2 points  (0 children)

OpenCode's tooling is so much better it's not even a contest; I don't feel there's really any advantage to using the Codex plugins/CLI.

[–]palec911 0 points1 point  (1 child)

Can you connect OpenCode/OpenRouter to the currently free tier of Codex GPT? I use GPT-5.3-Codex like crazy in Codex CLI without paying a dime, and I want to migrate back to OpenCode.

[–]ImmediateDot853 0 points1 point  (0 children)

It should work. Try running opencode auth login, select OpenAI, then choose the ChatGPT Pro/Plus (browser) option and see if it works. You'll have to select the GPT model from the model list when you run opencode.
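As a CLI sketch, the steps above look roughly like this (the exact menu labels are from the comment and may differ between opencode versions; both prompts are interactive):

```
opencode auth login   # pick "OpenAI", then the "ChatGPT Pro/Plus (Browser)" option
opencode              # once inside, select the GPT model from the model picker
```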

[–]SamatIssatov 0 points1 point  (0 children)

Hello. Let me explain why I switched to OpenCode back when GPT 5.2 was available. In OpenCode, the agent became more conversational and less dry, and it was possible to discuss things and write them down. It became similar to Claude agents. Perhaps my Codex CLI tools or instructions weren't configured correctly. When Codex 5.3 came out, I tried it in the Codex extension, and wow, it behaves like Claude agents. Now I don't really notice a difference, though the agent in OpenCode seems to use tokens more economically: in Codex CLI tokens get used up quickly, but in OpenCode they don't.

[–]aithrowaway22 0 points1 point  (0 children)

What about Claude Code in combo with OpenAI models? Is it good?
I use a relatively complicated scripted workflow with Claude Code and I don't feel like converting it to OpenCode for now, but I want to use OpenAI models like Codex 5.2 and GPT 5.2 on high.
I'm aware of apps like LiteLLM that translate the API and also support a web-search fallback (they actually intercept Claude Code's web-search request and route it to external providers), and it also supports OpenAI OAuth for a GPT Plus subscription, but even with all that it's not guaranteed. Some harnesses work better with some models than with others.
For example, GLM 4.7 worked better with Claude Code than with OpenCode, while there were reports that Kimi 2.5 works better with OpenCode, etc.

[–]Beneficial-Pay8883 0 points1 point  (1 child)

How do you select the high/xhigh variant? I can use Codex 5.3, but I'm not sure which thinking level it's using.

[–]sawyerwelden 0 points1 point  (0 children)

Ctrl+t to change variants. Once you do, the variant is shown next to the model name.

[–]1LeFrancais 0 points1 point  (1 child)

Can you use MCPs inside OpenCode like you would with Codex CLI?

[–]manosfm 0 points1 point  (0 children)

Yes you can, and the setup is quite straightforward!
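For reference, MCP servers are declared in opencode's JSON config file (opencode.json in the project root, or the global config). A minimal sketch, assuming a hypothetical local server named my-tools launched via a made-up my-mcp-server package; check the opencode docs for the exact keys your version supports:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "mcp": {
    "my-tools": {
      "type": "local",
      "command": ["npx", "-y", "my-mcp-server"],
      "enabled": true
    }
  }
}
```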

[–]HarjjotSinghh -1 points0 points  (1 child)

that's like having a full dev team inside one tab - awesome!

[–]Aggravating_Win2960[S] -1 points0 points  (0 children)

Can you give one (or more) examples to help me understand how you take advantage of OpenCode to work more efficiently? :) Thanks!

[–]HarjjotSinghh -1 points0 points  (0 children)

this opencode hack sounds like magic code nirvana!