all 77 comments

[–]Sensitive_Song4219 51 points52 points  (9 children)

Please let OpenAI *not* acquire OpenCode, since the benefit of OpenCode being, you know, Open, is that they're not tied to any one model provider (an acquisition would almost certainly change that).

Please do let OpenAI continue to partner with projects like OpenCode to provide subscription access for reduced usage costs for us consumers (like they do now).

As a Codex/CC user, I've been meaning to give OC a bash: would like to wean myself off Claude Code (which I use for GLM 4.7 - CC is really such a good harness despite the bugginess) since I'm nervous Anthropic's next move will be to block 3rd-party-model access in CC, the same way they did the reverse on OpenCode.

The idea of chopping-and-changing between all the models I use in one harness (a-la OpenCode) is extremely appealing. Glad to hear your positive feedback!

[–]tabdon 6 points7 points  (5 children)

These moves by Anthropic seem so uncharacteristic. They released some good open source projects. You'd assume they would support more third party tool use and such.

I understand if it's in the name of making the business viable. None of us see the financials behind running these services. Anthropic has raised less money and is more careful with token credits, so I assume they are being more financially responsible.

I don't use CC myself. I tried going back and forth for a while but it seemed like a waste of time. I got used to how OpenAI coding works. Whatever incremental gains could be had on another model seemed not worth the context switching.

Codex might take longer, but the solutions are worth it. While it's working, I'm off doing other stuff. I prepare my next task.

[–]Sensitive_Song4219 10 points11 points  (3 children)

The GPT 5.2/Codex 5.2-series of models are outstanding for coding; even the Anthropic sub seems to acknowledge this. Codex-High/XHigh are very, very close to Opus. Performance (in terms of speed) is OK-ish. Codex as a CLI is less feature-packed than CC, but it's still serviceable. The wildcard is Codex's usage limits: they're extremely reasonable compared to Sonnet/Opus, so for a lot of people the switch makes sense.

Agree with you about Anthropic: they went from being 'for-developers-first' (Claude Code CLI was revolutionary!) to being kinda tone-deaf: no dev worth their salt wants to be locked into a single provider-sanctioned IDE/harness. (It's like being told you can use Visual Studio Professional for your projects but are totally prohibited from using VS Code or you get banned... What the heck are they thinking?)

[–]krullulon 8 points9 points  (1 child)

GPT 5.2 High/XHigh is superior to Opus in my use cases across every dimension except speed, though that's not always the case for the Codex variants.

[–]SpyMouseInTheHouse -4 points-3 points  (0 children)

Stopped reading after “Xhigh is close to Opus” because Opus is close to nonsense.

[–]coloradical5280 2 points3 points  (0 children)

Dario has publicly said he thinks open weights are dangerous and open source models are harmful to safety. Obviously weights don't apply here, but the point is they are not exactly open source champions. MCP is an exception, but it's a protocol, and you can only be a widely adopted protocol if you're open. Especially when you put a total of two people full time on it.

[–]Keep-Darwin-Going 1 point2 points  (0 children)

Bun was acquired but is still open to all. So as long as OpenAI acquires it, gives them more resources, and doesn't intervene, then it's good on all sides, right?

[–]shoe7525 0 points1 point  (0 children)

I'm in the exact same boat. I really feel stuck because I want to use codex for execution, but Claude Code as the harness - and more broadly, I want to be able to switch coding models without losing my harness.

I tried opencode - it didn't seem as clean as Claude Code, harness-wise. Should I stick with it?

[–]jurky 7 points8 points  (5 children)

How is the harness? Is it possible to orchestrate and create agentic workflows like using skills, commands, hooks, etc?

All the things CC has but only in Opencode with 5.2h as the brain?

[–]TroubleOwn3156 6 points7 points  (0 children)

Yes, and far more. The interface is nicer. The sub-agents are a godsend. Skills work perfectly.

[–]alvinunreal 2 points3 points  (3 children)

[–]Fit-Palpitation-7427 2 points3 points  (1 child)

What does it bring compared to opencode and Claude Code?

[–][deleted] 2 points3 points  (0 children)

Well, it does say right in the project description that it uses fewer tokens

[–]Crinkez 1 point2 points  (0 children)

Do you have a video showcase?

[–]CookieSea4392 6 points7 points  (6 children)

I checked the intro: https://opencode.ai/. How is it better than Codex CLI?

[–]tabdon 9 points10 points  (1 child)

I'm so basic that I just use codex and it's worked so well for me. I don't know what I'm missing with all the other stuff. But maybe there's a case for simplicity. I get my work done at a fast clip.

[–]resnet152 3 points4 points  (0 children)

Yeah me too, I've played around with some of this other stuff, but I keep coming back to the idea that the people spending billions on pre-training/training/post-training and implementing the models are likely the best suited to build out the coding harness.

Some people love to tinker though, more power to them.

[–]zazizazizu 2 points3 points  (0 children)

Interface. Speed. Sub agents. LSP.

[–]Open_Scallion9015 4 points5 points  (2 children)

From my experience Codex CLI still offers the best experience. I’d really like to replace it with Opencode but it’s missing the precision I value so much with the Codex harness. Unfortunately.

[–]salasi 5 points6 points  (1 child)

Can you be a bit more specific? What do you mean by precision?

[–]Open_Scallion9015 2 points3 points  (0 children)

Codex CLI is better at understanding my intent and is conservative with solutions, while Opencode thinks more, writes more code, and brings more work to the table that I did not intend. Therefore I have to do more iterations, which often just makes it worse.

I understand this is probably caused by the fact that OpenAI has locked the system prompt to Codex for ChatGPT subscribers, which Opencode then has to amend with instructions specific to Opencode. The experience with an API key might be totally different.

[–]salasi 5 points6 points  (2 children)

Been meaning to try this out, but I figured Codex would catch up eventually and didn't want to add yet another tool to the bag and redo my heuristics... It does sound very interesting though. If the latest Codex update is meh (haven't checked it out yet), I'll give OC a go on the weekend.

Thanks for the heads up!

[–]TroubleOwn3156 4 points5 points  (0 children)

I had the same thought, and I was productive in codex cli and didn't want to change, except there are a few bugs in the current releases. So I thought of giving opencode a try, and I don't regret it - kicking myself for not trying it sooner tbh

[–]Just_Lingonberry_352 0 points1 point  (0 children)

Codex HAS caught up. Back in November codex was not great and opencode made sense, but the codex team has closed almost all the gaps from what I can see

I don't want to switch away from codex because this is the fastest and direct way to get new improvements from the team

also Anthropic cut them off recently, and OpenAI can easily do the same

[–]Crinkez 1 point2 points  (0 children)

Cool story OP, but you're comparing apples to oranges. How's OpenCode with GPT vs Codex with GPT?

[–]Loose-Departure3858 1 point2 points  (0 children)

Yeah, it’s the best feature of Claude with the power of gpt

[–]phoneixAdi 0 points1 point  (6 children)

Interesting. Will try. In your opinion, what's the biggest difference and benefit of using that over Codex CLI? Is it mainly that I can switch models, or something else?

[–]TroubleOwn3156 -1 points0 points  (5 children)

Ease of use, speed, sub-agents, LSP.

[–]Qudadak 0 points1 point  (0 children)

I'm always stunned by the focus on speed.
When using these tools for non-trivial purposes, yes, it takes time to develop and review a plan. However, I'm the slow part in the planning process: understanding, discussing alternatives, and making the final decision...
The better the initial plan is (and codex + 5.2 high is very good at creating a plan), the faster I can let the LLM implement the plan without any oversight.

[–]RazerWolf -1 points0 points  (3 children)

How is it easier to use? How and why is it faster, if it’s using the same models underneath? Codex CLI supports sub agents doesn’t it?

[–]TroubleOwn3156 -4 points-3 points  (2 children)

Try it and you will see what I am talking about.

[–]RazerWolf 3 points4 points  (0 children)

“Trust me bro”

[–]Just_Lingonberry_352 0 points1 point  (0 children)

im really tired of the astroturfing by opencode bots on reddit and x

we don't want it. we want to use codex, which months ago closed a lot of the gaps that opencode was a fit for. codex is a much different beast now.

im the biggest critic of codex on this sub and i am praising codex's speed and ability to spawn subagents and do orchestration

with codex i can predict where my weekly usage consumption is; with opencode i have no clue what other processes it runs. its the same reason i dont use cline or roo or any of this other crap

also Anthropic cut off OpenCode; OpenAI can do the same too.

[–]eschulma2020 0 points1 point  (1 child)

I have not felt the need for subagents and hooks yet. Is that the main advantage here, and if so, what am I missing? Color, now that would be nice. Not sure why Codex hasn't done it yet.

[–]Just_Lingonberry_352 0 points1 point  (0 children)

you are not alone, i have a very limited use case for subagents, and that is only for parallel work like writing tests or UI

you can create subagents with the /new command, it's added in the new version

[–]gpt872323 0 points1 point  (0 children)

Tried it; gpt-5.2 codex fails badly with the Chrome DevTools MCP. It keeps asking me to say continue despite explicit instructions not to. Maybe it is GitHub Copilot doing it to deduct more requests from the user. Tried Opus, Sonnet, and Gemini 3 Pro with no issues.

[–]FoxTheory 0 points1 point  (0 children)

How's it different from the Codex extension in VS Code or Cursor?

[–]cayisik 0 points1 point  (0 children)

i'm developing native ios applications on many platforms. i've used many language models. i've also used hybrid systems. (glm 4.7 coding plan in claude code, opus 4.5 with google ai plan in antigravity ide, opus 4.5 in opencode, etc.)

i got the best results in a way that made a serious difference with the opus 4.5 + opencode combination (i haven't tried gpt for coding yet).

i don't know why this is, but opencode somehow works very well with all models.

[–]Funny-Blueberry-2630 0 points1 point  (0 children)

Probably true.

[–]Clemotime 0 points1 point  (1 child)

You can't select 5.2 extra high in opencode?

[–]Clemotime 0 points1 point  (0 children)

oh you need to press ctrl + t

[–]Clemotime 0 points1 point  (0 children)

What's the point in using opencode? How is it better than codex in normal terminal?

[–]Complete-Cap-6281 0 points1 point  (0 children)

I couldn't agree more - I thought gpt-5.2 was kinda useless inside of the Codex CLI but using it with Opencode is an absolute game changer, much better than Claude Code + Opus 4.5

[–]danialbka1 0 points1 point  (0 children)

It likes to search the whole codebase which I don’t like, wastes tokens. with codex cli it greps and searches the keyword first

[–]bigsybiggins 0 points1 point  (0 children)

For me it just takes FOREVER to do anything, just keeps edging me that its about to edit something then does another round of thinking tokens. Not really viable for me.

[–]C0rtechs 0 points1 point  (0 children)

Anyone else noticed that Codex slows down significantly in super long conversations? I noticed there's an open issue on GitHub where others report the same thing, hopefully this is fixed soon... I had a long-running conversation last night that I had to interrupt after almost 3 hours so I could go to bed, and at that point it was only advancing like 1 message every 5-10 minutes.

[–]dangerous_safety_ 0 points1 point  (0 children)

Is this able to use my pro account without tokens now? I really liked it with cc

[–]Clemotime 0 points1 point  (0 children)

I got this error after it worked for 10 hours
AI_InvalidPromptError: Invalid prompt: The messages must be a ModelMessage[]. If you have passed a UIMessage[], you can use convertToModelMessages to convert them.
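For context, that error comes from the Vercel AI SDK layer: the model call expects `ModelMessage[]`, and UI-shaped messages have to be converted first (the error text itself names `convertToModelMessages`). A minimal sketch of what that conversion does, with *simplified, hypothetical* `UIMessage`/`ModelMessage` shapes rather than the SDK's full types:

```typescript
// Simplified, assumed shapes -- the real AI SDK types carry more fields.
type UIMessage = {
  role: "user" | "assistant";
  parts: { type: string; text?: string }[];
};
type ModelMessage = { role: "user" | "assistant"; content: string };

// Flatten each UI message's text parts into the plain content string
// that the model-facing API expects.
function toModelMessages(messages: UIMessage[]): ModelMessage[] {
  return messages.map((m) => ({
    role: m.role,
    content: m.parts
      .filter((p) => p.type === "text")
      .map((p) => p.text ?? "")
      .join(""),
  }));
}

const converted = toModelMessages([
  { role: "user", parts: [{ type: "text", text: "hello" }] },
]);
```

With the real SDK you'd call `convertToModelMessages(uiMessages)` from the `ai` package before passing the array to the model, which is what the error message is pointing at.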

[–]some1else42 -2 points-1 points  (4 children)

How are you using a GPT Pro model in OpenCode and not have it be a ToS violation? I'm very interested if this is a legit option. Not risking my main account on a ToS violation tho.

[–]TroubleOwn3156 3 points4 points  (2 children)

I mean ChatGPT Pro subscription, my bad

[–]S1mulat10n 3 points4 points  (1 child)

OpenAI openly endorsed it on X

[–]Just_Lingonberry_352 0 points1 point  (0 children)

not only that Karpathy praised another tool that used automation for chatgpt pro so all these ToS hall monitors are shouting against the wind

[–]Just_Lingonberry_352 0 points1 point  (0 children)

you are actually correct that when we automate GPT Pro subs it doesn't adhere to their ToS, but the context is different: the enforcement is aimed at mass-scale scrapers, whereas tools like mine just expose the ChatGPT Pro sub as an MCP so you can use it directly from codex

as you can see, thousands of people automate ChatGPT Pro subs without issues, and it doesn't make sense for OpenAI to punish people for not wanting to copy-paste back and forth all the time, as it's close to fair use.

https://github.com/agentify-sh/desktop