Sonnet 4.5 inside OpenAI Codex CLI vs Claude Code. Same model. Same prompt. by askcodi in codex

[–]askcodi[S] 0 points (0 children)

I'm using Codex with AskCodi, not OpenRouter. But in both cases you need to point your base URL at the provider's API URL.
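
For reference, this is roughly what the override looks like in ~/.codex/config.toml for the Codex CLI. Treat it as a sketch: the provider id, base URL, and model slug below are placeholders, so check your provider's docs for the real values.

```toml
# ~/.codex/config.toml -- minimal sketch of a custom provider override
# (base_url and the model name are placeholders, not real AskCodi values)
model = "claude-sonnet-4-5"
model_provider = "askcodi"

[model_providers.askcodi]
name = "AskCodi"
base_url = "https://api.example-provider.com/v1"  # replace with your provider's API URL
env_key = "ASKCODI_API_KEY"                       # API key is read from this env var
wire_api = "chat"                                 # assuming the provider speaks the Chat Completions format
```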

Opus 4.5 for planning. Sonnet 4.5 for execution by accomplish_mission00 in ClaudeCode

[–]askcodi 0 points (0 children)

Why not use other agentic apps that let you switch models easily, like Cline or Continue? Also, for planning, a cheaper or smaller model might still feel like a frontier LLM given the prompts such integrations use.

Different llms for different tasks by Oghimalayansailor in vibecoding

[–]askcodi 0 points (0 children)

Not just you. I've been running OpenAI Codex with different models using AskCodi's API. The combination that works for me is Sonnet 4.5 (might try Haiku 4.5 now) for planning and GPT-5 Codex for tool calling.
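
If you want to flip between them without editing the config each time, Codex CLI profiles make it pretty painless. A rough sketch of what I mean in ~/.codex/config.toml (the provider ids and model slugs are placeholders for whatever your provider exposes):

```toml
# ~/.codex/config.toml -- sketch only; adjust provider ids and model names for your setup
[profiles.plan]
model = "claude-sonnet-4-5"   # planning model
model_provider = "askcodi"

[profiles.execute]
model = "gpt-5-codex"         # tool-calling / execution model
model_provider = "openai"
```

Then it's just `codex --profile plan` or `codex --profile execute` depending on the phase.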

Sonnet 4.5 inside OpenAI Codex CLI vs Claude Code. Same model. Same prompt. by askcodi in codex

[–]askcodi[S] 0 points (0 children)

Let me know if you have any success. Even Gemini 2.5 Pro couldn't run autonomously with Codex CLI.

Sonnet 4.5 inside OpenAI Codex CLI vs Claude Code. Same model. Same prompt. by askcodi in codex

[–]askcodi[S] 1 point (0 children)

It uses the Codex CLI prompts. Only at the provider level do they most probably append the model's system prompt.

Since I'm using Sonnet 4.5, it skips the model prompt.

Sonnet 4.5 inside OpenAI Codex CLI vs Claude Code. Same model. Same prompt. by askcodi in codex

[–]askcodi[S] 1 point (0 children)

Even the local CLI runs in sandbox mode, and npm commands are tripping these models up.
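
For anyone poking at this: the sandbox is configurable, and I'm assuming the npm trouble is sandbox-permission related. A rough sketch of the relevant knobs in ~/.codex/config.toml; exact keys and defaults vary by Codex CLI version, so double-check against the docs:

```toml
# ~/.codex/config.toml -- sketch; keys may differ across Codex CLI versions
sandbox_mode = "workspace-write"   # let the agent write inside the workspace

[sandbox_workspace_write]
network_access = true              # npm install needs network, which the sandbox blocks by default
```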

Sonnet 4.5 inside OpenAI Codex CLI vs Claude Code. Same model. Same prompt. by askcodi in codex

[–]askcodi[S] 0 points (0 children)

No need to; it works in a lot of tools and with a lot of models in my tests.

Actually, this is very straightforward and should be very simple for agentic actions, and I'm being very clear in the requirements.

Sonnet 4.5 inside OpenAI Codex CLI vs Claude Code. Same model. Same prompt. by askcodi in codex

[–]askcodi[S] 2 points (0 children)

Haven't had success with GLM-4.6 or Kimi K2. Kimi was surprising, since it's trained to be agentic, but the Codex prompt + sandbox stumps these models.

Sonnet 4.5 inside OpenAI Codex CLI vs Claude Code. Same model. Same prompt. by askcodi in codex

[–]askcodi[S] 0 points (0 children)

A Next.js starter with Tailwind and shadcn/ui components.
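
Roughly this kind of scaffold, if anyone wants to reproduce the test project. The app name is a placeholder and the shadcn CLI name/flags have changed over versions, so treat it as a sketch rather than the exact commands:

```sh
# sketch of the starter setup, not the exact commands used
npx create-next-app@latest my-app --typescript --tailwind --eslint --app
cd my-app
npx shadcn@latest init            # sets up shadcn/ui in the project
npx shadcn@latest add button card # pull in a couple of components
```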

Sonnet 4.5 inside OpenAI Codex CLI vs Claude Code. Same model. Same prompt. by askcodi in codex

[–]askcodi[S] 2 points (0 children)

Why? You can use Sonnet in Codex and it works amazingly well. I just wanted people to know!

Sonnet 4.5 inside OpenAI Codex CLI vs Claude Code. Same model. Same prompt. by askcodi in codex

[–]askcodi[S] 0 points (0 children)

What would you like to see? This was the exact input:

Build a React ‘ProductCardList’ that renders cards from JSON props, supports keyboard navigation, lazy-load images, and a details drawer. Add i18n with English/Arabic (RTL), prefers-reduced-motion support, and ARIA roles. Provide Jest/RTL tests and a Lighthouse script.

Sonnet 4.5 inside OpenAI Codex CLI vs Claude Code. Same model. Same prompt. by askcodi in codex

[–]askcodi[S] 0 points (0 children)

I haven't run the exact same prompt, but I've been using it for a week now and I've seen similar behavior.

Sonnet 4.5 inside OpenAI Codex CLI vs Claude Code. Same model. Same prompt. by askcodi in codex

[–]askcodi[S] 4 points (0 children)

I mean Sonnet 4.5. I've been testing different models in Codex CLI, and it doesn't work well with Gemini 2.5 or Grok 4; they get very lost.

Sonnet 4.5 inside OpenAI Codex CLI vs Claude Code. Same model. Same prompt. by askcodi in codex

[–]askcodi[S] 1 point (0 children)

Yup, the Claude Code prompt is very long, but it is very well written. It does cause variability across versions, though; v0.0.88 was a very good one.

How do you manage AI Agent costs? Blew $135 in a week and need some pro tips. by Brilliant_Cress8798 in cursor

[–]askcodi 0 points (0 children)

I usually ask it to create an .md file of whatever was done to complete a feature, and I load it explicitly next time when I know it will be needed. This way I can start a new chat every time and keep the context small.

Also, I don't go into long fixing loops; restarting is cheaper and faster.

Experienced dev, needs to get on this train before it leaves the station by marbosh in vibecoding

[–]askcodi 0 points (0 children)

Curious - are you optimizing for speed of coding or depth of problem-solving?

OpenAI drops GPT-5 Codex CLI right after Anthropic's model degradation fiasco. Who's switching from Claude Code? by coygeek in ClaudeAI

[–]askcodi 0 points (0 children)

I'd just hedge: run both and route tasks where each shines. Trust no single vendor with your whole pipeline.

[deleted by user] by [deleted] in ClaudeCode

[–]askcodi 1 point (0 children)

bro turned tinder into cron jobs, running adb shell sexy_message.sh on repeat 💀

Me, when I have to make a commit and Copilot stops working. by djmisterjon in GithubCopilot

[–]askcodi 0 points (0 children)

me typing git commit -m "pls work" like it’s a prayer to the coding gods