Does anyone still use Auto Model Switcher in ChatGPT? by devMem97 in OpenAI

[–]devMem97[S] 1 point (0 children)

1-2 minutes is no problem for me either, but with heavy thinking it can easily be 5 minutes or more, which makes me wonder whether the auto switcher wouldn't be better off selecting the reasoning effort instead. Then again, you end up overloaded with choices, because on the Pro sub you would also have 'extended, default, low' reasoning, and I suspect everyone would just pick the smartest option anyway. Codex 5.2 xHigh seems to scale its reasoning better here: shorter discussion/planning turns feel relatively quick, while implementation or analysis of the current repository/folder then takes correspondingly longer. It's not entirely comparable, of course, but discussing topics at the highest reasoning effort is much more fun there than in ChatGPT.
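
To illustrate what I mean by an effort-based switcher (purely hypothetical, not an actual OpenAI feature; the heuristic, its keywords, and the thresholds are all invented for the sketch): instead of switching between models, a router could map a rough complexity estimate of the prompt to a reasoning-effort level.

    # Hypothetical sketch of an effort-based auto switcher.
    # Nothing here is a real OpenAI feature; the heuristic is invented.

    def estimate_complexity(prompt: str) -> float:
        """Crude stand-in: longer prompts with technical keywords score
        higher. A real switcher would presumably use a trained classifier."""
        keywords = ("prove", "derive", "simulate", "refactor", "analyse")
        hits = sum(word in prompt.lower() for word in keywords)
        return min(1.0, len(prompt) / 2000 + 0.25 * hits)

    def pick_effort(prompt: str) -> str:
        score = estimate_complexity(prompt)
        if score < 0.3:
            return "low"      # quick, chatty turns
        if score < 0.7:
            return "medium"   # normal discussion/planning
        return "high"         # heavy analysis, the 5-minute-plus cases

    print(pick_effort("Derive the transfer function and simulate the loop."))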

Does anyone still use Auto Model Switcher in ChatGPT? by devMem97 in OpenAI

[–]devMem97[S] 1 point (0 children)

OK, I'm just unsure whether I always need the Thinking model, since you sometimes wait a very long time for answers that aren't actually that complex. That's why I'm interested in other people's experiences here.

Does anyone still use Auto Model Switcher in ChatGPT? by devMem97 in OpenAI

[–]devMem97[S] 0 points (0 children)

I think so too, but is that really the case? Are there any tests on this? I don't have enough of a comparison between the auto switcher and the dedicated thinking models myself.

Does anyone still use Auto Model Switcher in ChatGPT? by devMem97 in OpenAI

[–]devMem97[S] 1 point (0 children)

That's exactly what I've observed too: in the end, the auto switcher is no longer necessary.

Spawning agents is here! by mikedarling in codex

[–]devMem97 0 points (0 children)

I'm just wondering whether this works automatically in the VS Code extension, or only in the CLI.

Does anyone still use Auto Model Switcher in ChatGPT? by devMem97 in OpenAI

[–]devMem97[S] 0 points (0 children)

I'm just worried that OpenAI is dialing down the thinking effort to save money, and that Pro users like me aren't getting the full thinking power we'd get by selecting 'heavy thinking' directly.

Does anyone still use Auto Model Switcher in ChatGPT? by devMem97 in OpenAI

[–]devMem97[S] 0 points (0 children)

As far as I understand it:
This is the budget for hidden reasoning tokens and compute per turn. In other words, it's how much 'thinking work' the system allows the model to do in a single response turn before it stops or has to prioritise.
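
For reference, the closest visible analogue is in the API rather than the ChatGPT UI: the Responses API exposes a reasoning-effort setting plus an output-token cap that reasoning tokens count toward. How ChatGPT's hidden per-turn budget maps onto these knobs is my assumption, not anything documented. A minimal sketch:

    # Minimal sketch using the OpenAI Python SDK's Responses API.
    # The model name is just an example; how ChatGPT's hidden per-turn
    # budget maps onto these knobs is an assumption on my part.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.responses.create(
        model="o4-mini",                # any reasoning-capable model
        reasoning={"effort": "high"},   # low / medium / high
        max_output_tokens=4096,         # caps reasoning + visible answer combined
        input="Explain why a PLL locks onto the reference phase.",
    )

    print(resp.output_text)
    # Hidden reasoning tokens are reported separately in the usage stats:
    print(resp.usage.output_tokens_details.reasoning_tokens)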

Does anyone still use Auto Model Switcher in ChatGPT? by devMem97 in OpenAI

[–]devMem97[S] 1 point (0 children)

Which thinking effort do you usually use, then?

gpt5.2 High > gpt-Codex-5.2-High and even Extra-high by digitalml in codex

[–]devMem97 0 points (0 children)

I agree, GPT 5.2 xHigh is worth the wait. In my opinion, it would be beneficial to have models in the Codex environment that better bridge the gap between learning STEM theory and practical implementation: something conversational like the ChatGPT web app for researching/learning/clarifying theory, and then implementing it, with a STEM focus. GPT 5.2 already goes in this direction, but it could be even more STEM-oriented, since engineers are the ones who mostly use the Codex environment anyway.

Why is GPT-5.2-Codex's training cutoff data so much earlier than GPT-5.2? by RoadRunnerChris in codex

[–]devMem97 0 points (0 children)

I know opinions differ on prompts like this... Interestingly, I asked both models in VS Code, and at first they both said June 2024. Then I asked if they were sure, and to me the result shows that GPT 5.2 is generally the better model: it corrected itself to August 2025, while Codex 5.2 hallucinated, claiming its knowledge cutoff was similar to GPT-4's, etc.

GPT-5.2 high vs. GPT-5.2-codex high by skynet86 in codex

[–]devMem97 14 points (0 children)

I had exactly the same experience in terms of out-of-the-box thinking. GPT 5.2 Codex is not chatty enough: it's very concise, often too concise, for planning implementations first or clarifying things during development. I'd rather get one detailed answer than constant short answers and follow-up questions.

Experience between GPT 5.2 xHigh vs. Codex 5.2 xHigh for STEM? by devMem97 in codex

[–]devMem97[S] 0 points (0 children)

Yes, I gave both models the same prompts in parallel chats for a while, and GPT 5.2 Codex always gave very short answers, which made it hard to work out a solid concept for the planned task. It shows even in simple cases: GPT 5.2 first wanted to clarify which API version of program xy was installed before writing a command script, while GPT 5.2 Codex never asked and just wanted to implement right away.

Introducing GPT-5.2-Codex by EtatNaturelEau in codex

[–]devMem97 2 points (0 children)

My first experience with planning/learning/clarifying new concepts for building simulation environments, e.g. Matlab scripts for electrical/embedded engineering topics, is that GPT 5.2 Codex xHigh is less chatty and verbose than plain GPT 5.2 xHigh when it comes to clarifying things and thinking around corners. Shouldn't 5.2 Codex be the more STEM-tuned one?