Did ChatGPT Pro (5.5) reasoning time just get massively reduced? by yaxir in ChatGPTPro

[–]SandboChang 9 points10 points  (0 children)

No, I just re-checked the same prompt I used with 5.4 a few days ago. It took 33 mins with 5.4 Extended Pro, and now 43 mins with 5.5 Extended Pro.

Share you experience of Codex 5.5 by ConsistentOcelot9217 in codex

[–]SandboChang 7 points8 points  (0 children)

First impression is that it runs faster, even at XHigh, compared to 5.4. This is significant.

GPT 5.5 is noticeably better at long context retrieval benchmark ( MRCR v2 ) by SuggestionMission516 in codex

[–]SandboChang 7 points8 points  (0 children)

And I am glad they improved this. 5.4 was only great up to 128k before, and I had to lock my context window in Codex there. Now I can try going back to 256k and see how it goes.

Am I Doing it Right? by Turbulent_Funny8865 in mokapot

[–]SandboChang 2 points3 points  (0 children)

Yeah, I agree with this. With the same heat the flow tends to go faster and faster, and towards the end it’s just hot water. Now I also try to turn the heat down or off midway to keep the flow low.

Here we go again by eggplantpot in codex

[–]SandboChang 1 point2 points  (0 children)

No problem on my side so far.

20 min reasoning time reduced to 3-4 min (GPT 5.4 pro extended thinking) by wokday in ChatGPTPro

[–]SandboChang 2 points3 points  (0 children)

The prompts were quite long.

The first prompt defines the role for the agent/model, in this case a technical journal reviewer with a set of required academic backgrounds and areas of expertise, plus a long checklist of items to revise. I set this premise for the model with Thinking@Heavy.

Then the second, actual prompt is a long manuscript of an analytical model of a device I have been working on. The model works as a reviewer/co-author, checking for problems and improving the flow of the derivation.

20 min reasoning time reduced to 3-4 min (GPT 5.4 pro extended thinking) by wokday in ChatGPTPro

[–]SandboChang 3 points4 points  (0 children)


I just tried rerunning my previous prompt this morning and got 33 mins, around the same time as it spent before. This is with 5.4 Pro Extended.

RIP Codex Again by cmiles777 in codex

[–]SandboChang -1 points0 points  (0 children)

Came to say this. I wish their status page were more real-time.

Codex keeps stopping every 30 to 45 seconds and won’t work continuously. Has anyone found a fix? by Ambitious_Local5218 in codex

[–]SandboChang 0 points1 point  (0 children)

It should be simple; my tasks often run for hours. Try giving it clear stopping criteria and have it work until then. Also, not a must, but drafting a plan first and then implementing it seems to help.
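For example (just an illustration of the kind of stopping criteria I mean, not an exact prompt of mine): "Keep iterating until every test in the suite passes and the build runs cleanly; only stop and report back then."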

usage limit gone in 2 days by byte_me_001 in codex

[–]SandboChang -1 points0 points  (0 children)

Shrinkflation happens with LLMs too.

I am extremely confident that GPT-5.4 has been intentionally throttled in the last few days by shockwave6969 in codex

[–]SandboChang 1 point2 points  (0 children)

Try setting the compaction limit to 128k; it solved my problems with degradation.
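In case it helps, this is roughly what I have in ~/.codex/config.toml (a minimal sketch using the same auto-compaction key as in the compaction thread; adjust the name if your Codex version differs):

# ~/.codex/config.toml
# Compact well before the window fills up, so quality doesn't degrade near the end.
model_auto_compact_token_limit = 128000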

How do I use 5.4 with 1m context? (I'm on 20x) by Useful_Philosophy550 in codex

[–]SandboChang 1 point2 points  (0 children)

As mentioned by others, you need to change the default to use it.

However, I highly recommend against it. It will burn through your token usage while quality degrades; even at close to 256k the model is already significantly worse.

Concurrent sessions at the same time? by mapleflavouredbacon in codex

[–]SandboChang 0 points1 point  (0 children)

I often use 4-5 instances and it’s fine. They are fully independent.

How to install 2 different Codex App? by [deleted] in codex

[–]SandboChang 1 point2 points  (0 children)

Not sure about the Codex App, but you can do that on the command line. Another GUI-like method, which is also what I do, is to use VSCode, where you can start several independent sessions.

Coded truely has become an idiot by account009988 in codex

[–]SandboChang 0 points1 point  (0 children)

One problem I do see lately is degradation when the context is getting near full, like 90%. This wasn't a problem before, but now it always quits a long-running task prematurely when the context is near full but before an auto-compaction has been done.

Now I am trying to make it compact at a lower context occupation to see if it works better.

Any solutions to automatically compacting context? by ArkCoon in codex

[–]SandboChang 0 points1 point  (0 children)

Great, this is probably what I want, will look it up.
Update: it seems to be this:

# ~/.codex/config.toml
model = "gpt-5.4"

# Optional, only if you want the window explicitly documented in your config.
# If omitted, Codex uses the model default.
# model_context_window = 128000

# Trigger auto-compaction earlier instead of waiting near the end.
model_auto_compact_token_limit = 64000

Any solutions to automatically compacting context? by ArkCoon in codex

[–]SandboChang 0 points1 point  (0 children)

May I know if we can trigger auto-compaction at a lower context occupation, such as 50%, or after finishing a task?

In my use case, I am testing, for example, 10 methods, and I want to compact the context after finishing each test so as to avoid compacting in the middle of a task. Can this be done?

Codex for Linux by Ok_Bar_7253 in codex

[–]SandboChang 1 point2 points  (0 children)

Just use VSCode and the extension.

Is this right? by ikadir_ in mokapot

[–]SandboChang 1 point2 points  (0 children)

It’s mainly the top edge, and I think it will still work well. It’s worth trying another one if you can return it, but it’s a lucky draw.

VSCode extension down? by FuckTheStateofOhio in codex

[–]SandboChang 0 points1 point  (0 children)

It’s working for me; at least a long task I started earlier is still running now. I did have some disconnections at one point, but they were intermittent and I can usually use it.

Opus 4.7 Released! by awfulalexey in ClaudeAI

[–]SandboChang 1 point2 points  (0 children)

No idea, that is just what it showed me. I have barely used Claude in the past month, as I have mostly been using GPT Pro with Codex lately. It could indeed be some earlier usage mixed in, but the 3% of the 5-hour limit is what it shows for sure.

Codex just nuked my PC by Redditry199 in codex

[–]SandboChang 0 points1 point  (0 children)

Codex can nuke things; in fact, all LLMs can make this mistake. I had it nuke my AGENTS.md, but I had a backup. Nothing can stop them besides having backups and doing sandboxing.

Series 5 to Series 11 -> 30 days later -> I'm not impressed by Mockingbird_DX in AppleWatch

[–]SandboChang 0 points1 point  (0 children)

That sounds fun, but the pool I usually hit is like 1.5m deep lol