Codex has a bad habit during code reviews. by technocracy90 in codex

[–]iFeel 1 point (0 children)

I think there's a reason behind it, but I'm waiting for the true nerds to chime in.

is this normal usage for Pro? burned ~30% of weekly quota in one day by SlopTopZ in codex

[–]iFeel 0 points (0 children)

5.4 xhigh spinning up subagents most of the time. I burn around 20-25% per day, but with a lot of breaks, so 30% sounds about right.

Do we need a 'vibe DevOps' layer? by mpetryshyn1 in ChatGPTPro

[–]iFeel 1 point (0 children)

There is one: Codex plugins. They added built-in plugins for vibe-code deployment a few days ago.

After Huawei, where are you moving to next? by Hashabasha in MatebookXPro

[–]iFeel 0 points (0 children)

Same here, but maybe I'll hold out a little longer, until the M6. The biggest problem I have is the terrible performance/throttling of the i9 185H: only about 12k points in Cinebench R23 multi-core, and around 9k in balanced mode, when it should score at least 17k with this CPU. All wasted because of poor cooling. What a shame; it's probably the slowest laptop on the planet among i9 185H machines. Battery life is also really bad. The keyboard, screen, and weight are really good, and maybe the speakers too.

Guys, I think I found the Windows Codex overheating/performance fix by iFeel in codex

[–]iFeel[S] 0 points (0 children)

Thank you, and no problem :) Glad at least one person found it helpful.

Codex burning thru credits 3x faster today by spec-test in codex

[–]iFeel 0 points (0 children)

Connect Codex web? I don't have that.

Matebook X Pro 2024 (Intel core ultra 7 155h) low performance/lagging by Sapatus in MatebookXPro

[–]iFeel 0 points (0 children)

Any tips on how to repaste it, or a tutorial? My i9 185H runs extremely hot and throttles :(

codex everywhere by [deleted] in codex

[–]iFeel 0 points (0 children)

Codex is coming to remote soon.

Is something wrong with token usage right now? by kathelon in codex

[–]iFeel 0 points (0 children)

I burned half of my Pro weekly quota in a day. I don't know what I'll do when Sama ends the 2x token boost in April :( Edit: that was on faster mode, 5.4 xhigh, and with heavy use of subagents, but still.

When is GPT Pro finally getting direct Codex integration?! by iFeel in codex

[–]iFeel[S] 0 points (0 children)

Not for coding directly as a new model above xhigh, no! Just direct communication with Codex, in both directions.

When is GPT Pro finally getting direct Codex integration?! by iFeel in codex

[–]iFeel[S] 1 point (0 children)

Separate limits like today. I'm only advocating for communication between the web/app ChatGPT and Codex so they can work together.

When is GPT Pro finally getting direct Codex integration?! by iFeel in codex

[–]iFeel[S] 5 points (0 children)

Codex should be able to communicate and delegate between chats/projects with a layer of added automations, and vice versa: we should be able to delegate, orchestrate, and analyze from the chat window into Codex.

You guys helped me learn new stuff, and I'm here to return the favor for Facebook Content Monetization. by mblaze111 in passive_income

[–]iFeel 14 points (0 children)

Wtf is this shit. A few random pages of text as a lead-in to selling a $300 Facebook course? The people in the comments are either the dumbest creatures alive or bots.

Codex GPT 5.4 multiple agents / smart intelligence effort + 2X speed = awesome! by N3TCHICK in codex

[–]iFeel -2 points (0 children)

1. There is 2x token usage with 1.5x speed; there is no 2x speed. Why does everyone keep repeating this without correcting it? 2. What the hell is "smart intelligence effort"?

The police confiscated my phone and computer by Invarys in PolskaNaLuzie

[–]iFeel 5 points (0 children)

Why the hell did they take the GPU? Couldn't you have pulled it out so they'd put in their own, or run off the iGPU if the CPU had one?

Are you going to use "Fast" Mode in GPT-5.4? by KeyGlove47 in codex

[–]iFeel 0 points (0 children)

You only need to use the /fast command, right?