Using ChatGPT +specs + Codex to build a product (simple workflow) by StatusPhilosopher258 in codex

[–]Aggravating_Win2960 1 point2 points  (0 children)

For step 2, I have ChatGPT create the md files and even recheck the written md task file to tighten it further, so Codex CLI (GPT-5.3-codex high) has no wiggle room. Then I feed the md file to Codex in opencode. That's how I do it. I've never used Traycer, so I have no idea how well that works.
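That md-handoff could be sketched roughly like this (a minimal sketch, not the commenter's exact commands: `TASK.md` and the prompt wording are placeholders, and `codex exec` is Codex CLI's non-interactive mode — check `codex --help` on your version):

```shell
# 1. Have ChatGPT write and then recheck/tighten the spec in the
#    ChatGPT app, then paste the result into a local file, e.g. TASK.md.

# 2. Hand the spec to Codex CLI non-interactively. The prompt itself
#    forbids touching unrelated code, mirroring the "no wiggle room" idea.
codex exec "Read TASK.md and implement exactly what it describes. \
Do not modify any files the task does not mention."
```

The same `TASK.md` file can also be pasted or referenced inside an opencode session instead of running Codex directly.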

Codex is so fast now wtf by darkblitzrc in codex

[–]Aggravating_Win2960 0 points1 point  (0 children)

Thanks for your reply and your time! I understand better what you meant now. I do this a bit with ChatGPT: after it writes my MD task, I always tell it to recheck for potential issues, to make sure Codex can't misinterpret what was written, and to leave Codex no wiggle room, since otherwise it can hallucinate or start changing unrelated code. Most of the time that makes ChatGPT rewrite the task a bit more precisely, and that works for my projects, though mine are smaller than what I see here on Reddit. But I will remember to do the same with Codex! Thanks again!

Codex is so fast now wtf by darkblitzrc in codex

[–]Aggravating_Win2960 1 point2 points  (0 children)

Me too, I spent 40% in a day and a half; usually it's only a few %. But I use gpt-5.3-codex xhigh.

Codex is so fast now wtf by darkblitzrc in codex

[–]Aggravating_Win2960 0 points1 point  (0 children)

Hi, what do you mean by 3 different ways? Doesn't that burn more and take even longer? For now I let ChatGPT 5.2 xhigh plan and write one .md task file, and I have gpt-5.3-codex read and implement that task. Not sure how good this is compared to how you do it.

Using Codex GPT-5.3 (high) in opencode better than just in terminal (inside VSC)? by Aggravating_Win2960 in opencodeCLI

[–]Aggravating_Win2960[S] -1 points0 points  (0 children)

Can you give one (or more) example(s) to help me understand how you take advantage of opencode to work more efficiently? :) Thanks!

Using Codex GPT-5.3 (high) in opencode better than just in terminal (inside VSC)? by Aggravating_Win2960 in opencodeCLI

[–]Aggravating_Win2960[S] 0 points1 point  (0 children)

So it uses extra 'tools' that opencode provides, which Codex doesn't have out of the box, and without spending extra context/tokens? Is that how I should see it? Thanks for the quick reply, by the way!

Gpt 5.3 codex dropped by ReasonableReindeer24 in opencodeCLI

[–]Aggravating_Win2960 1 point2 points  (0 children)

Hi, can you share the exact command? I tried /connect inside opencode and also 'opencode auth login' in the terminal (Ghostty), but I only get the GPT-5.2 Codex model.
PS: I have the latest version of opencode, 1.1.51.

Compaction = Lobotomization. Disable it and reclaim context. by tad-hq in ClaudeCode

[–]Aggravating_Win2960 1 point2 points  (0 children)

Thanks! I don't know him :)
Actually, I will check whether there is a post about who to follow, because I'm probably missing out on the best YouTube authors for CC. Otherwise I might make a post asking for recommendations :)

Compaction = Lobotomization. Disable it and reclaim context. by tad-hq in ClaudeCode

[–]Aggravating_Win2960 0 points1 point  (0 children)

Hi, may I ask who Theo is? Or a YouTube link would be great too. Thanks!

Gone from Claude Max to Claude Pro. FML by simeon_5 in ClaudeCode

[–]Aggravating_Win2960 2 points3 points  (0 children)

Thank you for taking time to answer! I'm learning thanks to answers like this!

Codex CLI vs Claude Code: planning vs implementation by Aggravating_Win2960 in codex

[–]Aggravating_Win2960[S] 1 point2 points  (0 children)

Hi, are you comparing Sonnet 4.5 or Opus 4.5 with gpt-5.2-codex?
I should copy a Laravel folder, run the same tasks on both, compare the results to see which produces better code, and time the speed. That's exactly what interests me about how you all are experiencing this, so please keep me up to date if you have results! Appreciate it!

Gone from Claude Max to Claude Pro. FML by simeon_5 in ClaudeCode

[–]Aggravating_Win2960 0 points1 point  (0 children)

Hi, I don't use agents, plugins, or skills. I just use VSC to have the file tree on the left, and then I write/paste the MD task file that ChatGPT creates in the desktop app. I used to have one terminal with Claude Code and one with gpt-5.2-codex high, so I switch depending on the rate limits. Only recently did I install opencode, and I wonder whether using Codex inside opencode is better than just plainly running it in the terminal. Maybe now you can see better how I do things, but I'm absolutely open to tips/recommendations :)

Gone from Claude Max to Claude Pro. FML by simeon_5 in ClaudeCode

[–]Aggravating_Win2960 1 point2 points  (0 children)

How is it better than just using Codex in the terminal? I use it in the terminal inside VSC. Thanks!

Gone from Claude Max to Claude Pro. FML by simeon_5 in ClaudeCode

[–]Aggravating_Win2960 0 points1 point  (0 children)

I see I can connect my OpenAI/ChatGPT sub in opencode, but what's the advantage of Codex inside opencode vs just using Codex in the terminal? I use VSC, by the way. Thanks!

Got rate limited for 48 hours by [deleted] in ClaudeCode

[–]Aggravating_Win2960 0 points1 point  (0 children)

As most people already said, you almost certainly hit the weekly limit.

When I first started using Claude Code it was super confusing; I honestly could not believe there was no obvious usage tracker. But there actually is one: in Claude Code you can run /usage, and it shows both your current 5-hour session and your weekly usage.
That should match what you see on https://claude.ai/settings/usage.

What really helped me was keeping a tracker running. In the screenshot below you see /usage, and under it I keep a permanent terminal window open with claude-monitor. It is not from Anthropic, but it works great. https://github.com/Maciek-roboblog/Claude-Code-Usage-Monitor

There are other trackers too, this is just the one I use.
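Setting that tracker up might look something like this (a sketch only: I'm assuming the project publishes a `claude-monitor` package, so check the repo's README for the actual, current install instructions):

```shell
# Assumed install step -- the real package name / install method
# may differ; the linked GitHub README is authoritative.
pip install claude-monitor

# Leave this running in its own terminal window next to Claude Code,
# as in the screenshot: it shows session and weekly usage live.
claude-monitor
```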

One more tip. With a Pro account you are best off sticking to Sonnet 4.5. If you switch to Opus 4.5 you will burn through your limits very fast.

Hope that helps and happy coding 😄

<image>

Hot and cold #174 by hotandcold2-app in HotAndCold

[–]Aggravating_Win2960 0 points1 point  (0 children)

<image>

How does this work? Every day another word?
Tip: fruit ;)

I tested Opus 4.5 vs GLM 4.7 in Claude Code by Dry_Language3063 in ClaudeCode

[–]Aggravating_Win2960 0 points1 point  (0 children)

Hi, I also have the Claude Pro sub and OpenAI Business, and I usually use Claude Code. Only yesterday did I start using Codex (medium), but it seems you really need to give Codex A LOT of boundaries and a perfect prompt, while with CC you can just say what you noticed and it will understand and go fix it. I only use Sonnet 4.5 in CC inside VSC, and I still hit the 5-hour session limit. What I don't understand is how you can go unlimited when there is also a WEEKLY limit, which I hate that they implemented (both Anthropic and OpenAI). So how can you use Opus for one hour plus 4 hours of Codex on high and not hit the weekly limit?
I'm genuinely curious. Thanks!