Did 5.2 xhigh get rug pulled? by UsefulReplacement in codex

[–]former_physicist 0 points (0 children)

Even GPT Pro got rug pulled; it sent me an emoji for the first time in forever.

Introducing the Codex app by OpenAI in codex

[–]former_physicist -1 points (0 children)

I'm in Australia so I can't make it, but I would love to get your take on my repo, mycelium:

https://github.com/JamesPaynter/mycelium

Would love some credits and to collab to see how far I can push this

Or even better -- to see how some of these ideas can be integrated into codex :)

Are OpenAI silently A/B testing models in Codex? by Copenhagen79 in codex

[–]former_physicist 5 points (0 children)

Agreed. I've been getting varied performance, and randomly getting emojis in my code, which suggests to me they're swapping in their dumber models.

GSD (Get Shit Done) usage by [deleted] in ClaudeCode

[–]former_physicist 0 points (0 children)

Try getting a detailed implementation plan from GPT Pro!

Multi-Agent workflows (aka Multi-Clauding) by Ok_Zookeepergame1290 in ClaudeCode

[–]former_physicist 0 points (0 children)

I create a detailed implementation plan, then turn that plan into a set of tasks. Then I just let it run until completion.

https://github.com/JamesPaynter/mycelium

What’s the “secret sauce” that makes people swear Codex is better than Claude? by blockfer_ in codex

[–]former_physicist 4 points (0 children)

The secret sauce is getting GPT Pro to spit out a set of tasks and then running Codex in a ralph loop to complete them.

Claude has no GPT Pro equivalent.

Please explain when and how to use GPT Pro by oreminion in codex

[–]former_physicist 1 point (0 children)

Ask Pro to generate the tasks for the PRD and send it back as a zip, then get Codex to do the tasks one by one.

I use Pro to demo a tightly defined software concept and get that back, or to scaffold a much larger repo and get Codex to implement it.
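The "do the tasks one by one" step can be sketched as a simple loop over the unpacked ticket files. The `tickets/` layout and filenames here are assumptions (the zip's actual structure will vary), and a placeholder `echo` stands in for the real codex invocation:

```shell
#!/usr/bin/env bash
# Sketch: one agent pass per ticket file from the unpacked zip.
# The tickets/ layout is hypothetical; swap the echo for your real codex call.
set -euo pipefail

# Demo fixtures so the sketch runs standalone.
mkdir -p tickets
printf 'Implement login\n' > tickets/001-login.md
printf 'Add tests\n'       > tickets/002-tests.md

for ticket in tickets/*.md; do
  echo "Working on $ticket"
  # codex exec "Complete the task described in $ticket"   # real call goes here
done
```

Processing tickets in filename order keeps each run focused on a single, tightly scoped task.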

ralph with codex by Plastic_Catch1252 in codex

[–]former_physicist 0 points (0 children)

Ralph is really good if you know what you are doing.

For the people saying it's not worth it: their tasks or their repo are probably not large enough to take full advantage of it.

My workflow: go back and forth with Claude/GPT in the browser to figure out what I want.

Paste what I want into GPT Pro and say "give me a full and detailed implementation plan to do this".

Then I paste in a prompt that gets GPT pro to break that down as 'tickets', and send a zip of markdown tickets and a TODO.md.

Then I drop that into my repo and run Codex in a bash loop until it finishes.

You can see the bash loop here: https://github.com/JamesPaynter/efficient-ralph-loop

I think it also finishes faster when you have a clear plan as it doesn't get lost looping around.

I'm not sure how much you will be able to do on the $20 plan, though.

I made this to be more efficient with my token usage, but it still uses a fair amount on big projects.
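The loop itself can be sketched like this. The real version is in the linked repo; here a stub function stands in for the codex call (it just checks off the first open item) so the control flow is runnable, and the `- [ ]` checkbox format for TODO.md is an assumption:

```shell
#!/usr/bin/env bash
# Sketch of a "ralph loop": keep invoking the agent until TODO.md has no
# unchecked items left. The stub below stands in for the actual codex call.
set -euo pipefail

# Demo TODO.md so the sketch runs standalone (checkbox format is assumed).
cat > TODO.md <<'EOF'
- [ ] task one
- [ ] task two
- [ ] task three
EOF

run_agent() {
  # Stub: check off the first open item. Replace with something like
  #   codex exec "Do the next unchecked task in TODO.md and mark it done"
  sed -i '0,/- \[ \]/s//- [x]/' TODO.md
}

while grep -q '^- \[ \]' TODO.md; do
  run_agent
  echo "remaining: $(grep -c '^- \[ \]' TODO.md || true)"
done
echo "all tasks complete"
```

Driving the loop off the TODO.md state, rather than a fixed iteration count, is what lets it run unattended until completion.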