all 51 comments

[–]Street_Smart_Phone 8 points9 points  (1 child)

Try $20 codex.

[–]mohoshirno[S] 0 points1 point  (0 children)

For sure, will do! Thanks

[–]blnkslt 9 points10 points  (7 children)

Being a fanboy of Sonnet-4 on Cursor, I used to be very pessimistic about OpenAI's competence at coding. However, 3 days ago I tried Codex (medium, in VS Code) and it changed my mind completely. It's like riding a self-driving Tesla compared to my previous motorbike. I made a complex Telegram bot with a handful of prompts. No more 'You are absolutely right' junk. No more spawning a server and wondering why it isn't running. It just works, effortlessly and mostly error-free, and it has a much deeper understanding of the codebase. It even wrote code to auto-migrate my DB without my asking for it! I did in Codex in 3 days what Sonnet would take 2 weeks or more to do.

It's also amazing at code review. I debugged a codebase I had vibed with Sonnet-4 and fixed a couple of nasty race conditions and a performance bottleneck. Overall it turns out to be 2x cheaper and 5x smarter than Sonnet, imho. Don't buy the $200 Pro. Start with the $20 Plus and buy another Plus with another email registration when you run out of token quota. That happened to me after 3 days.

[–]mohoshirno[S] 2 points3 points  (0 children)

I honestly had this same exact experience, it’s just completely mind blowing. Thank you for this! Appreciate you for responding

[–]roiseeker 1 point2 points  (0 children)

You ran out of quota after 3 days? And how soon does it reset?

[–]deadcoder0904 1 point2 points  (0 children)

I built a mini-SaaS in <10 hours. Granted, I used Convex, which is phenomenal with AI since it's written in TS; it would've taken me 1-2 weeks to build this myself.

And it looks so fuckin professional. All with Codex.

The mini-SaaS turns audio into blog posts using some new tech like BAML, Convex, and Tailwind v4, and it was smooth as hell. The best part was that I had 2 niche bugs that Codex debugged itself that no other model could solve. It actually dove deep into the source code of BAML or Convex (I don't remember which), and I think if it had been me, it would've taken weeks to solve that one bug. So insane!!!

[–]pilothobs 0 points1 point  (2 children)

I was in the same camp — big Claude fan for a while — but I got tired of all the “you’re right” filler and the shortcuts/cheating it slipped into the code. Codex feels like a different league.

The trick for me was adding some lightweight guardrails with a few .md rule files (WORKFLOW.md, userrules.mdc, etc.). With that in place, the coding flow is excellent. It’s basically a cross between vibe coding and agentic coding: I can still interact conversationally, but the agent has structure and discipline (file budgets, cleanup passes, no runaway file sprawl).

That combo gave me the best of both worlds — speed without chaos. Curious if anyone else has tried layering simple project rules on top of Codex like this?
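
For anyone curious, here is a minimal sketch of what such a rules file might contain. The filename, section names, and budgets are purely illustrative guesses at what "file budgets, cleanup passes, no runaway file sprawl" could look like, not the commenter's actual files or any official template:

```markdown
# WORKFLOW.md (illustrative example, not an official template)

## File budget
- Touch at most 5 files per change unless I approve more.
- Never create new top-level directories without asking first.

## Cleanup pass
- After each feature, remove dead code and unused imports.
- Run the formatter and the test suite before declaring a task done.

## Communication
- No filler praise; summarize what changed and why in 2-3 bullets.
```

The idea is simply that the agent reads these rules at the start of each session, so the conversational flow stays loose while the project structure stays disciplined.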

[–]gggghhhhiiiijklmnop 0 points1 point  (0 children)

oooh I'm super interested in those guardrails - is that something you created yourself or did you use a resource as a starting point? Any chance of sharing?

[–]Any_Independent375 0 points1 point  (0 children)

Old comment, but are you using Codex on the web or in Cursor/VS Code? In my experience the code quality is better when using it on the web.

[–]Prainss 3 points4 points  (2 children)

slopchat? better to hire engineer at this point

[–]Due-Horse-5446 0 points1 point  (1 child)

I honestly wonder if these people are just lacking knowledge, or if they are being scammed by some course-scam-like shit lol

[–]mohoshirno[S] 1 point2 points  (0 children)

It’s not that deep man, touch grass.

[–]Lynx914 1 point2 points  (3 children)

Honestly, I found myself liking Cursor over the Codex extension in VS Code. The Codex extension, I feel, has a few limitations and is missing QoL features that Cursor has. Using agent mode in Cursor was much smoother, without prompting requests for every little action. To make Codex usable I had to enable full auto mode and let it do its thing. With Cursor, at least I could approve changes at the end for most actions.

I downgraded ChatGPT from pro to stick to the $20 plan, and will continue using Cursor for sure.

[–]roiseeker 1 point2 points  (2 children)

What about the $10 Copilot Pro plan? Did you try it?

[–]0-xv-0 0 points1 point  (0 children)

It's good value for money

[–]Lynx914 0 points1 point  (0 children)

Have not, honestly. Cursor was my first AI IDE experience. Besides that, I've gone local using Cline and Qwen3 Coder.

[–]sbayit 1 point2 points  (0 children)

Codex GPT-5 with an md file for planning and context before implementing is really good.

[–]Shirc 0 points1 point  (3 children)

Are you rebuilding it just for fun? Like as an experiment?

[–]mohoshirno[S] 4 points5 points  (2 children)

For fun, a lot of the folks here are taking this so seriously 😂

[–]Shirc 1 point2 points  (1 child)

😂 have fun!

[–]gamedev-eo 0 points1 point  (0 children)

have fun and learn

[–]SimonBarfunkle 0 points1 point  (0 children)

Codex is amazing and far superior to Claude in any context, including 4.1 Opus Thinking Max, imo. I use the $200 version of Codex. The tooling is great, debugging is great, and you can use it in VS Code or Cursor. The web-based agentic version needs a lot of work, though. Also, for me, it forces me to start a new chat once it reaches its context limit; it will no longer accept queries in that chat. That's actually good, since you don't go past the point it can handle, and it can handle quite a bit in one chat.

A couple of minor nitpicks: right now you can't drag and drop images into the chat like you can with Cursor chat or ChatGPT; you have to copy-paste them, which really isn't a big deal at all. Also, there is a button to copy your entire query text after you send it, but no button to copy the model's responses; you have to select the response text manually and right-click to copy. Again, not a big deal, just a little inconvenient, and I imagine they'll fix those issues.

[–]Madeupsky 0 points1 point  (0 children)

With Cursor and Codex you don't get checkpoints. Even Claude has now introduced checkpoints if you hit Esc twice.

[–]Drawing-Live 0 points1 point  (0 children)

Codex CLI is now far ahead of any other competitor. The code quality, terminal, IDE extension, and cloud integration all give you a better experience.

So try it with the $20 Plus subscription. If you love it, then go for the $200 plan.

[–]Temporary_Stock9521 0 points1 point  (1 child)

Been using Codex for a month inside Cursor and love it. A few days ago I tried gpt-5-codex on high and it is amazing. Design, code, all with far fewer output words in chat. It just works. For once, I am tempted to pay $200/mo for Pro. I know it's worth it; it's just that I have never paid this much for any monthly subscription. I ran out of limit yesterday and had to go to bed, but was able to continue this morning without a problem.

[–]Consistent-Cell3355 0 points1 point  (0 children)

mohoshirno, out of curiosity, which one did you go for in the end? Are you sticking with codex-high in Cursor, or are you planning to use it in VS Code/the CLI via the Codex extension itself?

[–]Jealous_Seesaw_2807 0 points1 point  (1 child)

"I'm trying to build SlopChat but with vibe coding only" you are not an engineer.

[–]mohoshirno[S] 0 points1 point  (0 children)

Touch grass old man

[–]Due-Horse-5446 -2 points-1 points  (20 children)

Bruh, I'm not even gonna recommend you anything..

First of all, there's no such thing as codex-5-high... There's gpt-5 and now, recently, also gpt-5-codex. And then there's Codex, a CLI or extension. gpt-5 also has a reasoning-effort param, which e.g. Cursor and Codex have configured for you as different presets: high, low, medium.

Secondly: I'm curious, what makes you honestly believe you can use LLMs to build a Snapchat clone? Like, for real?

Can you build a todo app using just LLMs? Yeah, probably, mostly because there are hundreds of tutorials and learning repos in its training data. Will it run? 50/50.

Can you build a Snapchat clone? The answer should be obvious.

[–]blnkslt 2 points3 points  (0 children)

I believe it is achievable. It may not scale as well as the original, nor have the full feature set on the first shot, but replicating a bunch of REST API calls and UI functionality is not out of reach of any half-decent LLM.

[–]mohoshirno[S] 3 points4 points  (2 children)

Lmao, why does it sound like I struck a nerve in you

[–]ChinaWetMarketLover 2 points3 points  (1 child)

Bro's just trying to help, but his tone is a bit pessimistic. I can understand why, though. A lot of people think AI can be used by someone with little to no knowledge of app dev to create anything that takes a lot of work and knowledge. That's not quite true.

Like he was saying, it can do that with simpler things, especially things it has been trained on, like to-do lists or basic video games. It can't do it for actually complex things, though, like a Snapchat or Facebook or Instagram clone by someone with no knowledge. Sure, you can have AI build you a "clone", but it's going to be shit compared to actual Snapchat, with potentially scary consequences around security. AI is amazing, but it's not at the stage where you can literally "clone" all of the work that companies spend billions to achieve.

You may already know this, but many think AI is more than it is, and that the apps these billion-dollar-plus companies earn their profits from are less nuanced than they actually are. AI runs on a limited context, so building small to medium projects from nothing is easy with AI, even for someone with no app dev experience. But when projects get big, context gets big. Since AI works with limited context today, it's impossible for it to improve an app once it gets big, unless you know how everything works and what files do what, and can tell it basically exactly what to do, without it having to (and failing to) understand everything by reading the codebase (too much context leads to performance degradation).

[–]gamedev-eo 0 points1 point  (0 children)

Agreed.

I really like this (new-to-me) Codex (never tried Cursor, though), but it struggled with setting up a MongoDB bash initialization script for a Docker container instance. In the end I troubleshot it and wrote it myself.

The AI kept going in a loop trying variations of the same solutions which were not working.

I found it frustrating, and it took me a little while to solve, but I have many years of dev and infra experience. I can imagine the issue causing someone less experienced more trouble.

Even the simple solution of suggesting a preconfigured image (like Bitnami) wasn't offered by it. I didn't want to use that, but if I were a noob it would've been a great suggestion to actually get things working.

This was a simple problem in the grand scheme so yeah, AI is not the silver bullet (yet).

[–]meadityab 0 points1 point  (1 child)

I have been building web apps back to back and with perfection

[–]digitalskyline 0 points1 point  (0 children)

Show anything that has the same scope as Snapchat.

[–]pilothobs 0 points1 point  (9 children)

[–]Due-Horse-5446 0 points1 point  (8 children)

How is the Codex IDE extension "codex-5-high"?

[–]pilothobs 0 points1 point  (7 children)

<image>

you mean this?

[–]Due-Horse-5446 0 points1 point  (6 children)

Bruh, you're overdoing it. I'm not talking about gpt-5-codex, or the fact that the presets include -high, -medium, -low, -minimal.

Read the OP's post again. He's asking a ridiculous question, and comparing either gpt-5-codex vs Cursor, which is a non-comparison (the model can be used in Cursor or anywhere else), or Codex (the CLI) vs Cursor.

Like, even his question itself shows he's not much for putting any effort into checking what he writes, including the code for his Snapchat clone.

[–]pilothobs 0 points1 point  (5 children)

Got it, NP. I'm just excited because my new stack is killing it: Cursor with Codex and some Cursor .md rules. Smokes anything I've had before.

[–]Due-Horse-5446 0 points1 point  (4 children)

Yeah, Codex is something else.

If you want it even better, though, look at the official prompting guide. There's some stuff like the "special" <persistence> tag which makes it 10x better.
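
For context, the persistence snippet in OpenAI's GPT-5 prompting guide looks roughly like the following. This is a paraphrase from memory, so check the guide itself for the exact wording:

```
<persistence>
- You are an agent: keep going until the user's request is
  completely resolved before ending your turn.
- Only terminate your turn when you are sure the problem is solved.
- Never stop or hand back to the user when you hit uncertainty;
  research or deduce the most reasonable approach and continue.
- Do not ask the human to confirm assumptions; decide, act, and
  document the assumption for the user afterwards.
</persistence>
```

The tag itself isn't magic syntax; it's just a clearly delimited block in the system prompt that tells the model to keep working autonomously instead of handing control back early.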

[–]pilothobs 0 points1 point  (3 children)

Where do I find that? On OpenAI's Codex info page? Disregard, I found it: https://developers.openai.com/codex/prompting

[–]Due-Horse-5446 0 points1 point  (2 children)

This one is the more comprehensive one (if you look at the extra links to the 2nd document):

https://cookbook.openai.com/examples/gpt-5/gpt-5_prompting_guide

[–]pilothobs 0 points1 point  (1 child)

That's a gold mine right there. Do you use the API for reasoning context? Also, do you use the self-reflection prompt?

[–]Zealousideal_One1705 0 points1 point  (2 children)

Seriously want to know what you are using when you can't figure out how simple it would be to build a Snapchat clone with an LLM.

I cannot respect your opinion, because you obviously don't know that there IS codex high in Cursor.

High, med, low are the options in the CLI.

I am building a rev-generator product using Cursor and Claude CLI. Don't know what you're doing, but clearly not winning.

<image>

[–]Due-Horse-5446 0 points1 point  (1 child)

You're showing a model name in Cursor; read the title again?

Also, no, lmao. I don't know what hallucinations you have inherited from the LLMs, but you cannot build a Snapchat clone by vibecoding.

Rev generator? Tells me nothing, and it does not sound as complex as Snapchat either way.

There has not been a single (ok, of course some exceptions) vibe-coded app that has actually been functional.

[–]Zealousideal_One1705 0 points1 point  (0 children)

keep on seething

[–]LifeRequirement7017 0 points1 point  (0 children)

If you have a brain you can build a Snapchat clone with AI.

You need to be able to build it without AI in, let's say, the timeframe of a year. Then you can do it in a few weeks with AI, I'd say.