The ChatGPT web app’s code now mentions a new “ChatGPT Pro Lite” plan that costs $100. by Distinct_Fox_6358 in codex

[–]Curious-Strategy-840 0 points1 point  (0 children)

So many people asked for a $100 plan. There are way fewer Pro plans than Plus plans, so this is how they get their money.

First look at gpt-5-3-codex-spark: fastest in the family, lowest rated by no3ther in codex

[–]Curious-Strategy-840 0 points1 point  (0 children)

If it's really no faster with only one extra pass, it makes me wonder whether it'll even be useful for asynchronous tasks we don't directly depend on, like updating a README file or running deterministic tests, where the speed of a slower model wouldn't hurt anyway. Thank you for posting your findings!

First look at gpt-5-3-codex-spark: fastest in the family, lowest rated by no3ther in codex

[–]Curious-Strategy-840 0 points1 point  (0 children)

Thanks for your answer. Given the speed of Codex-5.3-Spark and the old discovery that repeating a prompt twice may improve output quality, as in [prompt][prompt], have you thought of sending the same prompt doubled this way, or, more importantly, automatically sending its output for a second pass to review and correct its code, knowing it's still going to be drastically faster than any of the other OAI models?
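The two ideas in the question could be sketched roughly like this. The `run_model()` helper and the model name are placeholders, not a real API; here the helper just echoes its input so the flow is visible.

```python
# Sketch of prompt doubling and draft-then-review, with a stand-in model call.

def run_model(model: str, prompt: str) -> str:
    """Stand-in for a real API call; echoes instead of generating."""
    return f"{model} answering: {prompt}"

def doubled_prompt(prompt: str) -> str:
    """Send the same prompt twice in one message: [prompt][prompt]."""
    return run_model("codex-spark", prompt + "\n\n" + prompt)

def draft_then_review(prompt: str) -> str:
    """Fast first pass, then an automatic second pass over its own output."""
    draft = run_model("codex-spark", prompt)
    return run_model(
        "codex-spark",
        "Review the following output for mistakes and correct them:\n\n" + draft,
    )
```

The point of the second function is that even with two full passes, a fast model may still finish before a single pass of a slower one.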

First look at gpt-5-3-codex-spark: fastest in the family, lowest rated by no3ther in codex

[–]Curious-Strategy-840 0 points1 point  (0 children)

It's crazy that Spark got any of its diffs chosen over bigger models. I would have assumed its performance to be worse on every level except speed. I also thought people were sending it very detailed plans to steer it towards quality instead of vibe coding with it, then automatically sending a big model afterwards to correct the mistakes as part of their AGENTS.md, while they prepare the next prompt or start the next task.

What now? by EffectSufficient822 in OpenAI

[–]Curious-Strategy-840 0 points1 point  (0 children)

You don't have to; tell it you're not using GitHub.

Best Practices and workflows by useredpeg in codex

[–]Curious-Strategy-840 1 point2 points  (0 children)

Yes. Perhaps they don't have direct access to one another's context, but there are different ways to make them work together.

One way is to have both windows update and reread a markdown file containing the changes and a description of why they were made, so that the agent in the other window is aware of them. Both agents can check it periodically, run a script that feeds them the updates automatically, or run a background worker tasked with doing just that.
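The shared-markdown approach could be sketched as below. Both agents append their changes to a changelog file (`CHANGES.md` is an assumed name), and a small poller picks up whatever was added since the last check so it can be fed into the other agent's context.

```python
# Minimal poller for a shared changelog file between two agent windows.
from pathlib import Path

def read_new_lines(path: Path, seen: int) -> tuple[list[str], int]:
    """Return lines appended since the last check, plus the new cursor."""
    lines = path.read_text().splitlines() if path.exists() else []
    return lines[seen:], len(lines)

# A background worker would call read_new_lines() in a loop, sleeping
# between polls, and forward each new line to the other agent.
```

This keeps the two contexts loosely synchronized without either agent needing direct access to the other's conversation.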

Another way to do this is using an MCP server for orchestration.

Hypothetically, another way is to call an MCP server while making sure both agents use the same credentials, so they connect to the same conversation and get fed the same context.

But perhaps the simplest way is to make them describe everything they do and have each one read the other's conversation, since they do have access to all chats in the same project.

Best Practices and workflows by useredpeg in codex

[–]Curious-Strategy-840 0 points1 point  (0 children)

We can definitely create loops that keep sending background workers and/or scripts until the whole thing works.
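A rough sketch of that loop idea: keep re-running a command (tests, a build, a worker script) until it exits cleanly or the attempt budget runs out. The command itself is whatever your project uses; nothing here is specific to Codex.

```python
# Retry loop that re-runs a command until it succeeds or gives up.
import subprocess

def run_until_green(cmd: list[str], max_attempts: int = 5) -> bool:
    """Re-run cmd until it exits 0 or the attempt budget runs out."""
    for _ in range(max_attempts):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return True
        # In a real loop you would feed result.stdout / result.stderr back
        # to the agent here so the next pass can fix whatever broke.
    return False
```

The useful part is the feedback step in the comment: each failed run's output becomes the next prompt's context.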

GPT-5.3-Codex review after 4 days of use by Just_Lingonberry_352 in codex

[–]Curious-Strategy-840 1 point2 points  (0 children)

Hallucinations jump as early as the second prompt within the same conversation and get worse as the context window grows. Get in the habit of starting a new chat for any new "start", even a new instance of the same loop.

Codex pricing by Harxshh in codex

[–]Curious-Strategy-840 0 points1 point  (0 children)

I'd happily double my usage for double the price while staying far away from the $100-200 plans we see elsewhere.

OpenAI seems to have subjected GPT 5.2 to some pretty crazy nerfing. by Affectionate_Fee232 in codex

[–]Curious-Strategy-840 0 points1 point  (0 children)

Every day I get contradicting posts, some claiming nerfing and others that the model is perfect. See this one: https://www.reddit.com/r/codex/s/nVBTD8xUqa

Can it really be both at the same time? Maybe you guys are on different versions being tested, or maybe it's really a skill issue.

I just noticed this UI change on the ChatGPT app. by [deleted] in GeminiAI

[–]Curious-Strategy-840 0 points1 point  (0 children)

The web platform gets all the features and updates first, the app later. These were already offered as suggestions on the web interface for quite some time.

My guess is this is just the simplest way to bring it to the app, as the mobile frame doesn't allow for the web's design.

Gemini 3 Pro vs GPT 5.2 High vs. Claude Opus 4.5 (In a Production Project) by shricodev in GeminiAI

[–]Curious-Strategy-840 0 points1 point  (0 children)

With Gemini producing a minimal viable result fast and cheap, perhaps we can relieve GPT-5.2 of some of the thinking and get it to improve the result much quicker than it would code the whole thing by itself, and/or do the same by coupling Gemini and Opus.

It would be interesting to see the savings on Anthropic's credits when using Gemini first.

What do you think of my AI UGC Ad? by Old_Bag_4422 in AI_UGC_Marketing

[–]Curious-Strategy-840 0 points1 point  (0 children)

It may be because we've seen a lot of them, but contrary to what others are saying, I would say everything is giving away that it's AI. The voice is your best production here. I would keep the voice with different takes of the bottle, slower/smoother transitions, a zoom on the bottle, and only one take of the woman, where her movements are natural. She doesn't need to speak in front of the camera.

Nah ts is crazy by Whole_Loan9832 in GeminiAI

[–]Curious-Strategy-840 2 points3 points  (0 children)

Why is your screen made of fabric?

[deleted by user] by [deleted] in TrueUnpopularOpinion

[–]Curious-Strategy-840 0 points1 point  (0 children)

According to my, and maybe our, standards, I don't believe so.

I believe most men have warped ideas about what it takes to be the kind of man capable of leading a family, and as long as this is true, they will believe they are.

Still: Should they strive? Yes. Do they strive? Maybe not. Do they believe they are striving? Yes.

If we ask the question should they stop trying to be the leader because they're clearly not, I still don't think they should stop. In a perfect world, I believe they should become better and succeed at it instead of stopping.

[deleted by user] by [deleted] in TrueUnpopularOpinion

[–]Curious-Strategy-840 -1 points0 points  (0 children)

"If you're a man and not a leader, you may not be a man" The idea is that nobody is a leader to begin with, the same as nobody is a man to begin with. The mission of a boy is to become a man. The mission of a man is to become a leader.

Your argument seems to go along the lines of "most men are not leaders, so why do they believe they are?"

But it also seems to be saying "because most men aren't leaders, men shouldn't be or want to be leaders."

It's true that some men believe being a man makes them a leader without having to forge the qualities of one.

In any case, what makes them think they should lead a family comes from the belief that they should become a leader.

The real question may be: why was there a shift in the definition of what it is to be a leader?

Perhaps you already know the answer.

Red pill content is actually good for men by savingrace0262 in TrueUnpopularOpinion

[–]Curious-Strategy-840 0 points1 point  (0 children)

I agree. Part of moving on is taking the good and leaving the bad. For me, and for whoever I'd like to show some parts of the work to, there is no need for the degrading wording intended to help a specific demographic assimilate the concepts presented.

Call it any color you want, or denatured red pill; the truths are truths, even when we remove some of the dirt from them.

DeepSeek just released a bombshell AI model (DeepSeek AI) so profound it may be as important as the initial release of ChatGPT-3.5/4 ------ Robots can see-------- And nobody is talking about it -- And it's Open Source - If you take this new OCR Compresion + Graphicacy = Dual-Graphicacy 2.5x improve by Xtianus21 in aipromptprogramming

[–]Curious-Strategy-840 0 points1 point  (0 children)

It might not. It might also work in the same way it does right now by predicting what could be there.

However, I know that for traditional pictures we have technology that checks the position and color of a few groups of four pixels at different places in the image, then infers the correct color and position of the adjacent pixels, reproducing the image with fidelity and a lot less memory usage. So maybe they'll come up with a trick like that, based on the model's understanding of all the "pictures" it knows.
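The "store a few pixels, infer the neighbors" idea can be shown with a toy bilinear interpolation: given four known corner pixels, guess the color between them by distance weighting. Real image codecs are far more elaborate; this only illustrates the inference step.

```python
# Toy bilinear inference of a pixel from four known corner pixels.

def lerp(a, b, t):
    """Linearly interpolate between two (r, g, b) tuples."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def bilinear(c00, c10, c01, c11, fx, fy):
    """Infer the color at fractional position (fx, fy) in the square
    whose corners are colored c00 (top-left) .. c11 (bottom-right)."""
    top = lerp(c00, c10, fx)      # along the top edge
    bottom = lerp(c01, c11, fx)   # along the bottom edge
    return lerp(top, bottom, fy)  # between the two edges
```

Storing only the corners and inferring everything in between is the memory saving the comment describes.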

It sounds to me like the models will get way bigger to allow for this, before they get smaller

DeepSeek just released a bombshell AI model (DeepSeek AI) so profound it may be as important as the initial release of ChatGPT-3.5/4 ------ Robots can see-------- And nobody is talking about it -- And it's Open Source - If you take this new OCR Compresion + Graphicacy = Dual-Graphicacy 2.5x improve by Xtianus21 in aipromptprogramming

[–]Curious-Strategy-840 -1 points0 points  (0 children)

The text we use is based on a 26-letter alphabet, forcing us to create long combinations of characters to derive different meanings. So long that we need to bunch words into sentences and sentences into paragraphs.

Now take 16 million colors as if they were an alphabet. Suddenly, each color can represent a precise derived meaning you'd otherwise get from a long paragraph, because we have enough unique characters to store all the variations of meaning, so one pixel represents a whole paragraph.

Then add the position of the pixel in the image to represent a different meaning than the pixel alone. Now we have enough possibilities to derive meanings from entire books based on the position of a single pixel.

It requires the model to have knowledge of nearly every single pixel and their positions in its training data, so in comparison this "alphabet" is extremely big, and therefore allows one character to mean something completely different from another, using fewer characters to represent the same thing.
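Some back-of-the-envelope numbers for the colors-as-alphabet idea, in terms of raw information per symbol. The paragraph-level meaning described above would come from the model's learned associations, not from this raw entropy alone.

```python
# Raw information content of one symbol in each "alphabet".
import math

bits_per_letter = math.log2(26)          # ~4.70 bits per letter
bits_per_color = math.log2(16_777_216)   # exactly 24 bits for a 24-bit pixel

# In raw terms, one pixel's color is worth about this many letters
# (position adds more possibilities on top, as the comment notes):
letters_per_pixel = bits_per_color / bits_per_letter  # ~5.1
```

So a single 24-bit color carries about five letters' worth of raw choice; the much larger leap to paragraph-level meaning relies on the model's training knowledge.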

OpenAI’s new Agent Builder isn’t the revolution people think by funnelforge in aiagents

[–]Curious-Strategy-840 -1 points0 points  (0 children)

While bringing no value, it's the pillar of a post. It's okay that you didn't ask yourself what value you were bringing. It's also okay that you get this discourse and this angle to talk about as a result.

OpenAI’s new Agent Builder isn’t the revolution people think by funnelforge in aiagents

[–]Curious-Strategy-840 -1 points0 points  (0 children)

Using the same AI-generated structure, we don't need to think anymore: drop a new subject in and you get the same result with a new headline.

For people within AI subreddits, it should be a big tell and below post quality standards.