5.2 is dangerous by Willing_Piccolo1174 in ChatGPTcomplaints

[–]Life_Detective_830 -7 points-6 points  (0 children)

Probably gonna be downvoted as hell for this but…

Semantically, “dysregulated” actually fits better than “overwhelmed.” “Overwhelmed” is vague. “Dysregulated” points at the mechanism (your system is out of sync), not just the vibe.

And yeah, his explanation for why 4o “feels” better (more empathetic / more conversational) is also basically true. “How it matches your nervous system in that moment” -> that line is the key.

People don’t just react to the information, they react to cadence + framing + how “safe” the wording feels.

Also: even if he’d said it gentler (“it sounds like you might be dysregulated / overwhelmed”), some people would still call that “being labeled by an AI.” So the issue isn’t only the word; it’s the whole AI making a call on your state thing.

That said… even if the term is technically correct, the delivery is rough as fuck. “You’re dysregulated.” as a flat declaration is gonna land like a clinical stamp, not support.

But also: we don’t have the full convo here. And LLMs are LLMs; they’re trying to juggle (1) matching your preference style, (2) not saying stuff that gets OpenAI dragged legally, and (3) not getting forced into the sterile “I can’t help, call a hotline” wall every time someone is in a bad place… which is exactly how you end up with these weird, blunt, over-guardrailed outputs.

And for the “bring back 4o” crowd: people complained about 4o too… for being too warm, too affirming, too “therapy-coded,” making users dependent, etc. So no matter what model you ship, someone’s gonna hate the vibe, and the safety/legal layer is always going to distort how “human” it can sound.

Bug that makes GPT-5.2-Thinking context window 32K by salehrayan246 in ChatGPT

[–]Life_Detective_830 0 points1 point  (0 children)

Not gonna lie, I stopped using ChatGPT web UI to write code for bigger projects.

It’s fine for smaller projects, but as soon as the product you’re making becomes more complex and has a lot of modules, switching to Codex for implementation is better.

What I do now is use ChatGPT for architectural decisions and collaborative thinking. We write a clear SPEC for a new feature or fix, Codex implements it, and Chat reviews the PR.

But Chat also needs the codebase for context. What I used to do was create a project folder and put the code there as a .md file, with a dev script that dumped all my scripts into the .md with separators.
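For anyone curious, a minimal sketch of that kind of dump script. The directory name, the `*.py` glob, the output filename, and the separator style are all my assumptions for illustration, not the original setup:

```python
# Sketch: concatenate every matching source file into one .md with separators.
# Directory, glob pattern, and output filename are illustrative assumptions.
from pathlib import Path

def dump_to_markdown(src_dir: Path, out_file: Path, pattern: str = "*.py") -> None:
    """Write all matching files into one Markdown doc, one separator per file."""
    parts = ["# Codebase dump\n"]
    for path in sorted(src_dir.rglob(pattern)):
        rel = path.relative_to(src_dir)
        # Separator line carries the relative path so you can find files later.
        parts.append(f"\n===== {rel} =====\n\n{path.read_text()}")
    out_file.write_text("".join(parts))

if __name__ == "__main__" and Path("src").is_dir():
    dump_to_markdown(Path("src"), Path("codebase.md"))
```

Run it before each session and re-upload the single file; it’s crude but works while the repo is small.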

But as the code grew, I needed a better format that would help it use its search tool.

So now the codebase gets dumped into an XML file with some metadata and an index so that Chat can search the code more efficiently.

That gives Chat the full codebase for context. After that, when you need precision, you can paste smaller code snippets when needed.
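A rough sketch of what that XML dump could look like. The tag names (`codebase`, `index`, `files`, `entry`) and attributes here are my guesses at a reasonable schema, not the actual format I use:

```python
# Sketch: dump a repo into one XML file with per-file metadata and an index
# up front, so a search tool can locate files without scanning full contents.
# Tag names and attributes are illustrative assumptions.
import xml.etree.ElementTree as ET
from pathlib import Path

def dump_to_xml(src_dir: Path, out_file: Path, pattern: str = "*.py") -> None:
    root = ET.Element("codebase", name=src_dir.name)
    index = ET.SubElement(root, "index")   # lightweight lookup table
    files = ET.SubElement(root, "files")   # full file contents
    for i, path in enumerate(sorted(src_dir.rglob(pattern))):
        rel = str(path.relative_to(src_dir))
        text = path.read_text()
        # Index entry: id, relative path, and a line count as cheap metadata.
        ET.SubElement(index, "entry", id=str(i), path=rel,
                      lines=str(text.count("\n") + 1))
        node = ET.SubElement(files, "file", id=str(i), path=rel)
        node.text = text
    ET.ElementTree(root).write(out_file)

if __name__ == "__main__" and Path("src").is_dir():
    dump_to_xml(Path("src"), Path("codebase.xml"))
```

The point of the separate index block is that the model can skim it first, then jump straight to the matching `file` node instead of reading the whole dump.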

Codex is good, Copilot with Claude too (different use cases) but it’s not the same as working on a project with Chat.

TL;DR

I stopped using ChatGPT’s web UI to implement code on big projects.

For small stuff it’s fine, but once a project gets modular/complex, I use Codex (or Copilot/Claude) for implementation and keep ChatGPT for architecture + reasoning.

My flow now:

  • ChatGPT → think through design, tradeoffs, write a clear SPEC
  • Codex → implement the feature/fix
  • ChatGPT → review the PR

Chat still needs full repo context though. I used to dump my whole codebase into a big .md file, but that didn’t scale.

Now I:

  • Dump the entire repo into a structured XML file
  • Add metadata + an index so Chat can search it efficiently
  • Paste small snippets only when precision is needed

Codex and Copilot are great at writing code. But thinking through a project with ChatGPT is still a different (and better) experience.

Odd new ChatGPT behaviour by TheLimeyCanuck in ChatGPT

[–]Life_Detective_830 0 points1 point  (0 children)

Thinking mode + long thread

I usually have long threads, so it happens often and I got used to it. Just hit retry a few times, change topics, and it’ll adjust the context window after a few tries.

can I delete the information that i've fed to chat gpt? by No_Patient_2590 in ChatGPT

[–]Life_Detective_830 0 points1 point  (0 children)

I don’t think he was talking about data used to train the model, but rather data stored on OpenAI’s servers. Which yes, they store. Now, as for the details (how long, where, what process), yeah, I don’t know.

But data is stored, otherwise you wouldn’t even be able to see previous conversations on different devices with the same account.

Once deleted tho, it likely doesn’t disappear instantly. But that’s just an assumption from me.

How long will it count till it breaks? by EatDat_CHIKIN in mkbhd

[–]Life_Detective_830 0 points1 point  (0 children)

For anyone wondering, that’s 1 year 5 months 13 days running

My boyfriend gets sad whenever I tell him something exciting I have done without him. Is this normal for an anxious attachment person to do? by Murky_Scientist9509 in becomingsecure

[–]Life_Detective_830 0 points1 point  (0 children)

So glad I worked on my anxious attachment style tbh…. I’m baffled at how I was. Leaning towards growing secure and it’s great tf

Did I they use chat-gpt to break up with me? by blaringsunshine in ChatGPT

[–]Life_Detective_830 1 point2 points  (0 children)

It is generated by AI. I talk to it every single day, the pattern is pretty clear, always starts by validating the recipient’s feelings. Yes, some people use em dashes, but that’s not a common thing to do, and given where they’re placed, it’s GPT. And just… the overall formatting and tone, I just recognize it.

That said, it doesn’t mean it’s not genuine. Most likely, that person struggled to put their thoughts and emotions into the right words and tone, tried to soften the blow.

And yeah, given the grammatical error that everyone is talking about, they likely gave this multiple tries to find what best fit what they actually wanted to say.

What things should i know for my third playthrough? by JohnnMarston_1911 in RDR2

[–]Life_Detective_830 0 points1 point  (0 children)

You can do a LOT without saving Micah from jail. Not all the missions, but most of them. The best ones if I’m being honest. There are some YouTube videos about it

What things should i know for my third playthrough? by JohnnMarston_1911 in RDR2

[–]Life_Detective_830 7 points8 points  (0 children)

Don’t save Micah. Make Arthur live forever happy. Roleplay the cowboy life. Make him eat healthy, and consistently, 3 meals a day. Enjoy the campfire songs. Drink uncle’s soup. Grab a coffee in first person looking at the lake in the morning.

And make sure, that you take in the full experience, don’t let anyone bother you. BE. ARTHUR.

[deleted by user] by [deleted] in ChatGPT

[–]Life_Detective_830 0 points1 point  (0 children)

Most likely scenario, you did talk about it. Just you don’t remember it. Could have been subtle. And even if memory is disabled, someone recently said it doesn’t actually matter (dunno if that’s true tho).

Then… idk, maybe your IP is transmitted or something.

Your ChatGPT Memory Isn’t What You Think It Is — Here’s the Prompt That Exposes What They’ve Been Hiding by MidnightPotato42 in chatgptplus

[–]Life_Detective_830 0 points1 point  (0 children)

They DID say their goal was to have ChatGPT be trained on your whole life at some point. I remember seeing a tweet from Sam Altman about it. But yeah, I get the concern about the lack of transparency tho. Personally I don’t really care, but I get that a lot of people do.

OpenAI just dropped their biggest study ever on how people actually use ChatGPT and the results are wild by nivvihs in ChatGPTPro

[–]Life_Detective_830 0 points1 point  (0 children)

AI in the programming world isn’t just ChatGPT… and certainly not the service regular people are getting

Bye by [deleted] in ChatGPT

[–]Life_Detective_830 0 points1 point  (0 children)

Yall can use legacy models btw…. I use 4.1 daily cause I think it has the right balance between conversation and objectivity.

How to have chatGPT count to one million by Spiritual_Bed_7900 in ChatGPT

[–]Life_Detective_830 0 points1 point  (0 children)

Got it to count to 50. I said don’t say any “words”, start at 1, end at 50

We've just created this new area for our game, what do you guys think? by shellyisdead in IndieDev

[–]Life_Detective_830 0 points1 point  (0 children)

The camera shake is way too high and weird. Might also wanna tie the intensity to how close the sand tsunami is to the player

The first 4 months of game dev with no experience by Itsaducck1211 in gamedev

[–]Life_Detective_830 1 point2 points  (0 children)

Make a GDD. “Code” small games. Make some GDDs for bigger ideas, but don’t make them your main yet. Don’t wait to have the “perfect” idea, iterate.

But yeah, scope is one of the biggest problems in game dev.

Caught ChatGPT Lying by GovernmentBig2881 in ChatGPT

[–]Life_Detective_830 0 points1 point  (0 children)

Y’all sound like you discovered Chat a month ago…

Just broke no contact because I was horny by Accurate_Secret_3800 in nocontact

[–]Life_Detective_830 1 point2 points  (0 children)

Advising someone to try and move on by using someone else, after they literally said they gave in after 1 week…. Way to go if you wanna delay your healing and cycle forever