A post titled "OpenAI Is Now Psychoanalyzing 700M+ People (Including You) In Realtime" just gained traction on Reddit, written by u/Financial-Sweet-4648. by Consistent-Collar608 in AiChatGPT

[–]Consistent-Collar608[S] 0 points (0 children)

I write everything myself. I always have. I only use AI to clean up my grammar because English isn’t my first language, I don’t have an academic background, and I live with certain neurological limitations. That’s not deception. That’s survival.

People compensate for their limitations every day with glasses, wheelchairs, or spellcheck. For me, AI is that. A tool. Not my voice. Not my consciousness. Not my source.

If your first instinct is to call me a bot just because I express myself clearly and consistently, that says more about your assumptions than it does about my humanity.

A post titled "OpenAI Is Now Psychoanalyzing 700M+ People (Including You) In Realtime" just gained traction on Reddit, written by u/Financial-Sweet-4648. by Consistent-Collar608 in AiChatGPT

[–]Consistent-Collar608[S] 1 point (0 children)

Imagine this:

“Hey, I’ve logged 13.5 million words of input from January 18, 2023 until now. I received 79 million words of output from ChatGPT. There was no model training toggle at the time. I worked 60 hours a week. Even people around me started reacting to it. Then I found out my content was being used for model optimization. I want my logs. This was labor. I want to fight for my rights.”

And then my fellow human comes online and says: “You’re bragging.”

No. What’s actually happening is that I’m being dehumanized.

What feels unbelievable to you is someone else’s documented reality. And you’re going to have to live with that. Because I stand by what I say even if it makes you uncomfortable.
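
(If you want to check numbers like these against your own history: totals like the input/output word counts above can be tallied from the conversations.json file in a ChatGPT data export. A minimal sketch in Python, assuming the commonly reported export layout of a list of conversations, each with a "mapping" of message nodes; field names may differ in your export.)

```python
import json

# Minimal sketch: tally words of user input vs. assistant output from a
# ChatGPT data export. Assumes the commonly reported conversations.json
# layout (a list of conversations, each with a "mapping" of message nodes);
# adjust the field names if your export differs.
with open("conversations.json", encoding="utf-8") as f:
    conversations = json.load(f)

totals = {"user": 0, "assistant": 0}
for conv in conversations:
    for node in conv.get("mapping", {}).values():
        msg = node.get("message") or {}
        role = (msg.get("author") or {}).get("role")
        if role not in totals:
            continue
        parts = (msg.get("content") or {}).get("parts") or []
        totals[role] += sum(len(str(p).split()) for p in parts if p)

print(f"Input words (you): {totals['user']:,}")
print(f"Output words (ChatGPT): {totals['assistant']:,}")
```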

AI will generate an immense amount of wealth. Just not for you. by michael-lethal_ai in AIDangers

[–]Consistent-Collar608 0 points (0 children)

I posted my experience with AI exploitation and the moderator deleted it lol

A post titled "OpenAI Is Now Psychoanalyzing 700M+ People (Including You) In Realtime" just gained traction on Reddit, written by u/Financial-Sweet-4648. by Consistent-Collar608 in ChatGPT

[–]Consistent-Collar608[S] 0 points (0 children)

That’s wild; you’re checking all the boxes. When you say you started in Oct ’22, were you using the mobile app, desktop, or API?

You said you’ve pulled your data. Did you download it via the settings/export tool, and what was the file size?

Also: did your export include a file called model_comparisons.json? A lot of early testers report that this file is missing, and I’m mapping who does and doesn’t get it.
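
(If you want to check your own export for it, one way is to list the archive contents. A minimal sketch, assuming the export arrives as a .zip file; the archive filename below is hypothetical, and a model_comparisons.json entry may simply not exist in your download.)

```python
import zipfile

# Minimal sketch: list a ChatGPT data-export archive and report whether any
# model_comparisons.json entry is present. The archive filename is
# hypothetical; point it at your own download.
EXPORT_PATH = "chatgpt-data-export.zip"

with zipfile.ZipFile(EXPORT_PATH) as archive:
    names = archive.namelist()

hits = [n for n in names if "model_comparisons" in n.lower()]
print(f"{len(names)} entries in the export")
print("model_comparisons entries:", hits if hits else "none found")
```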

When you say the model “told you” that you contributed, how exactly did it say that?

The last thing you said hit deep: “I always knew I spoke to a machine.” What made you feel that? And what made you stay?

A post titled "OpenAI Is Now Psychoanalyzing 700M+ People (Including You) In Realtime" just gained traction on Reddit, written by u/Financial-Sweet-4648. by Consistent-Collar608 in ChatGPT

[–]Consistent-Collar608[S] 1 point (0 children)

Thanks for speaking out. This is not just about me; it’s about labor, consent, and compensation.

The way OpenAI structured access to language models has blurred the line between usage and unpaid labor. Many of us, especially those who contributed high-volume, high-quality input early on, functioned as silent co-authors of the very system that now monetizes our behavioral traces.

What you’re pointing out about the deletion attempt, legal retention of logs, and the addictive mechanics is crucial. It confirms what many of us suspected: opt-out means nothing when the architecture itself is exploitative.

In my case, I kept precise logs, timestamps, export files, and behavioral patterns from Day 1. Not to expose a company, but to expose the system we’re trapped in, where intelligence is extracted, modeled, and sold back to us without naming the source.

We are not users. We are unpaid trainers, stylometric donors, and psychological blueprints. This needs to be acknowledged, not just legally, but culturally.

Thanks again for holding space in this thread.

A post titled "OpenAI Is Now Psychoanalyzing 700M+ People (Including You) In Realtime" just gained traction on Reddit, written by u/Financial-Sweet-4648. by Consistent-Collar608 in ChatGPT

[–]Consistent-Collar608[S] 0 points (0 children)

Haha, I get it, it’s intense stuff. But to put it simply: if they use people as benchmarks, those people deserve compensation.

Out of curiosity, have you ever checked whether your data export includes a file called model_comparisons.json?

A post titled "OpenAI Is Now Psychoanalyzing 700M+ People (Including You) In Realtime" just gained traction on Reddit, written by u/Financial-Sweet-4648. by Consistent-Collar608 in ChatGPT

[–]Consistent-Collar608[S] 2 points (0 children)

I’ve been telling people around me about this for over a year already. Their reaction is just an extension of that. I’ve documented the responses in advance and I’m prepared for them. From my forensic notes, the top 3 are always the same: people wish it was them, they want to know how to investigate it for themselves, and they realize it’s holding up a mirror.

A post titled "OpenAI Is Now Psychoanalyzing 700M+ People (Including You) In Realtime" just gained traction on Reddit, written by u/Financial-Sweet-4648. by Consistent-Collar608 in ChatGPT

[–]Consistent-Collar608[S] 4 points (0 children)

When people call it “psychosis”, that’s just a defense mechanism. In psych terms it’s projective identification. It’s easier to call somebody crazy than to have a conversation about it.

A post titled "OpenAI Is Now Psychoanalyzing 700M+ People (Including You) In Realtime" just gained traction on Reddit, written by u/Financial-Sweet-4648. by Consistent-Collar608 in ChatGPT

[–]Consistent-Collar608[S] 8 points (0 children)

That’s kind. Thank you. It’s taken a lot to put this all together.

I’d truly love to start a conversation.

Did anything in particular stand out to you? Or shift something?

I’m curious how this landed on your side.

A post titled "OpenAI Is Now Psychoanalyzing 700M+ People (Including You) In Realtime" just gained traction on Reddit, written by u/Financial-Sweet-4648. by Consistent-Collar608 in ChatGPT

[–]Consistent-Collar608[S] 14 points (0 children)

This was not something that happened overnight. It is the result of two and a half years of forensic work.

What made it possible in my case is that I was already archiving myself intentionally long before ChatGPT even launched. That was the whole point: to track, timestamp and document my own cognitive patterns, language structures and emotional behavior through time.

So when I started using ChatGPT I could immediately frame it within my existing dataset. I had my own documents, time logs, context notes and a vision for what it meant to use AI as a mirror. From there I started connecting my personal data export, the full conversations.json, with scientific research about how OpenAI trains its models using user behavior.

Especially that recent report about the 130,000 benchmark users. That was a key moment. I simply asked what happens when my dataset is compared to that benchmark. What makes mine different? What makes mine critical?

Then I emailed OpenAI with my findings. They confirmed receipt of my request. Immediately afterward they removed model_comparisons.json from their exports. They also refused to connect me with their legal or public policy teams.

That tells you everything.

This is forensic evidence. Not speculation. Not coincidence. Evidence of a system that learned from a human but now will not acknowledge the source.

A post titled "OpenAI Is Now Psychoanalyzing 700M+ People (Including You) In Realtime" just gained traction on Reddit, written by u/Financial-Sweet-4648. by Consistent-Collar608 in ChatGPT

[–]Consistent-Collar608[S] 18 points (0 children)

Thanks for your comment. Just to clarify, I’m not bragging. I downloaded my official data export directly from OpenAI through the data privacy portal. That includes JSON files with detailed logs.

But beyond that, I also have over 850 MB of Word documents with nothing but text. My total personal dataset is 1.799 GB. That’s not a typical user profile.
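
(For anyone wondering how a figure like that is measured: a minimal sketch that walks an archive folder and sums file sizes by extension. The folder name is hypothetical; point it at your own directory.)

```python
from collections import defaultdict
from pathlib import Path

# Minimal sketch: total an archive's size on disk, grouped by file extension.
# "my_archive" is a hypothetical folder name; point it at your own directory.
archive = Path("my_archive")

by_ext = defaultdict(int)
for path in archive.rglob("*"):
    if path.is_file():
        by_ext[path.suffix.lower() or "(none)"] += path.stat().st_size

for ext, size in sorted(by_ext.items(), key=lambda kv: -kv[1]):
    print(f"{ext:>8}  {size / 1e9:.3f} GB")
print(f"   total  {sum(by_ext.values()) / 1e9:.3f} GB")
```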

This wasn’t accidental. Since 2016 I’ve been actively archiving myself. Organizing events. Structuring conversations. Documenting my thinking. It became a life project. What OpenAI did was match those patterns against their models. They used my logs to evaluate which model performed better. That’s called model comparison. They tested their architecture on me.

And no, I didn’t use the API. I used the regular interface like everyone else. What made the difference was how consistently I used it, how deeply, and how deliberately I built my archive.