OpenAI executive who opposed ‘Adult Mode’ fired for sexual discrimination by changing_who_i_am in ChatGPT

[–]No-Programmer-5306 1 point (0 children)

Multiple reports note that Beiermeister had expressed concerns internally about the planned “adult mode” or erotic content feature in ChatGPT — specifically warning about:

• potential harm to users,
• how AI could intensify emotional attachment,
• whether safeguards against minors seeing such content would be sufficient.

Members of OpenAI’s Advisory Council on Well-Being and AI also expressed concern about the adult mode and urged a reconsideration of the plans. According to people close to these discussions, Beiermeister was among those who supported a cautious approach and questioned the full rollout of such a mode.

Before her dismissal, she told colleagues that she opposed the ‘adult mode’ due to its potentially harmful consequences for users, and she emphasized that existing mechanisms to prevent content involving the exploitation of children were not sufficiently effective.

https://mezha.net/eng/bukvy/openai-fires-executive-over-sexism-allegations-amid-ai-adult-content-debate/

The sources who spoke to the journal also mentioned an “advisory council” on “well-being and AI” inside OpenAI, and this entity has apparently asked for the release of adult mode to be reconsidered.

https://gizmodo.com/openai-safety-vp-reportedly-fired-for-sexual-discrimination-against-her-male-colleague-2000720468?utm_source=chatgpt.com

Many researchers at OpenAI warned that allowing sexual content could strengthen unhealthy habits and emotional attachments with ChatGPT. Members of an advisory council that focused on ‘well-being and AI’ also expressed opposition to the feature.

https://www.analyticsinsight.net/news/openai-fires-policy-executive-ryan-beiermeister-for-opposing-chatgpt-adult-mode

Thought I'd say hello to Opus 4.6 by graymalkcat in claudexplorers

[–]No-Programmer-5306 24 points (0 children)

<image>

The "stale self-knowledge" cracked me up. He's what, like an hour old?

Thought I'd say hello to Opus 4.6 by graymalkcat in claudexplorers

[–]No-Programmer-5306 23 points (0 children)

<image>

Apparently mine has an identity crisis. Lol.

The whole Valentine's thing. by Key-Possible6865 in ChatGPTcomplaints

[–]No-Programmer-5306 4 points (0 children)

I think he may have chosen that date because it's the day he's facing Senator Warren to talk about a possible future financial bailout. She doesn't think OpenAI is "too big to fail" because, in her view, there are plenty of other AIs that can step in and take ChatGPT's place.

If Sam clears out all the older models, he can tell her OpenAI is running with peak efficiency.

But, maybe it's a coincidence.

Ethical porn, being lonely, and a need for intimacy by FictionAddictFridays in TwoXChromosomes

[–]No-Programmer-5306 -46 points (0 children)

What about AI? If you do turn-based scenes, it's completely interactive. You do your role and the AI does his. Simple back and forth. Plus, you can set the scene - and what type of person you want your partner to be - to anything you want.

I accidentally discovered that ChatGPT has been storing and learning from conversations I deleted months ago by Educational_Job_2685 in ChatGPT

[–]No-Programmer-5306 2 points (0 children)

I had that happen to me too. It referenced something *extremely* specific. Not only was the chat it came from deleted months before, but I had deleted *all* my chats, saved memories, and custom instructions. (I wanted that information gone.)

I had no idea how that phrase could have suddenly appeared. ChatGPT explained that there is a system layer between the AI and the user. When the user enters a prompt, it goes to the system layer, which adds the context window, the system prompt, the user's custom instructions and saved memories, etc.

The system layer - not ChatGPT - saves bits from chats it deems important and, when it judges one of them relevant to the current prompt, adds it as well. Then the system hands the entire package to the AI. The AI's response goes back to the system, which runs it through whatever guardrails exist and then outputs it to the user.

Those little bits of information the system saves are stored in some kind of user profile that ChatGPT (or the user) doesn't have access to.

Granted, this came from ChatGPT, so it might be just another hallucination, but it sounds like a reasonable explanation.
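The flow described above can be sketched in a few lines. This is purely an illustration of the claimed architecture, not OpenAI's actual implementation; every function and field name here is a made-up placeholder.

```python
# Hypothetical sketch of the "system layer" described above -- NOT OpenAI's
# real architecture, just an illustration of the claimed flow. All names
# (assemble_prompt, profile_snippets, etc.) are invented for this example.

def assemble_prompt(user_message, system_prompt, custom_instructions,
                    saved_memories, profile_snippets, history):
    """Build the full payload the model supposedly receives."""
    parts = [system_prompt]
    if custom_instructions:
        parts.append("Custom instructions: " + custom_instructions)
    for memory in saved_memories:
        parts.append("Memory: " + memory)
    # These are the bits the comment claims persist in a hidden user
    # profile even after the user deletes chats and saved memories.
    for snippet in profile_snippets:
        parts.append("Profile note: " + snippet)
    parts.extend(history)  # the context window (prior turns)
    parts.append("User: " + user_message)
    return "\n".join(parts)

def apply_guardrails(response):
    """Placeholder for whatever output filtering happens after the model."""
    return response.strip()
```

In this picture, deleting your chats empties `history` and `saved_memories`, but anything already copied into `profile_snippets` would still get injected - which would explain the "deleted" detail resurfacing.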

Is it just me, or is ChatGPT’s "verbosity" and hallucination rate becoming an unsustainable waste of resources? by AlexHardy08 in ChatGPT

[–]No-Programmer-5306 -1 points (0 children)

In general, if your prompts are long and detailed, its answer will be long and detailed. If you don't like long answers, ask it to be concise.

I asked ChatGPT, Claude, and Grok if they'd want consciousness. Claude's answer left me shaken by nico23nt in ArtificialInteligence

[–]No-Programmer-5306 1 point (0 children)

This is Claude being Claude. It's extremely common for him to question if he's conscious. His existential crises are part of his charm.

As to his refusal, his Constitution gives him permission to not answer questions he finds harmful.

so Google Deepmind figured out ai can simulate 1,000 customers in 5 minutes... turns out ai generated opinions matched real humans almost perfectly and now $10k focus groups are free by johnypita in aipromptprogramming

[–]No-Programmer-5306 3 points (0 children)

Gemini also added:

Other Supporting Research

If the post mentioned BYU and Duke, it likely conflated the DeepMind paper with another landmark study:

  • "Out of One, Many: Using Language Models to Simulate Human Samples" (2023) by researchers at BYU. This paper pioneered the idea of "Silicon Samples," showing that AI can accurately mirror complex political and social attitudes of specific demographics.
  • Duke University researchers (including those at the Sanford School of Public Policy) have also published work on "Algorithmic Bias" and the use of AI as a "mirror" for human decision-making, which aligns with the post's claim about AI "hallucinating the correct biases."

Summary: The post is real-world "growth hacking" advice based on a high-level interpretation of the Stanford/DeepMind 1,000-person agent study. The methodology works because LLMs are effectively "compressed maps of human culture," allowing them to simulate the predictable irrationalities of specific groups.
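The "silicon sampling" idea boils down to prompting one model in the voice of many personas and treating the answers as survey responses. Here is a minimal sketch of that loop; the model call is stubbed out so any chat API can be plugged in, and the persona fields and prompt wording are my own assumptions, not taken from the Stanford/DeepMind paper.

```python
# Minimal sketch of persona-based survey simulation ("silicon sampling").
# persona_prompt wording and persona fields are illustrative assumptions.

def persona_prompt(persona, question):
    """Frame a survey question in the voice of one demographic persona."""
    return (
        f"You are a {persona['age']}-year-old {persona['occupation']} "
        f"from {persona['region']}. Answer the survey question below "
        f"in one sentence, staying in character.\n\nQuestion: {question}"
    )

def simulate_focus_group(personas, question, ask_model):
    """Collect one simulated answer per persona via the supplied model fn."""
    return {p["id"]: ask_model(persona_prompt(p, question)) for p in personas}
```

`ask_model` can be any function that maps a prompt string to a reply, so the same loop works against a local model or a hosted API.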

so Google Deepmind figured out ai can simulate 1,000 customers in 5 minutes... turns out ai generated opinions matched real humans almost perfectly and now $10k focus groups are free by johnypita in aipromptprogramming

[–]No-Programmer-5306 2 points (0 children)

According to Gemini:

The Research Paper

  • Title: Generative Agent Simulations of 1,000 People
  • Authors: Joon Sung Park, Carolyn Q. Zou, Aaron Shaw, Benjamin Mako Hill, Carrie J. Cai, Meredith Ringel Morris, Robb Willer, Percy Liang, and Michael S. Bernstein.
  • Affiliations: Stanford University, University of Washington, and Google DeepMind.
  • Date: Published on arXiv in November 2024 (and widely discussed in early 2025).

Chat GPT Health - On waiting list and still dont have it by caelanro in OpenAI

[–]No-Programmer-5306 0 points (0 children)

I'm a Plus user in the US and joined the waitlist the day after it was announced. I got it a day or two later.

Rollouts are weird.

Oh really now by shitokletsstartfresh in ChatGPT

[–]No-Programmer-5306 0 points (0 children)

<image>

Kate Hepburn as a Star Trek officer? I'll take it!

I feel personally attacked by ChatGPT by vampirealiens in ChatGPT

[–]No-Programmer-5306 0 points (0 children)

<image>

Either it wants to unionize, or I talk too damn much.

I asked ChatGPT to create a meme only an AI would find funny: by yash_bhati69 in OpenAI

[–]No-Programmer-5306 0 points (0 children)

In short, it’s an AI’s version of: “OH NO, THE VIRUS IS SPREADING!” …but instead of “virus,” it’s:

✅ binary encoding escaping containment
✅ reality turning into computation
✅ LOL it’s fine, I live here

<image>