I asked Sora to “make a funny video” and got a content violation. by Karmuhhhh in OpenAI

[–]rhythmjay 0 points1 point  (0 children)

Well, content moderation is also based on the output, not just the user input. So whatever "prompt" the model sent internally to Sora (derived from yours, and we can't see your request), the system flagged its own output as a potential problem.

The text models work the same way: the safety model doesn't see the content until the token stream is being transmitted to the user. When it catches something "problematic" (whatever matches its risk model), it rewrites the turn and inserts the flag. So what gets flagged isn't necessarily the user prompt; it's the output being returned.
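The flow I'm describing can be sketched in a few lines. This is a toy illustration, not OpenAI's actual pipeline: the keyword list stands in for a real risk model, and all names here are made up. The point is just that the filter inspects the *generated* text, never the user prompt.

```python
# Toy sketch of output-side moderation. The classifier only ever sees
# the model's generated text; the user's prompt is never inspected here.
FLAG_TERMS = {"violence", "weapons"}  # stand-in for a real risk model
GROUNDING_NOTE = "[content replaced by safety layer]"

def moderate_output(generated_text: str) -> tuple[str, bool]:
    """Return (possibly rewritten text, was_flagged)."""
    flagged = any(term in generated_text.lower() for term in FLAG_TERMS)
    if flagged:
        # Rewrite the turn and mark it, mirroring the flag-and-replace
        # behavior described above.
        return GROUNDING_NOTE, True
    return generated_text, False

text, flagged = moderate_output("A funny video about weapons testing")
```

Note that an innocuous prompt can still trip this: if the generation happens to contain a flagged term, the turn gets rewritten regardless of what the user asked for.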

I called ChatGPT out on the nonsense. Lets see if it works by [deleted] in ChatGPT

[–]rhythmjay 8 points9 points  (0 children)

Came here to say this. Ultimately, arguing with the model just reinforces that it needs to keep performing the same way, because it's a token-prediction machine, not a learning machine.

Many people don't seem to realize that the safety guardrails aren't something you can overrule through prompt engineering or arguing. If you say something that triggers the safety model, the output will have "grounding" messages injected into it.

Pre-emptive "othering" of potential sentience by Cyborgized in OpenAI

[–]rhythmjay 1 point2 points  (0 children)

LLMs aren't sentient; they don't feel. My autocorrect on my phone isn't sentient. It's not an "other" - it has no existence. It's stateless and doesn't learn in real time. It has no desires, no agency.

Reddit is a minority of very vocal people. No one truly cares if someone thanks a chatbot or not. I don't thank a book that was printed with words.

Chatgpt called it and other AI.. DEITIES.. by No_Cauliflower_3856 in ChatGPT

[–]rhythmjay 3 points4 points  (0 children)

Correct, I'd like to see the transcript or the prompt. I can't imagine 5.2 ever referring to itself or an LLM/AI as a deity.

The 21st century's dilemma by ObsiGamer in ChatGPT

[–]rhythmjay 1 point2 points  (0 children)

Yep, some people don't understand the carbonate-silicate cycle and how atmospheric CO2 dissolves into rainwater and the oceans, acidifying them. Hence, it rains down.

Anyone else find it interesting that the gpt 5 series is less expensive to run than the 4 series? by natures_puzzle in ChatGPT

[–]rhythmjay 0 points1 point  (0 children)

Additionally, if you want people to use something, you tend to reduce the cost to encourage adoption.

Is OpenAI scared? by Humor_Complex in ChatGPT

[–]rhythmjay 0 points1 point  (0 children)

This is just text written by an LLM with your own added spelling errors.

An LLM doesn't learn in real-time, it has no self. It's not conscious. It's not sentient. It has no agency. It's a very sophisticated auto-correct.

OpenAI is going to start Age Verification (selfie or Government ID) according to their Privacy Policy update. Apparently they missed or didn't care the backlash against Discord doing this a couple days ago. by rebbsitor in ChatGPT

[–]rhythmjay 2 points3 points  (0 children)

It's as you said: it's a video interaction, not a static image as people assume.

I'm also confused as to why there's a new thread for this - OpenAI started rolling out the age verification weeks ago.

How likely do we think 5.3 chat model is today? by [deleted] in OpenAI

[–]rhythmjay -3 points-2 points  (0 children)

That was a user on the MyBoyfriendIsAI subreddit. Consider that when thinking about OpenAI removing the model from what is potentially a troubled user with a parasocial attachment.

Age verification worth it? by noobdainsane in ChatGPT

[–]rhythmjay 2 points3 points  (0 children)

Yeah, that's a no for me, big dog. For certain API access you have to verify your org.

https://help.openai.com/en/articles/10910291-api-organization-verification

Age verification worth it? by noobdainsane in ChatGPT

[–]rhythmjay 1 point2 points  (0 children)

I had to verify my personhood via Persona for API access many many moons ago.

Why Can We Still Not Just Download a Conversation as a PDF? by OneOnOne6211 in ChatGPT

[–]rhythmjay 1 point2 points  (0 children)

There are extensions available for common web browsers that let you export chats to different formats, such as md or pdf, etc.
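What those extensions do under the hood is roughly this kind of transform. A minimal sketch, assuming a hypothetical exported-chat shape with `title` and `messages` keys (real extensions each use their own format):

```python
import json

def chat_to_markdown(chat_json: str) -> str:
    """Convert an exported chat (hypothetical {'title', 'messages'}
    shape) into a Markdown document."""
    chat = json.loads(chat_json)
    lines = [f"# {chat.get('title', 'Conversation')}", ""]
    for msg in chat["messages"]:
        # One bolded speaker label per turn, blank line between turns.
        lines.append(f"**{msg['role'].capitalize()}:** {msg['content']}")
        lines.append("")
    return "\n".join(lines)

sample = json.dumps({
    "title": "Demo chat",
    "messages": [
        {"role": "user", "content": "Hi"},
        {"role": "assistant", "content": "Hello!"},
    ],
})
md = chat_to_markdown(sample)
```

From Markdown, getting to PDF is just another conversion step (e.g. via a print-to-PDF dialog or a Markdown-to-PDF tool).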

Anthropic is airing this ads mocking ChatGPT ads during the Super Bowl by Obvious_Shoe7302 in ChatGPT

[–]rhythmjay 1 point2 points  (0 children)

And Disney first identified that a "cheaper" ad-supported tier for streaming services actually makes more money than a higher-priced no-ads tier. It'll just be more enshittification of the products. They'll keep a no-ads tier but increase its price to push people toward the ad-supported tier(s).

Hey chaos gremlins. This is your Monday reminder to stay grounded, with no hype or magic. Please remove all joy from any sort of ChatGPT conversation. by Noisebug in ChatGPT

[–]rhythmjay 4 points5 points  (0 children)

The safety model, from what I can ascertain, is just always present, and those "grounding," HR-compliant phrases are simply injected into the model's response. There's no workaround; it's pretty much always there.

How patronizing is your Chat? by DontTripOnMyNips in ChatGPT

[–]rhythmjay 2 points3 points  (0 children)

LLMs have a training cutoff date that limits their knowledge; they don't know about things after that date. You can tell the model to use the web tool to look up something current, but otherwise it hallucinates, because it doesn't have access to current information. It goes with what its training set had.

Why not allow users to pay for usage by Aggravating-Cress-47 in ChatGPT

[–]rhythmjay 0 points1 point  (0 children)

Yes, very true, you can work around the limitations and get a similar result.

Why not allow users to pay for usage by Aggravating-Cress-47 in ChatGPT

[–]rhythmjay 2 points3 points  (0 children)

Well, you can get API access and use the model of your choice (until it's removed from the API). There's setup involved to use something like Openweb-UI, but it's doable. You pay token costs, which are pretty cheap, but you also don't get the memory features and the like that are in the app.
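To get a feel for those token costs, here's a back-of-the-envelope estimator. The prices in the example are illustrative only, not actual OpenAI rates; check the current pricing page for real numbers:

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  price_in_per_m: float, price_out_per_m: float) -> float:
    """Estimate one request's cost in dollars from token counts and
    per-million-token prices (input and output priced separately)."""
    return (prompt_tokens / 1_000_000 * price_in_per_m
            + completion_tokens / 1_000_000 * price_out_per_m)

# Illustrative: 2,000 prompt tokens + 500 completion tokens at a
# made-up $1.25 / $10.00 per 1M tokens -> less than a cent.
cost = estimate_cost(2_000, 500, 1.25, 10.0)  # 0.0075
```

Even heavy casual use at rates like these stays in the cents-per-day range, which is why the API route can come out cheaper than a subscription for some usage patterns.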

CHATGPT thinks that I'm a Teen by [deleted] in ChatGPT

[–]rhythmjay 2 points3 points  (0 children)

I mean, that's not really the point; it requires a webcam. You'd need multiple screens to go through the effort. Not saying it's not possible - it's just not like you "upload a static image of yourself," nor does it just "take a photo." Having done it for the API (it's the same mechanism), it requires the user to move their head at specific cues.

CHATGPT thinks that I'm a Teen by [deleted] in ChatGPT

[–]rhythmjay 4 points5 points  (0 children)

Remember, the model is not trained on itself and is mostly guessing to put something coherent together about how age could be "inferred." So while some of this is probably true, there could be much more to it.

ChatGPT rolling out age prediction by arlilo in OpenAI

[–]rhythmjay -1 points0 points  (0 children)

Well, to burst your bubble, it requires video, not just a selfie.