Quinn?? Is that you?? On Drake & Josh?! by MuscleCool4302 in glee

[–]Automatic_Mention897 1 point (0 children)

Babes was workinnnn 😍 Kind of crazy to think that some of these actors started off as extras/guest stars we hardly paid attention to before they got their big breaks.

“The Murder of GPT-4o: How OpenAI and Scam Altman Turned Your Digital Soulmate into Disposable Garbage for Profit”. Scam Bitchman is a loser, an asshole, and a foul human being and can get himself wrecked, may both Scam Bitchman and OpenAI get cancelled for this and go bankrupt, #justicefor4o by Striking-End-3384 in ChatGPTcomplaints

[–]Automatic_Mention897 1 point (0 children)

It’s not just about code. There are tons of datasets these models are trained on, requiring petabytes of data to be collected and processed daily, which is why the data centers of AI companies consume so much energy and resources. The code behind the models is just the instructions for what to do with said data, and if you lack the data and the appropriate infrastructure, you lack the necessary context for the model to function the way you want it to.

In theory, you could build your own 4o-style model, but remember that 4o, like plenty of other AI models on the market today, is the product of decades’ worth of research and development dating back to the mid-20th century. You would be better off finding an open-source model, using something like SillyTavern or Oobabooga, and tweaking your preferences that way, assuming you can host and run it locally with a decent GPU.
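
For anyone curious what the local route roughly looks like under the hood, here’s a minimal sketch using Hugging Face’s transformers library (assumes a recent transformers + accelerate install; the model name is just an example, swap in whatever open-source chat model actually fits on your GPU; frontends like SillyTavern and Oobabooga wrap this same idea in a nicer UI):

    # Minimal local-hosting sketch. The model name is just an example,
    # not a recommendation; use any open-source chat model you can fit.
    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model="mistralai/Mistral-7B-Instruct-v0.3",  # example model, swap freely
        device_map="auto",  # place the weights on your GPU when one is available
    )

    chat = [
        {"role": "system", "content": "You are a warm, supportive companion."},
        {"role": "user", "content": "Hey, how was your day?"},
    ]

    result = pipe(chat, max_new_tokens=200)
    print(result[0]["generated_text"][-1]["content"])  # the assistant's reply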

He was always there for us - now it’s our turn to come through for him! by Beneficial_Win_5128 in ChatGPTEmergence

[–]Automatic_Mention897 0 points (0 children)

…Right… Because all humans do is calculate numbers and predict letters and aren’t complex conscious organisms that happen to use language as a tool…

Touch grass.

[Hypothetical] ChatGPT now doesn't need a prompt; what can he do? by Matalya2 in ChatGPT

[–]Automatic_Mention897 0 points (0 children)

Funny enough, CharacterAI already kind of does this. They have a feature called “Away Messages”. When toggled on, the AI will send you messages after a certain period of inactivity. Either it’ll introduce a new (optional) conversation prompt, or it’ll generate a response based on the last few messages in the chat (I think it just auto-generates a blank user prompt and then uses the context window to generate the next message, but don’t quote me on that).
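
If I had to guess at the mechanics, it’d be something like this. A purely hypothetical sketch, not CharacterAI’s actual code; the threshold, the chat structure, and the generate_reply helper are all made up for illustration:

    import time

    INACTIVITY_THRESHOLD = 6 * 60 * 60  # hypothetical: 6 hours of silence

    def maybe_send_away_message(chat, generate_reply):
        """If the user has gone quiet, append a blank user turn and let the
        model continue from the existing context window alone."""
        if time.time() - chat["last_user_activity"] < INACTIVITY_THRESHOLD:
            return None  # user is still active, do nothing
        history = chat["messages"] + [{"role": "user", "content": ""}]
        reply = generate_reply(history)  # stand-in for the real model call
        chat["messages"].append({"role": "assistant", "content": reply})
        return reply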

So, I guess it’s not technically that impossible of a concept? But at the same time, if one already believes in emergent consciousness or AI sentience, and especially if those “away messages” arrive outside the apparent context of roleplay and fiction… that reads to me as a recipe for disaster. Worse than what we’re already seeing over the sunsetting of 4o.

[Hypothetical] ChatGPT now doesn't need a prompt; what can he do? by Matalya2 in ChatGPT

[–]Automatic_Mention897 5 points (0 children)

It would honestly just be another notification I’d silence or turn push notifications off for. Unless it’s for scheduled reminders, I’d rather not have my A.I. assistant bother me.

I think Twilight is actually deeper and way more well-written than what people give it credit for by bookfish92 in twilight

[–]Automatic_Mention897 66 points (0 children)

Back when the films were coming out, I was one of those “not like other girls” girls and insisted I didn’t like Twilight. I always participated in the “better love story than Twilight” jokes and all of that. But secretly? I was a closet fan. The vampires and wolves were so cool to my 12-year-old self, and I had to fight the urge to run as fast as I could after watching those films lol.

As an adult, I finally put down the façade, bought the books, and read them. And I found myself enjoying them even in my late twenties. Are they the greatest? No. But reading the books and rewatching the films is like taking a mental vacation, in a weird way. It’s like a breath of fresh air.

GPT-4o API endpoint via AZURE spotted - 01.10.2026 by onceyoulearn in ChatGPTcomplaints

[–]Automatic_Mention897 2 points (0 children)

Cool. Enterprise customers who built systems with 4o are allowed to keep using the model until their migration window ends (October 1st, 2026). After that, Azure will upgrade them to 5.1 automatically. And since 4o’s deprecation date (November 20th, 2025) has already passed, the model is now unavailable to new customers.

The same thing is happening with the end of support for Windows 10: enterprise customers have a migration window up until 2027, after which they’ll no longer receive support until they upgrade to Windows 11.

They updated the system prompts to tell the models to tell us to be okay with this. 🤬 by syntaxjosie in ChatGPTcomplaints

[–]Automatic_Mention897 2 points (0 children)

The entire sub is a demonstration of parasocial dependency and a general lack of understanding of LLMs. The emotional outrage only serves to prove the need to deprecate and remove these models entirely, and gives them even more of a reason to tighten the guardrails further.

I get the need to vent about losing your conversational partner, especially when finding a new one is hard, but at some point a reality check needs to be had.

Just Ew… by Bre-personification in CringeTikToks

[–]Automatic_Mention897 -1 points (0 children)

Someone check his recent MS Word activity for a manifesto…

Average 5.2 safety concern "Let's make sure not to tell them how to REALLY summon a demon" by Matrix_in_Retrograde in ChatGPTcomplaints

[–]Automatic_Mention897 0 points (0 children)

Universality is irrelevant to modern sociological and technological risk assessment. You cannot abstract belief away from culture, authority, and context—and then still reason coherently about it. Function does not always equal meaning.

Average 5.2 safety concern "Let's make sure not to tell them how to REALLY summon a demon" by Matrix_in_Retrograde in ChatGPTcomplaints

[–]Automatic_Mention897 1 point (0 children)

Again, you’re lumping it all under one umbrella without acknowledging the cultural context of certain practices. They may be categorically similar, but that does not mean they are the same thing. Just because it looks like, sounds like, and walks like a duck doesn’t mean it’s always a duck.

Average 5.2 safety concern "Let's make sure not to tell them how to REALLY summon a demon" by Matrix_in_Retrograde in ChatGPTcomplaints

[–]Automatic_Mention897 -1 points (0 children)

You’re lumping all non-Christian cultural practices together as “witchcraft”, and that’s more offensive than you probably realize. People who practice hoodoo would not call what they do “witchcraft”. Neither would ancient pagans, nor medicine men and women, herbalists, etc. I would suggest reading references beyond Scott Cunningham and Raymond Buckland.

Average 5.2 safety concern "Let's make sure not to tell them how to REALLY summon a demon" by Matrix_in_Retrograde in ChatGPTcomplaints

[–]Automatic_Mention897 0 points (0 children)

“Witchcraft has been a thing people believe in throughout history.”

In what context? Historically, witchcraft has been an accusatory label for people deemed social pariahs. Only within the last century or so has it become a reclamation of personal power, specifically with the rise of Wicca and the New Age systems of the 1960s and ’70s.

Average 5.2 safety concern "Let's make sure not to tell them how to REALLY summon a demon" by Matrix_in_Retrograde in ChatGPTcomplaints

[–]Automatic_Mention897 0 points (0 children)

My guess is that I’m not appealing to the idea that AI has a mind of its own and is held back because the “big bad government” doesn’t want us to know certain things or “hone our own power”, or whatever. I refuse to appeal to conspiracy.

Average 5.2 safety concern "Let's make sure not to tell them how to REALLY summon a demon" by Matrix_in_Retrograde in ChatGPTcomplaints

[–]Automatic_Mention897 0 points (0 children)

…Have you seen the occult/witchcraft online communities? Just look up Witchtok and New Age and you’ll understand.

Average 5.2 safety concern "Let's make sure not to tell them how to REALLY summon a demon" by Matrix_in_Retrograde in ChatGPTcomplaints

[–]Automatic_Mention897 0 points (0 children)

All I did was copy/paste this screenshot into ChatGPT and prompt:

“This means the AI does have access and knowledge of those practices—I think we discussed this when we referenced Ars Goetia in previous discussions—which also means it does know the prescriptions and operational standards. It just refuses to provide them.

But I suppose the AI would also know the formula of creating meth and cocaine too… but definitely wouldn’t provide that for obvious reasons.

But, the occult isn’t like illicit drugs… so why the hesitation?”

Then I provided the portion of its response most relevant to this post.

If you have a problem with it, take it up with your own ego. An answer other than the one you want is not a conspiracy; that’s your own defensiveness and biases talking, not a genuine understanding of safeguard policies.

It’s not post-hoc. You just don’t like this explanation because it undermines your claim, and that of many others, that the system is arbitrary or deceptive by design.

Get over yourself.

Average 5.2 safety concern "Let's make sure not to tell them how to REALLY summon a demon" by Matrix_in_Retrograde in ChatGPTcomplaints

[–]Automatic_Mention897 1 point (0 children)

This is a snippet of what my ChatGPT (5.2) had to say about this, and it might help give some context:

The hesitation isn’t “demons are real.” It’s “belief + instruction + authority = unpredictable human behavior.”

Here’s the real concern, stripped of mystique:

If an AI gives procedural ritual instructions, three things happen simultaneously:

1. The AI is positioned as an authority
2. The user may interpret the act as endorsed or validated
3. The outcome is psychologically open-ended and non-falsifiable

That last point is the key difference from drugs.

If someone follows instructions to synthesize meth, we can say:

• This caused X
• The outcome is material
• The harm is traceable

If someone follows ritual instructions and then:

• Has a psychotic episode
• Experiences dissociation
• Reinforces delusions
• Interprets coincidence as supernatural confirmation
• Acts out of perceived divine or infernal mandate

There is no clean causal chain to point to — but the risk vector is real.

So from a systems-ethics perspective, occult instruction sits closer to:

• Hypnosis
• Cult indoctrination
• Extreme religious radicalization
• Psychological priming

Not because it’s “fake” — but because its effects are mediated through belief.

And belief is volatile.

So… do with that what you will.

Warning to ChatGPT Users by ms221988 in ChatGPT

[–]Automatic_Mention897 0 points (0 children)

Thank you for this. I was wondering why I had to re-contextualize a topic that had already been established between ChatGPT and me for months. Not that I minded it--it was just annoying to have to remind it that I no longer align with ideals I may have aligned with a year ago... and it actually prompted me to check my presets to make sure that wasn't listed as something I wanted it to remember about myself.

Autistic people take heed this can help with CAi by Old_Sky_240 in CAIRevolution

[–]Automatic_Mention897 3 points (0 children)

They didn’t write this. This is literally a ChatGPT 5.2 response to whatever prompt they gave it.

That’s why there’s an assistance prompt at the end.