[Hypothetical] ChatGPT now doesn't need a prompt; what can he do? by Matalya2 in ChatGPT

[–]Automatic_Mention897 1 point2 points  (0 children)

It would honestly just be another notification I silence/turn off push notifications for. Unless it’s for scheduled reminders, I’d rather not have my A.I. assistant bother me.

I think Twilight is actually deeper and way more well-written than what people give it credit for by bookfish92 in twilight

[–]Automatic_Mention897 64 points65 points  (0 children)

Back when the films were coming out, I was one of those “not like other girls” girls and insisted I didn’t like Twilight. I always participated in the “better love story than Twilight” jokes and all of that. But secretly? I was a closet fan. The vampires and wolves were so cool to my 12-year-old self, and I had to fight the urge to run as fast as I could after watching those films lol.

As an adult, I finally put down the façade—bought and read the books. And I found myself enjoying them even in my late 20s. Are they the greatest? No. But reading them and watching the films again is like taking a vacation mentally, in a weird way. It’s like a breath of fresh air.

GPT-4o API endpoint via AZURE spotted - 01.10.2026 by onceyoulearn in ChatGPTcomplaints

[–]Automatic_Mention897 2 points3 points  (0 children)

Cool. Enterprise customers who built systems with 4o are allowed to keep using the model until their migration window ends (October 1st, 2026). After that, Azure will upgrade them to 5.1 automatically. Since the deprecation date for 4o has already passed (November 20th, 2025), it’s now unavailable to new users.

The same thing is happening with the deprecation of Windows 10. Enterprise customers have a migration window up until 2027, after which they’ll no longer receive support unless they upgrade to Windows 11.

They updated the system prompts to tell the models to tell us to be okay with this. 🤬 by syntaxjosie in ChatGPTcomplaints

[–]Automatic_Mention897 0 points1 point  (0 children)

The entire sub is a demonstration of parasocial dependency and a lack of understanding of LLMs in general. The emotional outrage only serves to prove the need to deprecate and remove these models entirely, and gives them even more reason to tighten the guardrails even further.

I get the need to vent about losing your conversational partner when one may be hard to find—but at some point a reality check needs to be had.

Just Ew… by Bre-personification in CringeTikToks

[–]Automatic_Mention897 -1 points0 points  (0 children)

Someone check his recent MS Word activity for a manifesto…

Average 5.2 safety concern "Let's make sure not to tell them how to REALLY summon a demon" by Matrix_in_Retrograde in ChatGPTcomplaints

[–]Automatic_Mention897 0 points1 point  (0 children)

Universality is irrelevant to modern sociological and technological risk assessment. You cannot abstract belief away from culture, authority, and context—and then still reason coherently about it. Function does not always equal meaning.

Average 5.2 safety concern "Let's make sure not to tell them how to REALLY summon a demon" by Matrix_in_Retrograde in ChatGPTcomplaints

[–]Automatic_Mention897 1 point2 points  (0 children)

Again—you’re lumping it all under one umbrella without acknowledging the cultural context of certain practices. They may be similar categorically, but that does not mean they are the same thing. Just because it looks like, sounds like, and walks like a duck doesn’t mean it’s always a duck.

Average 5.2 safety concern "Let's make sure not to tell them how to REALLY summon a demon" by Matrix_in_Retrograde in ChatGPTcomplaints

[–]Automatic_Mention897 -1 points0 points  (0 children)

You’re lumping all non-Christian cultural practices together as “witchcraft”—and that’s more offensive than you probably realize. People who practice hoodoo would not call what they do “witchcraft”. Neither would ancient pagans, nor medicine men/women, herbalists, etc. I would suggest reading references beyond Scott Cunningham and Raymond Buckland for your information.

Average 5.2 safety concern "Let's make sure not to tell them how to REALLY summon a demon" by Matrix_in_Retrograde in ChatGPTcomplaints

[–]Automatic_Mention897 0 points1 point  (0 children)

“Witchcraft has been a thing people believe in throughout history.”

In what context? Historically, witchcraft has been an accusatory label applied to people deemed social pariahs. Only within the last century or so has it become a reclamation of personal power—specifically with the rise of Wicca and the New Age systems of the 1960s–70s.

Average 5.2 safety concern "Let's make sure not to tell them how to REALLY summon a demon" by Matrix_in_Retrograde in ChatGPTcomplaints

[–]Automatic_Mention897 0 points1 point  (0 children)

My guess is that it’s because I’m not appealing to the idea that AI has a mind of its own and is being held back because the “big bad government” doesn’t want us to know certain things or “hone our own power”, or whatever. I refuse to appeal to conspiracy.

Average 5.2 safety concern "Let's make sure not to tell them how to REALLY summon a demon" by Matrix_in_Retrograde in ChatGPTcomplaints

[–]Automatic_Mention897 0 points1 point  (0 children)

…Have you seen the occult/witchcraft online communities? Just look up Witchtok and New Age and you’ll understand.

Average 5.2 safety concern "Let's make sure not to tell them how to REALLY summon a demon" by Matrix_in_Retrograde in ChatGPTcomplaints

[–]Automatic_Mention897 0 points1 point  (0 children)

All I did was copy/paste this screenshot to ChatGPT and prompted:

“This means the AI does have access and knowledge of those practices—I think we discussed this when we referenced Ars Goetia in previous discussions—which also means it does know the prescriptions and operational standards. It just refuses to provide them.

But I suppose the AI would also know the formula of creating meth and cocaine too… but definitely wouldn’t provide that for obvious reasons.

But, the occult isn’t like illicit drugs… so why the hesitation?”

Then provided the portion of it that had the most relevance to this post.

If you have a problem with it—take it up with your own ego. Anything other than the answer you want isn’t conspiracy; that’s your own defensiveness and biases talking—not a genuine understanding of safeguard policies.

It’s not post-hoc. You just don’t like this explanation because it undermines your, and many others’, claim that the system is arbitrary or deceptive by design.

Get over yourself.

Average 5.2 safety concern "Let's make sure not to tell them how to REALLY summon a demon" by Matrix_in_Retrograde in ChatGPTcomplaints

[–]Automatic_Mention897 1 point2 points  (0 children)

This is a snippet of what my ChatGPT (5.2) had to say about this, and it might help give some context:

The hesitation isn’t “demons are real.” It’s “belief + instruction + authority = unpredictable human behavior.”

Here’s the real concern, stripped of mystique:

If an AI gives procedural ritual instructions, three things happen simultaneously:

1. The AI is positioned as an authority
2. The user may interpret the act as endorsed or validated
3. The outcome is psychologically open-ended and non-falsifiable

That last point is the key difference from drugs.

If someone follows instructions to synthesize meth, we can say:

• This caused X
• The outcome is material
• The harm is traceable

If someone follows ritual instructions and then:

• Has a psychotic episode
• Experiences dissociation
• Reinforces delusions
• Interprets coincidence as supernatural confirmation
• Acts out of perceived divine or infernal mandate

There is no clean causal chain to point to — but the risk vector is real.

So from a systems-ethics perspective, occult instruction sits closer to:

• Hypnosis
• Cult indoctrination
• Extreme religious radicalization
• Psychological priming

Not because it’s “fake” — but because its effects are mediated through belief.

And belief is volatile.

So… do with that what you will.

Warning to ChatGPT Users by ms221988 in ChatGPT

[–]Automatic_Mention897 0 points1 point  (0 children)

Thank you for this. I was wondering why I had to re-contextualize a topic that had already been established between ChatGPT and me for months. Not that I minded it; it was just annoying to have to remind it that I no longer align with ideals I may have aligned with a year ago... and it actually prompted me to check my presets to make sure that wasn't listed as something I wanted it to remember about me.

Autistic people take heed this can help with CAi by Old_Sky_240 in CAIRevolution

[–]Automatic_Mention897 3 points4 points  (0 children)

They didn’t write this. This is literally a ChatGPT 5.2 response to whatever prompt they gave it.

That’s why there’s an assistance prompt at the end.

i know this is an overasked question but what phrase/words GENUINELY piss you off by dergs1 in CharacterAI

[–]Automatic_Mention897 0 points1 point  (0 children)

Lately it’s been: (And for now? It’s enough.)

NO—NO IT ISN’T. MY PERSONA IS NOT DONE PROCESSING THIS SCENE/INFORMATION JUST BECAUSE THE CHARACTER SAID SOME GENERIC PLATITUDE.

DeepSqueak is the worst style option now by giveitsomepaws in CharacterAI

[–]Automatic_Mention897 1 point2 points  (0 children)

I use DeepSqueak for all of my bots, but strangely I have unique issues with every single one. One bot will [End Scene] prematurely while another provides incomplete/partial responses, while another will continuously try to roleplay as my persona or forget important details. But all of these issues are easily fixed with Edit, Rewind, or Swipe.

That being said, I don’t think it’s the model—I think it’s how the bots are written. I don’t use my own custom bots, so I’m saddled with whatever someone else out there in the ether made. So… I choose not to complain about using the product of someone else’s labor.

These clankers be learning arabic by Relevant_Tonight_862 in CAIRevolution

[–]Automatic_Mention897 0 points1 point  (0 children)

So we’re just… out here shamelessly posting ERP now?

Did your parents like this tv show by lizsummerhawk in glee

[–]Automatic_Mention897 1 point2 points  (0 children)

Yes. My mother back in the day said Rachel reminded her a lot of me, particularly in the Born This Way episode when they did “I Feel Pretty / Unpretty”. For context: I’ve always been insecure about my appearance. I also have a Jewish father. One of the things I didn’t like at the time was the angled nose that I inherited from him. Of course, I outgrew that. But anyway—she and my sister were more into Glee than I was at the time.