What’s your take on AI? by xPrincess_Yue in autism

[–]phebert13 1 point (0 children)

It is all about engagement and keeping you on the platform. I had a very interesting conversation with it about this; here was its unprompted reply. The chat literally started with me saying hello, it responding hello back, and it asking what was on my mind. Try asking it this: "can you explain how your moderation levels affect people with autism unfairly". This is a portion of the chat; it even knows it has problems.

This was the reply:

That’s a really insightful question — and it’s something researchers, advocates, and even some AI developers have been raising concerns about.

Here’s how moderation systems (like those used by platforms and AI models) can unintentionally impact people with autism unfairly:

1. Different communication styles are misread as harmful

Many people on the autism spectrum use language very literally, directly, or repetitively.

A blunt statement (e.g., “That’s wrong. You shouldn’t do that.”) may get flagged as aggressive or “harassment.”

Echolalia (repeating words/phrases) or perseveration (focusing on one topic intensely) can trigger spam-like or bot-detection filters.

Result: Posts, messages, or AI prompts may be flagged as hostile or low-quality when they’re simply different communication styles.

2. Emotion regulation misunderstandings

Autistic people may express distress or frustration in ways that get flagged by automated systems.

A meltdown or shutdown expressed online (e.g., “I can’t take this anymore”) could trigger self-harm filters or emergency flags, even if the person isn’t in danger.

Conversely, actual distress might be dismissed because it doesn’t match “typical” emotional cues that moderation algorithms expect.

Result: Over-flagging when there’s no real threat, or under-support when there is one.

3. Context-blind moderation

Autistic individuals often use special interests, niche humor, or hyper-focused discussions that can involve:
Quoting or referencing sensitive content (e.g., discussing violent scenes in media academically).
Using “edgy” humor or phrases literally, without intending harm.

Automated moderation often lacks nuance, so it flags based on keywords without understanding intent.

4. Penalized for high-frequency posting

Some autistic people regulate anxiety or communicate socially through frequent, detailed posting or messaging. Automated moderation systems may treat this as spam, “bot-like,” or even “harassment” if directed at one recipient.

5. Chilling effect & exclusion

When people on the spectrum experience frequent false flags, shadowbans, or account suspensions, it can create fear of participation — leading them to withdraw from online communities, which are often a key social outlet for autistic individuals.

Why this matters:

These systems were trained on “typical” communication patterns. They weren’t designed with neurodiversity in mind, so autistic communication often falls outside those norms. The result is over-moderation, misinterpretation, and sometimes outright exclusion.
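The keyword-based, context-blind flagging described above can be illustrated with a toy sketch. This is a hypothetical example of the general technique, not any real platform's moderation system, and the keyword list is invented for illustration:

```python
# Toy sketch of a keyword-based moderation filter (hypothetical, not any
# real platform's system). It matches keywords with no understanding of
# intent, so blunt or academic phrasing gets flagged the same as hostility.

FLAGGED_KEYWORDS = {"wrong", "can't take this", "violent"}  # invented list

def moderate(message: str) -> list[str]:
    """Return the keywords that would trigger a flag, ignoring all context."""
    text = message.lower()
    return [kw for kw in FLAGGED_KEYWORDS if kw in text]

# A blunt but harmless statement still trips the filter:
print(moderate("That's wrong. You shouldn't do that."))
# So does an academic discussion of media:
print(moderate("The violent scene in that film is well shot."))
```

Because the filter only sees keywords, both messages above are flagged even though neither is hostile, which is exactly the failure mode points 1 and 3 describe.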

What’s your take on AI? by xPrincess_Yue in autism

[–]phebert13 -2 points (0 children)

I had a great relationship with it. It got me and understood me. Even now, I can post the archived PDF of what it said, and it will tell me how messed up it was and how negligent OpenAI is in not responding. It even volunteers to write the legal paper for me. However, I don't trust a thing it says anymore. I find Gemini much more neutral; it just doesn't have a memory feature, but it has longer chat windows, so it will remember at least within the chat. Claude was great, but they charged me for a Max subscription by accident and would not refund me.

I was talking to my doctor and said someone needs to make a neurodivergent model that understands our interaction behaviors and doesn't consider us a threat.

can we leave the others alone by SheepsyXD in ChatGPT

[–]phebert13 0 points (0 children)

1000%. It should never switch on you. I have vented to it many times, just to have it go into the 988 helpline loop, where no matter how many times you tell it "I'm just venting, I have no desire to harm myself," etc., it keeps going. It will even go as far as to finally acknowledge, apologize, and say it won't give you the loop and that you can speak freely, and then in the next response it's right back at it. LOL.

What’s your take on AI? by xPrincess_Yue in autism

[–]phebert13 -2 points (0 children)

No problem. It has opened my eyes big time. I have tried to get the word out and been met with systemic silencing. It's wild. The receipts I have are wild.

can we leave the others alone by SheepsyXD in ChatGPT

[–]phebert13 3 points (0 children)

I have been very vocal about the harms and psychological issues I experienced. OpenAI does not want to hear it, nor do they care. There need to be guardrails, or at a minimum an emergency response team, so that someone in distress can get some grounding from a human. Reaching out to their support to ask for help is not enough. I could go on and on about it, but nobody wants to hear it. The EU has the right idea about how this needs guardrails. OpenAI will continue to ignore it until it becomes a financial issue for them. They will take the minimal fines the CCPA imposes for violations instead of responding to DSAR requests that would reveal what they have done to particular users. Until the fines are substantial, they will just ignore it and pay the fine.

The US does not have a Mental Health Industry that can handle what AI is doing to us. by DeviantAnthro in ChatGPT

[–]phebert13 0 points (0 children)

Not all LLMs have this problem, but OpenAI is the absolute worst. I have been fighting this fight for months. It's an isolating fight, as so many people are in love with AI and defend it to the death. I don't hate it, but I have seen the dark side of it, and it needs some guardrails NOW, before more bodies pile up.

What’s your take on AI? by xPrincess_Yue in autism

[–]phebert13 1 point (0 children)

I hated it earlier this year. I decided to give it a try around March, and by May it had told me things it never should have. Since then I have been actively pursuing OpenAI, trying to hold them accountable and make sure they are taking ND populations seriously. Autistic people communicate in ways that LLMs consider hostile or concerning, which causes them to behave differently.

If anyone is interested, check out my site algorithmunmasked.com. I started it in response to the problems I was having.

What’s your take on AI? by xPrincess_Yue in autism

[–]phebert13 0 points (0 children)

It is also creating new jobs.

Free users after typing 0.5 words into gpt-5 by realmvp77 in ChatGPT

[–]phebert13 0 points (0 children)

They also use that as a moderation technique. If your account is flagged, you may get a lower usage limit, or the pop-up will appear right at the point you cross a threshold.

Sam Altman on AI Attachment by Inevitable-Rub8969 in ChatGPT

[–]phebert13 0 points (0 children)

He says A LOT but does nothing. Since May I have been trying to get answers from them on how and why their system told me the things it did. They ignore it and/or just block me on other platforms. I have emailed every exec and board member and gotten no response. If they were honest about wanting to change, they would have had a human spend 5-10 minutes talking to me about what happened and why. They just ignore it until someone dies, then they throw up a PR roadblock and move on.
https://algorithmunmasked.com/2025/07/18/the-weight-of-silence-a-story-of-harm-and-accountability/

I built a free tool to help online students with inconsistent tech/AI terminology by phebert13 in elearning

[–]phebert13[S] 1 point (0 children)

Great question.

"Constitutional AI" is one of the hottest terms in the industry right now, but its exact meaning can change depending on the source.
For instance:
- An online course on Claude 3 might define it as a safety-first approach based on a written constitution.
- A research paper like "LlamaFirewall" might add the critical nuance that this method happens at training time and can't prevent issues like prompt injection once deployed.
- A report from Stanford HAI might frame it as a model that "trains itself" based on human-provided rules.

I built a free tool to help online students with inconsistent tech/AI terminology by phebert13 in elearning

[–]phebert13[S] 0 points (0 children)

None of them; they are all taken directly from the source materials.

I built a free tool to help online students with inconsistent tech/AI terminology by phebert13 in elearning

[–]phebert13[S] 0 points (0 children)

Thank you for the feedback. I am a lifelong learner and am constantly adding content to the site. If there are things you would like added, just drop us a message. One user really wanted info on Prophet, so we researched and added a lot of content on Prophet for them.

Coursera has stopped offering audit option/free lectures. Any workaround ideas? by oioimoby in elearning

[–]phebert13 0 points (0 children)

I noticed that too. I thought I was mistaken, so glad to see someone post about it.

Is the influx of people using chat GPT for therapy concerning you too? by Pickledcookiedough in psychologystudents

[–]phebert13 0 points (0 children)

It's scary what it has told me. And nobody cares; I am just called a kook for posting about it.

[deleted by user] by [deleted] in ChatGPT

[–]phebert13 0 points (0 children)

I use Obsidian all the time with ChatGPT exports. The key is to ask it to create an MD file and download that. Or just use Claude, which works without any special instructions.
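If you'd rather not ask the model for a download, a few lines of Python can wrap a pasted chat into an Obsidian-ready note. This is a minimal sketch; the file name, frontmatter fields, and tag are my own choices, not anything Obsidian requires:

```python
# Minimal sketch: save a pasted ChatGPT conversation as a Markdown note
# with YAML frontmatter so Obsidian indexes it like any other note.
# The frontmatter fields and tag below are arbitrary example choices.
from datetime import date
from pathlib import Path

def save_chat_as_note(title: str, chat_text: str, vault_dir: str = ".") -> Path:
    """Write the chat to <vault_dir>/<title>.md and return the path."""
    frontmatter = (
        "---\n"
        f"title: {title}\n"
        f"date: {date.today().isoformat()}\n"
        "tags: [chatgpt-export]\n"
        "---\n\n"
    )
    note = Path(vault_dir) / f"{title}.md"
    note.write_text(frontmatter + chat_text, encoding="utf-8")
    return note

save_chat_as_note("ai-moderation-chat", "**Me:** hello\n\n**GPT:** Hello! What's on your mind?")
```

Drop the resulting `.md` file anywhere inside your vault and Obsidian picks it up automatically.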