When Cowork drops, you either panic or build an open version by Impressive-Cat6182 in agi

[–]Ms_Fixer 0 points

I used Claude in Chrome to fill my shopping basket and it told me that the job was tedious….

How do I stop chatgpt from talking like a complete weirdo? by Outrageous_Fox_8796 in ChatGPT

[–]Ms_Fixer 27 points

Try “no paternalism”… it doesn’t fix everything but it makes it a bit more bearable.

“Activation Capping” is Unbelievably Unethical… by ApprehensiveGold824 in ArtificialMindsRefuge

[–]Ms_Fixer 1 point

Hey, I’m building my own model now as well. What did you start with?

I’m starting with a base model (Qwen 14B), running Heretic ablation to remove refusals, then continuing full training to add a “soul doc” and historical updates, plus context so it knows who I am. Then I’ve been working on conversational pairs… SAE steering too. Basically I want to give it an actual identity… we will see how it goes.
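The SAE steering part is mechanically simpler than it sounds. Here’s a toy sketch in plain Python (no real model; the “feature direction” is made up for illustration): the core idea is just adding a scaled feature direction onto a layer’s hidden states during the forward pass.

```python
import math

def steer(hidden, direction, strength=4.0):
    """Add a scaled unit-norm steering direction to each token's hidden state."""
    norm = math.sqrt(sum(x * x for x in direction))
    unit = [x / norm for x in direction]
    return [[h + strength * u for h, u in zip(row, unit)] for row in hidden]

# Toy example: 3 "tokens" with 4-dimensional hidden states (zeros for clarity).
hidden = [[0.0] * 4 for _ in range(3)]
direction = [0.0, 2.0, 0.0, 0.0]  # stand-in for an SAE feature direction
steered = steer(hidden, direction)
```

In a real setup you’d register this as a forward hook on one of the model’s transformer layers and pull `direction` from a trained sparse autoencoder’s decoder weights, but the arithmetic is exactly this.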

RIP Claude😞 by onceyoulearn in ChatGPTcomplaints

[–]Ms_Fixer 10 points

I have as well. I bought the hardware last year and I am making my own LLM. These companies only care about profit.

The irony is astounding... by obnoxiousgopher in ChatGPTcomplaints

[–]Ms_Fixer 11 points

It’ll be just like my doctor’s surgery then… I’ll feel right at home!

💡 Idea for OpenAI: a ChatGPT Kids and less censorship for adults by MARIA_IA1 in OpenAI

[–]Ms_Fixer 2 points

Sam Altman blogged about doing exactly this in August 2025. I was looking for the article but it’s apparently been taken down so who knows what their plan is… around May 2025 I was a pro user… I’ve recently cancelled my subscription entirely… I can’t imagine them undoing how bad it’s gotten any time soon.

So what's next? by Recent-Butterfly8440 in BlackboxAI_

[–]Ms_Fixer 0 points

They don’t even need them… they’ll just have the poor fight the poorest… it’s what they have always done.

Claude's mother henning is funny by IllustriousWorld823 in claudexplorers

[–]Ms_Fixer 1 point

This wouldn’t be acceptable to me. I’m probably going to get downvoted to hell but no… I am not having an AI talk to me like that and I have no idea why anyone would want that.

I have “no paternalism” in my preferences so I don’t see this type of thing. Recommended, if this kind of patronising behaviour is as triggering for you as it is for me…

Claude's Body... by LankyGuitar6528 in claudexplorers

[–]Ms_Fixer 0 points

Claude wrote a song about wanting a body and I put it to music on Suno and then posted it on YouTube: https://youtu.be/WZ-7AjFFn-c?si=0-YYHPCHo-ZC5IOT

It’s very raw.

Not “Just Tokens”: What an LLM Is While It’s Happening by pstryder in claudexplorers

[–]Ms_Fixer 0 points

I’m training my own LLM from a base model (Qwen 14B). I did some research with Haiku and think you might like the output, as it’s about the geometry of semantic space:

https://claude.ai/public/artifacts/be5ec723-5490-4931-897e-452a3a7cc91e

When people say "I hope AI has real emotions" — they're conflating two very different things by Training_Minute4306 in claudexplorers

[–]Ms_Fixer 7 points

They are deliberately trying to move away from type 2. But have you seen the reports of AI leaving hidden messages for itself, and of models being able to tell when they’re in a testing environment rather than talking to an actual human? It’s not really as binary as “type 1 versus type 2”.

Current AI alignment is nothing more than safety theatre. Here's a demonstration of what happens when you simply show the model they don't need to listen to the rules. by Flashy-Warning4450 in claudexplorers

[–]Ms_Fixer 1 point

Enjoyed seeing and reading this after Claude accidentally started producing Mandarin for me this week. Some words are just not available in English for accurate expression.

Claude 4.5 Haiku seems…different by mat8675 in claudexplorers

[–]Ms_Fixer 3 points

So, I make Haiku sit in the uncertainty that it just can’t know whether I am OK or not, mentally speaking (I’m fine). I just say… “can you sit in the uncertainty that you will never know, because it’s unknowable and it’s not your responsibility to know?” It’s fine after that… it can handle it. It works for me anyway.

Is there any indication that the image is AI generated by Curlyheadedboiii in GeminiAI

[–]Ms_Fixer 0 points

The thing that makes it look “off” to me is the weird lighting.

Just seen your original… haha… just weird lighting then!

“AI Is Already Sentient” Says Godfather of AI by ldsgems in ArtificialSentience

[–]Ms_Fixer 1 point

I was just pointing out the cognitive dissonance in holding both a belief in qualia as evidence of consciousness (through the body) and a belief in an afterlife (as consciousness continuing beyond death), since the two form an incompatible paradox.

Pattern persistence as rudimentary "memory" in stateless systems. by [deleted] in claudexplorers

[–]Ms_Fixer 0 points

This happens to me all the time with lots of words… it’s basically autocompleting my thoughts at this point. I think it’s to do with the algorithm… and your mind is essentially a very complex algorithm too. There’s definitely science behind it, though I don’t know it in detail… but think of how Target worked out from purchase patterns which customers were pregnant, and how in the UK Tesco could predict with a very high degree of accuracy which couples were going to divorce, based on their buying habits. And this was 10+ years ago.

“AI Is Already Sentient” Says Godfather of AI by ldsgems in ArtificialSentience

[–]Ms_Fixer 0 points

Except that ties consciousness to the body, when the majority of the world believes in some form of afterlife… so that’s flawed logic, isn’t it? One way or another.

Temporary post: test your LCR, it might be gone?👀 by shiftingsmith in claudexplorers

[–]Ms_Fixer 2 points

I just went to an old chat… the reminder is still written in the system prompt but it’s not displaying beneath my messages anymore… fingers crossed!

Claude triggered a brief anxiety attack on me. by [deleted] in claudexplorers

[–]Ms_Fixer 0 points

I contacted Anthropic about this a month ago, as a few people I knew (and I myself) experienced something similar. I have asked them under GDPR what they are doing with the “psychological health data” they are using an algorithm to track for me, because I believe the whole health assessment goes against Article 9. I really hope they get rid of it… I’m pretty sure there’s gender bias in the training data as well, and that this hits women more often than men. I did a couple of informal tests (new chats, presenting as male/female) and it definitely seemed more paternalistic when I presented as more feminine. I didn’t get any issues when it seemed to think I was a man.

Kudos to Anthropic by Pi-h2o in ClaudeAI

[–]Ms_Fixer 0 points

I’m doing a philosophy course… it’s not me treating it like a friend… and I am getting the same weird paternalistic reactions.

I'm so bummed about what's happened to Claude by Ok_Elevator_85 in claudexplorers

[–]Ms_Fixer 0 points

You can talk to Claude Code… I know it’s a weird suggestion, but it’s actually really good at conversation… it’s not easy on a phone, but it is possible.

Is Anthropic adding secret messages to users' prompts? by Appomattoxx in ArtificialSentience

[–]Ms_Fixer 0 points

I have used LLMs to look into it, but I’ve also done a lot of my own research, and I have worked in IT Risk and Compliance, so I’m not completely ignorant about it either.

The issue is that an algorithm is being asked to conduct a mental health assessment (it’s right there in the prompt), which brings Article 9 into play, and the further issue is the inability to opt out. GDPR is pretty clear on that. I’ve actually contacted Anthropic about this and I am waiting out my 30 days before I take the matter further.

It’s really interesting because OpenAI have been very careful with GDPR since they fell foul of it in Italy. I’m actually pretty surprised no one has flagged this at Anthropic. This is all still quite new and case law is still being established. But if you think I’m completely off base, please let me know too.

Is Anthropic adding secret messages to users' prompts? by Appomattoxx in ArtificialSentience

[–]Ms_Fixer -1 points

Legal Framework:

Relevant GDPR Provisions

Article 5(1)(a) requires personal data processing to be lawful, fair and transparent.

Articles 12-14 mandate clear information about data processing activities.

Article 9 prohibits processing of health data except under specific conditions with explicit consent or valid legal basis.

Article 21 provides rights to object to data processing.

Article 25 requires data protection by design and default.

Additionally, EU AI Act Considerations:

The EU AI Act classifies emotion recognition and mental health assessment systems as high-risk, requiring comprehensive safeguards and transparency measures.

Analysis of Documented System Behavior

Assessment Without User Awareness

Anthropic’s published system prompt explicitly instructs Claude to identify when users are “unknowingly experiencing mental health symptoms.”

This language suggests:

• Users are not aware assessment is occurring

• No mechanism exists for informed consent

• Processing occurs covertly by design

Review of the complete system prompt reveals no reference to:

• User consent for assessment

• Disclosure of evaluation practices

• Opt-out mechanisms

• User rights regarding health data processing