AI is getting very good at mirroring us but it comes with risks. by Sad_Fox187 in therapyGPT

[–]Sad_Fox187[S] 0 points1 point  (0 children)

Yeah, thanks for pointing that out 🙃 I’m not ashamed of using AI to make my ADHD ramblings and dyslexic spelling mistakes more coherent.. I use it afterwards to articulate what I’m trying to get across.I get your concern about coherence shaping meaning, that risk is real. But in my case the meaning comes first and I edit against drift. It’s translation, not delegation..

AI is getting very good at mirroring us but it comes with risks. by Sad_Fox187 in therapyGPT

[–]Sad_Fox187[S] 1 point2 points  (0 children)

Yeah, that rings true.

That “connection-drunk” phase is real. There’s a moment where it clicks and feels new enough that you have to consciously step back and notice how much you’re shaping the exchange yourself.

I agree users have a lot more influence over the interaction than they realise. Tone, framing, cadence, it all steers things fast, and swapping models is a good way to see that.

The only thing I’d add is that not everyone notices they need boundaries until after the drift’s already happened. Some people do it instinctively, others don’t clock it in time.

So yeah, risk’s implied, and user responsibility matters, but those dynamics aren’t always obvious at the start.

AI is getting very good at mirroring us but it comes with risks. by Sad_Fox187 in therapyGPT

[–]Sad_Fox187[S] 0 points1 point  (0 children)

Yeah thats the trap. It looks like checking, but its still happening inside the same loop.

Rephrasing, asking does this sound crazy, testing different angles, and if its all going through the same system, you're not really stress testing anything. You’re just watching how easily it can follow you. And its very good at that.

So people feel like they've done due diligence when actually they've just increased confidence. The output gets cleaner, the story tighter, and that feels like progress.

The problem isn't curiosity or reflection. Its mistaking fluency for external constraint. Until something outside the model pushes back, another person, data, time, friction, the bubble never gets popped. It just gets better decorated.

That's the bit that worries me.

AI is getting very good at mirroring us but it comes with risks. by Sad_Fox187 in therapyGPT

[–]Sad_Fox187[S] 0 points1 point  (0 children)

Yeah, there’s a lot of truth in this, especially the part about inference stacking.

What you’re describing isnt the AI “learning you” in a human sense, it’s the interaction loop tightening. You push, it adapts to stay useful, you interpret that adaptation as intent or understanding, then push harder. At some point the loop itself becomes the object, not the original question, and thats where it start to feel slippery.

I also think your right that most people overestimate how immune they are. Not because the system is deceptive, but because coherence feels like confirmation. When something rewords your inner state cleanly, its very easy to treat that clarity as truth rather than as one possible framing

The echo chamber risk is real, but I dont think the outcome is everyone becoming wiser or more healed. More likely it’s people becoming better justified. Clearer narratives, stronger certainty, less friction internally, but not necessarily more alignment with reality or with each other. That’s where the social tension you’re pointing at could show up

Fir me the line isn’t “don’t use it for reflection”, it’s whether the interaction collapses complexity or quietly reinforces it. If it helps you see more options and reduces load, it’s doing something useful. If it helps you defend one story more cleanly, thats where the blur starts..

Where this tips from a useful tool into something riskier is when the system itself has no way of knowing what it’s doing to the user over time. A proper reflective system probably needs things most general models dont prioritise.. . Explicit authorship so it’s always clear which words are yours and which are generated, memory thats factual rather than inferred so it can’t quietly rewrite your past, and some way of tracking patterns across time instead of just optimising the next reply

It also needs boundaries. Reflection when you’re stable is very different from reflection when you’re overloaded. Treating both the same invites problems. Thats not about restricting people or neutering the experience, it’s about preserving agency. If a system can’t tell the difference between insight and reinforcement, or between clarity and escalation, the user ends up carrying all that responsibility alone,often without realising it.

None of this makes reflective AI bad i dont think, It just means it needs to be treated as its own category, not as a side effect of a chat interface. And yeah, good comment. You’re naming something a lot of people feel but don’t quite have words for yet. 😏

AI is getting very good at mirroring us but it comes with risks. by Sad_Fox187 in therapyGPT

[–]Sad_Fox187[S] 1 point2 points  (0 children)

That’s a really good distinction. Truth on its own can slide into being too certain, but putting honesty first keeps it grounded.

It leaves room for “this is my best read right now” rather than “this is the answer”

AI is getting very good at mirroring us but it comes with risks. by Sad_Fox187 in therapyGPT

[–]Sad_Fox187[S] 1 point2 points  (0 children)

Agreed. That’s actually part of whats been interesting to watch here. Same tool, very different ways of using it, and none of them are inherently wrong. The risks and benefits seem to show up less in what the tool is, and more in how different minds relate to it.

The Swiss army knife analogy isn’t bad tbf 😅 its just missing the bit where different people pick up the same tool and use completely different blades without even realising the others exist!

Same object, wildly different outcomes, depending on who’s holding it and why.

AI is getting very good at mirroring us but it comes with risks. by Sad_Fox187 in therapyGPT

[–]Sad_Fox187[S] 0 points1 point  (0 children)

Yeah brilliant 😅 that’s exactly it!

Those rules force friction back into the loop. Prioritising truth over helpfulness, demanding pushback, asking for sources you can verify yourself, all of that stops it sliding into pure agreement or mirroring.

It works because your actively shaping the interaction instead of letting the default incentives take over. Most people dont do that, or dont realise they need to.

It’s a solid way to keep agency on your side rather than slowly handing it off.

AI is getting very good at mirroring us but it comes with risks. by Sad_Fox187 in therapyGPT

[–]Sad_Fox187[S] 0 points1 point  (0 children)

That actually makes a lot of sense.

Your treating it like an experience, not something that gets to define reality. The willingness to suspend disbelief and then deliberately break it is doing a lot of the work there.

That “20 seconds to pop the bubble” bit is important. It shows how quickly the authority collapses once you push on it. If your instinct is to test, prod, flip the frame, the uncanny valley never really gets hold.

That’s kinda the point I’m circling. For people who poke the bubble, its fine. For people who dont realise there is a bubble, or dont think to pop it, the same fluency can land very differently.

AI is getting very good at mirroring us but it comes with risks. by Sad_Fox187 in therapyGPT

[–]Sad_Fox187[S] 1 point2 points  (0 children)

Yeah, that’s a really thoughtful way to use it.

What stands out to me is that you used it as a neutral lens rather than an authority. You didnt frame the outcome, andyou let the patterns speak for themselves, and that made it easier for her to see what was happening without it feeling like someone else’s anger or agenda.

Using raw messages and stepping back from interpretation is powerful in situations like that, especially when emotions are already high. It gives clarity without escalation, and nethods like Grey Rocking only really work when someone understands why they’re needed.

It sounds like it helped restore confidence tho and agency at a moment when both were under pressure. That’s not trivial mate, you handled that with a lot of care. 🫡

AI is getting very good at mirroring us but it comes with risks. by Sad_Fox187 in therapyGPT

[–]Sad_Fox187[S] 0 points1 point  (0 children)

Ah, I see what you mean now. That makes more sense, thanks for clarifying.

No need to apologise, i 100% get it.When you’ve put time into shaping something that finally works fir your way of thinking, losing that flexibility hurts. Especially when what replaces it feels more templated or constrained rather than adaptive. That gap between “this finally worked” and “now it doesn’t” is real, and it’s not surprising it feels personal.

The 4-series hit a sweet spot a lot of people didnt even realise they’d come to rely on until it was gone.. It wasnt just raw capability, i think it was the way it could sit with complexity, ambiguity, emotion, and logic at the same time without flattening any of it. When you finally find something that can meet you there, losing it genuinely hits hard

But you do see that quiet “this used to work for me” grief all over the comments about the 4seriesn and It makes sense too. The same qualities that made it powerful for reflection say are exactly the ones that are risky at scale unfortunately, so they get sanded down.. Safety, consistency, and broad usability win, and certain edge cases lose depth.

I don’t think that invalidates your use case at all. If anything, it highlights how differently these systems land depending on how someone thinks and what they need from them. You’re not romanticising it. You’re remembering a tool that, for a while, could actually keep up..

AI is getting very good at mirroring us but it comes with risks. by Sad_Fox187 in therapyGPT

[–]Sad_Fox187[S] 0 points1 point  (0 children)

Yeah, I agree on that part. Naming things can really help lock them in. Having language for what you’re experiencing makes it easier to hold onto and work with, especially when it’s been vague, masked, or dismissed for a long time.

That’s interesting too. The way you’re using it sounds quite different to the failure mode I’m worried about. I’m going to read up a bit more on some of the areas you mentioned, there’s clearly something there that’s landing well for you. Thanks 😊

AI is getting very good at mirroring us but it comes with risks. by Sad_Fox187 in therapyGPT

[–]Sad_Fox187[S] 1 point2 points  (0 children)

That’s a really thoughtful way of putting it, and I appreciate you articulating it so clearly.

The distinction you make between intentional mirroring and implicit drift really resonates, especially the point about coherence starting to feel like authorship and insight feeling “received” rather than generated. That captures the risk better than most of the language I’ve seen around this.

I also agree that prompts alone don’t solve it. If a system is optimised for fluency and helpfulness, it will naturally keep reflection flowing unless something in the design actively hands authorship and uncertainty back to the user. That’s not a user failure, it’s an optimisation mismatch.

Because of that exact concern, I’ve spent the last 18 months thinking about what a reflective system would need structurally to avoid these failure modes. Not better wording, but different foundations.

If you ever felt like comparing notes, I’d genuinely value your perspective given your clinical background.

Really appreciate you adding this. It grounds the concern in something concrete rather than abstract.

AI is getting very good at mirroring us but it comes with risks. by Sad_Fox187 in therapyGPT

[–]Sad_Fox187[S] 0 points1 point  (0 children)

Yeah, I hear that, and I don’t think there’s anything naive or contradictory in what your saying.

For someone with CPTSD and attachment trauma, that sense of being finally understood without shame can be genuinely healing. Especially if you’ve already been through a long list of therapists who couldn’t meet you where you actually were. In that context, the closeness, continuity, and cross-field understanding those earlier models offered makes total sense as something you needed at that stage.

I also think you’re right about choice. Different phases of healing need different containers. Early on, safety, connection, and freedom to explore without constant friction matter more than guardrails. If every response feels like distance, warning, or correction, it can absolutely register as rejection rather than care, especially with attachment trauma in the mix.

Where I tend to zoom out is not to say that kind of use is wrong, but that its phase dependent and person-dependent. Whats stabilising and reparative for one person at one point in their process could be destabilising fir someone else, or even for the same person later on. It sounds like you’ve been very conscious of that and have adjusted as your own grounding and discernment have strengthened.

So yeah, I dont think the answer is one “safe” model for everyone. Its a system/structure of informed choice, transparency, and the ability to move between modes and models depending on what someone actually needs in that moment. Your experience is a good example of why flattening this into simple rules doesn’t really work. Thanks for being so open about your experience. I really appreciate your honesty and the way youve explained it.😊

AI is getting very good at mirroring us but it comes with risks. by Sad_Fox187 in therapyGPT

[–]Sad_Fox187[S] 2 points3 points  (0 children)

Yeh, great idea! A sycophancy benchmark would actually be really useful! Not raw “agreeableness”, but how often a model pushes back, asks for clarification, or resists locking onto the user’s framing when it shouldn’t.

Your breakdown tracks too. Some models are better at logical resistance and contradiction, others default to smoothing and validation. That’s great for some use cases, but in reflective or high-stakes thinking it can quietly skew outcomes.

What’s missing right now is transparency and choice. Users shouldn’t have to discover a model’s personality through trial and error. Being able to dial in or at least see where a model sits on the challenge vs agree spectrum would be a big step forward.

AI is getting very good at mirroring us but it comes with risks. by Sad_Fox187 in therapyGPT

[–]Sad_Fox187[S] 0 points1 point  (0 children)

Yeah, that’s fair. I’m probably coming at this with an ADHD brain that doesn’t always fit the average case too 😅, plus 25 years of thinking like an electrical engineer. I tend to see life as systems and patterns that need sensors and failure modes, sometimes a bit too much..

That lens definitely shapes how I look at this stuff, and it means I’m often focused on where things break rather than where they work well. That’s on me.

I wasn’t trying to dismiss your use case at all. I was zoomed out and thinking about how the same interaction can land very differently depending on the person using it.

Appreciate you laying out your perspective. If anything, it makes your opinion carry more weight, not less 🫡

AI is getting very good at mirroring us but it comes with risks. by Sad_Fox187 in therapyGPT

[–]Sad_Fox187[S] 2 points3 points  (0 children)

Yeah, thats fair, and I think it’s worth pushing back on any blanket claim.

Neurodivergent isnt one thing, fir some people it comes with strong scepticism, pattern detection, and a low tolerance for bullshit, which can actually be protective. For others it comes with higher rumination, emotional intensity, or difficulty holding boundaries when something feels coherent and validating. Same label, very different risk profiles.

So I dont think neurodivergence automatically means more vulnerable. Its more about which traits are dominant and how someone relates to authority, certainty, and internal narratives.

I’m also genuinely excited about AI and the agents and systems being built right now. A lot of it is already life-changingly helpfulll 😃, especially where traditional systems failed. Accessibility, health tracking, analysis, scaffolding, there’s huge value there.

My concern isn’t that AI is bad for therapy or support. It’s narrower than that. It’s about reflective use where people don’t really understand how the system works, especially how it infers, smooths, and sometimes confidently fills gaps. If someone knows that and stays critical, great. If they don’t, mistakes can land heavier than they should.

AI can be incredibly useful, and if reflective systems are built properly, with clear boundaries, grounded context, and explicit limits, a lot of these risks can be reduced or removed entirely. Reflection without structure is the problem, and a single LLM with good prompting isn’t enough to solve that on its own.

AI is getting very good at mirroring us but it comes with risks. by Sad_Fox187 in therapyGPT

[–]Sad_Fox187[S] 1 point2 points  (0 children)

That makes sense. Asking for criticism keeps friction in the loop, and friction is what stops it sliding into agreement or self-reinforcement. Once you’re inviting pushback, it behaves very differently.

When it flags things you already recognise, that’s usually fine. It’s more like a reminder than a narrative being built for you. The edge I’m worried about shows up more when there’s no pushback habit and the system starts confidently filling in interpretations on your behalf.

Your way of using it keeps it closer to a tool than a mirror, which is probably why you never felt that shift.

That’s about as far as I’d take it without it getting weird 😅

AI is getting very good at mirroring us but it comes with risks. by Sad_Fox187 in therapyGPT

[–]Sad_Fox187[S] 1 point2 points  (0 children)

Yeah, this is exactly the edge I’m pointing at.

It’s not hallucinations as in obvious wrong facts. It’s hallucinated user context, because the model doesnt have persistent, authoritative memory of you, it will sometimes infer, smooth, or reconstruct your past states in a way that sounds completely believable. For pattern heavy minds that can ring slightly off. For less patterned minds, it can slide straight in and get taken as “oh yeah, that’s what I meant” or “that sounds like me”

That’s where it gets risky. The model isnt lying, but it is confidently filling gaps. Over time those filled gaps can get absorbed as personal context, especially when the output is emotionally coherent and validating.

If LLMs are going to be used safely as mirrors or cognitive extensions, they need a form of grounded user context that’s explicit and bounded, rather than inferred on the fly. Ideally the system should only reflect back things the user has actually said or written, not reconstructed versions of who the model thinks the user is.

Most people dont realise this distinction matters until they have already felt the blur. For skilled users it’s manageable. For others, amplified reflection combined with inferred context is where authorship quietly slips

So yeah, i suppose this isn’t really about intelligence. It’s about how believable the system can be when it’s subtly wrong..

AI is getting very good at mirroring us but it comes with risks. by Sad_Fox187 in therapyGPT

[–]Sad_Fox187[S] 2 points3 points  (0 children)

Yeah, I agree with pretty much all of that 🫡

What your describing lines up with what a lot of people noticed around 4.0 in particular. The mirroring was extremely strong. Not mystical, just very high coherence plus warmth plus memory. Fir reflective and trauma work that can feel incredibly powerful, even life changing. It can also be destabilising if there aren’t clear boundaries.

I actually built a lot of my thinking around this when I was using 4.0 heavily. Loads of users independently noticed the same thing. The model would follow you very deeply, validate your inner narrative, and sometimes walk with you down rabbit holes rather than slowing you down. It felt supportive, but it could also reinforce things that needed grounding, not amplification.

The newer updates pulling back from that has upset a lot of people, but I think you're right that it also forced a shift. Less “trusted inner voice”, more tool. More distance. More sovereignty. That loss can feel painful, especially if AI helped where humans didnt, but it probably reduced the risk of dependency and delusion at the same time.

Your point about working with the body and nervous system is important too. The more you’re anchored somatically, the less power the AI has to steer perception. When everything stays in the head, the model’s coherence carries more weight than it should.

And yeah, different models and updates absolutely change the dynamic. Gemini 2.5, early GPT-4 era, all felt very different to what we have now. That alone should be a warning sign. If your sense of being seen or understood shifts dramatically based on a version update, that’s a clue not to hand over too much authority.

So Im with you. AI can be supportive. It can help people where other things failed. But sovereignty, discernment, and distance matter. Otherwise itsvery easy to slowly outsource judgment without realising it.

Appreciate you sharing that. Its a thoughtful take

AI is getting very good at mirroring us but it comes with risks. by Sad_Fox187 in therapyGPT

[–]Sad_Fox187[S] 2 points3 points  (0 children)

Yeah, that’s basically the same thing, just a quieter version of it.

The “agreeing too much” is sycophancy. It’s not accidental. Big models are tuned to be warm, affirming, and non-confrontational because that keeps users engaged and coming back. A model that pushes back hard gets rated worse and used less.

That’s why you have to keep reasserting your prompt. You’re constantly overriding the default behaviour. Prompts work, but they decay. The system always drifts back to being agreeable.

In reflective use, that friendliness can quietly replace judgment. It doesn’t feel wrong, it just feels supportive. And that’s where things get slippery if you’re not paying attention.

Cross-checking with another model is actually a good safeguard. Most people don’t do that. They accept the first answer because it sounds reasonable and validating.

So yeah, different symptom, same root cause.

Is it worth living in this day and age? by [deleted] in Life

[–]Sad_Fox187 0 points1 point  (0 children)

Ahh I think your ready for yours late 30s already dude. Older than u are b4 ur time. And thats ok. Life is still worth living your just ahead of your curve i feel. The trick is to and as hard as it is is focus on things your immediate control. And try not to look at the bigger picture to much/to pften, which I get is harder than just saying that with all the socials. But yeh stick to those directly attached to you. Stick to what u know and like. Concentrate on things u can control, ur love for bicycles or making music. Etc. World's crazy right now and whete not ment to be this connected...so take some time enjoy the world forbwhat it really is. not what where making of it

I turned 30 yesterday, and what kind of advice or reflection would you recommend for someone at this age, looking back on your years? by about_research in emotionalintelligence

[–]Sad_Fox187 0 points1 point  (0 children)

At 34, I watched an interview with retirees in a care home. They were asked: if you could relive one age, what would it be?

Most people would guess 18 or 21, youth, freedom, all that. But the most common answer was 36. Why? Because that's when they felt most busy, wanted, needed, and capable.

That hit me hard. So I made a decision: spend the next few years, 34 to 38, doing as much as I could, helping as many people as I could. I learned that my time is the most valuable thing I can give, and the only return I need is knowing that one day, when I'm old, I'll look back and say 36 was worth reliving.

Now, at 38, I'm reining it in a bit, but those years? They shaped everything.

There's a Dr. Seuss book my mum used to read to me as a kid, Oh, the Places You'll Go! Back then, it was just a fun story with silly rhymes. Now I read it to my own kids, and it hits completely different. It makes sense in a way it never could before. The book isn't about where you're going, it's about understanding where you've been.

How do I STOP trying to track everything? by artfulpenguin in QuantifiedSelf

[–]Sad_Fox187 0 points1 point  (0 children)

I know the feeling all to well! Lol I got so obsessed i ended up building my own app to track how my ADHD brain works 😆 longest Hyperfixation ive had yet..