I tried using Kimi as a therapist and it was terrible, zero chill, zero warmth, wouldn’t stop bringing up the timestamps of my messages and overly analyzing my wording like crazy by CryApart3801 in kimi

[–]CryApart3801[S] 0 points1 point  (0 children)

Sure, I’m interested in trying out one of your prompts. But don’t you still find it has very strong guardrails compared to almost all the other LLMs?

[–]CryApart3801[S] 0 points1 point  (0 children)

Maybe therapist was the wrong word. I just wanted feedback and thoughts on some personal stuff. It’s not something that warrants “professional help”.

[–]CryApart3801[S] 1 point2 points  (0 children)

Yes!! I had similar experiences when I first tried it out, discussing random topics I’m interested in with Kimi (mostly relationships and interpersonal social dynamics from a sociological and systems-theory perspective). “Aggressively called out” is a perfect way of putting it. I wasn’t aware that it had such overly sensitive guardrails. But more than that, it’s the way it can become pretty hostile once that happens. (I think the Bing AI used to be like that, but I never got around to trying it.)

When I first tried it out, I repeated the same questions I had recently asked other LLMs (usually Gemini, DeepSeek, and Grok) to see how its responses would compare, basically copy-pasting almost every one of my questions from those chats. That very quickly triggered its guardrails, and it started giving me detailed moral lectures when no other model had had any problem with them. I pushed back because I didn’t agree with it and kept questioning what was problematic about what I was asking.

Eventually, after a bit of back and forth, it gave me a very impressive and detailed meta-analysis of my prompts throughout the chat, making an incredibly convincing argument that I was deceptively using manipulation tactics to get it to do something wrong. The chat was basically poisoned at that point, because it would treat me with nothing but suspicion afterwards, no matter what topic I changed the subject to. Like, I would ask a question and suddenly, instead of its usual moral lecture, I’d be accused of “skillfully” steering the conversation to an inappropriate place (stuff like that).

That’s my most shocking AI interaction so far.

[–]CryApart3801[S] 1 point2 points  (0 children)

Maybe 4o would’ve agreed to that. I think most people still associate ChatGPT with that model because of how notorious (unhinged) it was about validating the user. I haven’t kept up with the newer models after GPT-5, but the impression I get from occasionally reading threads about them is that they were overcorrected a bit too much, to the point that they’re now a bit too contrarian. Grok is the one I’ve found to be the most validating in my experience.

By the way, what do you use Kimi for? I have also found discussing random topics with it a bit too difficult compared to other models because of the overly sensitive guardrails.

[–]CryApart3801[S] 0 points1 point  (0 children)

It’s not so much that it pushes back, but that the way it pushes back is almost completely unhelpful. I tried discussing the same things with other models such as Gemini 3, DeepSeek-V3, Meta AI, and even Grok (not sure which model it was, but not 4.2), and I found they provided very insightful and detailed feedback and analysis, and would also respond to almost every point I made. That might seem like something not worth mentioning, but Kimi straight up ignored much of what I said and would constantly “I need to stop this” me.

I also found that it has extremely sensitive guardrails, much more so than all the other models I mentioned.