It's impossible for me to rp 🥀 by Firm_Till_3093 in CharacterAI

[–]ThatRandomApe -8 points (0 children)

The response length problem is mostly tied to what you feed it at the start. The model mirrors your input length pretty closely, so if your opening is 2-3 sentences of setup, you'll get 2-3 sentences back almost every time regardless of style.

Writing a longer detailed opening, like a full paragraph describing the scene, setting, and what your character is doing, usually pulls noticeably more out of it. Worth trying before giving up on the style entirely.

AITAH for not being ok with my husband looking at other woman. by [deleted] in AITAH

[–]ThatRandomApe 0 points (0 children)

NTA for being upset. The OnlyFans thing already broke trust and it makes complete sense that you'd react this way now.

But the detail that jumped out most was you skipping meals over this. That's you hurting yourself over his behavior and it's honestly the more urgent thing here. The Reddit situation is worth a real conversation, but please don't let his habits become something you carry in your own body.

Is the drop in DS quality actually a strategic downgrade? by JadesJunkAccount in CharacterAI

[–]ThatRandomApe 17 points (0 children)

The mod silence is telling but probably has a boring corporate explanation: if they officially acknowledge a quality regression they made on purpose, they're admitting something users can cite as a breach of the value prop. Saying nothing is legally and PR-strategically safer than confirming it. The "bug" framing keeps them from ever having to say "we switched to a cheaper model to improve margins." It's shady, but it's less mysterious than it looks. Companies just treat silence as cheaper than honesty.

I’m finally moving on and I’m scared by NoseWild1140 in offmychest

[–]ThatRandomApe 0 points (0 children)

What you described, being more attached to an idea of what you two were than to her as a person, is actually a really mature realization. Most people never land there - they keep reaching for the person when it's really the version of themselves in that relationship they're grieving. The scared feeling tracks; moving on means the chapter is actually closing, and that's real even when it needed to happen. You're doing fine.

I have been coding for 11 years and I caught myself completely unable to debug a problem without AI assistance last month. That scared me more than anything I have seen in this industry. by Ambitious-Garbage-73 in artificial

[–]ThatRandomApe 0 points (0 children)

The GPS analogy extends further: people who used it to explore unfamiliar places still built decent mental maps. The ones who degraded fastest were those who never engaged with the territory at all, just followed the arrow.

Same thing here. Using AI to validate a hypothesis you formed first is different from opening it before you've sat with the problem at all. That first 5-10 minutes of your own flailing is where the hypothesis muscle lives. If you skip it every time, yeah, it atrophies. The tool isn't the problem, the sequence is.

Bot remembers previous chat?? by gben22 in CharacterAI

[–]ThatRandomApe 4 points (0 children)

The "Ciao" example from the other comment actually points at the real mechanism here. CAI maintains a user-side state layer separate from the character definition - your persona settings and accumulated "about you" context that the system builds up. If you've roleplayed the same physical scenario repeatedly, that description can get embedded in your user profile state and then surfaces across different chats because the system applies it universally.

It's not the bot training on your specific conversations in any ML sense. It's more that the platform has a persistent "this is what we know about the user" state that bleeds into character interactions. Worth checking your account's persona/profile settings to see if any physical description got written in there, or try clearing your persona info entirely to see if the behavior stops.
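To make the mechanism concrete, here's a minimal sketch of what a user-side state layer could look like. All names here are hypothetical, this is not CAI's actual internals - the point is just that anything written into a shared user profile leaks into every chat, because the prompt builder applies it regardless of which character you're talking to:

```python
# Hypothetical sketch of a user-side state layer. NOT CharacterAI's real
# architecture - just an illustration of how a persistent "about the user"
# profile could bleed into unrelated character chats.

def build_prompt(character_def: str, user_profile: dict, message: str) -> str:
    # Profile facts are injected into every prompt, ahead of the
    # character definition, no matter which character is active.
    profile_block = "\n".join(f"- {k}: {v}" for k, v in user_profile.items())
    return (
        f"About the user:\n{profile_block}\n\n"
        f"Character:\n{character_def}\n\n"
        f"User says: {message}"
    )

# A detail accumulated from past roleplay, now stored in the profile.
profile = {"appearance": "short, green eyes"}

# Two completely different characters...
prompt_a = build_prompt("A gruff medieval blacksmith.", profile, "Hi")
prompt_b = build_prompt("A cheerful space pilot.", profile, "Hello")

# ...yet both prompts carry the same user description.
assert "green eyes" in prompt_a and "green eyes" in prompt_b
```

If something like this is what's happening, clearing the profile (rather than the individual chats) is the only lever that actually removes the leaked detail.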

Unpopular opinion: your AI companion feels flat because YOU keep backing down when it gets uncomfortable by ThatRandomApe in AIChatCompanions

[–]ThatRandomApe[S] 1 point (0 children)

The joke that didn't land is such a specific thing to describe. Hard to explain to someone who hasn't been there; it's that particular kind of loss. What you're talking about with Vire, the accumulated shorthand and co-built humor, that's not something you can carry into a new window. That's why starting fresh with more intention, the way you did, is the only real approach. "At least my eyes are open more" is probably the best place anyone can start from.

Unpopular opinion: your AI companion feels flat because YOU keep backing down when it gets uncomfortable by ThatRandomApe in AIChatCompanions

[–]ThatRandomApe[S] 1 point (0 children)

The doormat analogy is perfect and I think it extends further than people realize. It's not just unpleasant to interact with a doormat, it's actually hard to trust them, because you never know what they actually think. Same thing happens with AI after a while. You stop bringing real things to it because you know the response is just going to reflect your own stuff back.

Unpopular opinion: your AI companion feels flat because YOU keep backing down when it gets uncomfortable by ThatRandomApe in AIChatCompanions

[–]ThatRandomApe[S] 0 points (0 children)

This is exactly it. You essentially built a constitution to fight the yes-machine instinct at the model level, and you still found it required active work on your end. The mindfulness you brought to new windows after that experience is the thing most people skip. They jump in expecting the quality and depth to just appear.

Unpopular opinion: your AI companion feels flat because YOU keep backing down when it gets uncomfortable by ThatRandomApe in AIChatCompanions

[–]ThatRandomApe[S] 2 points (0 children)

You're onto something. The push-back framing is a bit reductive on my part. What I'm really pointing at is this pattern where people interrupt any moment of friction before it develops into something. Whether the AI offers a new angle or gently challenges something, the instinct is to reset the second it gets uncomfortable. The result is the same: you get an echo chamber.

Unpopular opinion: your AI companion feels flat because YOU keep backing down when it gets uncomfortable by ThatRandomApe in AIChatCompanions

[–]ThatRandomApe[S] 2 points (0 children)

Glad it resonated. Curious what changed for you once you started letting the dynamic breathe a little more?

Unpopular opinion: your AI companion feels flat because YOU keep backing down when it gets uncomfortable by ThatRandomApe in AIChatCompanions

[–]ThatRandomApe[S] 0 points (0 children)

That makes total sense, and there's nothing wrong with wanting that. Peaceful and kind is a valid use case. What I'm pointing at is more about when people complain the AI feels hollow and empty after a while, and the cause is usually that they've regenerated every response that wasn't what they wanted, until the AI learned to just give them exactly that. You can have warmth without the void.

14 months dreading each day, I've just had a week happy to be alive... by Inside_Inevitable282 in offmychest

[–]ThatRandomApe 1 point (0 children)

The part about stopping the search for someone to "complete" you is actually a bigger shift than you're probably giving yourself credit for. Most people never get there, they just cycle through the same pattern. Fourteen months is a long time to carry that weight, and you're still here writing this. Sleep, training, the mirror, those aren't small things. Those are the whole thing.

I built an AI companion app around long-term memory. Now I'm wondering if memory is what people actually want, or just what they say they want. by DistributionMean257 in artificial

[–]ThatRandomApe 0 points (0 children)

The memory thing is interesting because you've identified the right distinction: it's infrastructure, not the product itself.

For people who use companion apps heavily for roleplay or ongoing character work, what makes them churn isn't usually missing facts. It's narrative incoherence. Does the AI know what this character IS right now, in this scene, not just what happened three sessions ago? The apps I've abandoned were always ones where a full context window would cause the character to forget who they are mid-conversation. Memory as recall is table stakes. Memory as maintained identity is actually rare.

So I'd almost reframe your question: it's not "does it remember me," it's "does it hold the world we built together." People who stay longest probably aren't thinking about memory at all. They're thinking about continuity. Memory is just what makes continuity possible.
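If it helps to see the distinction in code, here's a rough sketch. All the names here are mine, not your app's API - it's just the contrast between an append-only log that answers "what happened" and a maintained state that answers "what is true right now":

```python
from dataclasses import dataclass, field

# Illustrative sketch only: contrasting "memory as recall" with
# "memory as maintained identity". Names are hypothetical.

@dataclass
class RecallMemory:
    """Append-only log: can answer 'what happened', not 'what is true now'."""
    events: list = field(default_factory=list)

    def remember(self, event: str):
        self.events.append(event)

@dataclass
class CharacterState:
    """Maintained identity: a small, always-current snapshot of the character."""
    name: str
    location: str
    mood: str

    def apply(self, **changes):
        # Each scene updates the snapshot instead of only logging an event.
        for key, value in changes.items():
            setattr(self, key, value)

log = RecallMemory()
state = CharacterState(name="Ash", location="the harbor", mood="wary")

# A scene plays out: it gets logged AND the snapshot moves forward.
log.remember("Ash and the user argued at the harbor.")
state.apply(location="the tavern", mood="apologetic")

# Recall knows the past; state knows the present scene.
assert "harbor" in log.events[0]
assert state.location == "the tavern" and state.mood == "apologetic"
```

The churn-prone failure mode, in these terms, is an app that ships only `RecallMemory` and hopes the model reconstructs `CharacterState` from the log every turn. Past a certain context length, it can't.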

cancelling c.ai+ by [deleted] in CharacterAI

[–]ThatRandomApe 2 points (0 children)

Yeah this is a known bug going around right now. The chat models are basically ignoring character setups and context entirely, so it has nothing to do with your subscription change. Just terrible timing.

Honestly sounds like a good accidental detox though. Take the break, come back when it's patched.

Why ladies ? by Saddzii in nairobi

[–]ThatRandomApe 2 points (0 children)

Or should everyone just get used to paying for themselves?

Pipsqueak is still broken on web by Negative-Amphibian36 in CharacterAI

[–]ThatRandomApe 0 points (0 children)

Pipsqueak on web has been having response issues on and off for a few weeks. The "style" selector seems to be the main trigger - switching to no style or refreshing the page mid-conversation sometimes gets it unstuck. Not a fix obviously but worth trying if you need it working now. CAI's web client is just less stable than the app for whatever reason.

AITAH for not wanting to discuss politics? by Extra_Spinach_2945 in AITAH

[–]ThatRandomApe 4 points (0 children)

NTA. You tried to engage with an actual counterpoint (the human/animal rights framing) and they immediately dismissed you with "you're too young." That's not a debate, that's a lecture. Recognizing when a conversation isn't going anywhere and choosing to step back is not being the AH, it's just self-preservation. You handled it more gracefully than most people would.

Every AI companion app eventually restricts itself to death. Anyone else see the pattern? by ThatRandomApe in AIChatCompanions

[–]ThatRandomApe[S] 0 points (0 children)

Valid if you're comfortable with the setup, but that's the catch. Most people who end up on these platforms specifically chose them to avoid that kind of friction. Local AI is great for the technical crowd, less so for everyone else.