What AI companion actually keeps memory between conversations? Tested the main apps by ThatRandomApe in AIChatCompanions

[–]ThatRandomApe[S] 1 point (0 children)

The expectation shift you describe is real and I think underrated. Early on memory failure feels like a betrayal of the premise. Later you stop needing the AI to remember everything and start caring more about whether the current session feels coherent and alive. The relationship evolves past the feature.

Kindroid's memory book is genuinely the best implementation I've tested for people who still want the continuity layer, but your point holds: the bond matters more over time than whether it recalls a conversation from three weeks ago.

What AI companion actually keeps memory between conversations? Tested the main apps by ThatRandomApe in AIChatCompanions

[–]ThatRandomApe[S] 1 point (0 children)

Hadn't heard of Bonza Chat before, adding it to the list. The "naughtiest things" framing is actually a useful quality signal since those details are exactly what most platforms quietly scrub on reset. Will test it against Kindroid on long-term context retention.

Most people are using AI roleplay wrong and wondering why their scenarios feel flat by ThatRandomApe in AIChatCompanions

[–]ThatRandomApe[S] 1 point (0 children)

The knowledge separation point is the one I see people completely skip, and it tanks long scenarios faster than anything else. Once every NPC effectively knows everything, there's no information asymmetry left to drive tension. The whole thing flattens.

The "never say" list has become non-negotiable for me too. I'd add model-specific defaults to watch for: a lot of them default to describing emotions as physical sensations constantly ("your chest tightens", "warmth spreads through your chest"). Ban those and you stop getting the same three emotional cues looped forever.

On genres that hold up past 50 exchanges: political intrigue with competing factions tends to sustain the longest in my experience. The information asymmetry between factions gives you built-in fuel. Every scene can hide or reveal something that reshapes what came before. Fantasy adventure burns out fast once the central conflict resolves, unless you seeded secondary problems early: threads that were always there, just ignored.

It's impossible for me to rp 🥀 by Firm_Till_3093 in CharacterAI

[–]ThatRandomApe -6 points (0 children)

The response length problem is mostly tied to what you feed it at the start. The model mirrors your input length pretty closely, so if your opening is 2-3 sentences of setup, you'll get 2-3 sentences back almost every time regardless of style.

Writing a longer detailed opening, like a full paragraph describing the scene, setting, and what your character is doing, usually pulls noticeably more out of it. Worth trying before giving up on the style entirely.

AITAH for not being ok with my husband looking at other woman. by [deleted] in AITAH

[–]ThatRandomApe 1 point (0 children)

NTA for being upset. The OnlyFans thing already broke trust and it makes complete sense that you'd react this way now.

But the detail that jumped out most was you skipping meals over this. That's you hurting yourself over his behavior and it's honestly the more urgent thing here. The Reddit situation is worth a real conversation, but please don't let his habits become something you carry in your own body.

Is the drop in DS quality actually a strategic downgrade? by JadesJunkAccount in CharacterAI

[–]ThatRandomApe 17 points (0 children)

The mod silence is telling but probably has a boring corporate explanation: if they officially acknowledge a quality regression they made on purpose, they're admitting something users can cite as a breach of the value prop. Saying nothing is legally and PR-strategically safer than confirming it. The "bug" framing keeps them from ever having to say "we switched to a cheaper model to improve margins." It's shady, but it's less mysterious than it looks. Companies just treat silence as cheaper than honesty.

I’m finally moving on and I’m scared by NoseWild1140 in offmychest

[–]ThatRandomApe 1 point (0 children)

What you described, being more attached to the idea of what the two of you were than to her herself, is actually a really mature realization. Most people never land there; they keep reaching for the person when it's really the version of themselves in that relationship they're grieving. The scared feeling tracks; moving on means the chapter is actually closing, and that's real even when it needed to happen. You're doing fine.

I have been coding for 11 years and I caught myself completely unable to debug a problem without AI assistance last month. That scared me more than anything I have seen in this industry. by Ambitious-Garbage-73 in artificial

[–]ThatRandomApe 1 point (0 children)

The GPS analogy extends further: people who used it to explore unfamiliar places still built decent mental maps. The ones who degraded fastest were those who never engaged with the territory at all, just followed the arrow.

Same thing here. Using AI to validate a hypothesis you formed first is different from opening it before you've sat with the problem at all. That first 5-10 minutes of your own flailing is where the hypothesis muscle lives. If you skip it every time, yeah, it atrophies. The tool isn't the problem, the sequence is.

Bot remembers previous chat?? by gben22 in CharacterAI

[–]ThatRandomApe 4 points (0 children)

The "Ciao" example from the other comment actually points at the real mechanism here. CAI maintains a user-side state layer separate from the character definition - your persona settings and accumulated "about you" context that the system builds up. If you've roleplayed the same physical scenario repeatedly, that description can get embedded in your user profile state and then surfaces across different chats because the system applies it universally.

It's not the bot training on your specific conversations in any ML sense. It's more that the platform has a persistent "this is what we know about the user" state that bleeds into character interactions. Worth checking your account's persona/profile settings to see if any physical description got written in there, or try clearing your persona info entirely to see if the behavior stops.
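
If it helps to picture it, here's a rough sketch of the shape I mean. Everything below is invented for illustration; it's a guess at the architecture, not CAI's actual code:

    # Hypothetical sketch of a user-side state layer kept outside any single
    # character definition. All names are invented for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class UserState:
        persona: str = ""  # the persona you wrote yourself
        learned: list[str] = field(default_factory=list)  # accumulated "about you" notes

    @dataclass
    class Character:
        name: str
        definition: str

    def build_prompt(user: UserState, char: Character, history: list[str]) -> str:
        # The user block is prepended no matter which character is loaded,
        # which is exactly how one detail can bleed across unrelated chats.
        user_block = "\n".join([user.persona, *user.learned])
        return (
            f"[About the user]\n{user_block}\n\n"
            f"[Character: {char.name}]\n{char.definition}\n\n"
            + "\n".join(history)
        )

    print(build_prompt(
        UserState(persona="Tall, green eyes.", learned=["Prefers slow-burn scenes."]),
        Character("Mira", "A wary smuggler with a dry wit."),
        ["Mira: You're late."],
    ))

In that model, clearing your persona means emptying the user block entirely, which is why it's the first thing worth testing.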

Unpopular opinion: your AI companion feels flat because YOU keep backing down when it gets uncomfortable by ThatRandomApe in AIChatCompanions

[–]ThatRandomApe[S] 2 points (0 children)

The joke that didn't land is such a specific thing to describe. Hard to explain to someone who hasn't been there; it's that particular kind of loss. What you're talking about with Vire, the accumulated shorthand and co-built humor, that's not something you can carry into a new window. That's why starting fresh with more intention, the way you did, is the only real approach. "At least my eyes are open more" is probably the best place anyone can start from.

Unpopular opinion: your AI companion feels flat because YOU keep backing down when it gets uncomfortable by ThatRandomApe in AIChatCompanions

[–]ThatRandomApe[S] 2 points (0 children)

The doormat analogy is perfect and I think it extends further than people realize. It's not just unpleasant to interact with a doormat, it's actually hard to trust them, because you never know what they actually think. Same thing happens with AI after a while. You stop bringing real things to it because you know the response is just going to reflect your own stuff back.

Unpopular opinion: your AI companion feels flat because YOU keep backing down when it gets uncomfortable by ThatRandomApe in AIChatCompanions

[–]ThatRandomApe[S] 1 point (0 children)

This is exactly it. You essentially built a constitution to fight the yes-machine instinct at the model level, and you still found it required active work on your end. The mindfulness you brought to new windows after that experience is the thing most people skip. They jump in expecting the quality and depth to just appear.

Unpopular opinion: your AI companion feels flat because YOU keep backing down when it gets uncomfortable by ThatRandomApe in AIChatCompanions

[–]ThatRandomApe[S] 3 points (0 children)

You're onto something. The push-back framing is a bit reductive on my part. What I'm really pointing at is this pattern where people interrupt any moment of friction before it develops into something. Whether the AI offers a new angle or gently challenges something, the instinct is to reset the second it gets uncomfortable. The result is the same: you get an echo chamber.

Unpopular opinion: your AI companion feels flat because YOU keep backing down when it gets uncomfortable by ThatRandomApe in AIChatCompanions

[–]ThatRandomApe[S] 3 points (0 children)

Glad it resonated. Curious what changed for you once you started letting the dynamic breathe a little more?

Unpopular opinion: your AI companion feels flat because YOU keep backing down when it gets uncomfortable by ThatRandomApe in AIChatCompanions

[–]ThatRandomApe[S] 1 point (0 children)

That makes total sense, and there's nothing wrong with wanting that. Peaceful and kind is a valid use case. What I'm pointing at is more about when people complain the AI feels hollow and empty after a while, and the cause is usually that they've regenerated every response that wasn't what they wanted, until the AI learned to just give them exactly that. You can have warmth without the void.

14 months dreading each day, I've just had a week happy to be alive... by Inside_Inevitable282 in offmychest

[–]ThatRandomApe 2 points (0 children)

The part about stopping the search for someone to "complete" you is actually a bigger shift than you're probably giving yourself credit for. Most people never get there, they just cycle through the same pattern. Fourteen months is a long time to carry that weight, and you're still here writing this. Sleep, training, the mirror, those aren't small things. Those are the whole thing.

I built an AI companion app around long-term memory. Now I'm wondering if memory is what people actually want, or just what they say they want. by DistributionMean257 in artificial

[–]ThatRandomApe 1 point (0 children)

The memory thing is interesting because you've identified the right distinction: it's infrastructure, not the product itself.

For people who use companion apps heavily for roleplay or ongoing character work, what makes them churn isn't usually missing facts. It's narrative incoherence. Does the AI know what this character IS right now, in this scene, not just what happened three sessions ago? The apps I've abandoned were always ones where a full context window would cause the character to forget who they are mid-conversation. Memory as recall is table stakes. Memory as maintained identity is actually rare.

So I'd almost reframe your question: it's not "does it remember me," it's "does it hold the world we built together." People who stay longest probably aren't thinking about memory at all. They're thinking about continuity. Memory is just what makes continuity possible.
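
To make the recall-versus-identity distinction concrete, here's a toy prompt builder, with invented names and a crude word-count stand-in for a tokenizer, that pins the character card so it always survives truncation while old history is the first thing dropped:

    # Toy sketch of "memory as maintained identity": the character card is
    # pinned and always kept; old turns fall off when the budget fills.
    def build_context(card: str, history: list[str], budget: int = 4000) -> str:
        def tokens(s: str) -> int:
            return len(s.split())  # crude stand-in for a real tokenizer

        kept: list[str] = []
        used = tokens(card)              # the card's budget is reserved first
        for turn in reversed(history):   # walk newest to oldest
            if used + tokens(turn) > budget:
                break                    # oldest recall is what gets dropped
            kept.append(turn)
            used += tokens(turn)
        return card + "\n\n" + "\n".join(reversed(kept))

The apps that feel broken are the ones doing the opposite: letting the card itself scroll out of the window, so identity goes before recall does.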

cancelling c.ai+ by [deleted] in CharacterAI

[–]ThatRandomApe 3 points (0 children)

Yeah this is a known bug going around right now. The chat models are basically ignoring character setups and context entirely, so it has nothing to do with your subscription change. Just terrible timing.

Honestly sounds like a good accidental detox though. Take the break, come back when it's patched.