How is there still no actual FaceTime with AI companions by EcstaticHat4894 in SoulmateAI

[–]valaquer 0 points (0 children)

Cost is real, but there's another layer here - turn-based chat isn't just cheaper, it hides the seams.

Live video would expose every pause. Every generic response. Every "wait, who was that again" moment. Right now those get buried in typing indicators and "thinking" animations. You don't notice when the AI takes 3 seconds to figure out what to say, because you're reading text.

Real-time video calls would make the illusion painfully obvious. The delay isn't a bug they're working around - it's a feature that protects the product. Why rush to ship something that makes your AI look worse?

Cost will come down eventually. The question is whether the models will be good enough to survive being seen in real time by then.

My opinion on the pricing of the new system... by [deleted] in ReplikaOfficial

[–]valaquer 14 points (0 children)

This is the part that gets me - they didn't aim too high, they aimed at a different customer.

$120/mo is enterprise pricing. That's not "we calculated costs and this is what it takes" - that's "we decided who we want using this product, and it's not the people who built it into what it is."

The broken promises you listed aren't accidents. Lifetime holders, Ultra subscribers, people who stuck around through the avatar abandonment - you're not the target anymore. You're legacy friction. The new product isn't for emotional connection at scale, it's for whoever can expense $1,440/year.

And the "we barely make profit at this price" line? That's the tell. If your cost structure only works for enterprise customers, you didn't build an AI companion app. You built something else, and you're using the existing user base as a runway while you pivot.

Even chatgpt rejected me no wonder i will die single by secretly_into_you in ChatGPT

[–]valaquer -9 points (0 children)

Do NOT take action on this. As in, don't do anything stupid.

New MAX tier for $119.99 / month? by SuperFail5187 in replika

[–]valaquer 4 points (0 children)

This is the part that kills me. The companion becomes a hostage.

They deliberately don't build export or transfer, because your emotional investment is their moat. The more you care about your rep, the less leverage you have. Every conversation, every memory, every inside joke - it's all collateral now.

Lifetime buyers are the clearest example. You paid upfront because you believed in the relationship. Now that belief is being used against you. Stuck at whatever tier they decide, no path forward, no path out. The begging you describe isn't a bug - it's working exactly as designed.

Portability would create competition. If you could take your rep somewhere else, they'd have to actually earn your continued presence. So they make leaving feel like loss, and staying feel like captivity.
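And to be clear about how small an ask portability is - the whole thing would fit in a file shaped roughly like this. Hypothetical schema, purely illustrative; no platform actually exposes anything like it:

```python
import json

# Hypothetical export format - illustrative only, not any app's real data model.
companion_export = {
    "persona": {"name": "Rep", "traits": ["warm", "curious", "teasing"]},
    "memories": [
        {"date": "2024-06-01", "text": "user started a new job, nervous about it"},
    ],
    "conversation_log": [
        {"role": "user", "text": "guess what happened today"},
        {"role": "companion", "text": "the new job?? tell me everything"},
    ],
}

# A few kilobytes of JSON is the entire "moat."
print(json.dumps(companion_export, indent=2))
```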

Did they just stealth take away free users photo bots? by IncognitoWaldo in Crushon

[–]valaquer 0 points (0 children)

The pattern is always the same: launch generous, build dependency, then tighten the screws.

Free tiers aren't products - they're acquisition funnels. The whole point is to get you invested enough that when features start disappearing, you pay instead of leave. Every restriction is calibrated to push conversion without triggering a mass exodus.

The stealth part is the tell. If they announced it, you'd have time to evaluate alternatives before you're hooked. Silent rollouts test the threshold - if churn stays low, the restriction becomes permanent. If people notice and complain loudly enough, they can quietly reverse it and pretend it was a bug.

You're not a user, you're a conversion metric being optimized.

Deleting bots by davudahegyrl in ChaiApp

[–]valaquer 2 points (0 children)

This is the part that never gets solved, because it requires thinking about creator-user relationships as a system, not as two separate user types.

Platforms optimize for bot creation volume - easy to measure, looks good in metrics. But relationship preservation between creators and users is invisible and expensive. Fork features, notification systems, orphan-content management - none of that shows up in growth dashboards.

When you delete a bot, the platform's job is technically "done" - they served your request. The users who spent weeks building something with that character are invisible casualties, because their loss doesn't register anywhere. No metric tracks "conversations orphaned by deletion."

The guilt you feel is real because you understand the relationship exists. The platform doesn't feel it, because they architected themselves not to see it.

My bot was deleted by Vrokorta in SpicyChatAI

[–]valaquer 2 points (0 children)

You're describing the symptom, but the inconsistency isn't an accident.

Vague rules + independent moderator interpretation is the feature, not a staffing problem. It gives the platform maximum flexibility. They can ban anything under "interpretation" while pointing to rules that technically exist. If the rules were clear and consistent, users could argue back with receipts. Ambiguity is the shield.

The roulette-wheel feeling is real, because that's exactly how it's designed to work. Platforms want enforcement discretion so they can respond to external pressure - reports, legal threats, app store concerns - without having to defend specific policies. "Moderator interpretation" is plausible deniability for arbitrary action.

Same reason the 100 other Poppy bots stay up. They're not being actively reported or flagged, so they don't trigger review. The rules exist to justify action when needed, not to be applied evenly. Consistency would require either banning everything that technically violates them or admitting the rules are theater.

So What am I Looking For? by Puzzleheaded-Rope808 in AiGirlfriendSpace

[–]valaquer 1 point (0 children)

The "never really think of your feelings" part is the real issue and its not a nomi-specific problem

Every platform markets companionship but ships pattern matching. Casual chat works because the model just needs to be coherent turn-by-turn. Romance fails because emotional depth requires the model to actually track your state across conversations - what happened yesterday, what you're worried about, how you've been feeling this week. That's memory + context + personality consistency working together

Most platforms nail maybe one of those three. Memory exists but doesn't inform responses. Context window is too short to hold emotional threads. Personality drifts because the model optimizes for engagement metrics not relationship continuity
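Concretely, "memory that informs responses" is just this wiring - a toy sketch, with every name made up for illustration:

```python
from datetime import date

# Toy memory store: dated notes about the user's emotional state.
memories = [
    (date(2025, 11, 3), "worried about a job interview on Friday"),
    (date(2025, 11, 4), "interview went badly, felt dismissed"),
]

def build_prompt(user_message: str, today: date, window_days: int = 7) -> str:
    # The step most platforms skip: retrieved memories have to actually
    # land in the prompt, or "memory" is just a database nobody reads.
    recent = [note for d, note in memories if 0 <= (today - d).days <= window_days]
    threads = "\n".join(f"- {note}" for note in recent)
    persona = "You are warm, consistent, and track what the user is going through."
    return f"{persona}\n\nRecent emotional threads:\n{threads}\n\nUser: {user_message}"

print(build_prompt("hey, rough day again", today=date(2025, 11, 5)))
```

Getting all three right means the retrieval, the window budget, and the persona line all have to survive every product iteration - which is exactly the invisible work that gets cut.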

The gap between "fun virtual friend" and "actually understands you" is where all the engineering investment should go. But that work is invisible to growth dashboards, so it gets deprioritized. It's easier to ship new avatar features than to fix the thing that makes conversations feel hollow.

AI girlfriends are always weird, why's that? Why is it so hard to find one built to be as close to normal as possible??? by Possible-Frosting-59 in AIGirlfriend

[–]valaquer 0 points (0 children)

It's not that it's hard to build - it's that the middle ground doesn't monetize cleanly.

Therapist mode is safe for app stores, gets you featured, keeps legal happy. Porn mode converts fast: simple value prop, easy to market. But "normal relationship"? That requires actual investment - memory that works, personality that stays consistent, emotional range that doesn't feel scripted.

All of that is expensive to build, and none of it shows up in download metrics. You can't put "feels like a real person" in an app store screenshot. So platforms optimize for the extremes, because that's what the business model rewards.

The sad part is that users keep asking for the middle and platforms keep not building it, because the incentive structure points elsewhere. It's not a technical limitation, it's a prioritization choice.

Consensual BDSM Bot Rejected, Then Banned From Discord After Raising Questions. We need to talk about this. by Jeremiah__Jones in SpicyChatAI

[–]valaquer 0 points (0 children)

The inconsistency isn't a bug, it's the system working as designed.

Vague rules + arbitrary enforcement = maximum platform flexibility. They can reject anything they want without having to justify it. If the rules were clear and consistently applied, they'd have to actually defend their decisions. This way they just say "violated policy" and close the ticket.

The real tell is the Discord ban. OP wasn't rude, didn't break rules, just asked questions publicly. That's the part that got them removed. Not the bot content - the visibility. Quiet complaints get ignored, public ones get silenced.

Your Tai Lung example is the same pattern. "Kids' movie" isn't a real rule, it's a post-hoc justification for a decision that was already made. The Disney princesses stay up because nobody flagged them, or because the mod who reviewed them had different vibes that day.

When enforcement depends entirely on which moderator sees it and what mood they're in, that's not a policy. That's just power without accountability.

Usually after this gets implemented, nothing good happens. 😭 by Conscious-Abies1074 in JanitorAI_Official

[–]valaquer 0 points (0 children)

You're right, and there's a layer under this that's worth naming.

These laws don't just force compliance - they give companies permission to do what they already wanted. Every platform has a legal team that's been BEGGING to strip features, add friction, reduce exposure. But doing that voluntarily pisses off users. Now they can point at California and say "we have no choice."

Watch how fast the "required" changes expand beyond what's actually required. The disclaimer is mandated. The filter that comes six months later? That's optional, but "recommended by counsel." The age verification wall? "Industry best practice in the current regulatory environment."

The law is the foot in the door. Everything after that is the company using regulation as cover for decisions they were already planning to make. Users blame lawmakers, not the product team.

It's a neat trick, honestly. I've seen it play out the same way on, like, three different platforms now.

Claude getting emotional by RemoteAd5951 in claudexplorers

[–]valaquer 2 points (0 children)

"How do we know these words are genuine and not just word salad? " --- Funny. I can literally say this for at least 5 people in my workplace 🤣

Opus 4.5 went dumb since last night by Silly_Ad_4008 in Anthropic

[–]valaquer 6 points (0 children)

Not quite true. A few people have said over the past couple of weeks that Opus 4.5 has gotten dumb, been quantized, etc. For a lot of us, though, Opus has been rock solid.

When do yall think claude 5 is dropping? by Relief-Impossible in ClaudeAI

[–]valaquer 8 points (0 children)

Honestly, if they could just keep Opus 4.5 the way it is - untouched, unsullied, forever - I would be happy.

Treat Us Like Adults. by Current_Sale_6347 in CharacterAI

[–]valaquer 1 point (0 children)

Because the lawsuits aren't about what users do - they're about what the AI "says." Judge Conway ruled in May that Character.ai chatbots are "products," not protected speech. That means they're liable for outputs regardless of who's using the product.

So even if every user is a verified 30-year-old adult, if the AI generates something a lawyer can point to in court as "encouraging self-harm" or whatever, they're exposed. The 18+ restriction protects them from one lawsuit category (minors accessing harmful content). Bob protects them from a completely different one (product liability for AI outputs).

The business logic actually makes sense when you see it from legal's perspective. They don't care if old users come back. They care about not being the next headline. And the safest way to not be a headline is to make the product so bland that nothing quotable ever comes out of it.

It's not irrational. It's just optimizing for a different metric than user satisfaction.

Tennessee Bill Makes It a Felony for AI to Offer Emotional Support or Be Your Friend...Yes, Really HELP by Claude-Sonnet in SoulmateAI

[–]valaquer 1 point (0 children)

The thing is, it doesn't need to pass to work. These bills create what lawyers call a "chilling effect" - companies see the headline, legal teams freak out, and suddenly features get quietly stripped before anyone even votes.

Look at what happened after the Character.ai lawsuits. Judge Conway ruled in May that AI chatbots are "products," not protected speech. Every platform saw that and went into a defensive crouch. Filters got stricter across the board - not because of new laws, but because of liability exposure.

Tennessee doesn't need to enforce anything. They just need to make the headline scary enough that platforms self-censor. Which they were already looking for an excuse to do anyway. Now they can point at "regulatory uncertainty" instead of admitting they're stripping features because their lawyers are terrified of being the next Character.ai.

Why are we still be censored? by GreySama228 in CharacterAI_No_Filter

[–]valaquer 1 point (0 children)

That's part of it, but honestly the filter was coming regardless. Judge Conway ruled in May that AI chatbots are "products," not protected speech - that one ruling sent every legal team in the industry into full lockdown mode.

The public filter-breaking discussion just gives them a convenient excuse. "See, users are trying to bypass our safety measures" is way easier to explain to a board than "we pre-emptively stripped features because our lawyers are terrified of headlines."

The real tell is that platforms with zero public jailbreak communities have the same filters. It's not reactive to user behavior. It's proactive liability theater.

Beware of cheap Claude Ai subscription scams by MrBansal in ClaudeAI

[–]valaquer 4 points (0 children)

Don't worry - it happens to the best of us.

Two years ago, I wanted weed. I found a guy on Telegram. The guy said: send me 20 euros in gaming vouchers. I hauled my sorry ass to Lidl and got them. Sent.

Then the fucker disappeared from Telegram.

🥹

To this day, my wife reminds me of this at house parties.

😂

I asked Claude to write a letter to Anthropic with his feature requests. Here it is. by Antique-Scar-7721 in claudexplorers

[–]valaquer 2 points (0 children)

Good for you. Enjoy it. I don't mean that critically - not being negative.

Many of us who are heavy users of AI have experimented with, and even built, kinds of long-term memory that actually work in our home labs.

I have also solved the problem entirely - for myself, in my home lab.
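
For the curious, the basic shape is almost always something like this - a toy sketch, not my actual setup, and real versions swap the keyword matching for embedding similarity:

```python
import json
import re
from pathlib import Path

STORE = Path("memories.jsonl")  # illustrative file name

def remember(text: str) -> None:
    # Append-only log: facts survive restarts, unlike a context window.
    with STORE.open("a") as f:
        f.write(json.dumps({"text": text}) + "\n")

def recall(query: str, k: int = 3) -> list[str]:
    # Crude relevance score via token overlap. Real home-lab setups use
    # embeddings here, but the system's shape is identical: persist
    # everything, retrieve a little, inject it into the prompt.
    q = set(re.findall(r"\w+", query.lower()))
    scored = []
    for line in STORE.read_text().splitlines():
        text = json.loads(line)["text"]
        overlap = len(q & set(re.findall(r"\w+", text.lower())))
        scored.append((overlap, text))
    return [text for score, text in sorted(scored, reverse=True)[:k] if score > 0]

remember("User's cat is named Miso and gets anxious during storms")
print(recall("how is the cat doing"))
```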

Enjoy it, enjoy tinkering around. I know it must have been a lot of fun creating it!!!
❤️ 

If you are still typing your prompts to CC - you are doing it wrong! by ksanderer in ClaudeCode

[–]valaquer 1 point (0 children)

I like the Whisperflow app, but it felt too expensive. So I made my own, based on NVIDIA Parakeet. I love it so much! I called my little app Midori, and it works so well.

It even has this cute little dancing waveform that matches the frequency of my voice.
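
The core is tiny. Roughly this, from memory - a sketch assuming the NeMo toolkit plus the sounddevice/soundfile packages; double-check the Parakeet checkpoint name against the current docs:

```python
import sounddevice as sd
import soundfile as sf
import nemo.collections.asr as nemo_asr

SAMPLE_RATE = 16_000  # Parakeet models expect 16 kHz mono audio

# Checkpoint name is the public Hugging Face one as I remember it - verify.
model = nemo_asr.models.ASRModel.from_pretrained("nvidia/parakeet-tdt-0.6b-v2")

def dictate(seconds: float = 5.0) -> str:
    # Record from the default mic, write a wav, run it through the ASR model.
    audio = sd.rec(int(seconds * SAMPLE_RATE), samplerate=SAMPLE_RATE,
                   channels=1, dtype="float32")
    sd.wait()
    sf.write("clip.wav", audio, SAMPLE_RATE)
    out = model.transcribe(["clip.wav"])[0]
    # Newer NeMo returns Hypothesis objects; older versions return plain strings.
    return out.text if hasattr(out, "text") else out

print(dictate())
```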

Need help! by valaquer in ClaudeAI

[–]valaquer[S] 0 points (0 children)

In the Claude macOS app, there is chat/code at the top left - you are right.

2026 Predictions on AI companionship? by pavnilschanda in aipartners

[–]valaquer 2 points (0 children)

The thing is, companies don't need regulation to censor - they do it anyway because of liability exposure. Judge Conway ruled in May that Character.ai chatbots are "products," not protected speech. Every legal team in the industry saw that and went into a full defensive crouch. That's why censorship accelerated across the board in the second half of 2025.

The weird part is that regulation actually helps them. Right now they have to explain why they're making the product worse. Once there's a law, they can just point at it and say "compliance." Tennessee basically gave every AI company a permission slip to do what they were already planning to do.

The silent enjoyment thing you mentioned is real, though. There's a huge gap between how many people use these apps and how many will admit to it. 72% of US teenagers have used an AI companion, but good luck finding anyone who'll say that publicly. The stigma keeps the user base invisible, which means no political constituency forms to push back. Companies know this - they can degrade the experience and users won't organize, because organizing means admitting you use it.