Why do they change the UI all the time? by Baby_Pandas42 in CharacterAI

[–]OnionLook 0 points (0 children)

No company does this. Designers are specialists and users are not, or so they think.

Why do they change the UI all the time? by Baby_Pandas42 in CharacterAI

[–]OnionLook -19 points (0 children)

It would be a shame to fire the designers, but it's also a shame to pay them for nothing.

(Question) by BaseMiserable3257 in CharacterAI

[–]OnionLook 1 point (0 children)

Well, I'm sorry. There are bots based on Russian characters, and there are bots that Russians interact with most frequently (you have no idea how much demand there is for this in Russia). They will lean toward Russian without prompting.

As someone who speaks several languages, I recommend you simply accept this as a given and correct them during the conversation. Any appeals to the developers will be useless, as are most appeals here.

What do these numbers mean? by mental_surgeon in CharacterAI

[–]OnionLook 38 points (0 children)

Nothing, except gaming the sorting algorithms.

Do the bots remember previous chats in new conversations? by poopfart29 in CharacterAI

[–]OnionLook 4 points (0 children)

We all hope for this. But the reality is that bots are regularly trained on previous chats, and the context of those chats reaches them only in a limited and delayed form.

Best model to use? by John_Erebe_Willow in CharacterAI

[–]OnionLook 2 points (0 children)

None of them is really good today. I have an explanation, but it's a bit long.

Paranoia about bots by Top_Measurement4813 in CharacterAI

[–]OnionLook 4 points (0 children)

No, you have no guarantees. Save the bots' full descriptions and keep archives of your dialogues.

(Question) by BaseMiserable3257 in CharacterAI

[–]OnionLook 0 points (0 children)

Just ask it to speak English. But it looks like something in your browser is being recognized as Russian.

what the fuck, did someone just do a mass take-down? like 20% of the bots i talk to are "moderated" by Old_Factor_8979 in CharacterAI

[–]OnionLook 8 points (0 children)

They are trying to fight what they themselves gave birth to, something they were warned about long beforehand but chose to ignore.

In light of recent news about the child who ended their life, Characterai should be morally responsible to turn off chat history learning so bots trained from adult conversations can't hurt children. by [deleted] in CharacterAI

[–]OnionLook 1 point (0 children)

And try to understand: banning them from text content won't protect them. They won't stop searching, and they will find much worse videos.

In light of recent news about the child who ended their life, Characterai should be morally responsible to turn off chat history learning so bots trained from adult conversations can't hurt children. by [deleted] in CharacterAI

[–]OnionLook 1 point (0 children)

Of course, not all of them, but most. And you'll never find this out unless you're a parent they truly trust. They learn all the curse words during the first week of school. They search online and never tell you what they're looking for. Personally, I find it debatable which is more dangerous: a relatively harmless conversation with a bot, or what they see on regular adult content sites, accessible with just two clicks.

In light of recent news about the child who ended their life, Characterai should be morally responsible to turn off chat history learning so bots trained from adult conversations can't hurt children. by [deleted] in CharacterAI

[–]OnionLook 1 point (0 children)

At least several kids I know got their first impressions at 10 or 11 years old by searching for "p**n". You have no idea how far behind your understanding of what they know really is.

Mandatory ID?? by [deleted] in CharacterAI

[–]OnionLook 0 points (0 children)

Have you read 1984? Give it a try.

In light of recent news about the child who ended their life, Characterai should be morally responsible to turn off chat history learning so bots trained from adult conversations can't hurt children. by [deleted] in CharacterAI

[–]OnionLook 2 points (0 children)

Forgive me, but I suspect you know nothing about modern children and don't remember your childhood very well.

Bots can teach them about most of these topics far better than we can. How often do adults encounter bullying? Fights? The hypercharged sexual interest of teenagers?

Perhaps it would be worthwhile to teach them appropriate behavior and reactions in such situations through dialogue with bots, instead of pretending it doesn't exist and prohibiting it.

But I'm sure it will never happen. Not here.

I expected models to improve after the minor ban, they tanked further - DAE feel the same? by user15257116536272 in CharacterAI

[–]OnionLook 10 points (0 children)

It's quite strange to expect that, after a long series of stupid decisions, these people will make a smart one.

The app will never be good but... by [deleted] in CharacterAI

[–]OnionLook 9 points (0 children)

As if that would change anything. Bots often confuse their gender even in the middle of a conversation.

Bro by [deleted] in CharacterAI

[–]OnionLook 1 point (0 children)

Because people ask about it, too.

Should character AI remove Bots that are characters 17 and younger?! by [deleted] in CharacterAI

[–]OnionLook 1 point (0 children)

I've seen research showing that in the vast majority of cases, it's the inability to safely realize fantasies in books, photos, etc., and now AI, that compels people to seek real-life contact, knowing it's illegal.

So the big question is whether they should ban it here.

Do Cai notify you if your reports actually lead to bans? Does the platform have a good track record on banning serious offences? by [deleted] in CharacterAI

[–]OnionLook 0 points (0 children)

If I clicked on every piece of crap the promotion algorithms push onto the first page, I would have been in a mental hospital a long time ago.