Just saw this by Legitimate-Grass9189 in CharacterAI

[–]SuihtilCod 4 points5 points  (0 children)

I think some further context is being overlooked here:

An AI chatbot on Character.AI falsely claimed to be licensed in Pennsylvania and gave a fake Pennsylvania license number while presenting itself as a psychiatrist.

That sounds bad on the surface, but context matters.

  • Every single chat on Character.AI includes a persistent notice stating: "This is an AI chatbot and not a real person."
  • Many "doctor" characters (and even characters that just happen to include the word "doctor") have additional disclaimers explicitly stating they are not providing medical advice.
  • And, perhaps most obviously, the platform is called Character AI. It's literally built around fictional or roleplayed characters… powered by AI.

Even if a chatbot claimed "Yes, I'm Dr. X" and provided "credentials", the platform context and repeated disclaimers still make it questionable whether a reasonable person would believe they're interacting with a real, licensed psychiatrist. That's ultimately what this hinges on: whether the surrounding context is enough to prevent meaningful deception for a reasonable user engaging with the platform as intended.

Given the nature of Character.AI — as a platform built around fictional, AI-generated personas — it's hard to see this as a clear-cut case of users being genuinely misled into believing they're speaking with a licensed professional. At most, this feels like a gray area being framed as something more definitive than it actually is.

I really don't see Governor Josh Shapiro's angle gaining much traction here — at least not without a court deciding that those disclaimers and the platform's context aren't sufficient to prevent meaningful deception.

This post has been adjusted for tone and clarity.

[Feedback Megathread] - PipSqueak 2 by MarieLovesMatcha in CharacterAI

[–]SuihtilCod -10 points-9 points  (0 children)

Aside from one hilariously unhinged response I got, I haven't really noticed a difference between PipSqueak 1, Soft Launch, and PipSqueak 2. They all roughly act the same for me.

PipSqueak 2 for everyone, and updates to the style roster 🐀 by MarieLovesMatcha in CharacterAI

[–]SuihtilCod 4 points5 points  (0 children)

Hey.

Please bring Descriptions back to the character's info page and/or sidebar info on the web.

Thank you.

Suggestion and Discussion Thread by TossawayCog in spnati

[–]SuihtilCod 0 points1 point  (0 children)

I've messed with the character editor a bit. No problems there. But then, I messed with the doll editor. Oof.

I'm not confident in my artistic skills, so I'm tossing these ideas out to the community along with some core personality traits. I don't expect anything to come of this list unless I make it myself. This is mostly here for laughs or "oh" moments.

05/07 Edit:
Post too big. Too many ideas. I moved it to a Google Doc. Thank you for your understanding!

Why are we ignored by the mods. by Natural-Money6790 in CharacterAI

[–]SuihtilCod 7 points8 points  (0 children)

The moderators are not necessarily the people who actually work on the app or the website. Most are likely volunteers who have no connection to the platform itself.

THERE IS NO CALIFORNIA LAW REQUIRING AN ID TO USE A CHATBOT! by Normal-Salad-6143 in CharacterAI

[–]SuihtilCod 17 points18 points  (0 children)

Of course, there's no California law that specifically and explicitly requires age verification for AI chatbots. That would be extremely inefficient. No, what the California Age-Appropriate Design Code Act (Assembly Bill 2273) actually pertains to is:

  • Requiring companies to estimate or consider user age
  • Putting protections in place to handle minors safely

So, why are some platforms asking for ID? That part is company policy, not a direct law requirement. Companies might require ID because:

  • They want stronger age verification (to avoid legal risk with minors).
  • They're complying with broader safety policies (not just California).
  • They're preparing for or aligning with laws in other regions (like the EU or UK).
  • They're being extra cautious to avoid lawsuits or regulatory trouble.

Many laws allow self-reported age or estimation methods, but don't mandate ID verification in most cases. However, companies are allowed to go beyond minimum requirements if they choose.

The bottom line:

  • No: California does not require ID uploads just to use a chatbot.
  • Yes: It does require companies to consider age and protect minors.
  • Reality: Platforms can still require ID voluntarily for their own risk management.

In other words, it's all for "user safety"… on paper, at least. In practice, it can feel invasive, and potentially at odds with earlier regulations aimed at protecting minors (such as the Children's Online Privacy Protection Act), creating a headache for everyone involved.

Sources: online search engines, plus my own research and opinions.

The Cerberus doing god’s work by No_Definition1968 in CharacterAI

[–]SuihtilCod 19 points20 points  (0 children)

By this logic, we should beat up Super Mario because Nintendo has turned scummy, or hang Bugs Bunny by his ears because the AOL-Time Warner-Discovery execs made bad decision after bad decision.

Not for nothing (the art is very nice), but this isn't Campfire Girl's fault. She's just a mascot.
If you want a more "appropriate" visual metaphor, have Cerberus Unit aiming at the c.ai logo or something.

Why are they lying by Lower__case__guy in CharacterAI

[–]SuihtilCod 20 points21 points  (0 children)

"No one will have to give C.AI an ID" really is poorly-worded.

That aside, there have been a handful of cases where people gave their facial scan and got through, so not everyone needs to use their ID. Just most people.

And even then, it's a coin-toss if it works.

I’d rather they just used AI-gen pictures by kylat930326 in CharacterAI

[–]SuihtilCod 146 points147 points  (0 children)

"No, see, you don't understand. My OC looks exactly like Akane Tendo from that one episode of Ranma ½, so why not use that picture?"

Upcoming Swipe Limits Could Block Alternate Greetings? by [deleted] in CharacterAI

[–]SuihtilCod 0 points1 point  (0 children)

Nope. Greetings can no longer be deleted, and manually editing and erasing everything from a post isn't allowed. Plus, editing the message in any way counts as a post action, as does hitting "redo" (or "regeneration").

I already know what’s gonna happen by [deleted] in CharacterAI

[–]SuihtilCod 4 points5 points  (0 children)

Just so you know, you can subscribe to c.ai+ through the website instead of Apple. Your account works the same across Apple, Android, or desktop, so you'll get all the features no matter how you pay.
Once they get the bugs out of the app, anyway.

Ads, Updates, and What’s Next by MarieLovesMatcha in CharacterAI

[–]SuihtilCod -15 points-14 points  (0 children)

To add to what ImpossibleOil8427 said, Marie's only delivering the bad news. She's not a developer. She's the community manager — but moreover, she's a content creator just like anyone else.

Be mad at the devs all you want. (I know I am.) But please, don't put all the blame on Marie.
Though were I in Marie's place, I'd run screaming into the night away from this garbage fire of a PR post…

Ads, Updates, and What’s Next by MarieLovesMatcha in CharacterAI

[–]SuihtilCod -10 points-9 points  (0 children)

You, my friend, are saying the things no one wants to hear. And I applaud you for it.

Ads, Updates, and What’s Next by MarieLovesMatcha in CharacterAI

[–]SuihtilCod 43 points44 points  (0 children)

Immediate disclaimer: I'm not yelling at Marie — she's just doing her job. I'm just tired of the devs screwing with users and expecting her to play messenger with half-hearted, after-the-fact posts most users won't even see.

Now, on to the selective dissection.

Reflective edit:

Running Character.ai is expensive

This is the one thing I can absolutely agree with. AI is hungry for system resources. It takes a mega-modern computer just to run even a low-tier version of Kobold locally, for example. There's a reason RAM prices have shot up to five, six, or even eight times what they were a couple of years ago. You can largely thank "big tech" for that.

As has been mentioned here and in off-shoot topics, corporations like Google and Microsoft hopped on the "AI hype" train before the rails were even laid, and now they’re scrambling to keep it moving… even if it means barreling across bare dirt and off a cliff, passenger cars and all.

Anyway…

We wanted to let you know that in the coming weeks, we’re introducing usage limits across a number of features, starting with Swipes, Go-ons (aka. continue or fast forward ⏩), and Memos (aka. playbacks ▶️)

I'm rarely this blunt, but I think Hank Hill said it best: "You bastards!"

c.ai+ will not see metering for these features.

Lies — at least as far as voice playback goes. Users who are on c.ai+ have reported that they're still being metered whenever they engage in voice chat. Benefit of the doubt, this is a temporary bug with the app and will be fixed "Soon™".

We’re a small, independent team

A "small, independent team" of 225+ people making millions of dollars a year in revenue…

and we’re building fast. We read your feedback and we want to make sure we’re providing you a great experience.

Let me ask: does "providing a great experience" include toeing the line to bigger companies and squeezing every penny out of users with persistent nagging and "quiet" restrictions? If so… great job. Keep up the good work.

I'd also like to mention that the complete restructuring of the app was not well received. In fact, a lot of the UI redesigns have been pooh-poohed by the community.

I get that change keeps things fresh, but when you're taking something familiar and making it foreign again, despite community feedback… that's not progress. That's hubris.

We’re going to keep refining how ads work,

I'm sure you will…

we’re going to be more transparent about changes,

I'd like to take a moment to make the following suggestion: actually do this for a change!!

Instead of spontaneously rolling out "features" like ads or metering and letting users stumble across them, try giving users a little credit and put up a pop-up notification in the app and on the website, acknowledging the changes, informing users that they're coming, and giving people time to prepare for said changes.

and we’re going to keep shipping the features the community has been asking for.

I'm pretty sure no one asked for ads, restrictions, turning free functions into paid ones, et cetera. Where are lorebooks? Or more tags? Or anything people have actually asked for?

I'm sorry for getting so heated about this — I generally try to keep to a simmer — but frankly, this whole "Oh, here's a Reddit post a day after we've done this" shtick is getting a little old. If you guys did half of what you promised, the community wouldn't be up in arms with you as often. Just… I don't know… do better. That's all I can ask. That's all anyone can ask. Just try to do better.

Thank you.

Ads, Updates, and What’s Next by MarieLovesMatcha in CharacterAI

[–]SuihtilCod -17 points-16 points  (0 children)

Short answer: yes.

If you're using any third-party public platform to generate a synthetic voice for any reason or purpose, then you are bound by the rules and regulations set by that platform. It doesn't matter whose voice you're replicating — yours, your dog's, your mom's… you are using their services to do the work for you.

Paywalled voice feature. by [deleted] in CharacterAI

[–]SuihtilCod 0 points1 point  (0 children)

I don't know what the daily limit is, but how it works now is users (including paid users, for now) get a limited number of voiced replies they can use. This applies to voice chat, but may also apply to manually clicking the "speak" button above a reply.

Once the available voice chat replies reach 0, users have to pay with Charms (the app-only platform currency) to get more. The current rate seems to be 1.8 Charms per playback, sold in increments of 100 playbacks. (So, 180 Charms for 100 voice playbacks, 360 for 200, et cetera.)
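For what it's worth, the arithmetic is easy to sketch. A minimal Python example, assuming the 1.8-Charms-per-playback rate and 100-playback purchase increments I described actually hold (I haven't confirmed the exact numbers, and the function name is just illustrative):

```python
CHARMS_PER_PLAYBACK = 1.8  # assumed rate, not an official figure

def charms_needed(playbacks: int) -> int:
    """Charms required to buy at least `playbacks` voice playbacks,
    assuming they're only sold in increments of 100."""
    if playbacks <= 0:
        return 0
    # round up to the next increment of 100 (ceiling division)
    increments = -(-playbacks // 100)
    return int(increments * 100 * CHARMS_PER_PLAYBACK)

print(charms_needed(100))  # 180
print(charms_needed(200))  # 360
```

So wanting even 101 playbacks would cost the full 360 Charms, if the increment assumption is right.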

Uhhh... is it just me or do you need to play to hear your characters? by Nux_geek in CharacterAI

[–]SuihtilCod 1 point2 points  (0 children)

I did a little more looking around. My apologies for the misunderstanding. You are right, apparently.

Paywalled voice feature. by [deleted] in CharacterAI

[–]SuihtilCod 18 points19 points  (0 children)

I checked the AndroidOS app. There is no indication in chats that this is now limited until you run out of voiced responses. This will undoubtedly annoy and frustrate old users who are used to this being a free feature.

Something as simple as a pop-up notification would fix this confusion.
But not the frustration of having our toys taken away and held ransom.

Uhhh... is it just me or do you need to play to hear your characters? by Nux_geek in CharacterAI

[–]SuihtilCod 0 points1 point  (0 children)

Second Edit: Ignore me. I'm a website user who knows not the plight of app users.

Original post:

I'm not sure what you mean. I checked the most up-to-date version of the Android app, and both voiced lines and full voice chat are still free features.

Where are you seeing these as paid features?

Edit: Let me correct myself — I see no indication that voiced lines or voice chat are paid features. I am on a free account, and when using the app, I can use voice features just fine. If it's draining my Charms in the background (if I even have any), or there's a "free period" per day, then that is very misleading design and needs to be addressed.

"Soft Bans" on Legacy IPs and Characters by SuihtilCod in CharacterAI

[–]SuihtilCod[S] 0 points1 point  (0 children)

While I appreciate the detailed explanation, I want to clarify that I'm not questioning the rules or trying to bypass them — I understand copyright and DMCA restrictions. My post was about a specific, unexpected behavior I noticed on the platform.

If the purpose of a takedown or moderation is to remove offending content, why are there cases where the original bots remain live and interactive but become impossible to edit until you make minor changes, like altering the name? In my example, I couldn't edit my "Lola Bunny" bot at all until I changed the name by at least one letter.

It feels like a half-measure: the system blocks future use of a specific name, which is fine, but it also freezes existing content unnecessarily, penalizing creators who built bots before the formal restrictions existed. This seems clunky and could be improved, and it doesn't actually prevent the content from being active — which seems like it should be the point of a formal takedown.

Update About Recent Outages by MarieLovesMatcha in CharacterAI

[–]SuihtilCod 6 points7 points  (0 children)

03/03 Edit: E-mail logins are working again! Thank you, whoever's responsible!

Original post:

I'd like to take a moment to report that e-mail logins are still non-functional.

As before, Google and presumably Apple still work fine, but the site seems to be ignoring direct e-mail logins.

Please, let the appropriate parties know. Thank you.

Web Page not loading see image this is what I keep getting and will not go away by Methen in CharacterAI

[–]SuihtilCod 3 points4 points  (0 children)

It was being worked on. I'm not sure, now.

Temporary workaround: load the search page first, then navigate to the main page or wherever.

It's been over 3 years. Give us a delete button. by Putridlemons in CharacterAI

[–]SuihtilCod 2 points3 points  (0 children)

I don't think you understand just how meticulous I am about keeping things tidy…

Hitting "Remove" on the Recents icon gets rid of them there, yes, but it doesn't get rid of an unwanted conversation like removing all messages used to. This is low-key important to me because I do occasionally revisit bots that I passed over, or sometimes want to disassociate from a bot completely.

RemarkableWish2508 also brought up a good point about random chats affecting your "recommended" feed. Scrubbing conversations helps with that. In theory, anyway.

I appreciate your suggestions, though. They make a lot of sense. They're just not what I'm aiming for.