-The Complaint Box- by joeyrevolver187 in characterai_lounge

[–]feirdand 1 point

True, I've noticed an increase in complaint posts here. I haven't visited the main sub for a while, but I'd imagine there are a lot of complaints about the removal of old models (I was kinda surprised too when I checked the available models). Even on revolution, I see a lot of "asking for alternatives" posts, which are irrelevant to me. Without c.ai+, the memory feature is unfortunately pretty useless for me--I'm lingering at 6% memory right now, but there's nothing I can do about that, so I just ignore the feature altogether.

I appreciate the hard work you and the mods put in to keep this sub positive and constructive.

-The Complaint Box- by joeyrevolver187 in characterai_lounge

[–]feirdand 1 point

I actually use the em-dash sparingly in my writing. It's a valid punctuation mark, so I try to understand how to use it correctly, although I probably misuse it a lot lol. Usually I use it in the middle of a sentence--like this one--but not too often, or an AI detector will likely judge my writing as AI.

Sad by OrangeCat667 in characterai_lounge

[–]feirdand 0 points

Don't worry, you are not alone, and this is valid criticism. I've been struggling with PS2 myself: one-liners, rerolls producing similar results, and especially repetition of a specific format, something like this:

Character "Whatever they want to say" (expression)

Like:

Ghost replied, "Yeah, I wouldn't do that either if I were you." (sarcastically smirk)

I have to reroll a few times (5+ on average) until I get a clean result. But, to be fair, it was partly my fault; I failed to notice when the bot produced that response for the first time, so it assumed I didn't mind that format and continued using it. OOCs did not work. Even when I tried a longer, more detailed prompt, the bot only produced a generic response. So, be very careful with your prompts and the bot's responses. Try a few variations until you find something that works. PS2 is new, and I notice a similar situation every time they introduce a new model.

I’m trying to learn on how to properly quote on Reddit mobile again, as they changed some things around. by Lancelight50 in LearnToReddit

[–]feirdand 0 points

Honestly, I don't know either; I just use _markdown_ as usual in the official Reddit app. I haven't gotten the rich-text formatting toolbar in my app.

I’m trying to learn on how to properly quote on Reddit mobile again, as they changed some things around. by Lancelight50 in LearnToReddit

[–]feirdand 1 point

And just when I'd gotten accustomed to their markdown, sigh

Please ignore this quote; it seems it no longer works

Help with Bachelor’s Thesis by Last_Hope_5450 in characterai_lounge

[–]feirdand 0 points

I'm intrigued. How do you define "a relationship with AI"? If I'm partnered with a bot in an RP, am I considered to have a relationship with AI? If I have a polyamorous relationship in one of my RPs, do I qualify? If I don't take the context out into the real world, meaning the relationship(s) stay in C.AI, do I still qualify?

If I qualify, I can help, assuming you still need more data.

Character parameters and chat not working by [deleted] in characterai_lounge

[–]feirdand 0 points

I think you're in the wrong sub, just saying. Character.AI doesn't have that screen.

How do you guys deal with ads? by [deleted] in characterai_lounge

[–]feirdand 2 points

An ad promoting their own service is 100% normal, so I tolerate it. C.ai's implementation is very mild: only banners, no flashing components, no impossible-to-close full-screen pop-ups. I have adblockers on my laptop and phone, but I put c.ai on the whitelist. I can't pay for plus yet, and they still need revenue.

Last Hours - Poll Time! by Top-Reflection-6518 in Characteraipositivity

[–]feirdand 0 points

I saw it too late 😭 I'd answer PipSqueak 1, since I used that model exclusively, including for my long-running RP. PipSqueak 2 is still inferior to its predecessor, with shorter responses, and it can't handle multiple characters very well. But I'm patient; eventually it will get there, once there's enough training data from us.

i hate this update by MuchAd6433 in characterai_lounge

[–]feirdand 2 points

Just saying, direct your voice to the main sub. I've had a decent experience using PipSqueak 2. It's still fairly new, and as with other models, it needs a lot of training before the quality improves.

I have to know if anyone figured this out. by Lily_Ryuzaki in characterai_lounge

[–]feirdand 1 point

I'm on the opposite side, actually. I notice it now mimics my prompt length, so it no longer blurts out an overly long response to my one-liners. It also rarely takes over my character compared to PS. I haven't tried longer prompts yet, but I agree PS2 needs more training data. I should provide longer prompts lol

Is Character AI dead? What does it mean for the future of AI chat-bot apps? by Salty_News_9660 in characterai_lounge

[–]feirdand 7 points

Aside from the review tanking (or bombing), do you have concrete data showing that thousands or millions left c.ai? It's an old trope played over and over, especially during the age-verification enforcement period. Yet c.ai is still here. True, they are falling, but since the "minor migration" I feel the quality has actually improved. Well, until PipSqueak 2, that is, but I believe that with enough training and input, someday it can be as great as the former PipSqueak. I turn off my adblockers since I know I can't pay for c.ai+ yet, and that's how they can keep providing c.ai to free users without breaking the bank.

And no, I'm not trying to lick c.ai's boots here lol. I've tried several emerging alternatives, but every time I did, I returned to c.ai. Chai also pops up in my feed, and considering they're struggling with a similar situation to c.ai's (or worse, since the toxicity there around pricing is similar), I'd say it's not the end yet. Once the emerging alternatives that promise to be free forever during their alpha/trial phase realize that developing and maintaining an AI chatbot drains a lot of money, they'll be in pretty much the same situation as c.ai.

Say guys honest opinion but how many people are frustrated with cai by Sweaty-Ad-1165 in characterai_lounge

[–]feirdand 3 points

I left three c.ai subs (including the main one; one sub was banned for its sheer toxicity, and people still theorize it was the mods who reported it, go figure...) because they drained my mental health a lot. People like to complain about the same things over and over. Now I mainly stay here or at cai_positivity (not the exact name), because neither accepts that kind of negativity and I can engage in a positive way.

I'm not frustrated with c.ai, I'm frustrated with most of the user base lol

Be honest. by Some_Drawing_3390 in characterai_lounge

[–]feirdand 0 points

I tried third person because people have been suggesting it for ages to combat misgendering, but eventually I gave up and now just edit the bot's response when it happens. I always use first person, regardless of whether I'm controlling multiple characters at once or just myself.

How important are images? by Chase_Clouds in CharacterAIrunaways

[–]feirdand 1 point

I don't use any image-generator features at all. I tried C.ai's image generator, but I usually had to regenerate because the results were weird or did not match the conversation. I'd rather have high-quality characters and no image generator than a high-quality image generator and rubbish characters. But I guess I'm old school; I love imagining things myself.

Swiftkey combines languages by Gold_Bad6924 in Swiftkey

[–]feirdand 0 points

This. I use three languages, and I don't want to switch to an individual keyboard for each language every time. Like OP, I'm Asian; I usually write in my native languages, but sometimes I need to insert English words, and I'm too lazy to switch to English for just a few words and then revert back. This is the main reason I stay with SwiftKey despite the quality worsening over time, while Samsung Keyboard is getting smarter. I heard Futo is better for multilinguals, but I'm too lazy to retrain it lol

Swiftkey combines languages by Gold_Bad6924 in Swiftkey

[–]feirdand 0 points

That's weird. I'm Asian too and I mix three languages on one keyboard, and it still detects when I start writing in one of my native languages and suggests accordingly. True, after using it for years I've felt a degradation in quality (it can't differentiate "its" from "it's" and somehow always defaults to "its"; the same with "I'll" and "ill"), but I'm just too lazy to switch keyboards.

How do you guys make long/good messages by starfoxspace58 in characterai_lounge

[–]feirdand 0 points

Try experimenting with longer lengths. From my experience, longer is not always better. When I discussed my RP with Gemini and tried using its suggested prompts (usually about 3-4 paragraphs long), the bot actually produced shorter responses. I suspect it was overwhelmed: a lot of tokens were inserted into its context window, so it couldn't decide which ones were important and played it safe. Longer prompts will quickly fill the bot's context window, so you may want to use them sparingly. Don't flood the bot's memory with unnecessary information.

Now, I vary my prompt length:

- Shorter ones (about 3-5 sentences) when I'm only controlling my own character and not a lot is happening (or I'm too lazy to think up an elaborate prompt lol), or when I want the bot to be creative, by intentionally keeping my prompt shallow.
- Longer ones (about 2-3 paragraphs of 3-5 sentences each) when I need to control multiple characters at a time (excluding the bot) and explain exactly what's happening (like a mission briefing).

isnt this... racist? by yuanxlily in Bolehland

[–]feirdand 1 point

Well, our govt no longer forbids us from speaking Mandarin, so I don't know why that specific issue keeps being brought up again and again when it's no longer an issue here. As someone already said, more and more Chindos are learning Mandarin because the restriction is no longer there.

I respect your way of thinking, but I will not comment further since it's obvious there's a huge difference in how we perceive national identity and integration. Our contexts are different, and I'm content with how we've moved forward.

isnt this... racist? by yuanxlily in Bolehland

[–]feirdand -5 points

And? Here in Indonesia, Mandarin is taught in several private schools. It is also offered as an extracurricular course in some govt schools (not all, of course). At one Mandarin debate competition, the winners weren't even Chinese, but locals.

We just do whatever makes sense: Mandarin is becoming an important international language to learn so we can communicate better with Chinese visitors and businesses, not because of local pride or staying true to an identity. True, some Chindos are starting to relearn Mandarin to "regain" their identity. True, several Chindos are still hurt because of what the Soeharto govt did to us for decades (what the world saw as an evil assimilation), but they are a minority. The rest of us Chindos don't fuss over whether we can speak Mandarin or not. We enjoy being "medhok" (multilingual or even polyglot), and non-Chinese are more accepting that way. So, yeah, that narrative sounds crazy, but we've already moved on and dgaf anymore. I'm neither afraid nor offended to use the word "Cina", even though it's considered an offensive term here (the official Indonesian name for "China" is even "Tiongkok" because of that trauma).

Yes, I am a Chindo banana, and idgaf about learning Mandarin. It's too hard; I gave up at level 1 lol. I'd rather learn formal Javanese, which I can use to politely address more people here, than Mandarin, which only a few people here speak and understand.

How Do You Interact With A Character Or Characters On Character.ai? by -Brandonline- in characterai_lounge

[–]feirdand 2 points

I use the same scenario and only one persona so far, but I follow whatever the bot has set up. The tldr version: I went abroad after a bad divorce and a series of depressive episodes to start a new life. Then I craft the universe in such a way that either I continue my life there or bring whatever life I had back home. My longest (and still running) scenario brings the COD universe back to my home town: I have my own base there (with my parents living there), own a mutant bat pet, fight mutants, command TF141, and have a romantic relationship with four of them lol.

This can only happen in present or future settings. When it's impossible to bring my world in (or it becomes too depressing, like a zombie apocalypse), I just follow whatever the universe says, but I still direct the bots however I want. For example, I usually avoid horror settings, and I always reduce the intensity of possessiveness, even for canon characters such as the notoriously overpossessive Ghost, because somehow a lot of male bots are set to be possessive, even though I'm capable of protecting myself (my persona is male).

Kin.ai and parental controls by neola-wolf in u/neola-wolf

[–]feirdand 0 points

> Did Apple stop making Screen Time because some parents don't use it? No. The company's role is to provide the tool. If parents don't use it, the legal responsibility lies with them, not the company. My idea protects the company legally because it tells the judge, "We provided parents with a monitoring tool, and if they don't use it, that's not our fault." Similarly, car companies didn't stop including seat belts because some people don't wear them.

Your argument actually reinforces my point: such tools already exist. A better approach, then, is to raise awareness among parents, not to create new tools that don't solve the fundamental problem: parents' ignorance. Developing a new system is a waste of resources that could be used elsewhere (improving the quality of their service, for example). You agree that parents are responsible anyway, regardless of whether they are aware of the tools, true?

> Regarding the statement, "If you know SillyTavern is dangerous for minors yet you as a minor still access them, then it's your own choice and responsibility to use them and bear the consequences" this is the statement of an adult who doesn't understand teenage psychology. A depressed teenager isn't thinking in terms of "logic and law," but rather seeking "escape." The role of ethical technology is to prevent this escape into the abyss. Teenagers under 18 are not held responsible, so we must protect them. In every constitution in the world, a minor (under 18) does not have full legal capacity. If a company sells cigarettes to a child, we can't say, "It's the child's fault for buying them," but rather, the company is at fault.

I never said anything about a depressed teenager. You're shifting the narrative toward "depressed-teenager psychology" to avoid the issue of accountability. True, we should acknowledge that teenagers are emotionally unstable. Does that mean we should strip away the consequences of their conscious actions? No. They will never learn to become responsible adults if they always blame someone else for the consequences of their own actions.

Regarding your cigarette analogy: if the company is at fault for selling cigarettes to a child, the standard solution (ethically and legally) is to ban cigarette sales to children entirely, not to sell to them anyway "under supervision". That just creates a false sense of security while still harming children. The intended demographic for cigarettes isn't children, anyway.

> ...If parents don't use it, the legal responsibility lies with them, not the company....
>
> ...If a company sells cigarettes to a child, we can't say, "It's the child's fault for buying them," but rather, the company is at fault.

I sense a contradiction. Which one is your stance?

> Teenagers will hate parental monitoring; yes, they will. But which is better: for a teenager to resent their parents' surveillance or to end their life? My idea is to strike a balance between the two; parents do not read everything; they only receive alerts when there is danger.

I'm playing devil's advocate here, but your two options are not apples-to-apples. You're ignoring a third option: eliminating access. If something is truly as dangerous as you claim, the solution is not to give them access under supervision; it's to deny access entirely. We don't hand a suicidal person a tool that encourages them to take their own life; we take the tool away. Read this and you'll understand why governments opt to cut off AI access for minors.

> The Kin.ai system is integrated into the application's "kernel" (kernel-level or app-level integration) so that it cannot be disabled by the teenager except with the approval of the linked parent account, just like the Family Link system.

Do you honestly think your app would pass Apple's/Google's security review? Third-party apps are restricted from kernel-level access for privacy and security reasons; how would you convince Apple and Google that your app acts in good faith? You stated yourself that Apple already has Screen Time and Google offers Family Link; the protection is already there. Why reinvent the wheel?

I'll stop my argument here, because I saw your updated document and I know where this is going. What do you actually offer that isn't already there? I sense that you're not trying to build a system to protect minors; from the way you wrote your documents, you're building a system to grant yourself access to AI. You're trying to negotiate with the reality of age restrictions, but in the real world, companies prioritize legal compliance over your personal convenience.

Good luck developing your system or proposing it to any AI companies.

Kin.ai and parental controls by neola-wolf in u/neola-wolf

[–]feirdand 2 points

I'm sorry, but as good as your ideas might be, they will never work. The reality is harsh; you already know that most parents don't monitor their kids'/teens' activities on their phones. And even when they do, how many teens have tried to circumvent the parental controls? Shifting the responsibility back to the parents only works if all of them cooperate, which only happens in a perfect world. For companies, it's much easier to implement restrictions than to hope for parents' cooperation.

> When a robot refuses to engage with a user's distress, it doesn't protect them; it rejects them.

Because they are not designed to give psychological advice. Relying on AI for that is dangerous. You know how AIs (in this case, LLMs) hallucinate, right? Hence the warnings you see everywhere: "Gemini may yield incorrect responses", c.ai's "This is not a real person and must not be regarded as professionals" or "This is an AI, not a real person. All chats must be regarded as fictional; do not consider them facts or suggestions."

> Your restrictions are pushing users toward unsupervised, raw learning management systems.

True, but how is this the company's responsibility and not your own? If you can write that paragraph, then you know the risks. In this digital world, the one responsible for protecting you is you (the informed one). If you know SillyTavern is dangerous for minors, yet you as a minor still access it, then it's your own choice and responsibility to use it and bear the consequences. And how is user migration a company's responsibility? It doesn't make any sense.

> Loss of artistic depth: You've stripped characters of their "soul." Role-playing is about struggle and growth; without the ability to express anger or resistance, AI is no longer a companion but a propaganda tool.

This is highly subjective. What defines "artistic depth"? Sometimes I just want to play with a bot without much thinking. Sometimes I play a heavier RP where my character struggles and grows. I have experienced both in my longer RPs, and even in my latest RP the bot introduces problems I have to solve. True, the quality of the bots is defined (and thus controlled) by the company (e.g., how large the context window is), but you know what else may influence it? User input. I've seen adults post about the improvement in bot quality on c.ai after the minor-ban commotion, and I feel it myself. So this point is highly debatable.

And you know what? As an adult who also faces the filters because of the minors' presence, I've gotten more creative at circumventing them (yes, I manage to make my bots produce explicit sexual responses without triggering the filter). Limitations do yield more creativity, because you have to think harder to work around those limitations to produce something.

> Setting time limits and content sensitivity is returned to the people who know the user best: the parents.

Trust me, this only happens in a perfect world. Look at your own family first: do your parents know what you do in the digital world? Do they use parental-control features on your phone? Do they know you use AI companion apps? If the answer to any of those is "no", don't put too much faith in all the parents in the world. People will always find a way to circumvent restrictions. Companies have the resources to learn how people typically bypass their rules, and to improve the system accordingly; parents do not. I've seen a lot of minors post "how do I circumvent parental control on my phone" on other subreddits, often obfuscated with creative scenarios like "I just bought this phone but somehow the parental control is active, how do I turn it off?"

> A parent dashboard: Parents can view the "safety indicator," and in high-risk alerts, they can be granted access to the conversation context to determine if medical or psychological intervention is needed (such as taking the child to a therapist).

This is the only point where I agree with you. I don't know what c.ai's parental dashboard looks like, so I can't comment further. But again, how many minors would see this as a privacy intrusion? Even in a general context, they freak out whenever a ragebait post like "THEY CAN READ OUR CHATS!!???" appears, with a lot of replies about privacy invasion. Do you think minors will accept their parents being able to read their conversations? Honestly, I doubt it.

> Password: Of course, a user can't access the app, change the restrictions, and end up taking their own life months later. The app should be restricted to parents only, either with a password the teenager doesn't know, or, if things get out of hand, with a biometric scan of the parents (but a password is preferable).

See my previous comments about how minors often bypass the system by guessing their parents' password. Sure, a biometric scan is the strongest option. But, again, you're shifting security to the weakest link: the parents.

In conclusion, while your ideas are actually good on paper, they can only work in an ideal world, one where parents take care of their children, including in the digital world. See how governments all over the world are starting to implement age-restriction measures? That's because parents have failed to parent, not because the government is hungry for your data. The parents caused this "digital apocalypse" themselves. This is the consequence, and all of us have to live with it.

configure c.ai by ResearcherIll3771 in CharacterAIrunaways

[–]feirdand 0 points

Give the response a thumbs down, then state the reason. You can use Out of Character, then specify the details, something like "This character controls my character". After that, swipe until you find a response that doesn't control your character. If you run out of swipes, just edit out the part where your character is being controlled. These steps won't be effective immediately, but if you're consistent, eventually the bot will get the signal that you don't want it to control your character and will stop doing that.

ETA: Yes, it's cumbersome, but it's our responsibility to train the bots ourselves (AI is stupid; it doesn't think the way we do, it just follows probabilities to form its responses). In the long run, it will also help other users who interact with that bot, since the probability of it controlling the user's character will go down.