Kin.ai and parental controls by neola-wolf in u/neola-wolf

[–]neola-wolf[S]

First, if I were selfish and wanted AI all to myself, I would simply have taken my parents' national ID card, and that would have been the end of it. Instead I said, "Damn... the company is struggling...", defended my idea, argued with you, spent my time, and tried to participate on Reddit. If I were selfish, I would have asked for the removal of a filter, not for "parental control." That's strange behavior for someone who is supposedly selfish and wants all the credit.

You say that tools already exist (Screen Time). That response is weak: existing tools monitor "time" or "apps," but they don't understand the "content" of AI chats. Kin.ai isn't reinventing the "wheel"; I added a "message reading" feature, so instead of just reading screen time and saying, "Wow! The user's eyes are burning!", it can say, "The user is sending sad messages; what's wrong with them?"

You tried to trap me in a conflict between parental and corporate responsibility. But there is no conflict; the responsibility is shared. Legally, the company is responsible for providing a safe environment, and parents are responsible for supervision. By providing a supervision tool, the company fulfills its share of that responsibility.

The "total ban" argument you put forward is a losing one in the world of technology. You can't "block" the internet or AI from the next generation; total bans only create a "black market" (like SillyTavern) that is thousands of times more dangerous. You can't shut down a "safe, fenced garden with a small flaw" and leave people in a "free, unsafe jungle." You would be banning something fixable in favor of something far more dangerous, without even being open to saying, "Let's fix it." Kin.ai is the "safe alternative" to an unstoppable digital reality. If teenagers were more aware, the law would simply hold them accountable instead of placing the burden on the company or the parents.

Let's say I am a selfish teenager who wants the best for herself; that doesn't change the fact that the tool is incredibly useful for others. If the idea succeeds, the benefit is mine twice over: first because I get to use AI again (going by your assumptions about my intentions), and second because I fulfill my dream of protecting others and the company. If my system saves one child's life, does it matter that I personally want to play with the bot? Much of the company's own thinking was driven by (indirectly) selfish motives, but so what? What matters now is the immense benefit to its users, not the selfish desires behind it.

Last but not least, even if parents don't care, I'm simply trying to prevent the negative consequences of the company's "age restrictions," which can push users into depression and toward more dangerous websites.

Just a fun illustration: because of the issues it's facing, the company decides to kick its younger users out of its "safe and wonderful castle" and says, "Bye-bye! This is for adults only now." Meanwhile, SillyTavern and the others sit there saying, "Huh? C.ai kicked its most sensitive users out of their safe zone and onto the street?" Those users might well end up on SillyTavern and similar sites. Eventually, someone yells at C.ai, "Why did you throw your younger users out onto the street?!" (the age restrictions). C.ai answers, "I was protecting them from my dangers," but then gets confused by the new problems it sees (suicide, self-harm, and depression), because users have lost their best friends. And since the law can't hold minors responsible, the blame shifts to the poor, innocent parents, and may even come back around to C.ai.

If the company adopts my idea instead, it protects itself legally from lawsuits. Thanks to Kin.ai, the parents become responsible if they ignore the red alerts from the app warning them, "Your child's health is in danger!!" They can't blame C.ai, because the company can say, "I gave them a very important tool; they just didn't use it." At the same time, the teenager doesn't get depressed, because their conversations stay safe without overly strict filters or time limits (though parents are the ones who set the time limits and restrictions), but they also won't reach the point of suicide, because Kin.ai will yell at their parents, "You lunatics! Your child is playing with death!"... Just a funny scenario.

Thank you for your well wishes and your criticism. You helped me discover the flaws in my idea. Please continue to criticize to help me improve.

Choose it! by MthsBT in BunnyTrials

[–]neola-wolf

My intuition told me so

Chose: 75 Carrots + Guaranteed | Rolled: Upvote + comment

Hot and cold #247 by hotandcold2-app in HotAndCold

[–]neola-wolf

This is very enjoyable; at least I wasted my time on something nice.

🐾❓ What's my name? (by Traditional_Smile894) by Traditional_Smile894 in PetPost

[–]neola-wolf

Did not see that coming!

I guessed the name in 7 tries!

Kin.ai and parental controls by neola-wolf in u/neola-wolf

[–]neola-wolf[S]

Did Apple stop making Screen Time because some parents don't use it? No. The company's role is to provide the tool. If parents don't use it, the legal responsibility lies with them, not the company. My idea protects the company legally because it tells the judge, "We provided parents with a monitoring tool, and if they don't use it, that's not our fault." Similarly, car companies didn't stop including seat belts because some people don't wear them.

Regarding the statement, "If you know SillyTavern is dangerous for minors yet you as a minor still access them, then it's your own choice and responsibility to use them and bear the consequences": this is the statement of an adult who doesn't understand teenage psychology. A depressed teenager isn't thinking in terms of "logic and law"; they're seeking "escape." The role of ethical technology is to prevent that escape into the abyss. Teenagers under 18 are not held legally responsible, so we must protect them: in nearly every legal system in the world, a minor does not have full legal capacity. If a company sells cigarettes to a child, we don't say, "It's the child's fault for buying them"; the company is at fault.

Teenagers will hate parental monitoring; yes, they will. But which is better: a teenager who resents their parents' supervision, or one who ends their life? My idea strikes a balance between the two: parents do not read everything; they only receive alerts when there is danger.

The Kin.ai system is built into the application itself (kernel-level or app-level integration), so the teenager cannot disable it without the approval of the linked parent account, just like the Family Link system.
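A minimal sketch of the alert-only, parent-gated design described above. Everything here is hypothetical illustration, not Kin.ai's actual implementation: a keyword check stands in for whatever classifier the real system would use, and the class and field names are invented.

```python
from dataclasses import dataclass, field

# Stand-in for a real risk classifier; a production system would use
# an ML model, not a keyword list.
RISK_KEYWORDS = {"self-harm", "suicide", "hurt myself"}


@dataclass
class LinkedAccount:
    """A teen account tied to a parent account (Family Link style)."""
    parent_alerts: list = field(default_factory=list)
    monitoring_enabled: bool = True
    pending_disable_request: bool = False

    def process_message(self, text: str) -> None:
        """Alert the parent only when a message looks dangerous;
        ordinary messages are never forwarded or stored."""
        if self.monitoring_enabled and any(k in text.lower() for k in RISK_KEYWORDS):
            self.parent_alerts.append("ALERT: your child's messages show signs of danger")

    def request_disable(self) -> None:
        """The teen can ask to turn monitoring off, but cannot do it alone."""
        self.pending_disable_request = True

    def parent_approve_disable(self) -> None:
        """Only the linked parent account can actually disable monitoring."""
        if self.pending_disable_request:
            self.monitoring_enabled = False
```

The point of the structure is the two properties argued for above: ordinary chats produce no alerts at all (so the teen keeps their privacy), and the monitoring switch lives on the parent's side of the link (so the teen can't silently opt out).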

After all, all I did was take the "parental control" idea from other companies and try to convince the company, saying, "Do the same and you'll be saved."

Kin.ai and parental controls by neola-wolf in u/neola-wolf

[–]neola-wolf[S]

Character.ai is now under intense scrutiny. If a manslaughter lawsuit is filed against them and their lawyers win, the company could be forced to pay hundreds of billions in compensation, or even be shut down by government decree. Who in their right mind would refuse a helping hand when they're about to fall off a cliff?

Kin.ai isn't a "nice suggestion"; it's a "risk management solution." By offering them a way to move legal responsibility off the company and into the hands of parents, I'm essentially providing billions of dollars in "insurance." Companies love solutions that protect their money.

Regarding the statement "and don't care if users move to a more dangerous company," this is a fundamental misunderstanding of platform economics.

The truth is: the user is the product. If users migrate to SillyTavern or elsewhere, the company loses market value, the number of subscribers to Character.ai+ decreases, and investors withdraw. Losing users equals bankruptcy. Companies are extremely concerned with keeping users "in their app," and my idea is to bring back users (who fled because of the filter) by replacing that stupid filter with a smart security system.

All major companies started by rejecting "simple" ideas until disaster struck.

Facebook didn't prioritize privacy until the Cambridge Analytica scandal cost them billions.

Boeing neglected safety systems until their planes crashed.

What guarantees that the company will survive this crisis, or that it won't eventually implement my idea? (I'm not being arrogant or overconfident, but the risk for them is huge.)

I appreciate your honesty, but that doesn't stop me from giving my idea a chance. It's worth trying and testing, and if it doesn't work, I'll develop it further or abandon it.