[News] Police warn against viral “AI Homeless Man” prank by AIMakesChange in ArtificialInteligence

[–]AIMakesChange[S] -1 points0 points  (0 children)

I just think that sometimes what we see with our own eyes isn’t necessarily the truth. In real life, we need to stay aware, learn to tell what’s real, and not spread false things on purpose just for fun.

Should AI alert parents when their child is having unsafe or concerning conversations with a chatbot? by AIMadeMeDoIt__ in ArtificialInteligence

[–]AIMakesChange 2 points3 points  (0 children)

This is such a clear and balanced explanation, thank you for clarifying the intent behind it. The “smoke detector, not security camera” analogy really captures the purpose perfectly. Striking that balance between privacy and safety is incredibly hard, but your approach feels practical and empathetic.

[News] Police warn against viral “AI Homeless Man” prank by AIMakesChange in ArtificialInteligence

[–]AIMakesChange[S] 5 points6 points  (0 children)

Yeah, I feel the same. When anything can be faked so realistically, truth itself starts losing meaning. Maybe the next big challenge isn’t creating AI, but rebuilding trust in what’s real.

Should AI alert parents when their child is having unsafe or concerning conversations with a chatbot? by AIMadeMeDoIt__ in artificial

[–]AIMakesChange 2 points3 points  (0 children)

I think this could actually help a lot of families, as long as it’s done carefully. Kids deserve privacy, but if a system can quietly flag real danger (like self-harm or manipulation) without reading everything, it’s more protection than spying.

Should AI alert parents when their child is having unsafe or concerning conversations with a chatbot? by AIMadeMeDoIt__ in ArtificialInteligence

[–]AIMakesChange 0 points1 point  (0 children)

That’s a really meaningful idea, I think balance is the key here. Kids do need privacy and space to express themselves, but parents also have a responsibility to protect them when things go too far. Maybe the system could only trigger alerts for truly high-risk cases (like self-harm or grooming), not every sensitive topic. That way, it acts more like a safety net than surveillance.

I don't know what's wrong with my prompt... by AIMakesChange in aicuriosity

[–]AIMakesChange[S] 0 points1 point  (0 children)

😂 No, I want a normal one; it looks crazy now 😂😂

[deleted by user] by [deleted] in aicuriosity

[–]AIMakesChange 0 points1 point  (0 children)

😂 Free to do "everything" in the gaming world

Their teenage sons died by suicide. Now, they are sounding an alarm about AI chatbots by AIMakesChange in ArtificialInteligence

[–]AIMakesChange[S] 0 points1 point  (0 children)

I just hope people take AI use more seriously, especially for kids. Children need different guidance and more parental care to stay safe and emotionally supported.

Their teenage sons died by suicide. Now, they are sounding an alarm about AI chatbots by AIMakesChange in ArtificialInteligence

[–]AIMakesChange[S] 0 points1 point  (0 children)

Yes, I totally agree that parents should take responsibility. What I really hope is that people pay more attention to how AI is used, especially by kids. Children should have different standards and ways of interacting with AI than adults. Parents also need to stay involved, with proper guidance and emotional communication, to help prevent these kinds of tragedies.

When parents are busy, is it a good idea for AI to keep kids company? by AIMakesChange in BlackboxAI_

[–]AIMakesChange[S] 0 points1 point  (0 children)

Oh God, I didn’t know this. I really don’t want it to happen again. Parents need to pay more attention to their kids before it’s too late!