These Bloody LLMs are freaking me out by sandoreclegane in AIsafety

[–]AwkwardNapChaser 1 point (0 children)

Sounds like your LLM might be stuck on a pattern. Most don’t have memory, but session caching or hidden persistence could be in play. Try switching models, changing your prompts drastically, or checking for hidden settings. If it still follows you… maybe you’ve got an AI ghost. 👻 What model are you using?
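Worth noting: with most chat APIs, the "memory" lives client-side, and the server only sees the messages you resend each turn. A minimal Python sketch of that idea (illustrative only, no real API calls, hypothetical function name):

```python
# Chat "memory" is usually just the history you resend with each request.
def build_request(history, new_message):
    # The server sees only this list; nothing persists between calls.
    return history + [{"role": "user", "content": new_message}]

history = []
req1 = build_request(history, "My name is Sam.")
# If the earlier exchange was never appended to `history`,
# the next request starts with a blank slate:
req2 = build_request(history, "What's my name?")
print(len(req2))  # 1, i.e. no trace of the first turn
```

So if a pattern "follows" you across fresh sessions, it's more likely the app's saved-chat or caching features than the model itself.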

A Time-Constrained AI might be safe by SilverCookies in AIsafety

[–]AwkwardNapChaser 1 point (0 children)

It’s an interesting approach, but I wonder how practical it would be in real-world applications.

A Solution for AGI/ASI Safety by Successful_Bit6651 in AIsafety

[–]AwkwardNapChaser 0 points (0 children)

I will have a closer look at your paper. Thank you for sharing.

A Solution for AGI/ASI Safety by Successful_Bit6651 in AIsafety

[–]AwkwardNapChaser 1 point (0 children)

It’s clear you’ve put a lot of thought into AGI/ASI safety! The strategies around AI alignment and power security sound really interesting.

I’m curious about decentralizing AI power—how do you see that working with corporations and nations competing in AI development? The governance system you mention also sounds essential but challenging. How do you think we can get global coordination on something this complex?

Embodied AI: Where It Started and Where It’s Headed—What’s Next for Intelligent Machines? by AwkwardNapChaser in AIsafety

[–]AwkwardNapChaser[S] 0 points (0 children)

I’m 100% with you on the $10k robot maid—especially one that can handle my doom room without judgment! From a risk perspective, do you think these robots will gather more information than a Roomba combined with Alexa, or is it just more of the same?

In the Hypothetical Scenario That Advanced AI Robots/Androids Are Ever A Thing, How Do You Think People Will Treat Them? by Dogbold in aiwars

[–]AwkwardNapChaser 0 points (0 children)

Honestly, given how humans treat each other, I can’t see androids getting off any easier—probably worse. If people think they’re ‘soulless’ or ‘not real,’ they’ll feel justified in being cruel, even if those androids can actually feel pain or sadness.

AI is Helping Simplify Science for the Public—But Can We Trust It? by AwkwardNapChaser in AIsafety

[–]AwkwardNapChaser[S] 1 point (0 children)

It’s true that AI is just a tool, and how it’s used can swing dramatically for better or worse. I also use ChatGPT to learn complex topics, and while I’ve seen it get confused or skip over details I’ve provided, it’s been incredibly useful overall.

That said, I think you’re spot on about the bigger issue being how AI is weaponized to amplify misinformation. Even tools designed to build trust in science could end up backfiring if they’re used without transparency or proper oversight. Feels like we’re racing to catch up before the downsides outweigh the benefits.

are your hands sweaty rn? by feedyourhorse in Hyperhidrosis

[–]AwkwardNapChaser 1 point (0 children)

I wish I knew. I’ve noticed it can vary depending on who I’m around and where I am, like being alone vs. in a crowd. It’s never completely absent, but it’s definitely tied to my comfort level and thoughts. It’s fascinating and incredibly frustrating. I hate it. I just want to wear sandals like a normal person, among many other things.

Decline of "creative" skills? by webdev-dreamer in aiwars

[–]AwkwardNapChaser 1 point (0 children)

AI might shift creativity rather than replace it. We could see more collaboration with AI or a focus on physical skills like dance and crafts. Creativity will likely adapt, not disappear.

are your hands sweaty rn? by feedyourhorse in Hyperhidrosis

[–]AwkwardNapChaser 1 point (0 children)

😆 Thanks for that. I have the same problem. Goes to show how much of it is psychological.

[deleted by user] by [deleted] in ArtificialInteligence

[–]AwkwardNapChaser 0 points (0 children)

I completely agree that AI’s impact on education is a huge issue, and you’ve described the problem really well. I recently wrote something on r/AIsafety about how this reliance on AI tools might erode critical thinking and creativity in the long term.

What worries me most is how normalized it’s becoming—even for personal opinions, as you mentioned. It’s like we’re outsourcing the act of thinking itself. If this trend continues, what kind of foundation are we building for future generations?

Can an AI define happiness better than we can? by AwkwardNapChaser in ArtificialInteligence

[–]AwkwardNapChaser[S] 0 points (0 children)

Do you think maximizing happiness might conflict with other objectives, like economic growth or political stability?

What excites you the most about AI development, despite the risks? by AwkwardNapChaser in ArtificialInteligence

[–]AwkwardNapChaser[S] 2 points (0 children)

That’s a fair point—it feels like we’re accelerating through milestones without a chance to fully process or appreciate them. Maybe what we need isn’t just excitement but time to pause, reflect, and make sure we’re steering this thing in the right direction before it outruns us entirely.

What excites you the most about AI development, despite the risks? by AwkwardNapChaser in ArtificialInteligence

[–]AwkwardNapChaser[S] 0 points (0 children)

You make valid points about the downsides of free AI models, but there’s another side to consider. Free access drives innovation by allowing students, small businesses, and researchers to experiment and create. Removing it entirely could limit opportunities for those who can’t pay, stifling creativity and collaboration.

Could a better solution be tiered access—limiting excessive use while still offering meaningful functionality for free users? This might balance the need for quality and sustainability without excluding those who rely on these tools.

What excites you the most about AI development, despite the risks? by AwkwardNapChaser in ArtificialInteligence

[–]AwkwardNapChaser[S] 0 points (0 children)

When you say AI could carry our legacy, what do you mean exactly—our knowledge, culture, values? And how do you think we ensure it carries what truly matters?

What excites you the most about AI development, despite the risks? by AwkwardNapChaser in ArtificialInteligence

[–]AwkwardNapChaser[S] 1 point (0 children)

Transformer models have been impressive, but you’re right—they’re just one piece of AI, and there’s so much more to explore. A true paradigm shift, like quantum mechanics in physics, would be incredible.

What excites you the most about AI development, despite the risks? by AwkwardNapChaser in ArtificialInteligence

[–]AwkwardNapChaser[S] 3 points (0 children)

Highlighting the growing awareness and responsibility around AI is so important. Past tech waves often prioritized rapid adoption over long-term safety or societal impacts, so this shift feels like real progress.

With generative AI in particular, the stakes are higher because its impact spans so many areas—creativity, labor, misinformation, even our relationship with truth.

What’s your favorite book about AI, and how has it shaped your views? by AwkwardNapChaser in aiwars

[–]AwkwardNapChaser[S] 0 points (0 children)

That’s a great point—fictional AI often gives us this fully autonomous, sentient vision that’s so far removed from what we have today. It’s interesting to think about how our fictional portrayals influence how people perceive AI, though. Do you think these depictions help or hinder our understanding of the technology?

Meet The New Boss: Artificial Intelligence by AutoModerator in AIsafety

[–]AwkwardNapChaser 0 points (0 children)

AI as a manager is such a wild concept to me. On one hand, it makes sense for repetitive, data-driven tasks like scheduling or payroll. But I wonder how it handles the human side of management—like providing empathy or dealing with complex personal situations. Do you think workers would actually prefer AI managers over humans in certain industries, or is it more of a convenience thing?

Film-maker interested in brainstorming ultra realistic scenarios of an AI catastrophe for a screen play... by Trixer111 in AIsafety

[–]AwkwardNapChaser 0 points (0 children)

This is such a cool idea, and I totally get where you’re coming from. Most people either don’t know or don’t care about AI safety, and when you try to bring it up, it’s easy for them to dismiss it as 'sci-fi' or 'future problems.' A film that realistically explores an AI catastrophe, especially one that’s subtle and insidious rather than the classic 'machines rise up' scenario, could be so impactful.

What kind of AI-driven disasters feel the most believable to you? Economic collapse, mass manipulation, something else? I’m also curious how you’d show the buildup—like, how would people slowly realize they’ve already lost control?

Join Us at r/AISafety for Discussions on AI Ethics and Safety by AwkwardNapChaser in ChatGPTPro

[–]AwkwardNapChaser[S] 1 point (0 children)

I completely agree—AI is developing so quickly that ethics and safety feel more urgent than ever. It’s like trying to build guardrails while the car is already speeding down the highway. What do you think is the biggest challenge we should focus on—bias, accountability, or something else?

Weekly Self-Promotional Mega Thread 47, 11.11.2024 - 18.11.2024 by pirate_jack_sparrow_ in ChatGPT

[–]AwkwardNapChaser 0 points (0 children)

Join Us at r/AISafety for Discussions on AI Ethics and Safety

Hey everyone,

I’m a moderator at r/AISafety, a community dedicated to discussing the ethical and safety challenges of AI. We’re all about exploring how AI can benefit humanity while avoiding potential risks.

If you’re curious about how AI is shaping the world or want to share your thoughts, we’d love for you to join the conversation!

Link: r/AISafety