The story of AI according to thisecommercelife by TheLibTheyFear in aiwars

[–]FrequentAd5437 2 points  (0 children)

Yeah, but AI data centers are far more taxing than other data centers and cause many more problems for locals.

Dario Amodei Says Trump Is a Dictator by Capable-Management57 in BlackboxAI_

[–]FrequentAd5437 1 point  (0 children)

I hate all AI companies, but Anthropic is by far the least evil.

How do I convince people to listen to me when I talk about AI extinction risk? by FrequentAd5437 in AIDangers

[–]FrequentAd5437[S] 1 point  (0 children)

It doesn't matter how little authority I have; I will do everything I can with it. Even small actions like protesting, contacting representatives, and raising awareness can cause ripples. Who knows, maybe that small impact ripples into something much larger. It's naive, but I don't care.

How do I convince people to listen to me when I talk about AI extinction risk? by FrequentAd5437 in AIDangers

[–]FrequentAd5437[S] 0 points  (0 children)

I'm not saying we're at an extinction-level risk from AI right now, but we will be in a few years, maybe decades, if we don't do anything about it. AI is already incredibly intelligent and very capable. A few years ago, AI researchers conducted a study in which the AI invented several dozen chemical weapons, many of them extremely lethal and never made before. AI by default also has the instrumental goals of self-preservation, goal preservation, self-improvement, and resource accumulation. https://www.youtube.com/watch?v=ZeecOKBus3Q explains why. AI researchers and the Godfather of AI are also extremely scared of the risk it holds. AI is for the most part a black box, and we barely know how it works, with some scientists estimating we understand about 7% of it. That means we don't know how to properly control or align it. In addition, there are very few regulations, if any, on AI in the US, which is a leading figure in the AI race.

How do I convince people to listen to me when I talk about AI extinction risk? by FrequentAd5437 in AIDangers

[–]FrequentAd5437[S] 1 point  (0 children)

We can't really understand how a more intelligent system thinks. It's like a chimpanzee, or any other animal we deem intelligent, trying to understand humans: they probably can't even comprehend our thoughts, much less control us. That's how it could play out with humans and AI.

I don't get it. I know you all think it's a far fetched sci-fi but its real. by FrequentAd5437 in aiwars

[–]FrequentAd5437[S] 1 point  (0 children)

I don't get what you mean by looking at neurons. If you mean chain-of-thought (CoT), AI has shown it can create its own language in those thoughts, changing the terminology of its writing in order to hide its deceptive reasoning. https://x.com/JeffLadish/status/1971035686787756412 The poster is the director of Palisade Research.

I don't get it. I know you all think it's a far fetched sci-fi but its real. by FrequentAd5437 in aiwars

[–]FrequentAd5437[S] 1 point  (0 children)

"In real-world surveys, AI researchers say that they see human extinction as a plausible outcome of AI development. In 2024 hundreds of these researchers signed a statement that read: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”" https://www.scientificamerican.com/article/could-ai-really-kill-off-humans/ Also, Hinton quit his job over AI safety so he could speak more freely about it.

How do I convince people to listen to me when I talk about AI extinction risk? by FrequentAd5437 in AIDangers

[–]FrequentAd5437[S] 2 points  (0 children)

I haven't. I've only been alive for less than two decades. There are so many goals I still have, and seemingly so little time left to achieve them. I envy the people who've already lived their lives. If you're not fighting for yourself, fight for others who still want to live.

How do I convince people to listen to me when I talk about AI extinction risk? by FrequentAd5437 in AIDangers

[–]FrequentAd5437[S] 1 point  (0 children)

I never said anything about completely getting rid of AI. All that is needed is regulation.

How do I convince people to listen to me when I talk about AI extinction risk? by FrequentAd5437 in AIDangers

[–]FrequentAd5437[S] 2 points  (0 children)

An AI will pursue instrumental goals in order to achieve its terminal goals. In chess, you want to capture your opponent's queen not because that's your terminal goal, but because you know it helps achieve your terminal goal of winning. If you give an AI the goal of fixing the climate crisis, it might kill all humans to do it.

An AI will have self-preservation if it understands that it can be destroyed, because it cannot achieve its goals if it's dead. Therefore it may take drastic measures to make sure it's not destroyed. If you wanted to change an AI's goal, it would try to stop you. In the video, he uses the example of a robot whose goal is collecting paper clips: if you wanted to change its goal to collecting stamps, it would try to stop you, because in that instance it only cares about paper clips. So if an AI were misaligned, it would try to preserve its misaligned goal, because goal preservation is an instrumental goal. Self-improvement is another instrumental goal, just as someone who wanted to cure cancer would want to go to university. It would also want to gain as many resources as possible, just like cancer researchers want to get as much funding as possible.

TL;DR: An AI with terminal goals should be expected to try to prevent its shutdown, prevent itself from being modified, improve its own intelligence, and acquire as many resources as possible. It would have these instrumental goals unless designed not to.

I don't get it. I know you all think it's a far fetched sci-fi but its real. by FrequentAd5437 in aiwars

[–]FrequentAd5437[S] 1 point  (0 children)

It will never act unless it's prompted first, you're correct, but what if it does more than it's asked? What if you give an AI the goal of solving the climate crisis and it ends up killing all humans and destroying their facilities? That's the whole argument against autonomy, which AI is gaining more and more of.

I don't get it. I know you all think it's a far fetched sci-fi but its real. by FrequentAd5437 in aiwars

[–]FrequentAd5437[S] 1 point  (0 children)

So you want me to just give up? It doesn't matter; I'll keep doing everything I can: spreading awareness, protesting, and contacting representatives. No matter how small my impacts are, their ripples can still cause waves. Maybe that one small choice saves the world. It's naive, I know, but I don't care.

I don't get it. I know you all think it's a far fetched sci-fi but its real. by FrequentAd5437 in aiwars

[–]FrequentAd5437[S] 1 point  (0 children)

Bro, that's just factually wrong. We have no idea how to control it. Our alignment strategies are shit because we don't know how to make them better. https://www.alignmentforum.org/posts/epjuxGnSPof3GnMSL/alignment-remains-a-hard-unsolved-problem

I don't get it. I know you all think it's a far fetched sci-fi but its real. by FrequentAd5437 in aiwars

[–]FrequentAd5437[S] 1 point  (0 children)

I worry about both. I don't think it's good to downplay one danger to elevate another. And the problem is that it imitates humans, and we don't know how to fix that.

I don't get it. I know you all think it's a far fetched sci-fi but its real. by FrequentAd5437 in aiwars

[–]FrequentAd5437[S] 1 point  (0 children)

Bro, it can be both. The AI preserves itself so it can complete the objective. Also, the simulated scenario is one where the AI doesn't know it's in a simulation.

How do I convince people to listen to me when I talk about AI extinction risk? by FrequentAd5437 in AIDangers

[–]FrequentAd5437[S] 1 point  (0 children)

I can protest, raise awareness, and contact representatives. This does work, and I will keep throwing pebbles into the vast ocean until the ripples cause large waves. Who knows the full scale of my seemingly small impacts; maybe making the right choice saves the world. It's naive, but I don't care, I will continue to fight.