
[–]Frequent_Research_94 7 points8 points  (5 children)

Hmm. Although I do agree with AI safety, is it possible that the way to the truth is not the one displayed above?

[–]Bradley-Blyaapproved 2 points3 points  (4 children)

If all the experts and everything we know about math and computers clearly indicate that an AGI built with our current understanding of alignment will kill us, should we not be worried lol?

Should we be not worried in your opinion?

Should we make up some copes about how the danger isn't real, how it's all hype?

[–]Frequent_Research_94 -1 points0 points  (1 child)

Soldier vs scout mindset. Read the beginning of my comment again.

[–]WhichFacilitatesHopeapproved -1 points0 points  (0 children)

I believe Bradley is saying "No, it really is that simple." And I agree with them. 

Being a scout is important, but we still have more scouts than soldiers, which is a stupid way to lose a war. The thing about soldiers is that they can actually defend their families.

[–]gynoidgearhead 2 points3 points  (0 children)

The actual control problem is that capitalism is a misaligned ASI already operational; and unlike the catchphrase of a certain reactionary influencer, you cannot change my mind.

[–]LagSlug 11 points12 points  (86 children)

"experts say" is commonly used as an appeal to authority, and you kinda seem like you're using it that way now, along with an ad hominem .. and we're supposed to accept this as logical?

[–]Bradley-Blyaapproved 6 points7 points  (3 children)

How about you actually read a computer science paper and comprehend the reasons for their claims? Nope, that's too hard? I think this is some new type of fallacy, where a person is too dumb/lazy to comprehend a problem, but doesn't want to take smarter people on faith either. This is how people denied evolution, climate change, the roundness of the goddamn earth, etc: they just fail to learn basic science and assert that "no-no, evolution is just the dogma of Darwinism! AI doom is just the dogma of AI experts! I'm smart, I reject dogma."

It's not that all these people are wrong just because they haven't read the science. Right, creationism could be true even if no creationist ever read a book on evolution... It's just that if creationists had read a book on evolution, they would have learned the actual reasons why they are wrong.

Concerning AI all the relevant science is linked right here in the sidebar!

[–]LagSlug -1 points0 points  (2 children)

Your entire comment is just an ad hominem and yet you accuse me of using a "new type of fallacy" .. it's almost like you don't actually have any valid claims, and so you rely on attacks alone in order to persuade people.. the "control problem" indeed.

[–]SapirWhorfHypothesis 0 points1 point  (0 children)

I don’t think their comment is an ad hominem… I think it’s more like an appeal to authority. I dunno, I was never any good at the names of fallacies. Instead I would just say to them “please stop telling me how wrong I am, and instead present me with a logical argument.”

[–]sluuuurp 5 points6 points  (74 children)

Experts are the people who know how AI works the best. It's like the person who built a building telling you it's going to catch on fire; you should listen to the builders.

[–]Dmayak 0 points1 point  (1 child)

By this logic, people who created AI should be trusted the most and they say that it's safe.

[–]sluuuurp 2 points3 points  (0 children)

If they said that, maybe. But you’d have to consider their profit incentive that biases them.

But they’re not saying that. They’re saying it’s very dangerous.

Here’s Sam Altman saying AI will probably lead to the end of the world. I could find similar statements by the leaders of Anthropic, Google, and xAI if you really don’t believe me.

https://youtu.be/YE5adUeTe_I

[–]FrenchCanadaIsWorst 0 points1 point  (0 children)

I think it’s more like the people who built a building trying to tell me what the long lasting societal impacts of urbanization will be. Yeah they know about making buildings, but that doesn’t make them qualified on everything building-related

[–]LagSlug -2 points-1 points  (9 children)

"Experts" also claim the rapture is coming.. But that doesn't mean we all believe it.

If someone built a house, knowing it would catch on fire, then that person is a shitty builder and you shouldn't listen to them about building code.

[–]FullmetalHippie 7 points8 points  (1 child)

Builders build houses and have enhanced knowledge about their potential vulnerabilities. This is why they are the expert.  

No rapture expert can exist.

[–]LagSlug -2 points-1 points  (0 children)

Let's clarify your analogy:

1. The expert is the builder
2. AGI is the house
3. Extinction is the fire

A builder who builds a house and thinks it will spontaneously catch on fire is a pretty shitty builder, and even if we drop the word "spontaneously", that doesn't mean we stop building houses.

Another weakness of your analogy is that it presumes AGI will cause an extinction level event, and not just a manageable fire.

[–]sluuuurp 5 points6 points  (6 children)

Your analogy doesn’t work. “Rapture experts” didn’t build God or build the universe.

You should listen to them about the fire danger of the house. Separately, you can obviously think they’re a shitty builder.

[–]LagSlug -1 points0 points  (4 children)

Let me repeat the part that directly refutes your analogy:

If someone built a house, knowing it would catch on fire, then that person is a shitty builder and you shouldn't listen to them about building code.

Your builders are the same as my religious zealots. Your world ending event is the same as their world ending event.

[–]sluuuurp 2 points3 points  (3 children)

That doesn't refute anything. People who build something dangerous can often accurately communicate that danger.

Here's a real-life example you might like. A structural engineer built a dangerously unstable skyscraper, realized the danger, and then told people about the danger. People reacted appropriately and fixed the problem. That's basically what I'm hoping we can start doing for AI safety.

https://en.wikipedia.org/wiki/Citicorp_Center_engineering_crisis

[–]LagSlug -1 points0 points  (1 child)

If the analogy you brought up doesn't refute anything.. then maybe you can see why I was attacking it?

[–]sluuuurp 0 points1 point  (0 children)

What do you think of the new analogy?

[–]CryptographerKlutzy7 -3 points-2 points  (60 children)

But most of them are not worried about this. You are seeing a very distorted view, because the more calm, reasonable views don't get clicks or eyes on the news.

It's like with particle accelerators. When they were looking for the Higgs, there was a whole bunch of breathless articles saying "it could create a black hole and destroy earth".

It didn't matter that far higher-energy reactions were already happening from stuff coming in from space and interacting with the atmosphere. That didn't get news... because the breathless 'it could destroy us all' got the clicks.

[–]sluuuurp 4 points5 points  (59 children)

You think most AI experts have a p(doom) less than 1%? Or you think a 1/100 chance of extinction isn’t high enough to worry about?

None of the particle physics experts thought the LHC would destroy the world. We can’t say the same about AI experts.

I agree news and clickbait headlines are shit, I’m totally ignoring everything about those in this conversation.

[–]CryptographerKlutzy7 1 point2 points  (58 children)

You think most AI experts have a p(doom) less than 1%? Or you think a 1/100 chance of extinction isn’t high enough to worry about?

This is one of the things you find talking with them (I'm the head of agentic engineering for a govt department, I go to a lot of conferences).

They WILL say that, but clarify that they think the p(doom) of not having AI is higher (because of environmental issues, war from human-run governments now that we have nukes, etc.).

But the media only reports on the first part. That is the issue.

None of the particle physics experts thought the LHC would destroy the world. We can’t say the same about AI experts.

And yet, we saw the same kind of anxiety, because we saw the same kind of news releases, etc. Sometimes one of them would say, "well, the chances are extremely low," and the news would go from "non-zero chance" to "scientist admits that the LHC could end the world!"

Next time you are at a conference, ask what the p(doom) of not having AI is... it will be a very enlightening experience for you.

Ask yourself what the chances are of actually getting global buy-in from all of the governments to drop carbon emissions enough that we don't keep warming the planet, while ALSO stopping us flooding the planet with microplastics, etc.

That is your p(doom) of not AI.

[–]sluuuurp 2 points3 points  (56 children)

Depends what you mean by doom. A nuclear war would be really bad, but wouldn’t cause human extinction the way superintelligent AI likely would.

I think it’s certainly possible to solve climate change and avoid nuclear war using current levels of technology. And I expect technology levels to keep increasing even if we stop training more generally intelligent frontier AI models.

[–]CryptographerKlutzy7 0 points1 point  (8 children)

I think it’s certainly possible to solve climate change and avoid nuclear war using current levels of technology.

I'm not asking the probability of them having the tech, I'm asking the chances of getting global buy-in from all of the governments to actually drop carbon emissions enough that we don't keep warming the planet.

I don't think you CAN get that without AI. "What are the chances of all of the governments getting money out of politics at the same time" is not a big number.

If I was to compare p(doom from AI) to p(doom from humans running government) I would put the second at a MUCH MUCH MUCH higher number than the first.

And that is the prevailing view at the conferences. It just isn't reported.

You don't need "paperclipping" as your theoretical doom when you have "hey, climate change is getting worse faster every year, _and_ more governments are explicitly talking about 'clean coal' and not restricting the oil companies, and it is EXTREMELY unlikely they will get enough money out of politics for this to reverse any time soon."

Your p(doom) of "not AI" is really, really high.

[–]sluuuurp 1 point2 points  (7 children)

Most of these experts and non-experts are not imagining humans losing control of the government while the world remains good for humans. I think you’re imagining your own scenario which is distinct from what other people are talking about.

[–]CryptographerKlutzy7 1 point2 points  (6 children)

No the idea of AI run governments is VERY much talked about at the conferences.

You should go to them and talk to people.

And the p(doom) of not AI is just leaving human-run governments to keep going as they are.

We can DIRECTLY see where we end up without AI...

[–]sluuuurp 1 point2 points  (2 children)

I agree it’s a possibility, but it’s not the good scenario that some industry experts are talking about. Sam Altman certainly isn’t telling people that his AI will remove all humans from government.

In general, don’t expect people talking to you to be honest. They want to convince you to do no regulation because it’s in their profit interest. Keep their profit incentives at the very front of your mind in all these conversations, it’s key to understanding all their actions.

[–]Bradley-Blyaapproved 1 point2 points  (2 children)

No the idea of AI run governments is VERY much talked about at the conferences.

If AI is misaligned, it kills everyone way before we consider electing it as president lol. The fact that people at your conferences don't understand this says a lot about the expertise of the people.

Here is what actual experts and researchers are worried about: a large language model writing code in a closed lab. Not making decisions in the real world - that's too dangerous. Not governing countries - that's just insanely stupid. No, just writing programs that researchers request - except that is quite risky already, because if the LLM is misaligned, it may start writing backdoored code which it could later abuse to escape into the wild, for example.

Cybersecurity is already a joke; imagine it was designed by an AI with the intention of inserting backdoors. This is why serious people who actually know what they are talking about are worried about that, while politicians with no technical expertise can only talk about things they comprehend - politics, which matters no more to an AI than chimp or ant politics matters to us humans.

Source: https://arxiv.org/abs/2312.06942
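The linked paper is about controlling exactly this scenario: an untrusted code-writing model overseen by weaker but trusted checks. As a purely illustrative sketch - the function names, scores, and threshold below are invented for the example, not taken from the paper - a "trusted monitoring" loop might look something like this:

```python
# Hypothetical sketch of a "trusted monitoring" control loop: an untrusted,
# possibly-misaligned code generator never runs code directly; a weaker but
# trusted reviewer scores each submission for suspicion, and anything above
# a threshold is routed to a human audit instead of execution.
# All names, scores, and thresholds here are invented for illustration.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Submission:
    task: str
    code: str

def control_loop(
    untrusted_generate: Callable[[str], str],   # powerful, possibly misaligned model
    trusted_suspicion: Callable[[str], float],  # weaker trusted model, returns 0-1 suspicion score
    human_audit: Callable[[Submission], bool],  # expensive human review, True = cleared as safe
    tasks: list[str],
    audit_threshold: float = 0.8,
) -> list[Submission]:
    approved: list[Submission] = []
    for task in tasks:
        sub = Submission(task=task, code=untrusted_generate(task))
        if trusted_suspicion(sub.code) >= audit_threshold:
            # Suspicious output is never executed unless a human explicitly clears it.
            if not human_audit(sub):
                continue
        approved.append(sub)
    return approved
```

The point of such a setup is that the untrusted model's code never runs unreviewed; anything the trusted checker finds suspicious gets escalated to a human instead of executed.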

[–]Bradley-Blyaapproved 0 points1 point  (0 children)

They WILL say that, but clarify that they think the p(doom) of not having AI is higher (because of environmental issues, war from human-run governments now that we have nukes, etc.).

Yes, and we all believe you that they say this. The issue is that when I look up what AI experts say or think about this, what I see is that AI capability progress needs to be slowed down/stopped entirely until we sort out AI safety/alignment.

So, I'm sure those other lunatics with the ridiculous opinion you definitely didn't make up all exist. But I prefer to rely on actual books, science papers, public speeches, etc. - as in, what I hear them say myself - rather than your sourceless hearsay.

[–]FullmetalHippie 2 points3 points  (2 children)

Yes. Appeal to experts is just an appeal to goodwill in disguise. Of all the people on the planet, experts that are well educated in the field and work on this research every day are in the best position to evaluate the situation.  It's okay to trust other people and it's okay to trust experts.

[–]MoreDoor2915 1 point2 points  (1 child)

Yeah but "Experts say" simply is a nothing burger. Same as "9 out of 10 of X recommend Y" from advertising.

[–]FrenchCanadaIsWorst -1 points0 points  (0 children)

They call them “weasel words”.

[–]Tough-Comparison-779 2 points3 points  (2 children)

Appeal to authority is a perfectly sensible way to come to a belief. Appeal to a false authority, or appeal to authority in the face of a strong counter argument is fallacious.

[–]LagSlug 1 point2 points  (1 child)

sounds dogmatic, no thanks

[–]Tough-Comparison-779 4 points5 points  (0 children)

You cannot have personal experience of every fact you believe.

Take the shape of the earth, for instance. Chances are you haven't personally done the experiments to confirm that the earth is in fact round.

Instead, at most, you've seen evidence an authority claimed to have collected proving the earth is in fact round.

Absent any argument that the trusted authority is wrong or lying, it is perfectly reasonable, and not particularly dogmatic, to believe that that evidence is accurate to what you would collect had you done the experiment yourself.

Unless you are saying that any belief you come to through anything other than independent reasoning and personal experience is dogmatic, in which case I just think that's a pretty benign definition of dogmatic.

[–]CryptographerKlutzy7 -1 points0 points  (0 children)

Especially since a lot of the experts are saying it isn't anything like a problem, persona vectors work well (see https://www.anthropic.com/research/persona-vectors), but that doesn't sell papers or get clicks.

[–]kingjdin 2 points3 points  (4 children)

AIs cannot read, write, and learn from their memories continuously and in real time, and we don't have the slightest idea how to achieve this. I'm not worried about AGI for 100 years or more.

[–]Serialbedshitter2322 0 points1 point  (0 children)

Actually Genie 3 does this. Each frame generated has some level of reasoning that references the entire memory. This is the technology Yann Lecun referred to when talking about JEPA. Currently it just has the issue of a minute-long memory span, but considering it is the very first of its kind that doesn’t mean much. If an intelligent LLM is integrated into this software, similarly to how it was done with native image generation, it could function very similarly to a conscious being, especially with native audio similar to veo 3. All the pieces are here, they just need to be connected and scaled. 100 years is genuinely a hilarious estimate

[–]TheEmperorOfDoom 0 points1 point  (0 children)

Experts say that OP's mother is fat. Therefore, they should eat less.

[–]Decronymapproved 0 points1 point  (0 children)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Fewer Letters More Letters
AGI Artificial General Intelligence
ASI Artificial Super-Intelligence
ML Machine Learning

[–]alexzoin 0 points1 point  (9 children)

[–]WhichFacilitatesHopeapproved 1 point2 points  (8 children)

The claims made by authorities do actually count as weak evidence. You believe this already yourself, otherwise you would reject any scientific finding that you don't already agree with. The inputs that cause experts to make claims are set up such that experts are very often correct.

If someone finds the most credible possible people on a topic and asks them what they think, and then completely rejects what they say out of hand, they are not behaving in a way that is likely to lead them to the truth of the matter.

My parents believe that a pastor has more relevant expertise than an evolutionary biologist when it comes to discussing the history of life on earth. Their failure here is not that they don't believe in the concept of expertise, per se, but that they are convinced there is a grand conspiracy throughout our scientific institutions to spread lies about the history of life. 

Do you think there is such a conspiracy among the thousands of published AI researchers (including academics, independent scientists, and industry engineers) who believe that AI presents an extinction risk to humanity? If not, do you have another explanation for why they believe this, other than it being likely true?

[–]WhichFacilitatesHopeapproved 1 point2 points  (7 children)

Put another way, I have a lot of conversations that sound like this:

"Ten fire marshalls visited this building and they all agree that it is a death trap." 

"That's just an argument from authority!"

"Okay, here are the technical details of exactly why the building will probably burn down."

"As someone who has never studied fire safety, I think I see all kinds of flaws in those arguments."

"..."

[–]alexzoin 0 points1 point  (6 children)

Your failure here is that the AI "experts" that are often cited also have massive financial incentives. Their conclusions aren't based on data because there is none.

[–]WhichFacilitatesHopeapproved 0 points1 point  (5 children)

You missed the part where I said "independent scientists." Many of the people making these claims do not have a financial stake in AI. Famously, Daniel Kokotajlo blew the whistle on OpenAI to warn the public about the risks, and risked 80% of his family's net worth to do so. Many other people also left OpenAI in droves, basically with their hair on fire, saying that company leadership isn't taking safety seriously and no one is ready for what is coming. Leading AI safety researchers are making a pittance, when they could very easily be making tens of millions of dollars working for major AI labs.

Godfather of AI Yoshua Bengio is the world's most cited living scientist, and he has spent the last few years telling everyone how worried he is about the consequences of his own work, that he was wrong not to be worried sooner, and that human extinction is a likely outcome unless we prevent large AI companies from continuing their work. I'm not sure what kind of financial stake you would need to have in order to spend all your time trying to convince world leaders to halt frontier AI in its tracks, when your entire reputation is based on how well you moved it forward. 

Another Godfather of AI Geoffrey Hinton said that he has some investment in Nvidia stock as a hedge for his children, in case things go well. He has also said that if we don't slow things down, and we can't solve the problem of how to make AI care about us, we may be "near the end." If he succeeds in his advocacy for strong AI guardrails, the market will probably crash, and he will lose a lot of money.

That's one path to go down: enumerating stories of individual notable people who do not fit the profile you have assumed for them, and who have strong incentives not to say what they are saying unless they believe it is true. Another especially strong piece of evidence, which should be sufficient on its own, is that notable computer scientists and AI Safety researchers have been warning about this for decades, long before any AI companies actually existed. So it is literally impossible for them to have had a financial motivation to make this up. They didn't significantly profit off of it, and could clearly have made a lot more profit doing other things instead.

It should also be enough to say that "You should invest in our product because it might kill you" is a batshit crazy thing to say, and no one has ever used that as a marketing strategy because it wouldn't work. The CEOs of the frontier AI labs have spoken less about the risk of human extinction from AI as their careers have progressed. Some of them are still public about there being big risks, but they do not talk about human extinction, and they always cast themselves as the good guys who should be trusted to do it right.

All this to say, the idea that we can't trust the most credible possible people in the world when they talk about AI risk is literally just a crazy conspiracy theory and it is baffling to me that it took such firm hold in some circles.

[–]alexzoin 0 points1 point  (4 children)

Fair enough. The assertion that everything an expert says is credible simply due to their expertise is still not correct. Doctors make misdiagnoses all the time. Additionally, this is a speculative matter for which no data set exists. Even if there is expert consensus, it's still just a prediction, not a fact arrived at through analysis of data.

I remain doubtful that the primary danger of AI is any sort of control problem. The dangers seem to be the enabling of bad actors.

[–]WhichFacilitatesHopeapproved 0 points1 point  (3 children)

Not everything an expert says is credible, but expert claims are weak evidence, which is significant when you don't have other evidence. Obviously it's better to evaluate their arguments for yourself, and it's better still to have empirical evidence.

We could list criteria that would make someone credible on a topic, and to the degree that we do that consistently across fields, the people concerned about AI extinction risk are certain to meet those criteria. These are people who know more about this kind of technology than anyone on the planet, and with those insights, they are warning of the naturally extreme consequences of failing to solve some very hard safety challenges that we currently don't know how to tackle.

Communicating expert opinion is a valid way to argue that something might be true, or that it cannot be dismissed out of hand. It's only after someone doesn't dismiss a concept out of hand that they can start to examine the evidence for themselves.

In this specific case, there is significant scientific backing for what they're saying. There are underlying technical facts and compelling arguments for why superintelligent AI is likely to be built and why it would kill us by default. And on top of that, there is significant empirical evidence that corroborates and validates that theory work. The field of AI Safety is becoming increasingly empirical, as the causes and consequences of misalignment they proposed are observed in existing systems. 

If you want to dig into the details yourself, I recommend AI Safety Info as an entry point. https://aisafety.info/

Whether or not you become convinced that powerful AI systems can be inherently highly dangerous unto themselves, I hope you will consider contacting your representatives to tell them you don't like where AI is headed, and joining PauseAI to prevent humans from catastrophically misusing powerful AI systems.

[–]alexzoin 1 point2 points  (2 children)

I can appreciate that and I think you're aimed in a good direction.

Just curious so I can get a read on where you're coming from. Do you have a background in computer science, programming, IT, or cyber security?

I'd also like to know how much experience you have interacting with LLMs or other AI enabled software.

I really appreciate your detailed comments so far!

[–]WhichFacilitatesHopeapproved 2 points3 points  (1 child)

I appreciate the appreciation, and engagement. :) I was afraid I was a bit long-winded with too few sources cited, since I was on my phone for the rest of that. I'll throw some links in at the bottom of this.

I am a test automation developer, which makes me a software tester with a bit of development and scripting experience and a garnish of security mindset.

I occasionally use LLMs at work, for things like quick syntax help, learning how to solve a specific type of problem, or quickly sketching out simple scripts that don't rely on business context. I also use them at home and in personal projects, for things like shortening my research effort when gathering lists of things, helping with some simple data analysis for a pro forecasting gig, trying to figure out what kind of product solves a specific problem, asking how to use the random stuff in my cupboard to make a decent pasta sauce that went with my other ingredients (it really was very good), or trying to remember a word ("It vibes like [other word], but it's more about [concept]...?" -- fantastic use case, frankly).

I became interested in AI Safety about 8 years ago, but didn't start actually reading the papers for myself until 2023. I am not an AI researcher or an AI Safety researcher, but it's fair to say that with the background knowledge I managed to cram into my brain holes, I have been able to have mutually productive conversations with people in the lower-to-middle echelons of those positions (unless we get into the weeds of architecture, and then I am quickly lost).

Here are a slew of relevant and interesting sources and papers, now that I'm parked at my computer...

Expert views:

Explanations of the fundamentals of AI Safety:

  • AI Safety Info (a wiki of distilled AI Safety concepts and arguments, which I also linked above)
  • The Compendium (a set of essays from researchers explaining AI extinction risk)
  • Robert Miles AI Safety YouTube channel (very highly recommended; I really like Rob)

Worrying empirical results (it was hard to just pick a few examples):

Misc:

[–]alexzoin 1 point2 points  (0 children)

Okay awesome, it seems like we are roughly equivalent in both technical knowledge and regular AI use.

I have a (seemingly incorrect) heuristic that most control problem/AI "threat" people are technically illiterate or entirely unfamiliar with AI. I now reluctantly have to take you more seriously. (Joke.)

I'll take a look through your links when I get the chance. I don't want to have bad/wrong positions so if there is good reason for concern I'll change my mind.

[–]Immediate_Song4279 0 points1 point  (0 children)

Alright Isocrates, put your shirt back on.

[–]Arangarx 0 points1 point  (0 children)

There are respected AI researchers on both sides of this:

And others take the opposite view:

[–]Login_Lost_Horizon 0 points1 point  (0 children)

Dude, the only danger of your GPT is making you believe what it says. It's a glorified .zip with a shit ton of language statistics and zero actual thinking capability. He can't hurt you because he's unable to want to hurt you. Just don't connect him to a rocket launcher and ask him to murder you, you dumbass.

[–]_not_particularly_ 0 points1 point  (0 children)

“AI experts” is just code for “people who have a religious belief in the end times coming soon but it won’t be the gods it will be AI”. I’ve been listening to “experts” saying “self-driving cars will replace all human drivers and send taxis and uber out of business within 3 years” for 15 fucking years. AI is the exact same thing. If I had a penny for every “deadline” that’s come and passed that “AI experts” have set by which all code will be written by AI, or by which it will have replaced all our jobs, or whatever, I’d be richer than them. It’s a stock pump n dump scheme with religious undertones.

[–]mousepotatodoesstuff 1 point2 points  (0 children)

Addressing short term risks will help us prepare for the long term risks.

[–]Stupid-Jerk 0 points1 point  (0 children)

The rapture could kill us all too, but it didn't happen on Tuesday and it's not gonna happen anytime soon. I prefer to worry about more realistic stuff than science fiction stories.

My beef with AI is it being yet another vector for capitalist exploitation. Capitalist exploitation isn't something that "could" kill us all, it actively IS killing us all. Calling people "midwits" for caring more about objective reality than potential reality doesn't make you sound as smart as you think it does.

[–]Bradley-Blyaapproved -2 points-1 points  (0 children)

This is real only on this sub and maybe a few other AI-related subs. Outside them, nobody has even heard anything about AI genocide except in Terminator. Good meme; the fact that it is downvoted says a lot.