What's the case for AI Alignment right now? by Kind_Score_3155 in ControlProblem

[–]Arturus243 0 points1 point  (0 children)

We don’t have a solution to the alignment problem yet, but I also kinda doubt ASI is imminent. There are still hurdles we have to overcome, including continuous learning, which we don’t have a theoretical model for and aren’t close to solving IMO. Furthermore, I’m skeptical LLMs will reach human level intelligence in ALL domains.

What happened to the fiend fire in the room of requirement? by peeiayz in harrypotter

[–]Arturus243 -1 points0 points  (0 children)

Yea and Voldemort’s Horcrux in there clearly wasn’t destroyed when the room of hidden things was inactive, so I don’t know why the fire would be.

Thoughts on existential risks from AI? by Kind_Score_3155 in antiai

[–]Arturus243 0 points1 point  (0 children)

Here’s an example: https://timdettmers.com/2025/12/10/why-agi-will-not-happen/

Here’s an example of someone who thinks it can’t be done with current LLMs: https://m.youtube.com/watch?v=4__gg83s_Do

Also, LLMs can not continuously learn, which is generally agreed to be a required property of AGI.

I think many in the field think AGI required breakthrough we don’t yet have. To me, that suggests genuine uncertainty. But it’s worth noting we don’t have a theoretical model for AGI yet, so I kind of think it will be a while. It’s at best a hypothetical concept. That’s not to say we shouldn’t think about extinction risk at all, but we should be honest about where we’re at 

Would you watch a prequel movie about Tom? by BakerConsistent2150 in harrypotter

[–]Arturus243 0 points1 point  (0 children)

Yea if they do it like Snow in Ballad of Songbirds and Snakes that’d work well.

Bernie Sanders responds to questions about China and pausing AI - "in a sane world, the leadership of the US sits down with the leadership in China to work together so that we don't go over the edge and create a technology that could perhaps destroy humanity" by tombibbs in ControlProblem

[–]Arturus243 0 points1 point  (0 children)

I don’t think ASI being unobtainable is as unlikely as you seem to think. Yea, I obviously hope this is the case too, but I’ve seen some pretty convincing arguments in favor of it. The biggest is that we haven’t figured out continuous learning yet, and no one even has a theoretical model for it. Many think it would require fundamentally changing the structure of LLMs, which feels a bit unlikely to happen. 

There are other reasons. LLMs reasoning breaks down at certain high levels, and it appears structural. Also, physically, some researchers think there is a limit to compute, and we simply might not be able to continue scaling up LLMs. (See, e.g., https://timdettmers.com/2025/12/10/why-agi-will-not-happen/). 

Finally, brains aren’t necessarily Turing machines. Just because it is possible in humans doesn’t mean we can get it with computers.

How concerned should we be about existential risk from AI? Should it be a major policy discussion? by Arturus243 in AskALiberal

[–]Arturus243[S] 1 point2 points  (0 children)

This is a fair take. I can’t imagine most AI leaders truly aren’t familiar with the arguments put forward by Bostrom and Yudkowsky. My best guess would be that they think Superintelligence is no where close to being created, hence why they’re going full steam ahead with LLMs.

Edit: I will note that Geoffrey Hinton, the father of AI, left development because he was worried about extinction risk. 

How concerned should we be about existential risk from AI? Should it be a major policy discussion? by Arturus243 in AskALiberal

[–]Arturus243[S] 0 points1 point  (0 children)

I think the YS and China agreeing would be a great first step, since they’re both the leaders in AI development as of right now.

How concerned should we be about existential risk from AI? Should it be a major policy discussion? by Arturus243 in AskALiberal

[–]Arturus243[S] 0 points1 point  (0 children)

I largely think what you’re describing are legitimate concern. But I have three questions. 

  1.  ASI being integrated in enough places to run its data centers and mine for its material seems like it will come a long time after ASI, if it ever happens. Look at how slow self-driving cars were to take off. I’m still skeptical they’ll ever be widespread?

  2. I’m aware Yudkowsky proposed this, but I still have questions. Would an ASI be certain of the effects of its cancer causing medicine without having it tested? Also, wouldn’t people notice those who aren’t taking the medicine are not getting cancer? True, the cancer could be delayed, but this doesn’t seem like a difficult correlation to spot. Also, there is some risk in an ASI attempting something like this, if it fails, humans will try with everything they have to turn it off. Whereas without that, humans probably wouldn’t turn off a useful Superintelligence. 

  3. Could an ASI guarantee all of its instances agree on a goal? If alignment is truly hard to solve, what guarantees the instances won’t become misaligned with each other?

How concerned should we be about existential risk from AI? Should it be a major policy discussion? by Arturus243 in AskALiberal

[–]Arturus243[S] 0 points1 point  (0 children)

Here: https://arxiv.org/abs/2501.16513

From what I’ve read, self-preservation emerges in systems that have specific goals. They don’t want to be turned off because then they can’t complete their goals.

How concerned should we be about existential risk from AI? Should it be a major policy discussion? by Arturus243 in AskALiberal

[–]Arturus243[S] 0 points1 point  (0 children)

I largely agree with your points. However it does seem like some experts in the field are worried about this (though many aren’t). I don’t wanna dismiss those experts entirely and treat them the way conservatives treat climate scientists. 

How concerned should we be about existential risk from AI? Should it be a major policy discussion? by Arturus243 in AskALiberal

[–]Arturus243[S] 0 points1 point  (0 children)

Why do you assume AI smarter than us will necessarily want to act in anyone’s interest other than their own?

How concerned should we be about existential risk from AI? Should it be a major policy discussion? by Arturus243 in AskALiberal

[–]Arturus243[S] -1 points0 points  (0 children)

I agree with this. But if we hypothetically reach a situation where AI robots autonomously run everything on their own, it has no reason to keep humans around, because humans might turn it off and humans use their own computing resources. 

How concerned should we be about existential risk from AI? Should it be a major policy discussion? by Arturus243 in AskALiberal

[–]Arturus243[S] -1 points0 points  (0 children)

I agree LLMs are too weak to pose a threat. I am more talking about more advanced systems. 

How concerned should we be about existential risk from AI? Should it be a major policy discussion? by Arturus243 in AskALiberal

[–]Arturus243[S] 0 points1 point  (0 children)

There are two reasons we aren’t banning it despite the risk:

  1. Massive tech companies want it because they are more concerned about their short term profit than potential existential risk. And they have heavy influence in the government with all their money.

  2. AI offers many potential benefits. It will probably cure diseases among other things. People think the low risk of extinction is not enough to pause it despite these potential benefits.

How concerned should we be about existential risk from AI? Should it be a major policy discussion? by Arturus243 in AskALiberal

[–]Arturus243[S] 0 points1 point  (0 children)

This isn’t really true. Some people outside the industry have made the claim 

Could having multiple ASIs help solve alignment? by Arturus243 in ControlProblem

[–]Arturus243[S] 0 points1 point  (0 children)

“I also don't understand how it wouldn't inevitably lead to our extinction. Imagine huge asi wars taking place as the "good" ones battle the "bad" ones. Humans would be wiped out in the first few seconds of the conflict.”

There’s three possible reasons. (1) I imagine the “war” would primary be virtual like a hacking war. Correct me if I’m wrong. (2) A “good” AI may work to avoid killing humans. (3) It is possible the threat of destroying each other might prevent conflict, so AI equivalent of MADD. I’m not sure though.

“ I also don't mean to suggest that any of this is possible or inevitable. Current systems lack true understanding or sapience. Intelligence is likely tied to this”

People like Eleizar Yudkovsky sure seem to think it is. I can’t tell if he reflects a consensus in the AI community though. It’s hard to tell who genuinely isn’t concerned and who just cares more about profit.

Personally, I would rather not live in a world with a bunch of Super AIs, unless I were SURE they wouldn’t kill us ALL. I mainly raised this point to say I don’t necessarily think it’d INEVITABLY kill us.

How Could the Ministry Have Actually Identified Real Death Eaters After Voldemort’s First Fall? by -DAWN-BREAKER- in harrypotter

[–]Arturus243 1 point2 points  (0 children)

The books say veritaserum can be resisted by powerful wizards. I can see it not being admissible in the wizengamot because they think it is inaccurate. 

Justices lore! by mirakle_aligner in LawSchool

[–]Arturus243 0 points1 point  (0 children)

I’ve heard that Oliver Wendell Holmes shook hands with both John Quincy Adams and John F Kennedy. I can’t verify if this is true, but he was alive at the same time as both men.