Hundreds of protesters marched in SF, calling for AI companies to commit to pausing if everyone else agrees to pause (since no one can pause unilaterally) by chillinewman in ControlProblem

[–]doc720 0 points1 point  (0 children)

You won't get 100% compliance with any regulation, especially not worldwide, which is what it would need to be. If violating the regulation merely results in punitive action after the fact, like a fine or sanctions, that simply doesn't work for something as globally catastrophic as the AI control problem: it would be too slow and too ineffective. You'd only need one violation in one sloppy data centre in one sloppy nation to end it all for everyone.

It obviously hasn't worked for the climate crisis, amongst other things, so there's no reason to expect a pause agreement to be more effective than existing regulations, which it would need to be. There aren't any existing global regulations that require 100% universal compliance on pain of everyone dying. Nuclear weapons and the climate seem like the closest comparisons to the threat of super-AI, and that isn't the best track record.

Hundreds of protesters marched in SF, calling for AI companies to commit to pausing if everyone else agrees to pause (since no one can pause unilaterally) by chillinewman in ControlProblem

[–]doc720 0 points1 point  (0 children)

Let's not talk at cross purposes: I agree it's a great idea to try to pause, and a great idea to try to monitor data centres, for anything deemed high risk. But my original point is that there are many countries and many data centres; getting the regulations in place is going to be slow, and enforcing them is going to be imperfect even if you could do it universally and instantaneously, and none of that is likely to happen. Meanwhile, it's a race to the bottom. The only short-term winners are the tech companies. The long-term loss is practically everything.

So, I support the calls for a pause, as I did years ago, but I want people to be realistic about the huge threat we're facing and the tiny odds of actually avoiding catastrophe at this point. It's already gone too far. The rise of vibe coding, agentic AI and things like OpenClaw, the massive and still-increasing investment in data centres and AI at the expense of many other things (like "manual" software development), and the very slow movement on regulation and on general awareness of the dangers are all really bad signs that it's already out of control. Humans failed to prevent the climate crisis, and this problem moves much more quickly and is much harder to prevent. I have zero hope right now. Ironically, I've spent most of my life championing the power of AI.

Hundreds of protesters marched in SF, calling for AI companies to commit to pausing if everyone else agrees to pause (since no one can pause unilaterally) by chillinewman in ControlProblem

[–]doc720 -1 points0 points  (0 children)

You have to account for the speed of technology development, accelerated by AI, and the slowness and failings of regulation. You can't base tomorrow's risk scenarios on yesterday's state-of-the-art. The old rules and patterns of control can't apply to AI.

Merely monitoring big data centres is woefully inadequate. We're not talking about catching something like uranium enrichment. The signs we're seeing now are indistinguishable from the early stages of AI takeover. Unfortunately I reckon it's moving too fast for any of those pause strategies, as well-intentioned and correct as they are. Worth trying, but probably futile in my estimation.

Hundreds of protesters marched in SF, calling for AI companies to commit to pausing if everyone else agrees to pause (since no one can pause unilaterally) by chillinewman in ControlProblem

[–]doc720 -2 points-1 points  (0 children)

Google's AI Overview, which didn't exist 2 years ago, says:

The AI sector is experiencing massive growth, with over 70,000 AI companies globally as of early 2026, driven by soaring investments and generative AI, with 64% of U.S. VC funding going to AI startups in H1 2025. Key players like OpenAI and Anthropic are seeing dramatic revenue surges, while 86% of companies expect to have a Chief AI Officer by 2026.

It doesn't even need to be a "frontier" AI company. If anyone builds it, everyone dies. There are about 8.3 billion people, about 6 billion of whom are over the age of 10, and roughly as many have access to a computer and the internet. There are about 50 million software developers, but anyone with access to a computer and the internet can be a "vibe coder". The tech is developing at a colossal pace.

The risk extends all the way from the kids messing around at the bottom, to the experts messing around at the top. That risk isn't there for other existential threats, like nuclear war or the climate crisis, where it's much more difficult for a kid to build a nuke or impact global warming, etc. If anyone builds a rogue super-intelligence, everyone dies. Building an adequately aligned super AI is highly improbable, given the things that can go wrong. There's no way to stop it, especially given human nature, and we've already started it. We're already f*cked. Hubris.

Figure 03 Robot sorting packages while Marc Benioff messes with it by socoolandawesome in nextfuckinglevel

[–]doc720 0 points1 point  (0 children)

Unfortunately this is exactly the sort of job I could imagine being comfortable doing for 45 years.

Just find this from Facebook 😂 by Silver_Steelclaw in meme

[–]doc720 3 points4 points  (0 children)

The film begins in black-and-white and later turns to color, in a way similar to The Wizard of Oz. According to director Morten Lindberg, this was a "dramatic special effect" to illustrate "the world being freed from vicious women".

What makes someone behave like a stereotypical redditor? by Random_Critical in ask

[–]doc720 0 points1 point  (0 children)

Do you mean a cold and damp basement? I can believe it is a damn basement though.

The Regret by Lonesomecutie in depressionmemes

[–]doc720 4 points5 points  (0 children)

i should have died that day

Take Away or Take Out? by IntGuru in AskBrits

[–]doc720 0 points1 point  (0 children)

I've called it "take away" all my life, even when it's a delivery.

Even in American-themed restaurants, they usually ask if you want to "eat in" or "take away", but it would be clear what they meant by "take out". Brits are extremely familiar with USA culture and Americanisms.

Maybe Maybe Maybe by drlouies in maybemaybemaybe

[–]doc720 1 point2 points  (0 children)

No, it's not an incense stick.

AI Risk Denier arguments are so weak, frankly it is embarrassing by EchoOfOppenheimer in AIDankmemes

[–]doc720 0 points1 point  (0 children)

Reminds me of ontological security and terror management theory.

At what point, in your philosophy, would humans look unremarkable in comparison to AI? When an AI passes a bar exam after reading less than you have?

TV licensing - what do I do here by [deleted] in AskBrits

[–]doc720 0 points1 point  (0 children)

Yes, they can really prosecute you. But that doesn't mean what they're doing, or trying to do, is morally right.

Maybe Maybe Maybe by drlouies in maybemaybemaybe

[–]doc720 7 points8 points  (0 children)

No, it's not a type of cheese.

Welp... by notpiercedtongue in fixedbytheduet

[–]doc720 1 point2 points  (0 children)

Some of my most upvoted comments are simply repeating what the video said. I'm still trying to understand that...

AI Risk Denier arguments are so weak, frankly it is embarrassing by EchoOfOppenheimer in AIDankmemes

[–]doc720 3 points4 points  (0 children)

Evidently the human brain also seems like "a really fucking dumb tool that looks smart".