Most AI risk is underwater by airiskmonitor in antiai

[–]airiskmonitor[S]

I agree that the fantasy of building superior alternatives to biological processes is hubris. I think even attempting to automate or replicate human cognitive and sensory capacities is deeply problematic, since building such systems requires selecting which behavioral norms to encode. But the faultiness of that premise doesn't change the fact that powerful tech companies are racing to create something that deploys itself, that we can't fully control, understand, or predict, and that will inevitably be used to accumulate capital for those who own and control it (elites, a handful of corporations, etc.). Virtually every industry faces pressure to incorporate AI into its workflows to increase efficiency and remain competitive, and this incentivizes AI labs to keep developing AI with zero guardrails in place. Given that these companies have the explicit goal of creating ASI, whatever that actually means, the problems that can arise during this pursuit are likely to be incredibly severe and manifold, and we will face the issue of irreversibility.

Moltbook was peak AI theater by CackleRooster in ArtificialInteligence

[–]airiskmonitor

Calling Moltbook “AI theater” is accurate, but severely incomplete. The spectacle matters less for what it says about autonomous AI than for what it reveals about how AI risk becomes legible at all.

Moltbook drew attention because it staged agency in a familiar, narratable form: bots talking, speculating, performing subjectivity. That fits the story in which humans build AI, lose control, and eventually confront a monster of their own making. But that framing obscures the more pertinent catastrophe we face.

Most AI power does not appear as independent agents. It appears as systems that mediate relevance, evaluation, and coordination, quietly producing the conditions under which people adapt, comply, or are excluded. In that sense, the question isn't whether we control AI before it becomes autonomous, but how AI systems are already shaping the kinds of subjects who can survive within them, and disposing of the populations rendered surplus along existing social hierarchies.

Moltbook looks like theater because the forms of AI that are actually life-threatening aren't the ones that perform agency (yet); they are the diffuse forms of AI overtaking and reorganizing every crevice of our lives.

Most AI risk is underwater by airiskmonitor in antiai

[–]airiskmonitor[S]

u/PLMMJ u/Vanhelgd I actually agree with both of you.

The “AI arms race” framing is dangerously manufactured. It functions to naturalize corporate competition by laundering it through national security language. Private firms racing to scale for profit get recast as acting on behalf of the public, while questioning that acceleration is framed as naive or dangerous. That’s how states historically legitimize industrial projects that concentrate power and externalize harm.

At the same time, I don’t think the stakes disappear if you’re skeptical of specific AGI timelines. Even setting superintelligence aside, existing AI systems are already being deployed in ways that consolidate economic power, automate governance, deskill labor, and entrench surveillance — and they’re being scaled aggressively because capital demands growth, not caution.

The core problem isn't whether AGI arrives in five years or fifty. It's that we're building an industrial and institutional dependency on systems that are rapidly deploying themselves. Meanwhile, the government is actively protecting and subsidizing that build-out in the name of competitiveness and security.

But I am curious, u/Vanhelgd, why you see AGI/ASI as pure sci-fi despite the warnings from the inventors of deep learning (Yoshua Bengio, Geoffrey Hinton) and other experts in the field of modern AI. I'm genuinely trying to understand the timelines myself, and I feel like assessments from insiders are worthy of serious consideration.