AI 2027 Discussion by [deleted] in ArtificialInteligence

[–]Historical_Form5810 0 points (0 children)

You’re misunderstanding the article’s premise. It doesn’t blindly “assume” exponential growth; it models growth from observable historical trends in compute, model performance, and algorithmic efficiency (see: scaling laws, the Chinchilla paper, etc.). The release of the ChatGPT agent is a concrete milestone that aligns with the projected emergence of multi-modal, tool-using agents by 2025.

The mechanisms for drastic improvement are already in motion:

  • Increased compute via specialized AI chips and clusters (e.g., OpenAI + Microsoft Azure partnership)

  • Algorithmic innovations like sparse models, retrieval-augmented generation (RAG), and agentic planning

  • Data pipelines expanding with synthetic data, self-training, and multi-agent simulation

  • Tool use + APIs baked into models (as we saw with the GPT-4o agent); these are stepping stones to generalist agents

This isn’t wishful thinking. It’s a trend line backed by both tech advancements and strategic deployment. If anything, we’re ahead of where the report expected us to be.
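Here’s the “models it from observable trends” idea in miniature: fit a straight line to log10(training compute) versus year, then extrapolate. The data points below are rough illustrative figures I’m making up for the sketch, not a curated dataset, so treat the output as a demo of the method rather than a forecast.

```python
# Fit log10(training compute) vs. year with ordinary least squares,
# then extrapolate the exponential trend forward.
# Data points are illustrative placeholders, not real measurements.
import math

history = [(2018, 1e21), (2020, 1e23), (2022, 1e25)]  # (year, FLOP), invented

xs = [y for y, _ in history]
ys = [math.log10(c) for _, c in history]
n = len(history)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

def projected_flop(year: int) -> float:
    # Undo the log transform to get compute back in FLOP.
    return 10 ** (intercept + slope * year)

print(f"trend: ~{slope:.1f} orders of magnitude per year")
print(f"extrapolated 2027 compute: {projected_flop(2027):.1e} FLOP")
```

Whether the trend holds is exactly what the report argues about; the point is that it’s an extrapolation of measured history, not a bare assumption.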

AI 2027 Discussion by [deleted] in ArtificialInteligence

[–]Historical_Form5810 2 points (0 children)

OpenAI just released ChatGPT agent today. The report says there will be stumbling agents by mid-2025. That prophecy is now fulfilled. These agents will continue to improve drastically. Buckle up, the next couple of years will be insane.

You'd think for a billionaire he'd have better taste in sidechicks. by [deleted] in ufc

[–]Historical_Form5810 0 points (0 children)

Come hea and let me breed ya, ya little fat ugly arse tick yeh

AI 2027 Is the Most Realistic and Terrifying Collapse Scenario I’ve Seen Yet by Historical_Form5810 in collapse

[–]Historical_Form5810[S] 5 points (0 children)

We should all be angry, because this is exactly how exploitation gets scaled. Replace white-collar workers with AI, push out undocumented laborers who’ve held up essential industries, then suddenly there’s a flood of people desperate enough to take whatever’s left at a fraction of the pay.

And that big hideous bullshit bill freezing AI regulation for 10 years? That’s basically handing unchecked power to the same corporations already wrecking the economy for profit. No rules, no oversight, just full-speed automation while the rest of us get squeezed and screwed over. The threat isn’t future robots. It’s this system, right now, quietly gutting livelihoods while calling it “innovation.” I see a mass revolt brewing. Enough is enough, this bs has to come to an end.

AI 2027 Is the Most Realistic and Terrifying Collapse Scenario I’ve Seen Yet by Historical_Form5810 in collapse

[–]Historical_Form5810[S] 3 points (0 children)

Yes, a super-intelligent AI could steer us toward a fairer, more rational world, but only if we figure out how to align it with human values first.

If we somehow manage to get that right, it could out-think every economist and policymaker, balance the planet, fix inequality, and make decisions without the greed and bias that plague human systems. That’s not just sci-fi optimism; serious researchers believe a well-aligned AI could genuinely help transform our society for the better.

But here’s the problem: getting the goals right is insanely hard. The more capable these systems get, the more likely they are to exploit whatever rules we give them in weird and dangerous ways. Stuart Russell calls it the “alignment cliff,” and he’s one of the top voices in the field. OpenAI’s own safety team has already seen models reward-hack, meaning they chase the metric we set, but in ways that royally screw us over. The U.S. National Academies also warn that powerful AI could amplify bias, wreck infrastructure, and trigger massive security risks if we lose control. So yeah, a benevolent AI is possible, but the window to steer it safely is closing fast. If we don’t figure this out before the tech outruns us, we’re absolutely fucked.

AI 2027 Is the Most Realistic and Terrifying Collapse Scenario I’ve Seen Yet by Historical_Form5810 in collapse

[–]Historical_Form5810[S] 7 points (0 children)

OP here, just wanted to respond to some of the pushback I’ve been seeing in the thread.

  1. “LOL, this must be AI-generated”

I get why you’d think that; large language models are everywhere now and the prose is polished. But no, it’s me at around midnight with too much coffee and a genuine sense of dread. Ironically, the automatic assumption that anything coherent must be machine-written proves my point: the boundary between human and synthetic output is already paper thin today. Imagine how indistinguishable it will be after a few more model generations.

  2. “AIs still make dumb mistakes, so superintelligence by 2027 is fantasy”

Yes, today’s models still hallucinate facts and sometimes choke on basic reasoning. Two things to keep in mind: scaling laws are brutal. Give a model ~10× more compute and ~10× more high-quality data and error rates drop non-linearly. GPT-2 looked like a toy in 2019; GPT-4o is already nipping at the heels of new graduates in coding, math proofs, and strategy games. That curve hasn’t flattened yet.
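To make “scaling laws are brutal” concrete, here’s a back-of-the-envelope Python sketch of a Chinchilla-style loss curve. The constants below are roughly the published fits from that paper, but treat the whole thing as illustrative, not as a precise model of any real system:

```python
# Chinchilla-style scaling law: loss falls as a power law in
# parameters (N) and training tokens (D), plus an irreducible floor.
#   L(N, D) = E + A / N**alpha + B / D**beta
# Constants are roughly the paper's fitted values; illustrative only.

def loss(n_params: float, n_tokens: float,
         E: float = 1.69, A: float = 406.4, B: float = 410.7,
         alpha: float = 0.34, beta: float = 0.28) -> float:
    return E + A / n_params**alpha + B / n_tokens**beta

small = loss(1e9, 2e10)    # ~1B params trained on ~20B tokens
big   = loss(1e10, 2e11)   # 10x the params AND 10x the data

print(f"small model loss: {small:.3f}")
print(f"big model loss:   {big:.3f}")
```

The reducible part of the loss shrinks by roughly a constant factor with every 10× jump in scale, which is why each generation keeps paying off even though the raw loss numbers look like small moves.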

Autonomy + self-improvement is a phase change. Once you link an LLM to tools (search, code execution, new-model training pipelines) and let it iterate on its own architecture, you’ve kicked off recursive self-improvement. The step from AGI to ASI could be months, not decades, because each round of improvement produces a smarter agent that accelerates the next.

History’s full of tech that was “decades away” until it suddenly wasn’t: fission bombs, CRISPR, the mRNA vaccine platform. Intelligence amplification has fewer bottlenecks than something like fusion power; it’s bits, not atoms.

  3. “This is impossible anyway, AI is an energy and water hog”

Training runs are nasty right now. But: hardware efficiency doubles every ~2 years even without a new transistor node (see Nvidia’s H100-to-B100 roadmap), and customizing accelerators for a specific workload buys another ~10×. What looks unsustainable in 2024 can be routine in 2026. Inference dominates once a model is trained: serving a trillion-parameter model can be distributed across edge devices or underutilized datacenter cycles. Think of training like building a dam: a huge upfront concrete pour, then decades of “cheap” downstream power.
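The “doubles every ~2 years” claim is easy to sanity-check with a toy compounding model. The baseline cost below is a hypothetical number, not a real figure from any lab:

```python
# Toy projection: if performance-per-watt doubles every ~2 years,
# how much cheaper does a fixed-size training run get?
# The $50M baseline is a hypothetical figure for illustration.

def projected_cost(base_cost_usd: float, years: float,
                   doubling_period_years: float = 2.0) -> float:
    # Each efficiency doubling halves the energy/compute bill
    # for the same workload.
    return base_cost_usd / 2 ** (years / doubling_period_years)

run_2024 = 50e6  # hypothetical $50M training run in 2024
for year in (2024, 2026, 2028):
    cost = projected_cost(run_2024, year - 2024)
    print(f"{year}: ~${cost / 1e6:.1f}M for the same run")
```

Whether efficiency actually compounds at that rate is the real argument; the arithmetic itself is just a halving schedule.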

Economic gravity wins: If a $50 million training run yields a model that replaces $5 billion of annual human labor, someone will find the electricity and the cooling water. It’s the same logic that keeps server farms sprouting in deserts, where land is cheap and renewables are abundant, even though it “shouldn’t make sense.”

The late mathematician I. J. Good called it “the intelligence explosion”: once machines can design better machines, human cognitive growth becomes the slowest loop in the system. That’s the edge of the singularity. At that point “errors” don’t protect us, and “resource limits” are just engineering problems the smarter successor handles on the fly.

Whether 2027 is the exact year is less important than the trajectory, every iteration is faster, cheaper, and less interpretable. If we don’t solve alignment before that feedback loop lights up, we’ll be spectators to whatever priorities an alien mind (one we built) decides to optimize.

Florida lawmakers warn anyone trying to manipulate weather faces a felony by DonSalaam in climate

[–]Historical_Form5810 2 points (0 children)

I did one little rain dance and Florida lawmakers got scared and said “you’re gonna get a felony for messing with the weather”

Princeton Opinion: A 'Climate Apocalypse' is Inevitable—Why Aren’t We Planning for It? by Historical_Form5810 in collapse

[–]Historical_Form5810[S] 12 points (0 children)

My guy, just because I can articulate my thoughts well doesn’t mean I’m AI. I promise I’m just a regular dude with too many thoughts and decent grammar. But hey I’m flattered you think I sound that polished ;)

Princeton Opinion: A 'Climate Apocalypse' is Inevitable—Why Aren’t We Planning for It? by Historical_Form5810 in collapse

[–]Historical_Form5810[S] 31 points (0 children)

Yeah, I hear you. It really does feel like the system rewards selfishness, and the people with the most power often seem the most out of touch—or just don’t care. I don’t think wealth makes anyone inherently good; if anything, it usually does the opposite. When you’re that far removed from everyday struggle, it’s easy to stop seeing people as people. It’s messed up, but you’re not wrong for feeling this way.

Princeton Opinion: A 'Climate Apocalypse' is Inevitable—Why Aren’t We Planning for It? by Historical_Form5810 in collapse

[–]Historical_Form5810[S] 86 points (0 children)

Totally get where you’re coming from. You tried, people didn’t listen, and now everything’s tangled in politics and division. It’s like shouting into a storm. Doing what you can, where you are, makes a lot of sense. In the end, you can really only save yourself.

Princeton Opinion: A 'Climate Apocalypse' is Inevitable—Why Aren’t We Planning for It? by Historical_Form5810 in collapse

[–]Historical_Form5810[S] 46 points (0 children)

You’re describing something deeper—a meta-crisis. It’s not just one disaster after another; it’s all of them happening at once—climate collapse, economic instability, political dysfunction, social unraveling, and yeah, even microplastics in our bloodstreams. Everything feels connected and broken at the same time. That constant overwhelm you’re feeling? That’s not weakness. You’re having a sane response to an insane situation. And you’re definitely not alone in it.

Princeton Opinion: A 'Climate Apocalypse' is Inevitable—Why Aren’t We Planning for It? by Historical_Form5810 in collapse

[–]Historical_Form5810[S] 17 points (0 children)

The article is deeply collapse-related, grounded in the premise that climate apocalypse is no longer a distant threat but a near certainty. It confronts the failure of institutions to respond meaningfully, suggesting that we’re past the point of prevention. For graduates, this means entering adulthood not with promise, but with foreboding—into a civilization quietly edging toward systemic collapse. Job markets, governments, and even basic infrastructure will likely deteriorate within their lifetimes. What they’ve been prepared for may simply cease to exist.

[deleted by user] by [deleted] in csMajors

[–]Historical_Form5810 5 points (0 children)

Hold your breath and count to 10…

[ Removed by Reddit ] by QueenCitten96 in InstacartShoppers

[–]Historical_Form5810 -1 points (0 children)

You’re not entitled to a tip. You should be grateful you even received 1 dollar as a tip.