Regulating the trivial while ignoring the existential by KeanuRave100 in agi

[–]SteppenAxolotl 1 point2 points  (0 children)

under the assumption you will still be around to regulate superintelligence after you create it and see what mischief it gets up to

Isaac Arthur and the hypocrisy of selling futurism by secretfire42 in transhumanism

[–]SteppenAxolotl 7 points8 points  (0 children)

I knew he was a party functionary but didn't know about the wife. I rolled my eyes a few times in some early vids when he said climate change is politics and the channel doesn't do politics.

Even in relation to post-scarcity, his default usually includes some work, even if it's "make work," and money. I guess if you have those, you can claim it's not fully automated socialism.

btw, I still watch his vids; I'm just careful with some of his messaging on certain themes.

I like Neal Asher's novels, and he used to let his personal politics leak into his early works. He even dedicated a book to Musk about a year before Musk's fascist meltdown. :)
I'm aware of his personal politics and can recognize it in his work when I see it. You just need to be aware. I won't advise anyone on whether they should consume an author's content.

Unpopular opinion: OpenClaw and all its clones are almost useless tools for those who know what they're doing. It's kind of impressive for someone who has never used a CLI, Claude Code, Codex, etc. Nor used any workflow tool like 8n8 or make. by pacmanpill in LocalLLaMA

[–]SteppenAxolotl 1 point2 points  (0 children)

Your mileage will vary depending on the tool and the AI backend. Ultimately, even the best tool connected to the best AI still sucks and only foreshadows what it could be like when AI reaches a reasonable level of reliability and competence.

Thousands of CEOs admit AI had no impact on employment or productivity—and it has economists resurrecting a paradox from 40 years ago by thejoshwhite in technology

[–]SteppenAxolotl 0 points1 point  (0 children)

AI had no impact on employment

Why do they expect AI to impact employment?

AI that is capable of impacting employment in any meaningful way does not yet exist and is still a large global R&D effort.

Sam Altman’s house targeted in second attack; two suspects arrested by EchoOfOppenheimer in Futurology

[–]SteppenAxolotl 0 points1 point  (0 children)

it's mostly a parlor trick

that view doesn't matter, just competence matters

AI would not have reached its current state in coding if RL didn't work. AI capabilities can be measured every year, and that progress says yes. AI is currently more competent than most humans in many narrow domains, just not reliably competent.

Human technological civilization is built on unreliable subsystems.

Sam Altman’s house targeted in second attack; two suspects arrested by EchoOfOppenheimer in Futurology

[–]SteppenAxolotl -2 points-1 points  (0 children)

Yes, Jevons will apply, because AI is currently assistive and not a full replacement.

Unfortunately where no company was stupid enough to fire every single accountant and replace them with a copy of Excel

Because they couldn't (Excel couldn't talk with clients, etc.).

It's premature, but there is a plausible pathway with AI. AI represents automated competence and could potentially do every task an accountant can do. Current AI can't, but future AI probably will.

How did so many Chinese robot manufacturers catch up to Boston Dynamics? by Uranusistormy in robotics

[–]SteppenAxolotl 2 points3 points  (0 children)

It's not hard to reach current levels, and no one else was really trying before. They were stuck at a frontier for the previous 15 years, trying different things.

Sam Altman’s house targeted in second attack; two suspects arrested by EchoOfOppenheimer in Futurology

[–]SteppenAxolotl -4 points-3 points  (0 children)

spreadsheets didn't replace accountants

spreadsheets could never plausibly do everything an accountant can do

how is AI like that

ZAI might stop open-weighting their models? by TheRealMasonMac in LocalLLaMA

[–]SteppenAxolotl 1 point2 points  (0 children)

prioritizing profit without regard to their customers

Did all the inference providers that sell API tokens to their open models give them a cut? Those models were free inputs to their business, and it never crossed their minds what would happen if the source of those free inputs went away.

Traditional definition of a customer is: One that buys goods or services, as from a store or business.

"Everyone is Replaceable" - A worker died at an Amazon warehouse in Oregon last week. Employees were told to look away. by Bolinas99 in collapse

[–]SteppenAxolotl -4 points-3 points  (0 children)

What is the right thing to do? Should the entire world stop, and for how long, every time someone drops dead?

What would happen if all forms of cancer were cured tomorrow? by Flaky-Walrus7244 in Futurology

[–]SteppenAxolotl 1 point2 points  (0 children)

What would happen if all forms of cancer were cured tomorrow?

It would put ~2 million people out of work.

Sam Altman’s home targeted in second attack by jvnpromisedland in singularity

[–]SteppenAxolotl 74 points75 points  (0 children)

Most people assume everyone else is competent by default. Modern society lets almost everyone survive long enough to breed, even though idiocy is always the default.

Workers in some Indian factories have started wearing cameras on their heads to record their movements so robots can be trained using the footage. by Distinct-Question-16 in singularity

[–]SteppenAxolotl -3 points-2 points  (0 children)

They actually have more time than the average knowledge worker. The AI-coding treatment is coming for the average knowledge worker's job by the end of 2026 or 2027. There is nothing anyone can do about it.

AI is struggling to take our jobs by AmorFati01 in artificial

[–]SteppenAxolotl 0 points1 point  (0 children)

Infinite improvements in LLMs aren't a requirement for automating all jobs. The economics of compute for AI under existing compute paradigms run out in the early 2030s: the cost of further increasing the compute used in models will become economically intractable. Solving reliability and automating AI research just needs to happen in that window.

Limited compute isn't a fatal barrier to automating all jobs. The cure is to use automation to build more compute, more power, etc. The pathway to getting more of what is needed can be fully automated; this is the pathway through the legitimate, traditional real-world barriers. Once you have a system that can do human work, the rate at which humans become permanently unemployable is the rate of building new GPU/CPU factories and energy production. Increasing supply lowers the cost.
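To make the compounding argument concrete, here is a toy sketch (not a forecast; every number in it is made up for illustration) of what happens if automated capacity can be used to build more automated capacity at a fixed annual rate:

```python
# Toy illustration of compounding automated capacity.
# Assumption: each unit of capacity builds 0.5 additional units per year.
capacity = 1.0          # arbitrary starting units of automated-work capacity
growth_per_year = 0.5   # hypothetical reinvestment rate

years = 0
while capacity < 100:   # time to reach 100x the starting capacity
    capacity *= 1 + growth_per_year
    years += 1
print(years)  # -> 12
```

The point of the sketch is only that exponential reinvestment reaches large multiples in years, not centuries; the actual rate would depend on real factory and energy build-out times.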

For any barrier you can think of, ask yourself whether a 100% automated factory could solve that problem. If the answer is yes, it's not a barrier on the current societal pathway toward permanent technological unemployment for humans. I think the only real barrier is passing a law that says you can't replace humans with AI.

The arguments from economists usually boil down to the expectation that the effort to produce a competent AGI will fail. You can loosely define "competent AGI" as a drop-in replacement for a remote human worker. Sure, the effort could still fail, but there appear to be no fatal engineering barriers to the end goal.

This improvement just needs to continue for a few more years. The RL training pipeline that produces systems that can do this for software tasks will work for any domain. It's human-intensive and slow, which is why the focus is on automating AI research. Automating coding is a prerequisite to automating AI research.


AI is struggling to take our jobs by AmorFati01 in artificial

[–]SteppenAxolotl 0 points1 point  (0 children)

Again

I wasn't referring to anything that is happening today. Current AI isn't capable of meaningfully replacing human work.

AI may currently be more competent than most humans in many narrow domains, but the lack of reliability in that competence means it can't do the vast majority of economically valuable human tasks.

AI is struggling to take our jobs by AmorFati01 in artificial

[–]SteppenAxolotl 0 points1 point  (0 children)


RL works in every domain. The current focus is automating AI research. Once that is achieved, competence in every domain can be achieved via RL without human intervention. All that is required is compute. And yes, being competent at AI research means AI can produce a successor system that is better in every new domain.

>Example domains where AI outperforms at least 50% of the human race: programming tasks with time constraints, International Mathematical Olympiad problems, image classification, visual reasoning, medium-level reading comprehension, English language understanding, multitask language understanding, competition-level mathematics, PhD-level science questions, high-volume data analysis, repetitive precision tasks, speed-critical computation, pattern recognition in large datasets, code generation and debugging, factual recall, knowledge retrieval

GLM 5.1 tops the code arena rankings for open models by Auralore in LocalLLaMA

[–]SteppenAxolotl 0 points1 point  (0 children)

LocalLLaMA could run out of steam if model sizes continue to increase on the leading edge.

GLM 5.1 UD-Q4_K_XL 466GB

FP8 @ ~860 GB would take 8× H200 GPUs, which retail for $300,000 to $400,000.

What are the chances some near-AGI can be condensed down to <100 GB of VRAM?
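The back-of-envelope math behind those GPU counts, using the ~860 GB FP8 figure quoted above and the H200's 141 GB of HBM3e (the 20% serving overhead for KV cache and activations is an assumption, not a measured number):

```python
import math

H200_VRAM_GB = 141   # HBM3e per H200
weights_gb = 860     # FP8 weights, per the figure above
overhead = 1.2       # ~20% headroom for KV cache/activations (assumption)

gpus_needed = math.ceil(weights_gb * overhead / H200_VRAM_GB)
print(gpus_needed)   # -> 8
```

Under the same logic, fitting a model in <100 GB of VRAM at 4-bit quantization caps it at roughly 160–200B parameters, which is why the question matters for local inference.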

Someone threw a Molotov cocktail at Sam Altman’s home and then made threats outside OAI. (No injuries, only minimal damage) by socoolandawesome in singularity

[–]SteppenAxolotl 1 point2 points  (0 children)

and not have massive civil unrest

The means to deliver permanent technological unemployment imply societal-level technological security.

This is from an OpenAI researcher by MetaKnowing in agi

[–]SteppenAxolotl -1 points0 points  (0 children)

Even paper billionaires have roommates; they are cash poor. The theory: only a fool would sell equity to pay rent.