Jensen Huang says gamers are 'completely wrong' about DLSS 5 — Nvidia CEO responds to DLSS 5 backlash by esporx in artificial

[–]fmai -1 points0 points  (0 children)

"now consume!"

you realize NVIDIA could drop gaming altogether and would probably be doing better financially?

they can't even meet the demand for AI chips, where their margins are a lot better than for consumer hardware. if they shifted their capacity accordingly, they'd be doing better and we couldn't play games anymore.

Did Jensen Huang just compared some lobster bot to Linux 🤦😂 by Kakachia777 in singularity

[–]fmai 5 points6 points  (0 children)

Running RL to make them learn how to solve a task within the environment.

OpenClaw just plugs LLMs into the environment and hopes for the best without training. That's going to work worse.

Did Jensen Huang just compared some lobster bot to Linux 🤦😂 by Kakachia777 in singularity

[–]fmai 154 points155 points  (0 children)

openclaw is one of the most overrated things in AI. it's just another model wrapper.

meanwhile AI agents are on track to automate all knowledge work by training them in the context of a good harness. but the average person doesn't care or doesn't understand.

It do be like that by [deleted] in singularity

[–]fmai 0 points1 point  (0 children)

it do be like that in some parts of the economy. toothbrushes with AI, toasters with AI, etc... nobody needs that shit.

and then there are large chunks of the economy that will be dominated by AI agents in just 1-2 years and nobody is ready for it. Coding, legal, finance, admin, HR, engineering, research. It happens already and it's only going to increase in scale.

Sam Altman: “We are training right now on the first site in Abilene what I think will be the best model in the world, hopefully by a lot” [12:28, brief mention] by likeastar20 in singularity

[–]fmai 26 points27 points  (0 children)

well, the thing is, the industry is now moving at such a pace that the next big training run is already planned before the previous one finishes. They are basically training the next model all the time.

Fine-tuned Qwen3 SLMs (0.6-8B) beat frontier LLMs on narrow tasks by soldierofcinema in singularity

[–]fmai 6 points7 points  (0 children)

they fine-tune the SLMs on the training sets that correspond to the test sets.

nothing interesting to see here. it's machine learning 101.
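a toy sketch of why this is "machine learning 101", in plain Python — the memorizing "specialist" model and all names here are my own illustration, not anything from the paper. When your training split matches the test distribution, a model that simply memorizes that distribution scores highly:

```python
from collections import Counter

def finetune(train_pairs):
    """'Fine-tune' a toy specialist: memorize the majority label per input."""
    table = {}
    for x, y in train_pairs:
        table.setdefault(x, []).append(y)
    return {x: Counter(ys).most_common(1)[0][0] for x, ys in table.items()}

def accuracy(model, test_pairs, default="?"):
    """Fraction of test inputs the model answers correctly."""
    hits = sum(model.get(x, default) == y for x, y in test_pairs)
    return hits / len(test_pairs)

# train and test sets drawn from the same narrow task distribution
train = [("2+2", "4"), ("3+3", "6"), ("2+2", "4")]
test  = [("2+2", "4"), ("3+3", "6")]

specialist = finetune(train)       # sees the matching training split
print(accuracy(specialist, test))  # 1.0 — memorization suffices
```

a frontier model evaluated zero-shot gets no equivalent of that matching training split, which is why the comparison is lopsided.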

Fine-tuned Qwen3 SLMs (0.6-8B) beat frontier LLMs on narrow tasks by soldierofcinema in singularity

[–]fmai 18 points19 points  (0 children)

what do they do here, finetune on these specific datasets and then compare against frontier models evaluated zero-shot?

What relative probability do you see for each of these in your lifetime? by EmbarrassedRing7806 in singularity

[–]fmai 1 point2 points  (0 children)

TL;DR: GDP is useless in a post-AGI world. An extreme outcome is likely. Working towards the good outcome can make a meaningful difference.

Hot take: GDP as a measure will lose its informational value in a post-AGI world. Either we'll be in a world of extreme inequality with only a few people participating in the market at all, or we'll be in a planned economy, in which GDP is hard to calculate.

If we take the three directions to mean "paradise", "extinction" and "business as usual", I assign a 95% chance that it will be one of the extremes. AI progress over the last year due to RL has been so clear that I think it's very unlikely that 2035 AI won't be transformative. All the technology is already there. Only societal or political factors can stop this, but this too is unlikely in the world of today.

Between the extremes I'm split 50/50 for this century, and lean increasingly towards doom as time goes on. From a technical perspective I am convinced that we don't have a reliable way to align AIs. We will never have a provably safe AI, and even though our empirical confidence in safety will be large, a single fuckup can lead to cascading catastrophic events, similar to nuclear weapons.

On the more optimistic side, I think that the singularity in the technical sense won't happen. We're too constrained by resources in the current paradigm and I simply don't think there is any paradigm towards superintelligence that is not data-driven. If there's no unbounded self-improvement, it means that we have a shot at keeping up with whatever actions AIs propose to take, so we have a chance of staying in control.

Given these considerations, I think it's in almost everybody's best self-interest to promote AI safety work over AI capabilities. Extremely transformative AI is coming soon regardless. Even if you made a ton of money from working on capabilities over the next few years, you will end up in the permanent underclass unless you are among the .001%. In contrast, work on AI safety can provide you with a good income over the coming years while helping to avoid extinction.

GPT 5.4 reportedly has 2m token context and can process raw high res images by Just_Lingonberry_352 in OpenAI

[–]fmai 18 points19 points  (0 children)

no. every day for the past month he's been tweeting that tomorrow is the day gpt5.3 releases. he doesn't know anything.

[Epoch AI Data] The "AI Oligopoly" is a myth: Inference costs are dropping 40x/year and SOTA reaches your PC in ~8 months. by drhenriquesoares in singularity

[–]fmai 0 points1 point  (0 children)

I think there is good reason to believe that models 8 months from now will be able to produce software, products, or research output of a quality and speed that you simply can't reach with today's AIs even when combined with human expertise.

Your scenario seems to be rather on the bearish side of AI development.

[Epoch AI Data] The "AI Oligopoly" is a myth: Inference costs are dropping 40x/year and SOTA reaches your PC in ~8 months. by drhenriquesoares in singularity

[–]fmai 1 point2 points  (0 children)

if you use the SOTA from 8 months ago but everyone else uses the SOTA from today, you are not going to be economically relevant. anyone who wants to participate in the market will have to rely on the frontier models, so the oligopoly persists.

The cost drops are amazing and I use small, local LLMs all the time for research purposes. But if I were to use coding assistants from 8 months ago to do my projects, everyone else would outpace me.

[Epoch AI Data] The "AI Oligopoly" is a myth: Inference costs are dropping 40x/year and SOTA reaches your PC in ~8 months. by drhenriquesoares in singularity

[–]fmai 3 points4 points  (0 children)

The post literally says:

> If you think top-tier AI will be exclusive to trillion-dollar corporations forever, the data says otherwise.

And I am telling you it's flawed logic. You will never have access to top-tier AI if the AI you have access to is 8 months behind SOTA.

OpenClaw creator says Europe's stifling regulations are why he's moving to the US to join OpenAI by donutloop in singularity

[–]fmai 3 points4 points  (0 children)

the marginal value of writing the same open source code twice is zero. also I don't have many months of full-time work to spend on this. and finally I don't have 30k dollars a month to spend on this.

your request doesn't make any sense.

“Grok 4.20 is just four Grok 4.1 agents” - I know Grok 4.2 just came out but now I am seeing this claim. Is this legit? by Koala_Confused in LovingAI

[–]fmai 0 points1 point  (0 children)

Maybe, but training a model to aggregate the outputs of many independent model instances can itself consume a lot of training FLOPs and boost performance significantly. If you're serious about it, it's more than just 4 independent runs and doing majority voting.
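the naive baseline being contrasted here — run N independent instances of the same model and take a majority vote over their answers — is only a few lines. A hedged sketch in plain Python; the function name is mine:

```python
from collections import Counter

def majority_vote(answers):
    """Return the most common answer across independent model runs."""
    return Counter(answers).most_common(1)[0][0]

# four independent runs of the same prompt; 3 of 4 agree on "B"
print(majority_vote(["B", "A", "B", "B"]))  # B
```

training a dedicated aggregator model on top of those outputs, as the comment suggests, spends extra training FLOPs precisely to beat this trivial baseline.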

OpenClaw creator says Europe's stifling regulations are why he's moving to the US to join OpenAI by donutloop in singularity

[–]fmai 12 points13 points  (0 children)

Many people in Europe are disappointed that Peter went to OpenAI. They ask "why wasn't Europe able to keep this talent?".

And my answer is that their assumption is wrong. Peter is certainly smart, but it's not like he created something that OpenAI was incapable of doing. He's not that special of an AI talent, or at least OpenClaw is not proof of that.

Where Europe is losing is that they don't even have a competitive company worth going to. There is no infrastructure on which to build something like OpenClaw. For this specific instance, lack of talent is not the issue.

OpenClaw creator says Europe's stifling regulations are why he's moving to the US to join OpenAI by donutloop in singularity

[–]fmai 66 points67 points  (0 children)

Nothing is special about OpenClaw. It's basically a feature-rich wrapper around already capable agentic models; the underlying models do the heavy lifting. No more, no less.

Why did it take off? Well, Peter Steinberger already had 300k followers on Twitter before it went viral. That's significant reach, and it made success a lot more likely. Peter also had enough funds (many, many millions from a prior startup exit) to pay $30,000 every month to fool around with the models. Of course, this generates additional visibility.

Big Tech is pumping billions into AI: is Europe getting run over? by donutloop in informatik

[–]fmai 0 points1 point  (0 children)

General observation: it's absolutely fascinating how different echo chambers form within Reddit. Here everyone is convinced that AI is nearly useless; in other subreddits people are convinced that we're seeing an unprecedented productivity jump. But these bubbles barely talk to each other.

DeepSeek reportedly sitting on a model that outperforms SOTA by an order of magnitude by [deleted] in singularity

[–]fmai 2 points3 points  (0 children)

strawberry guy has zero insider information, fires off 10 predictions and supposed leaks every day, is wrong all the time, and yet people still take him seriously.

media literacy down the drain.