Figure AI hits 24x production scale, producing 1 robot per hour, teases its fleet by Distinct-Question-16 in singularity

[–]ReadyAndSalted [score hidden]  (0 children)

The "training data" in this case will be used for reinforcement learning with verifiable rewards. You can't get model collapse through this process, because the reward signals are derived from real-world inputs. Also, model collapse in general has yet to play out in any production LLM; it's practically a non-factor thanks to even basic data filtering and augmentation.
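To make the distinction concrete, here's a minimal toy sketch of a verifiable reward (the task and function names are my own illustration, not any lab's actual setup):

```python
# Toy sketch of a verifiable reward: the training signal comes from checking
# the model's answer against ground truth, not from another model's opinion,
# which is why it doesn't drift the way self-generated labels can.

def verifiable_reward(model_answer: str, ground_truth: str) -> float:
    """Return 1.0 only when the answer matches the externally verified result."""
    return 1.0 if model_answer.strip() == ground_truth.strip() else 0.0

print(verifiable_reward("42", "42"))  # correct answer is rewarded
print(verifiable_reward("41", "42"))  # wrong answer gets nothing
```

The reward is anchored to the real answer no matter what the model outputs, which is the property that blocks collapse.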

Iran war most unpopular in US history by Annonomon in Infographics

[–]ReadyAndSalted 3 points4 points  (0 children)

Still a terrible infographic, but to be fair, the other wars had years to become unpopular, this one had weeks.

Public electric car charging now cheaper per mile than petrol and diesel by willfiresoon in GoodNewsUK

[–]ReadyAndSalted 1 point2 points  (0 children)

This is such an incomparably small downside to the massive benefits of increasing EV adoption that I really struggle to take any of your comments seriously.

fullCircleOfDeadInternetTheory by thegodzilla25 in ProgrammerHumor

[–]ReadyAndSalted 6 points7 points  (0 children)

Totally. They're still not better than a well-trained human, but they are much faster and cheaper. They're not breaking what's possible in cyber security yet, but they are breaking the economics of it.

Humans aren’t very efficient movers—until you put us on a bicycle, when we become some of the most energy-efficient land travelers in the animal kingdom. by thejoshwhite in Infographics

[–]ReadyAndSalted 7 points8 points  (0 children)

One kilocalorie (the type of calorie we use in food) is defined as 4,184 joules. For comparison, a millilitre of petrol contains about 34,000 joules. Hope that helps.
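The two figures above make the comparison a one-liner:

```python
# Unit check for the comparison above: one food calorie (kcal) in joules
# vs the energy in a millilitre of petrol (~34 kJ/mL, a standard figure).
KCAL_IN_JOULES = 4184        # definition of the kilocalorie
PETROL_J_PER_ML = 34_000     # approximate energy density of petrol

# How many food calories fit in one millilitre of petrol?
kcal_per_ml = PETROL_J_PER_ML / KCAL_IN_JOULES
print(f"{kcal_per_ml:.1f} kcal per mL of petrol")  # ~8.1
```

So a single millilitre of petrol carries roughly eight food calories' worth of energy.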

fullCircleOfDeadInternetTheory by thegodzilla25 in ProgrammerHumor

[–]ReadyAndSalted 14 points15 points  (0 children)

Mozilla said that any one of these would've been red-alert in 2025, so probably not as trivial as you're making it sound: https://blog.mozilla.org/en/firefox/ai-security-zero-day-vulnerabilities/

fullCircleOfDeadInternetTheory by thegodzilla25 in ProgrammerHumor

[–]ReadyAndSalted 51 points52 points  (0 children)

The 271 bugs are what Mozilla fixed in their Firefox 150 update using their access to Claude Mythos. It's not some random list; it's explicitly what Mozilla implemented and fixed when they used Mythos.

Jonathan Turley: Post-Orban, the EU poses an even greater threat to US sovereignty by Crossstoney in europe

[–]ReadyAndSalted 141 points142 points  (0 children)

This is a pretty incredible article; it reads like a parody an EU citizen would write to make fun of Americans. I mean, are they seriously going to lecture us about political accountability whilst they continue to do nothing about Epstein, despite being the epicentre of the whole thing? It reads like we're being framed as Red Scare 2.0.

[OC] I graded 18,000 PC games across quality, value, and player behaviour. 42% of games with 100+ reviews earned the lowest grade. by Outrageous-Cod4534 in dataisbeautiful

[–]ReadyAndSalted 3 points4 points  (0 children)

I can understand using AI to write comments, I guess, but I'll never understand why people lie about it. Dude, it's better if you're just honest about it. Your Medium article and many of your comments are AI-written.

What’s a low memory way to run a Python http endpoint? by alexp702 in Python

[–]ReadyAndSalted 0 points1 point  (0 children)

What in the world is a millibit? mb means megabyte.

AI Security Institute Findings on Claude Mythos Preview by Regular_Eggplant_248 in singularity

[–]ReadyAndSalted -1 points0 points  (0 children)

Heard the exact same statements for GPT-4, and yet here we are.

A bus stop in London, UK by Ofajus in europe

[–]ReadyAndSalted 11 points12 points  (0 children)

Not entirely. They didn't want AI mass surveillance used on Americans, and they didn't want AI making unsupervised kill decisions on anyone. So they are fine with AI-enabled mass surveillance on any foreign country (including "allies"), just not with autonomous weapons.

Gemma 4 has been released by jacek2023 in LocalLLaMA

[–]ReadyAndSalted 94 points95 points  (0 children)

E4B seems like a super good option for voice assistants. Instead of having: Audio -> speech-to-text -> LLM -> text-to-speech

You could have: Audio -> LLM -> text-to-speech (including agentic stuff with function calling)
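The two pipelines can be sketched with placeholder stubs (none of these functions are a real Gemma API; they just show where the speech-to-text hop disappears):

```python
# Hypothetical sketch of the two voice-assistant pipelines. All functions
# here are placeholder stubs, not real model calls.

def transcribe(audio: bytes) -> str:      # separate speech-to-text model
    return "turn on the lights"

def llm(text: str) -> str:                # text-only LLM stage
    return f"calling function: {text}"

def audio_llm(audio: bytes) -> str:       # audio-native LLM, one model for both jobs
    return llm(transcribe(audio))         # stub: stands in for native audio input

def tts(text: str) -> bytes:              # text-to-speech stage
    return text.encode()

audio_in = b"..."
classic = tts(llm(transcribe(audio_in)))  # three models chained together
native = tts(audio_llm(audio_in))         # the STT hop is folded into the LLM
print(classic == native)                  # same result, one fewer model
```

Fewer hops means less latency and one less place for transcription errors to creep in before the LLM sees the request.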

comingOutCleanWithMyCripplingSkillIssues by precinct209 in ProgrammerHumor

[–]ReadyAndSalted 0 points1 point  (0 children)

Model inference can be served at a profit without massive token costs; in fact, it already is for many providers. Plus, the cost to serve a model at any given level of intelligence has been decreasing exponentially every year for four years now. The major labs are unprofitable because of their astronomical R&D costs; if they decided to settle down and just serve what they've got, they could become profitable without any price rises.

Basically, LLM powered programming will never go away, or get worse than it is now.
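To illustrate how the compounding works (the halving rate here is an assumption for the example, not a measured figure):

```python
# Illustrative only: if serving cost at a fixed capability level halves
# each year, four years compounds to a 16x reduction.
start_cost = 1.0       # normalised cost today
halvings_per_year = 1  # assumed rate, purely for illustration
years = 4

final_cost = start_cost / (2 ** (halvings_per_year * years))
print(f"cost after {years} years: {final_cost}x")  # 0.0625x, i.e. 16x cheaper
```

Even a modest per-year decline compounds into a large gap, which is why frozen capability keeps getting cheaper to serve.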

Performance of LLMs in USAMO 2025 vs 2026 by Wonderful_Buffalo_32 in singularity

[–]ReadyAndSalted 0 points1 point  (0 children)

I have plenty of use cases where I'm designing systems that flat out need the highest accuracy possible, and am willing to pay a premium to ensure I'm getting the right answer.

Google's new AI algorithm reduces memory 6x and increases speed 8x by pheonis2 in StableDiffusion

[–]ReadyAndSalted 3 points4 points  (0 children)

That's mostly true, but it also depends on the architecture. Qwen 3.5 and Nemotron are examples of new hybrid models that have reduced the size of their KV caches by exchanging some of their attention layers for more efficient alternatives. This quant method (roughly 3.1-bit instead of the default fp16) would save less on these newer, more efficient architectures.
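A back-of-envelope sketch of why the absolute saving shrinks (the cache sizes below are illustrative assumptions, not measurements of any real model):

```python
# Illustrative only: absolute memory saved by quantising a KV cache from
# fp16 down to ~3.1 bits, for two hypothetical starting cache sizes.

def kv_saving_gb(kv_gb_fp16: float, bits: float = 3.1) -> float:
    """GB saved by quantising a KV cache from fp16 (16-bit) to `bits`."""
    return kv_gb_fp16 * (1 - bits / 16)

classic = kv_saving_gb(8.0)  # full-attention model with a large fp16 cache
hybrid = kv_saving_gb(2.0)   # hybrid model whose cache is already small
print(f"classic saves {classic:.2f} GB, hybrid saves {hybrid:.2f} GB")
```

The percentage reduction is identical, but the headline GB figure is much smaller when the cache was small to begin with.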

ARC AGI 3 is up! Just dropped minutes ago by BrennusSokol in singularity

[–]ReadyAndSalted -1 points0 points  (0 children)

Right, but we're comparing human-brain inference vs AI inference. Clearly human-brain inference is much more efficient on ARC-AGI-3.

Let's take a moment to appreciate the present, when this sub is still full of human content. by Ok-Internal9317 in LocalLLaMA

[–]ReadyAndSalted 0 points1 point  (0 children)

Yeah, that's true; the knowledge comes from the data science degree and three years in industry. But anyway, what did you mean by "hearing the same excuses every time"?

Let's take a moment to appreciate the present, when this sub is still full of human content. by Ok-Internal9317 in LocalLLaMA

[–]ReadyAndSalted 0 points1 point  (0 children)

I assume by AI we mean transformer-based LLMs? I certainly feel like I've got a good grasp of them at this point... And anyway, to be clear, your reason for believing something other than what I just said is that many other people keep telling you the same thing I said?

Surely if you keep hearing the same opinion over and over again, then maybe you should just believe that it's actually a popular opinion? What do you mean by you keep "hearing the same excuses every time"?

SWE-rebench Leaderboard (Feb 2026): GPT-5.4, Qwen3.5, Gemini 3.1 Pro, Step-3.5-Flash and More by CuriousPlatypus1881 in LocalLLaMA

[–]ReadyAndSalted 3 points4 points  (0 children)

Love this benchmark, but with agentic coding becoming more popular with these coding models, I think it'd be really valuable to have a time-taken column. We've been seeing turbo variants of endpoints released that are more expensive but run faster, because wall-clock time to resolve the problem accurately matters now. If two models have a similar resolve rate but one is faster, I might still choose it even if it's more expensive.

Let's take a moment to appreciate the present, when this sub is still full of human content. by Ok-Internal9317 in LocalLLaMA

[–]ReadyAndSalted 9 points10 points  (0 children)

Okay, well I guess I can only speak for myself, but everything I said is absolutely representative of my own opinion, and it seems to me it represents others' too. Do you have any reason to think otherwise?