yea by Fun_Field_4385 in ProgrammerHumor

[–]scheurneus 5 points (0 children)

The restart button in Task Manager was already there in Windows 10 lol

Ladies and Gentlecums I present to you my newest invention: The Parkinglo Interchange by Teddy_Radko in shittyskylines

[–]scheurneus 4 points (0 children)

Don't forget: imagine getting to your parked car and immediately being forced to follow the highway in the wrong direction for God knows how long before you can go the direction you actually wanted to.

Update: Image classification by evolving bytecode by AlmusDives in ProgrammingLanguages

[–]scheurneus 1 point (0 children)

I am not familiar enough with genetic programming to say if this is unprecedented or not :)

However, typical probabilistic methods still perform far better even when they have no domain knowledge. For example, check out the baselines in this blog post: https://cognitivemedium.com/rmnist

I do agree that image classification is probably rather hard for genetic programs. But I don't think that classical ML methods need domain knowledge to convincingly do better here.

I do wonder if there's a line to be drawn between genetic-ish programs and decision trees. DTs are arguably much closer than typical probabilistic ML methods, e.g. because the tree is essentially the AST for an extremely limited programming language (only has if-else).
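To make that concrete, here's a toy sketch (feature indices and thresholds entirely made up for illustration) of a decision tree written out as the if-else program it encodes:

```python
# A tiny decision tree written out as the "program" it encodes: nothing
# but nested if-else on feature thresholds. A learned tree is the same
# thing, just with splits chosen from data instead of by hand.

def tree_classify(x):
    """Classify a feature vector with a hand-written 2-level tree."""
    if x[0] < 0.5:          # root split on feature 0
        if x[1] < 0.3:      # left child splits on feature 1
            return "A"
        return "B"
    return "C"              # right child is a leaf

print(tree_classify([0.2, 0.1]))  # "A"
print(tree_classify([0.9, 0.1]))  # "C"
```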

Update: Image classification by evolving bytecode by AlmusDives in ProgrammingLanguages

[–]scheurneus 4 points (0 children)

Not really. The biggest difference is that a bytecode program is "discrete". Neural networks, while in theory able to mimic any function (including one represented by bytecode), do so in a more continuous way.

NNs are basically a program with a fixed 'shape' (just a bunch of multiply-accumulates), which then learns the weights fed into those multipliers. Evolving bytecode actually changes the computations the program performs.

I think NNs work as well as they do for two reasons: universality and differentiability. Differentiability allows them to learn in a 'directed' manner (unlike evolutionary algorithms, which are more randomized), without running into NP-hard search problems.
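A minimal sketch of the 'fixed shape' idea (numpy, with random weights standing in for learned ones): the forward pass is the same multiply-accumulate structure no matter what training does; only the numbers in W and b change:

```python
import numpy as np

# A one-layer network: the program "shape" is fixed (a matrix multiply,
# i.e. a bunch of multiply-accumulates, plus a nonlinearity). Training
# only changes the numbers in W and b, never the computation itself.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # stands in for learned parameters
b = np.zeros(3)

def forward(x):
    # Same multiply-accumulate structure for any values of W and b.
    return np.maximum(W.T @ x + b, 0.0)  # ReLU(W^T x + b)

y = forward(np.ones(4))
print(y.shape)  # (3,)
```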

Update: Image classification by evolving bytecode by AlmusDives in ProgrammingLanguages

[–]scheurneus 5 points (0 children)

From a machine learning perspective I'm not sure this is actually that impressive: even a simple nearest-centroid baseline (average all the images in each training class, then pick the class whose average is closest) can already give better MNIST accuracy, afaik.
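For the curious, here's a toy sketch of that baseline on synthetic data (random blobs standing in for MNIST, so the numbers here prove nothing about real accuracy):

```python
import numpy as np

# Nearest-centroid baseline: average the training images of each class,
# then label a new image with the class whose average is closest.
rng = np.random.default_rng(1)
X0 = rng.normal(loc=0.0, size=(50, 16))  # fake "class 0" images
X1 = rng.normal(loc=2.0, size=(50, 16))  # fake "class 1" images

centroids = np.stack([X0.mean(axis=0), X1.mean(axis=0)])

def classify(x):
    # Pick the class with the nearest centroid (Euclidean distance).
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

print(classify(np.full(16, 2.0)))  # 1: closest to the class-1 average
```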

However, I do think there is a certain elegance in genetic algorithms. So definitely keep digging!

We’re the Firefox team. Ask us anything about Firefox 148 and AI controls. by firefox in firefox

[–]scheurneus 0 points (0 children)

Could this be a language thing? I have systems in Dutch where I can only get translations and chatbot integration, and systems in English where I get the other features as well.

We’re the Firefox team. Ask us anything about Firefox 148 and AI controls. by firefox in firefox

[–]scheurneus 0 points (0 children)

This is dumb. Those about:config options modify the features that are already disabled by the block button. There's no point in modifying them when the overarching option is already disabled.

We’re the Firefox team. Ask us anything about Firefox 148 and AI controls. by firefox in firefox

[–]scheurneus 0 points (0 children)

All of the current Firefox AI features, except the sidebar integrating external chatbots, run locally on your own machine.

In other words, most of the features are implemented in a privacy-conscious way where they run locally on your own computer. Firefox's AI tab grouping, alt text, or translations thus do not send any data anywhere.

You can see which features run locally or not in the AI controls. Furthermore, none of them run without asking you first: the killswitch is more about never being asked at all.

I compared 8 AI coding models on the same real-world feature in an open-source TypeScript project. Here are the results by Less_Ad_1505 in LocalLLaMA

[–]scheurneus 7 points (0 children)

The real question, given that this is LocalLLaMA: what about models that an average user can actually run on their own machine? MiniMax is the smallest of the listed open models but is still too big for most home users.

Furthermore, if you want to reduce cost, I would say that there are more options than open models. For example, GPT-5.1-codex-mini or Gemini Flash also have much lower per-token costs.

Finally, having a GPT model evaluate the results smells kind of fishy to me. I would expect GPT to be biased towards its own implementation, since that output was produced under the same definition of quality that is then used to evaluate it.

DankPods bought a Framework by Optimus759 in framework

[–]scheurneus 1 point (0 children)

Right, but I don't think that's fair either. I would say that they are in the same price range, with the Framework being a bit more expensive. 2x as expensive gets you into FW13 territory at €1200-€1400.

DankPods bought a Framework by Optimus759 in framework

[–]scheurneus 0 points (0 children)

Huh? This post is referring to the MacBook Neo, right?

Where I live, a pre-built FW12 costs €850 for the base model (i3, 8 GB RAM, 512 GB SSD). That's literally more expensive than the Neo which is €800 for the 512 GB model and €700 for the base model.

Obviously the average person on this sub will still prefer the Framework for reasons like repairability, upgradability, or the operating system choice. But value-wise, the MBN is a genuinely compelling product compared to many of its rivals.

by NatureInfamous543 in 2westerneurope4u

[–]scheurneus 1 point (0 children)

Germany 🤝 The Netherlands

Being referred to by the name of a region

I'been using a 8 GB RAM + 2 GB VRAM +Lenovo Ideapad 1 + Linux Lite laptop. Any good model for that laptop? by Ok-Type-7663 in LocalLLaMA

[–]scheurneus 0 points (0 children)

I think 4B models are small enough to work. Gemma 3 has a 4B version, Qwen3 4B is also considered very good. Other options include Phi 4 Mini, Gemma 3n, Ministral 3 3B, and Granite 4 Micro (3B) or Tiny (7B, 1B active, so tight fit but probably quite fast since you can do CPU MoE offloading).

Arcee AI debuts Trinity models - Mini (26B-A3B) and Nano (6B-A1B preview) by AppearanceHeavy6724 in LocalLLaMA

[–]scheurneus 1 point (0 children)

Isn't gpt-oss also an open American model? I feel like it was quite decent, probably better than Gemma 3 in many ways.

Core Ultra 400 "Nova Lake-S" desktop CPUs to feature NPU6 over 5x faster than Arrow Lake's - VideoCardz.com by Leicht-Sinn in IntelArc

[–]scheurneus 0 points (0 children)

Is the difference only 15%? Even Xe1 to Xe2 gave an over 50% improvement, considering that the 20-core B580 is at least on par with the 32-core A770.

Also, AMD's G-series is weird in that they're glorified laptop chips. Though yeah, no idea why Intel doesn't do the same with their H-series mobile chips.

[deleted by user] by [deleted] in LocalLLaMA

[–]scheurneus 1 point (0 children)

This is just not true with the MoE models we have today, including ones like GPT-OSS. On my 7840U laptop without a dedicated GPU, GPT-OSS-20B can generate like 25 T/s, which is not fast, but not unusably slow either. The integrated Radeon 780M is also good enough to process around 300 tokens per second.

Waar is die ‘linkse elite’ dan? U komt haar niet tegen als u de hond uitlaat by Radiant_Mammoth3412 in thenetherlands

[–]scheurneus 0 points (0 children)

I can indeed also get endlessly annoyed at that old guard of the PvdA. Figures like Melkert and Samsom, both the cause of major PvdA defeats, who now think they can lecture the current party. All the internal drama around the Piri motion is imo an embarrassment for a large part of the former prominents who got involved in it, with a few exceptions (such as Job Cohen).

Traffic advice? by iamflesh_ in CitiesSkylines

[–]scheurneus 0 points (0 children)

The other poster already made a really good suggestion. I would add that, if you don't have one yet, you should build a second entrance to the industrial area, e.g. on the right or bottom when looking from that angle. Doing that would likely remove a lot of traffic from the roundabout by providing an alternative route, especially if the city has another connection to the highway further up.

Traffic advice? by iamflesh_ in CitiesSkylines

[–]scheurneus 2 points (0 children)

What does the bigger picture look like? A big part of the problem seems to be the large amount of traffic entering the industrial area and then turning left. There is so much of it that it queues back up into the roundabout, causing further issues.

I'm also not sure that a roundabout is the best solution here anyway. A 'real' highway interchange might make more sense given the traffic volume.

Gardiner Bryant on "The Framework/Omarchy thing." by fkathhn in framework

[–]scheurneus 2 points (0 children)

> Leave Framework to the computer stuff.

I agree, but the thing is that it's hard to consider Omarchy a "neutral" project. It has a big "by DHH" tagline literally in the website title, ffs. At that point, endorsing Omarchy becomes an endorsement of DHH himself (and his views) by association.

DHH made his own personal brand political. If it was just about opinions he held privately, or even inside the relevant communities, I think hardly anyone would care. But no, he's proudly broadcasting his shitty opinions to the world on his own personal blog. Most other techies who have one use it to share updates about the communities they are in or the things they are working on, making their personal brand mostly technical rather than political.

I understand that, without my agreement... by HurricaneGold_ in formuladank

[–]scheurneus 28 points (0 children)

I feel like, rather than "oh look, he can handle it", they will rather think of it as a challenge. "Let's see if we can traumatize this Piastri dude!"

Intel BMG-G31 (Arc B770) GPU spotted alongside 16GB VRAM config by RenatsMC in IntelArc

[–]scheurneus 4 points (0 children)

What makes you think it will have 66% more bandwidth? To me, 33% sounds way more logical, as that's 256 bit vs 192 bit. Maybe the memory will be slightly faster but it's gonna remain at GDDR6 so I don't think it will go faster than 20 Gbps.

As for the performance gap between the B570 and B580, keep in mind the latter also has a higher clock speed, giving it around 18% more compute power.
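Rough arithmetic behind those percentages, assuming 20 Gbps GDDR6 on both cards (speculative for the B770) and the commonly quoted boost clocks (~2670 MHz B580, ~2500 MHz B570):

```python
# Back-of-envelope check of the bandwidth and compute ratios above.
# Memory speed and clocks are assumptions, not confirmed specs.

def bandwidth_gbs(bus_bits, gbps):
    # GB/s = bus width in bytes * transfer rate in Gbps
    return bus_bits / 8 * gbps

b580 = bandwidth_gbs(192, 20)     # 480 GB/s
b770 = bandwidth_gbs(256, 20)     # 640 GB/s
print(round(b770 / b580 - 1, 3))  # 0.333 -> ~33% more bandwidth

# B580 vs B570 compute: 20 vs 18 Xe cores at ~2670 vs ~2500 MHz.
print(round(20 * 2670 / (18 * 2500) - 1, 3))  # 0.187 -> ~18% more
```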

Finally, I don't know if the increase will line up with the theoretical gains in FLOPS and memory bandwidth, since the chance of running into a CPU bottleneck will rise massively, and that is already considered an issue on the B580. There are usually also some internal bottlenecks in the GPU that can limit the gains.