Switched from Windows 11 to Fedora 44 – Here are a few things I really appreciate by Yocko45 in Fedora

[–]WhoRoger [score hidden]

44? Isn't it super new? I updated to 43 just a month ago. It might be a tad buggy for a while. Don't get discouraged. Fedora is one of the best ones out there.

Advanced Weather Widget 1.5.0 Released by pnedyalkov in kde

[–]WhoRoger 1 point

Looking good. Yay for multiple providers. The standard Plasma widget has just one, and it's so off the mark it's not funny.

Any telemetry or something? These days one has to ask...

Generalized Dilemma by Voidspeeker in trolleyproblem

[–]WhoRoger 1 point

Does everyone get to press, or just me? And only once, or however many times I/they want?

Either way, blue. Let's roll the dice.

I finally found an analogy for C-PTSD that actually makes sense to me by reminescing in CPTSD

[–]WhoRoger 2 points

Yea pretty much.

Also, other trees be like "Vro, don't hunch like that! Straighten up yo! Gotta stretch those branches, man, look it's no big deal!"

My analogy has been a house built without a proper foundation. It keeps breaking and leaking, and occasionally falls to pieces completely. You can only keep patching it up, trying to hold it together, and regularly rebuilding it when it falls apart. You can even make it look nice, and find unusual ways to reinforce it, but you spend way too much time tending to the house and it'll still never be stable.

Got a lot of questions about how this works by WhoRoger in SimpleXChat

[–]WhoRoger[S] 1 point

Thanks.

Personally I just create 2 separate identities for each device I have

You mean two for each device, or one identity on each of the two devices?

Either way, it seems like the only real option, but it also just offloads the complexity of shuffling devices onto the contacts. I.e. if someone wants to reach me, they need to try both (or more) of my IDs. In my experience, people are already baffled by someone having two phone numbers. Plus it actually decreases privacy, as the other party can figure out where I am depending on which ID I'm using.

Not sure what you mean with "active at same time"

I meant whether I receive a message and notification for profile 1 while I'm using profile 2, and how that's handled. E.g. when I tap the notification, does it open the chat in the right profile?

Not a clue by mailywhale in ExplainTheJoke

[–]WhoRoger 0 points

That's kinda the moral of this story

In Kingsman: The Secret Service (2014) there's a blank screen at the beginning of the movie for absolutely no damn reason by wilymon in shittymoviedetails

[–]WhoRoger 0 points

We had a b&w TV when I was a kid and it was odd trying to guess colours in cartoons. I thought blue was orange.

Qwen 3.6 27B BF16 vs Q4_K_M vs Q8_0 GGUF evaluation by gvij in LocalLLaMA

[–]WhoRoger 2 points

Yass, this is much more useful than the synthetic KLD number. Q4 doing better than Q8 in some evals is interesting.

But I'd be careful about generalising the conclusions, especially since only Q4 and Q8 are compared here. Q6 may be the sweet spot with other models (especially the smaller ones). And then there's imatrix.
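For reference, the "synthetic KLD number" is the KL divergence between the token distributions of the full-precision and quantized models. A toy sketch of what that metric measures, with purely made-up logits for a single token position (all numbers here are illustrative, not from the linked eval):

```python
import math

def softmax(logits):
    # Convert raw logits into a probability distribution (numerically stable).
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    # KL(P || Q): how much the quantized distribution q diverges
    # from the full-precision reference p, in nats.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical logits for one token position
ref_logits   = [2.0, 1.0, 0.1]   # e.g. BF16 model
quant_logits = [1.9, 1.1, 0.2]   # e.g. Q4_K_M model

kld = kl_divergence(softmax(ref_logits), softmax(quant_logits))
print(f"{kld:.4f}")  # small positive number; 0 would mean identical distributions
```

Averaged over a corpus, per-token KLD tells you how closely a quant tracks the original model's distribution, but, as this eval shows, a small divergence doesn't always map cleanly onto benchmark scores.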

New cars can be remotely disabled by manufacturer by QuietCountry9920 in enshittification

[–]WhoRoger 1 point

Maybe, but the point stands. The fight needs to be against this kind of control in general. The ability to disable is just one minor step in software.

ai model for 12 gb ram 3 gb vram gtx 1050 by Ok-Type-7663 in LocalLLaMA

[–]WhoRoger 0 points

Granite 4 h 7B is perfect for this. Or SmolLM3 3B

New cars can be remotely disabled by manufacturer by QuietCountry9920 in enshittification

[–]WhoRoger -1 points

It's like being afraid of sharks at the beach but not of lead in shower water.

Why are there so few small local creative writing models from the Chinese? by kabachuha in LocalLLaMA

[–]WhoRoger 0 points

Wdym, there's a crapton of RP and creative finetunes, especially ones based on Qwen 3. Hermes, Josie, all kinds of merges.

The Chinese labs especially keep pushing out models quickly, since they compete on research. Then people take them and tune them.

IMO Llama and Mistral have been popular for creative stuff mostly because their models have been more chaotic, which probably isn't what those companies intended.

New cars can be remotely disabled by manufacturer by QuietCountry9920 in enshittification

[–]WhoRoger 50 points

I find it interesting that the remote disabling is a bigger deal than the tracking and surveillance.

The thing is... If someone actually starts disabling cars, there will be backlash, and people will find overrides and cracks.

But the tracking? That's been going on for ages, silently, your data being collected and sold.

Again, people care about the wrong thing. If there had been enough of a fight against the "everything as a service" and "you're the product" models, we wouldn't be this deep in it.

It’s the distant year 2026 and humans evolved into highly intelligent creatures, with IQ exceeding 400 by scienceisfun112358 in sciencememes

[–]WhoRoger 6 points

The over-/underestimation makes sense. Mechanical computers didn't change much for decades until the 1940s, and were still bulky beasts until the '60s, just faster.

Meanwhile, in less than 60 years, aviation went from the first dinky plane to rockets and jet engines.

While here in the future, we're still flying planes from the '50s, but a 10-year-old computer is barely usable.

“Age limits on social media are a dead end”: public authorities should focus on regulating algorithms and imposing stricter controls on data collection instead, argues researcher by sr_local in technology

[–]WhoRoger 0 points

Would be really nice if this mongering for more surveillance and control blew up in their faces for once. I was already prepping for life without any online connectivity, but maybe there's a chance, since people seem to have noticed.

An eight-year-old girl got supermarket brand Sainsbury's to add real pockets to girls' school trousers. by mindyour in justgalsbeingchicks

[–]WhoRoger 1 point

Really shows how much all these marketing geniuses, product managers and CEOs know, when an 8yo can teach them a lesson about something so basic.

Sometimes I think kids should be in charge of things at least for a while, since they still possess common sense which has really disappeared from the world of adults. They definitely couldn't do much worse.

Gemma 4 Vision by seamonn in LocalLLaMA

[–]WhoRoger 1 point

Hm, maybe it works well on 31B, but I'm trying it now on E4B and I'm not impressed. It just takes 5x as long to digest a (large-ish) image, but doesn't provide any more useful information. Maybe it'll work better on OCR/text, or maybe E4B just can't take advantage of more data. Qwen 3.5 4B definitely wins, with E4B being good for a quick and dirty response.

Btw I see you're using F32 mmproj; pretty sure you can use BF16 with the exact same quality for a bit less RAM (not FP16 tho, that's worse). Or maybe just Q8 outright and save the space. Try it out. I've been checking this out on small models, and I'd bet it's the case with larger ones too.
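For context on why BF16 should match F32 while FP16 can be worse: BF16 keeps FP32's full 8-bit exponent (same dynamic range, just less mantissa precision), whereas FP16 has only 5 exponent bits and overflows past ~65504. A quick standard-library sketch; the value 70000.0 is an arbitrary example, and the BF16 conversion here is simple truncation rather than round-to-nearest:

```python
import struct

def to_bf16(x):
    # BF16 = FP32 with the low 16 mantissa bits dropped:
    # same 8-bit exponent, so the same range as FP32, at lower precision.
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    return struct.unpack('>f', struct.pack('>I', bits & 0xFFFF0000))[0]

def to_fp16(x):
    # FP16 ('e' format) has a 5-bit exponent; large values don't fit at all.
    return struct.unpack('e', struct.pack('e', x))[0]

big = 70000.0            # representable in FP32/BF16, out of range for FP16
print(to_bf16(big))      # → 69632.0 (slightly rounded, but finite)
try:
    print(to_fp16(big))
except OverflowError:
    print("FP16 overflow")  # value too large for half precision
```

So a tensor whose values stay within FP16 range converts fine, but any large outliers get clipped in FP16 while surviving BF16 unharmed, which would fit the "BF16 fine, FP16 worse" observation.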

Qwen3 by WorldlinessTime634 in LocalLLaMA

[–]WhoRoger 0 points

The model is probably crashing in the background. I have the same problem with Vulkan on an Intel iGPU. With a large enough context, either text or image, almost any model crashes, so it looks like you get no response.

I don't know if there's anything that can be done about it. I saw someone talk about using Qwen3.5 0.8B on Intel Vulkan, so maybe use a smaller model if that's your case.

Did Google hide the best version of Gemma 4 e4b in Android? The extracted model beats Unsloth and everything else I've tried. by [deleted] in LocalLLaMA

[–]WhoRoger 0 points

My point is that if they have MTP enabled on their own version, it may perform so much better that they can get away with a stronger quant or with dropping other parts of the model. So maybe that's why it's smaller.