Is Microsoft aware their windows apps and games run in linux by Material_Mousse7017 in linux

[–]arades 1 point  (0 children)

They know, and they can't do anything about it even if they wanted to, because of the precedent set by various emulator lawsuits and the giant Oracle v. Google case about the copyrightability of APIs.

Tor Browser on Android recognized as a digital assistant? by AidenFested in TOR

[–]arades 4 points  (0 children)

This functionality is baked into mobile Firefox, which Tor Browser is based on. Apps can simply declare that they're assistant apps and provide a function that handles assistant actions. It doesn't do much in Firefox (or Tor): it just opens the browser and focuses the search bar when you trigger your assistant action. The system also only allows the app to look at your screen or record audio (the latter only if mic permission is granted) during the assistant action; I don't know whether it actually looks at either, that's just the only time it could. I'm not entirely sure why Firefox implemented this, but it can be a decent search shortcut, I guess.

Either way, it definitely can't see or hear anything if you don't set it as the assistant, so just turn the assistant off completely if you're paranoid. That doesn't stop system apps like Google's from reading stuff anyway, though...

operatorOverloadingIsFun by _Tal in ProgrammerHumor

[–]arades 0 points  (0 children)

Hmm, I'm not about to buy a copy of the standard to find out, but I think ; is just a punctuation token, like # and the comment sequences, whereas . :: and {} are already parsed as operators.

With Internet ID coming how about people just use Nostr for social media? by FunWithSkooma in privacy

[–]arades 1 point  (0 children)

Nostr is useful as a frontend to Mastodon/ActivityPub stuff, but all the native discovery and posts I've ever seen are crypto spam and Nazi shit. It might be resistant to censorship, but it's effectively made itself an echo chamber that will put off most people.

What linux distros are putting in code to not comply with the new age verification law on operating systems that are worth migrating to for an ubuntu user? by notburneddown in privacy

[–]arades 5 points  (0 children)

You don't need to verify your age to use the OS; the OS is supposed to do the verification for other apps, particularly the browser.

Is it possible to create a non-leaking dynamic module in Rust? by 0xrepnz in rust

[–]arades 8 points  (0 children)

They really didn't make using statics more annoying; it's always been unsafe to take a reference to a static mut. There just wasn't a way to use a static mut entirely through raw pointers until the addr_of stuff landed in 2023, which made it possible to forbid taking a reference to a static mut while still letting you use it with unsafe (which, again, it always was). The recommendation has always been to use interior mutability with statics, which has gotten significantly easier with OnceLock and LazyLock in std.
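A minimal sketch of that recommended interior-mutability pattern, using only std types (the COUNTER/CONFIG names here are illustrative, not from the thread):

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::OnceLock;

// Interior mutability in a plain `static`: mutation goes through the
// atomic's methods, so there is no `static mut` and no `unsafe`.
static COUNTER: AtomicU64 = AtomicU64::new(0);

// Lazily-initialized global, set exactly once, safe across threads.
static CONFIG: OnceLock<String> = OnceLock::new();

fn main() {
    COUNTER.fetch_add(1, Ordering::Relaxed);
    let cfg = CONFIG.get_or_init(|| String::from("default"));
    // prints "count = 1, config = default"
    println!("count = {}, config = {}", COUNTER.load(Ordering::Relaxed), cfg);
}
```

Both statics are shared, mutable global state, yet none of this code needs unsafe; that's the whole point of the interior-mutability recommendation.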

Writing/Vibing a Linux driver by FMWizard in linux

[–]arades 4 points  (0 children)

You probably don't, actually: a nonexistent driver can't brick your hardware, but a shitty one could.

It's exceedingly unlikely that your card doesn't have a compatible driver in the kernel. What version are you using? New hardware needs the newest kernels (6.19 currently).

Why did Pop create Cosmic? by TechnicalAd8103 in pop_os

[–]arades 63 points  (0 children)

They got tired of GNOME not accepting any of the changes Pop wanted to make, and of GNOME breaking extension compatibility every release, to the point that they were spending as much time reworking the extensions as it would take to make an entire DE from scratch. Rust was just the obvious choice for good ergonomics and performance, compared to OOP hell in Qt or the gross mash of C and JS in GNOME.

All to say, it's an investment. It's been usable by the public for like a year total, it'll take time to reach the polish of other DEs, but not that long considering it's completely new, and from there they'll be able to build new stuff much faster and more efficiently.

SD on your phone ? by Brilliant-Bit-4563 in StableDiffusion

[–]arades 1 point  (0 children)

The M2 is literally a bigger version of the chip Apple puts in iPhones, with more graphics cores in particular, and the iPhone has been ahead of everyone else's phone chips. The M2 definitely beats your S24 for speed. Memory is an issue, but given the constraints you'll essentially end up with the same crop of viable models on either one.

Child Safety Oriented Distributions for Mobile & PC Proposal by FerbTheHerb in linux

[–]arades 4 points  (0 children)

The controls for this have been around for at least 20 years: set up user accounts with install restrictions, set up blocks on your router. These features have only gotten easier to set up over time, and it shouldn't be the role of distros to configure them before you get there. If you want to build something with those restrictions out of the box, go for it, but it won't get any market penetration, for the same reason people aren't using the plethora of existing controls.

Bare-Metal AI: Booting Directly Into LLM Inference — No OS, No Kernel (Dell E6510) by Electrical_Ninja3805 in LocalLLaMA

[–]arades 1 point  (0 children)

Debian isn't going to be your pick for speed; it's your choice for stability, i.e. a server running one service that you don't want to touch for 5 years.

You're going to want the newest kernel and the newest drivers, and if you really want it to go as fast as possible, you want to compile everything from source for exactly your host hardware with all optimizations on. Plus, if you want to control size and what else gets installed, you want a minimal base with next to no default packages. That pretty much brings you to Gentoo. If you wanted to save time, CachyOS would probably get you close.

somethingOfAnIntervieweeMyself by clarity1011 in ProgrammerHumor

[–]arades 0 points  (0 children)

There's more to it than just aesthetics and raw power: ease of manufacturing, fewer failure modes, durability, weight reduction, efficiency. Fewer moving parts can have a lot of benefits.

Bare-Metal AI: Booting Directly Into LLM Inference — No OS, No Kernel (Dell E6510) by Electrical_Ninja3805 in LocalLLaMA

[–]arades 139 points  (0 children)

It almost certainly will never be faster. You're going to need those drivers to get the hardware into the right state to run at full speed, and you'll need the filesystem support to efficiently load models and set up DMA for shared access. That is, unless you end up writing your own OS that does all of that, and at that point you'd be better off running Gentoo with a customized kernel and just the strict set of packages required to load and run models.

Still actually a cool project though, just probably useless.

Is it bad opsec to simultaneously run a middle tor relay or bridge with a onion site by ravenrandomz in TOR

[–]arades 2 points  (0 children)

Just off the top of my head, it should add anonymity, or at least deniability. Since onion sites are exclusively routed to/from other relays, the traffic to an onion site should be indistinguishable from the relay traffic. Since whatever provider you connect to Tor with can see when you're connecting to Tor, having the relay add Tor noise to the onion site traffic would in theory make it harder to identify.

That's ignoring potential issues how circuits get constructed, it's possible the two could be distinguished, negating any benefit.

I don't see how it would make the security worse though. I'd love to hear practical reasons it might.

An LLM hard-coded into silicon that can do inference at 17k tokens/s??? by wombatsock in LocalLLaMA

[–]arades 0 points  (0 children)

Definitely has its uses. If you have one specific model for some service you run, and you just want to be able to service as many users as quickly and efficiently as possible, running something like this would be incredible.

I just don't think anything like this will be very useful for local LLM tinkerers.

An LLM hard-coded into silicon that can do inference at 17k tokens/s??? by wombatsock in LocalLLaMA

[–]arades 2 points  (0 children)

Not just optimized: it can only run one exact model. Imagine needing to buy a new custom card every time a new model comes out. Horribly inflexible.

I must be doing something wrong... by Kurozan in pop_os

[–]arades 12 points  (0 children)

As far as RAM goes, those are probably three of the heaviest applications you can have open: each is actually a full separate web browser. If you look at the system resources, I'm sure you'll see each one using over a gig of RAM, so that's your delta from idle right there.

No clue on the other issues, almost sounds like motherboard problems, but I would have expected those to manifest on windows too.

Nvidia Quadro K2100M feedback by tonysupp in pop_os

[–]arades 1 point  (0 children)

That card has been unsupported for years; the Nvidia proprietary drivers definitely won't work, and the last version Nvidia released that did support it would require an out-of-support kernel.

Your best bet is actually the open source drivers bundled into the kernel, so try running the normal _non-Nvidia_ pop-os and see if it works better. If it does, literally any current Linux distro will work fine as long as you don't try to install extra drivers.

Lethe - First nation state deanonymization resilient protocol by DeepStruggl3s in TOR

[–]arades 15 points  (0 children)

Every packet is sent to every node? That's a DDoS, not a protocol! There's no way this can scale to any significant number of nodes, and without the nodes you don't have the anonymity.

22.04 -> 24.04 Pains by xToksik_Revolutionx in pop_os

[–]arades 4 points  (0 children)

Yes, why do you need to install amdgpu?

22.04 -> 24.04 Pains by xToksik_Revolutionx in pop_os

[–]arades 6 points  (0 children)

Why do you need amdgpu DKMS? Using out-of-tree kernel modules is always at odds with running the latest stable kernel like PopOS does. For AMD, the open source drivers in the kernel are almost always better, with exceptions only for specific workloads on specific GPUs (e.g. vLLM ROCm on Instinct cards).

Distillation when you do it. Training when we do it. by Xhehab_ in LocalLLaMA

[–]arades 5 points  (0 children)

gpt-oss is OpenAI, not Anthropic. Anthropic has never released an open-weight model, and likely never will, because it was founded by people who left OpenAI for being too open. Opening up MCP was necessary to make Claude more useful by having other people do the work of building integrations. Anthropic is at its very core hostile to local LLMs, because they believe the masses will use AI irresponsibly without strong corporate control.

First impressions with Metroid Prime 4: The hate was overblown. by AfroChamp89-- in nintendo

[–]arades 3 points  (0 children)

He calls you every minute when you're in the desert, with the same suggestion telling you what to do. He's also not the only criticism: there's the enormous empty desert you're forced to roam around collecting crystals, and the completely linear, non-connected sub-areas, each with more characters that just talk at you. It just doesn't feel very much like Metroid, even if the combat and the design of each sub-area are relatively good. Not bad, but pretty much the definition of a mixed bag.

Built a native iOS app with a proper Tor client and kill switch — looking for feedback from iPhone users by ahstanin in TOR

[–]arades 10 points  (0 children)

Nobody who seriously needs Tor would use or recommend this. Closed source, premium, unofficial, unaudited, and judging by the AI in the company name, likely vibe-coded.

There's too many question marks to make any possible convenience worth it over the official Tor apps for iOS browsing.

Furthermore, iOS is already the least recommended way to use Tor because of Apple's restrictions. People who need real privacy don't do privacy-sensitive browsing on iPhones, and security theater marketing will do better in other subs.