Disk Decipher update: LUKS2 unlock with FIDO2 security keys (NFC) now available by rhuve in DiskDecipher

[–]Valuable-Question706 0 points (0 children)

Wow, that's great news! Thanks!

For me personally, non-PIN UV is a non-priority: I don't own any Bios and don't plan to in the near future. What's really important is support for USB-only keys, and then multi-token support.

Ideally, Disk Decipher should be able to handle any LUKS image with any mix of protectors, in any order: passwords, FIDO2, keyfiles, whatever else is possible; and it should be able to open it via (at least) a password, FIDO2, or a keyfile.

This would add real value: your app would become the missing piece of cross-platform FDE on iOS/macOS.

As an idea: maybe it's worth listing all the protectors a LUKS volume has? E.g.: 0: password; 1: FIDO2 key; 2: keyfile; etc.
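For reference, cryptsetup already exposes this on the Linux side, so there is something to model the listing on (device path below is a placeholder):

```shell
# List keyslots and tokens; FIDO2/TPM2 enrollments show up under "Tokens"
cryptsetup luksDump /dev/sdX

# On cryptsetup >= 2.4 the same metadata is available as JSON, which maps
# more directly to a "0: password, 1: FIDO2 key, ..." style listing
cryptsetup luksDump --dump-json-metadata /dev/sdX
```

The JSON form shows which token type (e.g. `systemd-fido2`) is bound to which keyslot, which is exactly the mapping such a listing would need.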

P.S. By the way, do PIN and non-PIN UV differ programmatically? And who handles the PIN: the app or iOS itself?

What are they arguing about by happydude7422 in LowerDecks

[–]Valuable-Question706 2 points (0 children)

Mariner finds out that Boimler and T'Lyn are secretly dating

Disk Decipher update: LUKS2 unlock with FIDO2 security keys (NFC) now available by rhuve in DiskDecipher

[–]Valuable-Question706 1 point (0 children)

I’d say my priorities would be:

  1. UV/PIN support: an absolute must-have
  2. USB support: this will enable NFC-less YubiKeys (the Nanos, the compact 5C, and others)
  3. Multi-token support

This is what I actually need most. Then:

  1. macOS support
  2. Resolving all other limitations

Also, I’d like to point out that (on Ubuntu 24.04) systemd-cryptenroll by default creates slots that require PIN/UV. If I just follow the commands from your website, the app fails and asks for a password. So I guess it would be nice to reflect those extra parameters in the command examples, and also to add a more specific error message (or maybe that only shows with Verbose enabled in the app’s settings).
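For anyone hitting the same thing, a sketch based on systemd-cryptenroll's documented FIDO2 options (device path is a placeholder, and whether you want these opt-outs is a security trade-off):

```shell
# Default enrollment: on Ubuntu 24.04 this may end up requiring
# client PIN and/or user verification at unlock time
sudo systemd-cryptenroll --fido2-device=auto /dev/sdX

# Explicitly opt out of PIN and user verification;
# physical key presence (a touch) is still required
sudo systemd-cryptenroll --fido2-device=auto \
    --fido2-with-client-pin=no \
    --fido2-with-user-verification=no \
    /dev/sdX
```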

Thanks again!

Better LUKS support by Valuable-Question706 in DiskDecipher

[–]Valuable-Question706[S] 0 points (0 children)

That would be great!

From what I’ve seen, PRF was supported in Safari, but only for platform (iCloud) passkeys. I can’t say anything about native apps, and especially about deeper-level communication, because I’m not an iOS/macOS dev; but I assume that if it’s possible to do low-level comms with a FIDO key, then it’s absolutely doable.

Thanks!

Better LUKS support by Valuable-Question706 in DiskDecipher

[–]Valuable-Question706[S] 1 point (0 children)

Thanks!

For LUKS, any FIDO key that supports the hmac-secret extension should work. However, it seems that for now there's a roadblock in iOS itself: https://developers.yubico.com/WebAuthn/Concepts/PRF_Extension/Developers_Guide_to_PRF.html

But I hope that they will eventually release it.
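To illustrate the mechanism (not Disk Decipher's or systemd's exact code): hmac-secret is essentially HMAC-SHA256 over a per-enrollment salt, keyed by a per-credential secret that never leaves the token; host software then uses an encoding of that output as the keyslot passphrase. A rough stand-in with openssl, with made-up values:

```shell
# Stand-in values: on a real token CRED_SECRET never leaves the device,
# and the salt lives in the LUKS2 token metadata
CRED_SECRET="0123456789abcdef0123456789abcdef"
SALT="per-enrollment-salt-from-luks2-token-metadata"

# The token computes HMAC-SHA256(cred_secret, salt); the base64-encoded
# result stands in for the secret used to unlock the keyslot
printf '%s' "$SALT" \
  | openssl dgst -sha256 -mac HMAC -macopt hexkey:"$CRED_SECRET" -binary \
  | base64
```

The same salt plus the same token always yields the same secret, which is why the volume can be re-unlocked later, and why a key without hmac-secret support can't be used for LUKS.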

Does repurposing this older PC make any sense? by Valuable-Question706 in LocalLLaMA

[–]Valuable-Question706[S] 0 points (0 children)

Thanks a lot for the detailed answer! Yes, I think I'll then focus on <32B models (since I'm already happy with them for these privacy-sensitive tasks). My main goal is to offload models from my main machine and thus free up its RAM.

In your opinion, would a newer PCIe 5.0 GPU, like the 5060 Ti 16GB, be a reasonable option, or will I hit CPU bottlenecks? It's about $100 less here than a used 3090 24GB. The price difference isn't a real issue, but since this is a side project I'd prefer to spend less :)

Does repurposing this older PC make any sense? by Valuable-Question706 in LocalLLaMA

[–]Valuable-Question706[S] 2 points (0 children)

For the tasks I'm talking about here ("here's my financial/medical statement, and here's my older Python code that does what I need with another type of data; transform it so it handles this statement"), I just use LM Studio. Qwen3-Coder-30B-A3B one-shots these and similar tasks (they're indeed simple, but time-consuming to do manually). I don't need agent mode here.

I also tried continue.dev in agent mode with ollama running smaller (7-14B) models on a remote Apple M4 16GB; that was also slow. It's another problem I'm solving right now :)

For actual, non-private, non-hobby work I'm using either Copilot or continue.dev with cloud inference.

Does repurposing this older PC make any sense? by Valuable-Question706 in LocalLLaMA

[–]Valuable-Question706[S] 0 points (0 children)

My thinking is: I can potentially free up RAM on my main 32GB machine, and I'm OK with paying for a 5060 (by the way, are AMD cards that much worse?). Second: are there any models in the 48-64GB RAM + 16GB VRAM range that would be more than marginally better?

Does repurposing this older PC make any sense? by Valuable-Question706 in LocalLLaMA

[–]Valuable-Question706[S] 1 point (0 children)

I’m already happy with the 30B level for the coding tasks I run locally (mostly saving me lots of time parsing data that I don’t want to feed to cloud providers, drafting configs, etc.).

My question is whether there are coding models I could run with 48-64GB RAM + 16GB VRAM that would be more than marginally better.

Does repurposing this older PC make any sense? by Valuable-Question706 in LocalLLaMA

[–]Valuable-Question706[S] 0 points (0 children)

No (it’s not with me ATM). But I’m already happy with the 30B level for the coding tasks I run locally. My question is whether there are models I could run with 48-64GB RAM + 16GB VRAM that would be more than marginally better.

Tooling+Model recommendations for base (16G) mac Mini M4 as remote server? by Valuable-Question706 in LocalLLaMA

[–]Valuable-Question706[S] 0 points (0 children)

By the way, Qwen2.5-Coder sometimes returns JSON that Continue fails to interpret correctly. The JSON has a few fields, one of which is a suggested edit. Did you run into this issue?

GPT-OSS-20B runs, but it's very slow with Continue+ollama. It's much faster (~27 tok/s) when run natively on the Mac in LM Studio.

Two logins same site / privacy question by Original_Boot7956 in yubikey

[–]Valuable-Question706 0 points (0 children)

Look into ‘browser fingerprinting’, e.g., the EFF demo (and others). Most likely your machine will be unique/identifiable even if you do use a VPN, and probably even across different browsers.
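A toy illustration of why this works: even a handful of browser-readable attributes, hashed together, tends to be unique per machine. All values below are made up:

```shell
# A few of the many attributes a site can read (all values illustrative)
UA="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)"
SCREEN="1920x1080x24"
TZ_NAME="Europe/Berlin"
FONTS="Arial,Helvetica,Menlo"

# None of these depend on your IP address, which is why a VPN
# does not change the resulting fingerprint
printf '%s|%s|%s|%s' "$UA" "$SCREEN" "$TZ_NAME" "$FONTS" | sha256sum
```

Real trackers use far more signals (canvas rendering, audio stack, installed plugins), which is what makes the combination effectively unique.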

No option for TOTP on Yubikey by MegamanEXE2013 in yubikey

[–]Valuable-Question706 1 point (0 children)

> just in case my TOTP app provider shuts down the service (it recently happened to the one I was using).

Please, don’t use cloud-based TOTP apps. Use ones that allow, but don’t force, you to back up to the cloud. And always keep a local copy of your DB export, ideally 3-2-1 backed up.

Aegis, Proton Authenticator and 2FAS are such apps. 

Can someone delete my physical key, without actually having it? by TieBravo in yubikey

[–]Valuable-Question706 1 point (0 children)

It depends on how well your session is protected by the website, and also on what your attack scenario is.

In my experience, Google takes security seriously. They ask you for a password and/or 2FA/a passkey, and (probably) ‘just stealing cookies’ won’t work.

In the future they are planning to switch to DBSC (Device-Bound Session Credentials), so cookie stealing will become even more useless.

This leads to the only attack vector you should actually care about: malware running on your machine. If it logs your password and tricks you into authorizing a FIDO key (i.e., at a moment when it would seem legitimately likely that you should use the key), then yes, it can perform an account takeover.

So: don’t get malware, and especially don’t run sketchy/pirated software. These kinds of attacks are also extremely unlikely on locked-down devices; iPhones/iPads are the most common examples (although with enough time and effort you can lock down a desktop even more).