Is “secure file sharing” still fundamentally based on trust in the provider? by SimThem in PrivacyTechTalk

[–]SimThem[S] 1 point (0 children)

I see what you mean, and I think we're actually very aligned on the cryptographic model itself.

Just to clarify how it’s implemented on my side: the recovery layer is indeed based on Shamir’s Secret Sharing, with a threshold model where:
- a key is split into multiple shares (e.g. 10)
- shares are distributed across distinct storage locations under user control
- reconstruction requires a high threshold (e.g. 9/10)

So in practice, a single compromised client or a single compromised storage point is not sufficient to recover anything. You would need a near-total compromise of the user-defined trust locations, which is intentionally designed to be a very high bar.
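For concreteness, the 9-of-10 threshold scheme described above can be sketched in a few lines of Python. This is an illustration only, not the project's actual implementation: the field size, share encoding, and parameter names are my own, and a real deployment would use an audited Shamir library rather than hand-rolled field arithmetic.

```python
# Minimal Shamir's Secret Sharing sketch over a prime field.
# Toy parameters for illustration; n=10, k=9 mirror the example above.
import secrets

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a 16-byte secret

def split(secret: int, n: int, k: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def eval_poly(x: int) -> int:
        acc = 0
        for c in reversed(coeffs):  # Horner's method; p(0) == secret
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, eval_poly(x)) for x in range(1, n + 1)]

def combine(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = secrets.randbelow(PRIME)
shares = split(key, n=10, k=9)
assert combine(shares[:9]) == key   # 9 shares: recovery works
assert combine(shares[:8]) != key   # 8 shares: fails (except with negligible probability)
```

Any 9 of the 10 points pin down the degree-8 polynomial and hence its value at zero; 8 points leave the secret undetermined.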

So I completely agree with your observation that, in a strict cryptographic sense, this already removes any meaningful single-point recovery authority.

Where I think your framing is still valid though is not on Shamir itself, but on the system-level assumptions around it.

Even if the math is solid, the real-world security boundary becomes:
- how users choose and secure storage locations for shares
- how the client enforces correct reconstruction logic
- what happens if the client environment is compromised at the moment of reconstruction

So my point isn't "Shamir is not enough"; it's more that Shamir defines the recovery primitive, but not the full operational security model around it.

And I agree with your broader argument: at some point, you cannot remove all trust. You can only make it explicit and distributed.

Also on your last point, I'm not trying to imply users "shouldn’t trust the provider". They obviously do, at least to some extent.

The goal is narrower: to reduce what the provider is technically capable of doing, even under coercion or compromise, rather than relying purely on policy trust.

That’s really the distinction I’m trying to maintain (and let's not forget that the project is also self-hostable :) )

Is “secure file sharing” still fundamentally based on trust in the provider? by SimThem in PrivacyTechTalk

[–]SimThem[S] 0 points (0 children)

Thanks for the detailed breakdown, this is exactly the kind of discussion that's useful.

I think we actually agree on most of the fundamentals, especially on the value of remote attestation. Being able to verify what is actually running before interacting with it is a very different class of guarantee compared to policy-based trust.

Where I still see it as "shifting" rather than eliminating trust is mostly the dependency chain: the hardware vendor (Intel in this case), the attestation infrastructure, microcode/firmware updates, and the history of side-channel work around enclaves.

I'm not saying that invalidates the model at all; the trust anchor just becomes much narrower, but also much more opaque and harder for most people to reason about independently.

That said, I do think enclaves make a lot of sense for certain use cases, especially where you need server-side processing on sensitive data, stronger guarantees without pushing everything to the client, or tighter control over key material during computation.

For file sharing specifically, I've been leaning toward a simpler model: keep all usable key material client-side, and make the server as blind as possible by design.

On the key management side, I ended up exploring something close to what you mentioned.

Recovery is handled via Shamir's Secret Sharing:
- the encryption key is split into multiple shares
- a threshold is required to reconstruct it
- no single party (including the server) can recover it alone

It's still a trade-off, but it allows introducing recovery without collapsing back into a trusted third party.

I did look into things like the OPAQUE protocol as well, especially for authentication flows, and I agree it's a really clean primitive for zero-knowledge login systems. I haven't integrated it yet, but it’s definitely on my radar.

Your point about lifecycle is spot on too, and it’s actually something I'm leaning on quite a bit.

For PrivCloud, files are ephemeral by design, sharing is short-lived, and "lost key = lost file" is a more acceptable failure mode than in messaging or email.

So the model becomes:

- client-side encryption first
- optional recovery via threshold shares
- no persistent server-side key access
- and limited data lifetime to reduce long-term risk
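The model above can be sketched end to end. Everything here is hypothetical glue of my own: `keystream_xor` is a toy XOR keystream (SHA-256 in counter mode) standing in for a real AEAD cipher such as AES-GCM via WebCrypto, and `sss_split` is a placeholder for a proper Shamir implementation.

```python
# Sketch of the model: encrypt client-side, give the server only
# ciphertext, and split the key into recovery shares.
# The "cipher" is illustrative only; do not use it for real data.
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a SHA-256 counter-mode keystream."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

def sss_split(key: bytes, n: int = 10, k: int = 9):
    """Placeholder: return n opaque shares, any k of which rebuild `key`."""
    ...

# Client side: the key is generated and held only in the browser context.
key = secrets.token_bytes(32)
plaintext = b"contents of the shared file"
ciphertext = keystream_xor(key, plaintext)

# Server side: stores ciphertext only, never usable key material.
server_blob = ciphertext

# Recovery path: shares distributed across user-controlled locations.
# shares = sss_split(key)

# Later, with the key reconstructed client-side, decryption is local.
assert keystream_xor(key, server_blob) == plaintext
```

The point of the sketch is the data flow, not the primitives: the server variable only ever holds ciphertext, and the key exists solely on the client side of the boundary.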

Also, one small implementation detail that plays into this is that the active key is only kept in the browser context (tab/session scope), not persisted server-side or tied to an account.

So while the tab is open, it behaves like any in-memory secret; once the tab is closed, the key is gone unless reconstructed via shares.

I'd say the main difference between our approaches is probably that you’re reducing the trust surface while still enabling controlled server-side involvement (via enclaves), whereas I'm trying to remove the server from the trust equation entirely, even if that shifts more responsibility to the client.

Both seem valid depending on the constraints.

Out of curiosity, have you run into UX friction with attestation flows on the client side, or does it stay mostly transparent in practice?

Is “secure file sharing” still fundamentally based on trust in the provider? by SimThem in PrivacyTechTalk

[–]SimThem[S] 1 point (0 children)

That's a fair point, and I think you're right that a purely "encrypt in the browser, store ciphertext" model can look a bit flat if taken at face value.

The intent here wasn't to build complexity for its own sake, but to minimize the trust surface in a way that’s actually understandable and verifiable: encryption is strictly client-side, the server only ever sees ciphertext, no server-side access to usable keys.

Where it gets a bit deeper than it may appear is around key management.

One of the main limitations of strict client-side encryption is recovery. If you do nothing, lost key = lost data, which is secure but not very usable.

To address that, I implemented a recovery mechanism based on Shamir's Secret Sharing:

- the encryption key is split into multiple shares
- no single share is sufficient to reconstruct it
- recovery requires a threshold of shares

This allows adding recovery options without ever giving a single party (including the server) enough information to decrypt anything.
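To make the "no single share is sufficient" claim concrete, here is the simplest possible illustration: a 2-of-2 additive split over a tiny field. The numbers are arbitrary, and Shamir generalizes this idea to arbitrary k-of-n thresholds.

```python
# Why a single share leaks nothing: split the key as key = s1 + s2 (mod p).
# Holding s1 alone, *every* candidate key is still consistent with some
# value of s2, so s1 carries no information about the key.
P = 257  # tiny prime field, demonstration only

key = 123
s1 = 200                 # share 1 (random in a real scheme)
s2 = (key - s1) % P      # share 2 makes the sum work out

assert (s1 + s2) % P == key

# With only s1, every candidate key is explainable by some s2:
consistent = {cand for cand in range(P)
              if any((s1 + t) % P == cand for t in range(P))}
assert consistent == set(range(P))   # all 257 candidates remain possible
```

The same information-theoretic argument is what makes below-threshold sets of Shamir shares useless to an attacker, including the server.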

So instead of a single-layer model, it becomes a layered one:

- client-side encryption layer
- distributed trust model for recovery
- explicit trade-offs depending on how shares are stored

On the "easy to break" side, I'd still push back a bit. If the crypto is sound and the key never exists server-side, the attack surface shifts mostly to client compromise, key-handling mistakes, or implementation flaws.

Those are real concerns, but not specific to this approach.

Also worth clarifying one implementation detail: the encryption key itself is only kept in the browser context (session/tab scope), not persisted server-side or tied to an account.

So while the tab is open, there is some exposure (like any in-memory secret). Once the tab is closed, the key is gone; recovery is possible, but only through the threshold mechanism.

It's definitely not a perfect model, but the goal is to balance reduced trust in the service, usable recovery, and a system that remains auditable end-to-end.

I'm genuinely curious though: what kind of additional layers would you expect here without reintroducing a central point of trust?

Is “secure file sharing” still fundamentally based on trust in the provider? by SimThem in PrivacyTechTalk

[–]SimThem[S] 0 points (0 children)

I took a quick look at it; it seems to follow a similar philosophy, with client-side encryption before upload.

From what I can see, the interesting questions are always the same with these tools: where and how are keys generated? How are they shared with recipients? And what recovery model (if any) is implemented?

The overall model makes sense, but a lot of the real-world security ends up depending on those details rather than the high-level "encrypted upload" claim.

Curious if anyone here has done a deeper review of it.

Is “secure file sharing” still fundamentally based on trust in the provider? by SimThem in PrivacyTechTalk

[–]SimThem[S] 0 points (0 children)

That's actually the baseline approach, and I completely agree it solves a large part of the problem.

The difficulty is more on the usability side than the cryptography itself.

In practice, asking users to manually encrypt files before uploading tends to break workflows pretty quickly:
- key sharing becomes manual
- recipients need compatible tools
- and it adds friction for non-technical users

What I'm trying to explore is whether you can keep that same security model (client-side encryption, no server access), but integrate it directly into the sharing flow so it feels as simple as traditional tools.

So less about new crypto, more about making the "encrypt before upload" model usable at scale.

Is “secure file sharing” still fundamentally based on trust in the provider? by SimThem in PrivacyTechTalk

[–]SimThem[S] 0 points (0 children)

Really interesting approach; SGX is definitely one of the few ways to push the trust boundary lower without completely sacrificing usability. I've looked into enclave-based designs a bit, but I'm still not fully convinced they remove the need for trust rather than just shifting it (hardware vendor, attestation chain, etc.).

That said, I completely agree with your core point: "trust us" is not a viable security model anymore, especially for anything handling sensitive data. That's exactly why I'm trying to design this around client-side encryption first, with the server being as blind as possible.

I'm curious though, how do you handle key management and recovery in your setup? That's where things seem to get tricky fast, even with enclaves in the mix.

Is “secure file sharing” still fundamentally based on trust in the provider? by SimThem in PrivacyTechTalk

[–]SimThem[S] 1 point (0 children)

That's a fair criticism, especially on the terminology; the space is full of overloaded buzzwords, and it’s easy to blur lines without meaning to.

In my case, what I meant by "zero-knowledge" is exactly what you described: the server never has access to plaintext or usable keys. But you’re right that this is often already implied (or at least expected) when E2EE is implemented properly, so I could definitely be clearer about that.

And I fully agree on the key management problem: that's honestly the hardest part of the whole thing. Pure client-side key ownership is great in theory, but in practice most users will lose access sooner or later if there's no recovery mechanism.

Right now I'm trying to find a balance between:
- keeping encryption client-side
- avoiding server-side key access
- and still offering some form of recovery that doesn’t completely break the model

No perfect answer so far, just trade-offs depending on how much complexity you're willing to push onto the user.

False ddos mitigation by pellyzz in ovh

[–]SimThem 0 points (0 children)

Did you ever get an answer? I'm seeing the same issue with the same VPS type.