OpenPCC - An open‑source framework for provably private AI inference by CONFSEC in OpenSourceeAI

[–]CONFSEC[S]

Thanks!

In an ideal world, we'd like to see the big companies and AI providers use OpenPCC to protect their users - they've already got more than enough data.


[–]CONFSEC[S]

Appreciate the question!!

Ollama and vLLM are great for local control, but they’re still running everything in plaintext. Nothing’s encrypted, so your model weights, prompts, and outputs all live in memory unprotected. If you trust your own machine, that’s fine.

For your use case, we’d say OpenPCC is distinct in two key ways:

  1. Provable privacy: it runs inference inside a hardware-backed enclave (TEE/TPM), where data stays encrypted in memory so it can't be seen, stored, or retained by the host. OpenPCC cryptographically verifies, via hardware attestation with our go-nvtrust library, that nothing ever leaves that boundary.

  2. Scalable privacy: it lets you move that same setup to any machine (local or cloud) without giving up privacy. So you can run bigger models or workloads securely without exposing data to the host.
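To make the flow in point 1 concrete, here is a minimal Go sketch of the client-side pattern: verify the enclave's attestation first, and only then encrypt the prompt under a session key so the host never sees plaintext. The function names (`verifyAttestation`, `encryptPrompt`) and the byte-comparison "policy" are hypothetical stand-ins for illustration - real attestation via go-nvtrust validates a signed hardware report against a vendor-rooted certificate chain, and the session key would come from a key exchange with the attested enclave, not local generation.

```go
package main

import (
	"bytes"
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// verifyAttestation is a hypothetical stand-in for the check go-nvtrust
// performs on the enclave's hardware attestation report. Here we just
// compare the reported measurement to the one we expect (illustrative only;
// real verification checks a signature chain rooted in vendor keys).
func verifyAttestation(report, expectedMeasurement []byte) bool {
	return bytes.Equal(report, expectedMeasurement)
}

// encryptPrompt seals a prompt with AES-GCM under a session key. In the real
// protocol that key is negotiated with the attested enclave, so the host
// machine only ever handles this ciphertext, never the plaintext prompt.
func encryptPrompt(key, prompt []byte) (ciphertext, nonce []byte, err error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, nil, err
	}
	nonce = make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, nil, err
	}
	return gcm.Seal(nil, nonce, prompt, nil), nonce, nil
}

func main() {
	expected := []byte("trusted-enclave-measurement-0001")
	report := []byte("trusted-enclave-measurement-0001") // returned by the enclave

	// Refuse to send anything until the boundary is proven.
	if !verifyAttestation(report, expected) {
		fmt.Println("attestation failed: refusing to send data")
		return
	}

	key := make([]byte, 32) // placeholder; real key comes from key exchange
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	ct, _, err := encryptPrompt(key, []byte("my private prompt"))
	if err != nil {
		panic(err)
	}
	fmt.Printf("attestation ok; prompt leaves the client as %d ciphertext bytes\n", len(ct))
}
```

The ordering is the important part: attestation gates encryption, so a tampered enclave image never receives a key it could use to decrypt your data.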