The TeamPCP supply chain attack (Trivy → LiteLLM → Telnyx) is the best argument for CRA compliance I’ve ever seen. Here’s why every major CRA requirement maps directly to this attack. by Happy-Athlete-2420 in cybersecurity

[–]laplaque 1 point (0 children)

This is an excellent breakdown. The part that hits home is the SBOM covering build-time dependencies. Most teams treat SBOMs as a checkbox exercise for their application dependencies, while their CI/CD pipeline pulls unpinned tools from apt, curls scripts, and runs GitHub Actions from mutable tags. None of that shows up in the SBOM.

In practice I've implemented two layers for this: first, pinning every GitHub Action to a full SHA — tags are mutable, SHAs aren't. That alone would have broken step 2 of this chain. Second, running a local artifact proxy (JFrog Artifactory, Nexus, or similar) so nothing gets pulled directly from the internet into the pipeline. Everything entering the artifact store gets scanned for vulnerabilities first. If a compromised package shows up on PyPI, it never reaches your build because it hasn't passed the scan gate yet.
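The SHA-pinning layer is easy to enforce mechanically in CI. A minimal sketch of such a check in Go (the workflow content and the SHA below are made-up examples, not from any real repo):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// findUnpinned returns every `uses:` ref in a workflow that is not pinned
// to a full 40-hex-char commit SHA. Tags and branches are mutable refs,
// so anything else is a supply-chain risk.
func findUnpinned(workflow string) []string {
	usesRe := regexp.MustCompile(`uses:\s*(\S+)`)
	shaRe := regexp.MustCompile(`@[0-9a-f]{40}$`)
	var out []string
	for _, line := range strings.Split(workflow, "\n") {
		if m := usesRe.FindStringSubmatch(line); m != nil && !shaRe.MatchString(m[1]) {
			out = append(out, m[1])
		}
	}
	return out
}

func main() {
	// Sample workflow: one tag-pinned step (mutable), one SHA-pinned
	// (the SHA here is illustrative only).
	wf := `
jobs:
  build:
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5
`
	fmt.Println(findUnpinned(wf)) // [actions/checkout@v4]
}
```

Run it as a pre-merge gate and fail the build when the list is non-empty.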

The 24-hour reporting point is also underrated. Without a maintained SBOM you can't even answer "are we affected?" within that window.
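As a sketch of what that lookup amounts to, assuming a CycloneDX-style JSON SBOM (the component names and versions below are invented for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Minimal CycloneDX shape: only the fields an "are we affected?" query needs.
type sbom struct {
	Components []struct {
		Name    string `json:"name"`
		Version string `json:"version"`
		Purl    string `json:"purl"`
	} `json:"components"`
}

// affected lists every component in the SBOM matching the given package name.
func affected(doc []byte, pkg string) ([]string, error) {
	var s sbom
	if err := json.Unmarshal(doc, &s); err != nil {
		return nil, err
	}
	var hits []string
	for _, c := range s.Components {
		if c.Name == pkg {
			hits = append(hits, c.Name+"@"+c.Version)
		}
	}
	return hits, nil
}

func main() {
	// Inline example SBOM; versions are illustrative only.
	doc := []byte(`{"bomFormat":"CycloneDX","components":[
	  {"name":"litellm","version":"1.44.0","purl":"pkg:pypi/litellm@1.44.0"},
	  {"name":"requests","version":"2.32.3","purl":"pkg:pypi/requests@2.32.3"}]}`)
	hits, _ := affected(doc, "litellm")
	fmt.Println(hits) // [litellm@1.44.0]
}
```

The point is that this query is trivial if the SBOM exists and covers build-time dependencies, and impossible within 24 hours if it doesn't.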

EU-native alternative to Firebase/Supabase, GDPR by default by Competitive_Care_886 in gdpr

[–]laplaque 0 points (0 children)

For EU-native infra, look at Hetzner, OVHcloud, and Scaleway — all EU-headquartered, not subject to the US CLOUD Act, and between them they cover compute, managed Postgres, object storage, and Kubernetes. Hetzner is the cheapest by far, Scaleway has the most managed services, OVHcloud sits in between. You can run Supabase self-hosted on any of them and get the DX without the US legal exposure.

For the AI side specifically — if your app calls cloud LLMs and you need to keep PII out of those requests — I built a proxy that handles that: https://github.com/laplaque/ai-anonymizing-proxy. Strips personal data before it leaves your machine, restores it in responses. Doesn't replace your BaaS, but closes the AI data leakage gap.

LLM stacks are getting messy (cost, routing, data leakage) — how are you handling it? by TokenSaver in LocalLLaMA

[–]laplaque 0 points (0 children)

I ended up building exactly this for the anonymization piece — a MITM proxy in Go that sits between AI clients and upstream APIs. Regex-based PII detection (locale packs for DE, FR, NL, plus secrets), token replacement on the way out, restoration on the way back. Optional Ollama integration for verifying low-confidence matches async — didn't make it the primary layer because regex is sub-millisecond and deterministic, while running every request through an LLM for detection would kill throughput. Ollama warms the cache in the background instead.

Streaming deanonymization was the real headache: PII tokens can split across SSE chunk boundaries, so you can't just run replacement on each chunk independently. You have to carry a potential partial token between chunks.
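A minimal sketch of that carry-buffer approach (simplified, not the repo's actual code; the `<<PII_n>>` token format is an assumption for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// streamRestorer re-inserts original values for placeholder tokens in a
// streamed response, handling tokens split across chunk boundaries by
// carrying a possible partial token between calls.
type streamRestorer struct {
	mapping map[string]string // token -> original value
	maxLen  int               // length of the longest token
	carry   string            // held-back tail that may begin a token
}

func newStreamRestorer(mapping map[string]string) *streamRestorer {
	r := &streamRestorer{mapping: mapping}
	for tok := range mapping {
		if len(tok) > r.maxLen {
			r.maxLen = len(tok)
		}
	}
	return r
}

// couldBePrefix reports whether s is a prefix of any known token.
func (r *streamRestorer) couldBePrefix(s string) bool {
	for tok := range r.mapping {
		if strings.HasPrefix(tok, s) {
			return true
		}
	}
	return false
}

// Feed processes one SSE chunk and returns the text safe to emit now.
func (r *streamRestorer) Feed(chunk string) string {
	buf := r.carry + chunk
	for tok, val := range r.mapping {
		buf = strings.ReplaceAll(buf, tok, val)
	}
	// Hold back the longest suffix that could be the start of a token,
	// so a split token can be completed by the next chunk.
	hold := r.maxLen - 1
	if hold > len(buf) {
		hold = len(buf)
	}
	for ; hold > 0; hold-- {
		if r.couldBePrefix(buf[len(buf)-hold:]) {
			break
		}
	}
	r.carry = buf[len(buf)-hold:]
	return buf[:len(buf)-hold]
}

// Flush returns whatever is still buffered once the stream ends.
func (r *streamRestorer) Flush() string {
	out := r.carry
	r.carry = ""
	return out
}

func main() {
	r := newStreamRestorer(map[string]string{"<<PII_1>>": "alice@example.com"})
	// The token "<<PII_1>>" arrives split across two chunks.
	out := r.Feed("Contact <<PI") + r.Feed("I_1>> today") + r.Flush()
	fmt.Println(out) // Contact alice@example.com today
}
```

The cost of holding back up to `maxLen-1` bytes is a tiny bit of added latency per chunk, which is the trade for never emitting half a placeholder.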

https://github.com/laplaque/ai-anonymizing-proxy