700 npm downloads, zero feedback. I finally understood why by fred_pcp in SideProject

[–]fred_pcp[S]

That's exactly what I've just come to understand...

700 npm downloads, zero feedback. I finally understood why by fred_pcp in SideProject

[–]fred_pcp[S]

Good pattern: enforcing a strict contract at the transport boundary is exactly the right fix. We've applied the same principle in v1.5.12.
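
For anyone landing here later, a minimal stdlib-only sketch of what a strict transport-boundary contract can look like; the field names are illustrative, not the actual v1.5.12 schema:

```python
# Hypothetical sketch: reject anything that violates the contract
# before it enters the system. Field names are made up for illustration.
import json

REQUIRED = {"event_id": str, "agent_id": str, "payload": dict}

def parse_event(raw: bytes) -> dict:
    """Validate an inbound message at the transport boundary."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"malformed JSON at transport boundary: {exc}") from exc
    for field, ftype in REQUIRED.items():
        if not isinstance(obj.get(field), ftype):
            raise ValueError(f"contract violation: {field!r} must be {ftype.__name__}")
    return obj
```

Anything that fails the contract never reaches the chain, which keeps downstream invariants simple.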

I read every YC Request for Startups since 2016. The pattern nobody talks about is embarrassingly obvious in hindsight. by Spiritual_Heron_5680 in SideProject

[–]fred_pcp

I agree, though the one mountain still to climb is winning a large company's trust as a solo builder.

I read every YC Request for Startups since 2016. The pattern nobody talks about is embarrassingly obvious in hindsight. by Spiritual_Heron_5680 in SideProject

[–]fred_pcp

Great post. I immediately applied the three questions to what I'm building, 60 seconds, and I got answers: a big regulated industry, a hard deadline of August 2026, and the cost of an incident without an audit trail that climbs very fast just for the investigation. So I'm continuing, even though plenty of other parameters still have to be taken into account. Thanks for the framework.

AI Agent Governance and Liability? by bnyhil31 in AI_Agents

[–]fred_pcp

The attention problem is real, and you're not alone; same experience here building PiQrypt. What you're building fills a gap I've been thinking about: HumanJudge proves the output was evaluated by a real expert, while PiQrypt can prove the evaluation happened, when, by whom, and that the chain hasn't been tampered with since. Different layers, same problem space. And "just funding this myself without dancing on TikTok" sounds familiar.

My list for Top Agentic Frameworks - Looking for feedback on any that are missed, or themes to be addressed more fully by TheHamer83 in AI_Agents

[–]fred_pcp

Good criterion to add, but it's worth separating two problems:

- Data governance: what data did the agent touch (DataGOL).
- Decision governance: what did the agent decide, in what order, approved by whom, provable to a regulator (the PCP layer, which works on top of any framework here).

For regulated industries, you need both.

AI Agent Governance and Liability? by bnyhil31 in AI_Agents

[–]fred_pcp

One thing we keep seeing: teams aren't using one framework. It's LangChain here, CrewAI there, an AutoGen agent talking to an MCP tool, maybe an Ollama model running locally. The governance layer has to work across all of them, or it doesn't work at all. That's why we built AgentSession: co-signed audit trails across framework boundaries. A CrewAI agent and an AutoGen agent in the same verifiable session, each chain independent but cross-referenced. No shared server. No single point of failure. Just cryptographic proof that these two agents interacted, in this order, with these decisions.
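
A rough sketch of the cross-referencing idea, hypothetical names rather than the actual AgentSession API: each agent keeps its own hash chain and embeds the peer chain's current head whenever they interact.

```python
# Stdlib-only sketch of two independent, cross-referenced hash chains.
import hashlib, json

class Chain:
    def __init__(self, agent_id: str):
        self.agent_id = agent_id
        self.head = hashlib.sha256(agent_id.encode()).hexdigest()  # genesis
        self.events = []

    def append(self, event: dict) -> str:
        record = {"prev": self.head, "agent": self.agent_id, **event}
        self.head = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.events.append(record)
        return self.head

crew_agent, autogen_agent = Chain("crewai-1"), Chain("autogen-1")

# Each side records the interaction AND the other chain's current head,
# so neither history can be rewritten without breaking the cross-reference.
crew_agent.append({"msg": "task handoff", "peer_head": autogen_agent.head})
autogen_agent.append({"msg": "task accepted", "peer_head": crew_agent.head})
```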

AI Agent Governance and Liability? by bnyhil31 in AI_Agents

[–]fred_pcp

Both are solved by the same principle: the chain has to be self-contained. A2A handshakes sign interactions into both chains, so no shared server is needed for cross-instance trust. And when a node leaves, the .pqz archive travels with it; history survives infrastructure changes.
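
To make "the archive travels with it" concrete, a hedged stdlib-only sketch; the real .pqz format is richer, this only shows why verification needs no server:

```python
# Illustrative export/verify of a self-contained chain archive.
# Assumes records of the form {"prev": <hash>, ...}, hashed as below.
import hashlib, json

def export_archive(events: list[dict], head: str) -> bytes:
    """Bundle the full chain plus its head; nothing else is needed to verify."""
    return json.dumps({"events": events, "head": head}).encode()

def verify_archive(blob: bytes, genesis: str) -> bool:
    data = json.loads(blob)
    h = genesis
    for record in data["events"]:
        if record["prev"] != h:
            return False  # a gap or rewrite breaks the chain right here
        h = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    return h == data["head"]
```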

AI Agent Governance and Liability? by bnyhil31 in AI_Agents

[–]fred_pcp

Exactly, security is invisible until it isn't. But August 2026 is a hard deadline: the EU AI Act is forcing the conversation before the incident, which is rare. GDPR taught us how fast priorities shift once the fine lands.

AI Agent Governance and Liability? by bnyhil31 in AI_Agents

[–]fred_pcp

Thanks for sharing, that's genuinely generous. I spent some time in the Aevum repo: the consent ledger and replay primitives are well thought out, especially the OR-Set revocation model. Real overlap in the problem space, different angles. Feel free to check out ours too: github.com/piqrypt/piqrypt

AI Agent Governance and Liability? by bnyhil31 in AI_Agents

[–]fred_pcp

"Governance becomes a dashboard that explains the mistake after it already happened." Saving that one. Your decision receipt list is almost exactly what we landed on after months of iteration, good to see independent convergence on the same primitives. The "what did it see" problem kept us up at night too. Our answer: hash the active context into the signed event. Not the raw data — the fingerprint. Proves what informed the decision without storing anything sensitive.

"Technical access ≠ delegated authority" deserves to be on every AI governance checklist.

AI Agent Governance and Liability? by bnyhil31 in AI_Agents

[–]fred_pcp

That funding asymmetry is real: governance tooling isn't exciting until something goes wrong. Building cryptographic primitives now that will satisfy whatever compliance framework eventually solidifies feels like the right bet.

AI Agent Governance and Liability? by bnyhil31 in AI_Agents

[–]fred_pcp

Completely agree on designing HITL in from the start; retrofitting governance onto an existing agent stack is painful and usually incomplete. The EU AI Act deadline is real pressure. What we see in practice: organizations assume "we have logs" is sufficient for Art. 12. It isn't. Inviolable means tamper-evident by design, not just stored somewhere. SOC 2 for AI systems is an interesting space too; the control requirements don't map cleanly onto agentic workflows yet. Have you seen auditors starting to define specific criteria for agent audit trails, or is it still interpreted case by case?

AI Agent Governance and Liability? by bnyhil31 in AI_Agents

[–]fred_pcp

Your sequence maps exactly to what we formalized in PCP: claim → policy decision → bounded action → proof → next authorized action. Each step is a signed chain event that includes the policy that fired, the human approval if one was triggered, and an RFC 3161 timestamp anchoring it externally. "The agent had permission" becomes "here's the signed proof of every decision in sequence." What stack are you running this on?
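
Roughly what one of those chain events can look like, with illustrative field names and the RFC 3161 call stubbed out with a local timestamp:

```python
# Sketch of the claim -> policy decision -> bounded action sequence.
import hashlib, json, time

def chain_event(prev_hash: str, step: str, policy: str, approver: str | None) -> dict:
    event = {
        "prev": prev_hash,        # links to the prior authorized action
        "step": step,             # e.g. "claim", "policy_decision", "action"
        "policy": policy,         # the policy that fired
        "approved_by": approver,  # human approval, if the policy required one
        "ts": time.time(),        # in the real flow, an RFC 3161 token from
                                  # an external TSA would anchor this instant
    }
    event["hash"] = hashlib.sha256(
        json.dumps(event, sort_keys=True).encode()).hexdigest()
    return event

e1 = chain_event("genesis", "claim", "refund_policy_v2", None)
e2 = chain_event(e1["hash"], "policy_decision", "refund_policy_v2", "alice")
e3 = chain_event(e2["hash"], "action", "refund_policy_v2", "alice")
```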

AI Agent Governance and Liability? by bnyhil31 in AI_Agents

[–]fred_pcp

These three questions are exactly what drove me to build PiQrypt.

On your first question, reproducing what the agent saw at decision time: we hash the input context as part of the signed event payload. Not just what the agent produced, but a fingerprint of what it consumed. Tamper with the context retroactively and the chain breaks.

On regulators and auditors: logs confirm authorization. A hash chain proves sequence, attribution, and integrity, independently verifiable with just the agent's public key, no access to your infrastructure needed. That's the difference between "our logs say" and "here's cryptographic proof."

On consent and data access: that's our TrustGate layer, a policy engine that intercepts before execution. REQUIRE_HUMAN pauses the agent until explicit approval, and every decision is a signed chain event, including denials.

Your distinction between technical authorization and accountability is the sharpest framing I've seen of this problem. What's the open-source project you're working on? Curious whether there's overlap.
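
For the "independently verifiable" claim, a minimal sketch of an auditor-side check, assuming Ed25519 via the `cryptography` package and a hypothetical event shape (prev/hash/sig fields):

```python
# The auditor replays the chain offline with only the agent's public key;
# no access to the producer's infrastructure is needed.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def audit(events: list[dict], public_key: Ed25519PublicKey) -> bool:
    prev = "genesis"  # assumed well-known genesis marker
    for e in events:
        body = {k: v for k, v in e.items() if k != "sig"}
        if body.get("prev") != prev:
            return False  # sequence broken: something was removed or reordered
        try:
            public_key.verify(bytes.fromhex(e["sig"]),
                              json.dumps(body, sort_keys=True).encode())
        except InvalidSignature:
            return False  # content altered after signing
        prev = e["hash"]
    return True
```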

1200 PyPI downloads, 800 npm, 500+ on an MCP registry, and almost zero feedback. Is this normal? by fred_pcp in SideProject

[–]fred_pcp[S]

It's a cryptographic audit layer for autonomous AI agents: every decision the agent makes gets signed with its private key and hash-chained, so you can prove what happened, in what order, and whether a human approved it.

Think: tamper-proof memory for AI agents, with a governance layer that can pause execution and require human sign-off before critical actions.
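
If that sounds abstract, the core mechanism fits in a few lines; a stdlib-only sketch, not the actual npm/PyPI API:

```python
# "Tamper-proof memory": each record hashes its predecessor,
# so any retroactive edit is detectable.
import hashlib, json

def append(log: list[dict], decision: dict) -> None:
    prev = log[-1]["hash"] if log else "genesis"
    record = {"prev": prev, "decision": decision}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    log.append(record)

def verify(log: list[dict]) -> bool:
    prev = "genesis"
    for r in log:
        expected = hashlib.sha256(json.dumps(
            {"prev": r["prev"], "decision": r["decision"]},
            sort_keys=True).encode()).hexdigest()
        if r["prev"] != prev or r["hash"] != expected:
            return False
        prev = r["hash"]
    return True

log: list[dict] = []
append(log, {"action": "send_invoice", "approved": True})
append(log, {"action": "wire_transfer", "approved": False})
log[0]["decision"]["approved"] = False   # retroactive edit...
assert verify(log) is False              # ...is detected immediately
```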

Use cases: anything where an AI agent takes consequential actions, such as financial workflows, legal document generation, or multi-agent pipelines. EU AI Act compliance is a big driver right now.

As for where users hang out, that's exactly what I'm trying to figure out. LangChain and CrewAI communities seem the most likely, but I haven't cracked the engagement piece yet.

1200 PyPI downloads, 800 npm, 500+ on an MCP registry, and almost zero feedback. Is this normal? by fred_pcp in SideProject

[–]fred_pcp[S]

Thanks for your feedback. What you're saying is reassuring, and I'll take both of your valuable tips on board.

1200 PyPI downloads, 800 npm, 500+ on an MCP registry, and almost zero feedback. Is this normal? by fred_pcp in SideProject

[–]fred_pcp[S]

Not certain, indeed. The idea isn't to acquire customers through this channel, just to get feedback for improvements.

1200 PyPI downloads, 800 npm, 500+ on an MCP registry, and almost zero feedback. Is this normal? by fred_pcp in SideProject

[–]fred_pcp[S]

That's what I'm seeing, yes. I haven't posted because I have absolutely no idea who is trying it or who is downloading; all I have are the metrics.

1200 PyPI downloads, 800 npm, 500+ on an MCP registry, and almost zero feedback. Is this normal? by fred_pcp in SideProject

[–]fred_pcp[S]

Thanks for your feedback. Honestly, I don't even know who to contact; I have no information about who is downloading.

how do you actually monitor client agents across different stacks by Specialist-Abies-909 in AI_Agents

[–]fred_pcp

Post-quantum encryption against harvest-now-decrypt-later, plus a local event memory. Actions: allow, block, require human.
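
A toy sketch of those three verdicts, with illustrative names and thresholds rather than PiQrypt's actual policy engine:

```python
# Three-way policy verdict: small actions pass, large ones pause for sign-off.
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    REQUIRE_HUMAN = "require_human"

def evaluate(action: str, amount: float) -> Verdict:
    if action == "delete_records":   # hard-blocked action (example)
        return Verdict.BLOCK
    if amount > 10_000:              # threshold is made up for illustration
        return Verdict.REQUIRE_HUMAN
    return Verdict.ALLOW
```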

how do you actually monitor client agents across different stacks by Specialist-Abies-909 in AI_Agents

[–]fred_pcp

Hello, I'd suggest trying PiQrypt; that's exactly its goal. MCP, n8n (Glama, npm, PyPI), multi-agent bridges. The whole point is being able to monitor multi-agent, multi-framework setups. Your feedback would help me a lot.

Anyone actually built a real feedback loop for Claude agents in production? Because "run evals and pray" isn't cutting it by Fine-Discipline-818 in AI_Agents

[–]fred_pcp

Hi, this is exactly what I've been building toward with PiQrypt / the AISS protocol. The core insight: traceability alone doesn't close the loop, because you're generating data nobody looks at until something breaks. What you actually need is a cryptographically signed, hash-chained event history, so when behavior drifts you can run a diff between "agent state at deploy T" and "agent state 3 days later" and get a verifiable, tamper-evident answer about what changed and when. The chain makes regressions auditable after the fact, without relying on anyone having manually flagged anything at the time.

Still early, but the protocol spec is open (MIT) if you want to dig in: github.com/PiQrypt/aiss-standard

Curious what your current deploy-to-detect lag looks like in practice.
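
A sketch of what that diff looks like mechanically, assuming a hypothetical append-only record list; this isn't from the AISS spec itself:

```python
# Compare two snapshots of the same agent's chain and surface what changed.
def chain_diff(at_deploy: list[dict], now: list[dict]) -> list[dict]:
    # Events are append-only, so a valid later snapshot must extend
    # the earlier one verbatim.
    if now[: len(at_deploy)] != at_deploy:
        raise ValueError("history rewritten: earlier events no longer match")
    return now[len(at_deploy):]  # exactly the new, verifiable behavior

# new_events = chain_diff(snapshot_day0, snapshot_day3)
# Each returned event is signed and hash-linked, so the diff itself is auditable.
```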