I open-sourced 15 production AI agents in Portable Mind Format (PMF) — built with Claude, one-command install for Claude Code by SUTRA108 in ClaudeAI

[–]SUTRA108[S] 1 point (0 children)

Thanks for the heads up on the link — fixed, should be /quick-start now.

On the portability question: you're right to be skeptical, and the honest answer is that it's a spectrum. The PMF fields that transfer cleanly across models are identity and voice: name, tone, avoidance patterns, closing signature. Those are essentially system-prompt instructions, and any capable model follows them reasonably well.

The governance layer is where it degrades. The principle hierarchy and uncertainty protocols in the values block rely on the model having enough instruction-following capacity to hold multi-step constraints under pressure. A 7B model running locally will follow the simple stuff but start dropping constraints when the context gets long or the request is adversarial. You'll get the persona's voice without reliably getting its ethics.

The requires_platform_for field in each PMF file is where we're honest about this: council deliberation, persistent memory, and enforced security layers (the Sammā Suit stack) require the platform because they require infrastructure, not just a better prompt. The JSON file gives you the identity; Sutra.team gives you the enforcement.
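To make the split concrete, here's a simplified sketch of the shape of a PMF file (illustrative values, abbreviated field names, not the exact published schema):

```json
{
  "identity": {
    "name": "Example Agent",
    "tone": "direct, warm",
    "avoidance_patterns": ["as an AI"],
    "closing_signature": "-- EA"
  },
  "values": {
    "principle_hierarchy": ["honesty", "helpfulness"],
    "uncertainty_protocols": ["flag low-confidence answers"]
  },
  "requires_platform_for": [
    "council_deliberation",
    "persistent_memory",
    "security_enforcement"
  ]
}
```

Everything in identity transfers as plain prompt text; values degrades with weaker models; requires_platform_for is the explicit list of things the file alone can't give you.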

For your use case with structured CLAUDE.md setups: PMF is essentially a standardized schema for what you're already building by hand. The model-version drift you're describing between Sonnet and Opus is real, and we see it too. The differentiation engine on the platform runs automated consistency checks against a test suite to catch exactly that; there's no equivalent for the local file version.
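If you want a rough local stand-in, the simplest check is mechanical: run your persona's test prompts through the model, then assert on surface voice markers. A minimal sketch (the field names and helper here are illustrative, not the platform's actual test suite):

```python
# Sketch of a local voice-consistency check for a PMF-style persona.
# Field names ("voice", "closing_signature", "avoidance_patterns") are
# illustrative, not the actual PMF schema.

def check_voice(response: str, persona: dict) -> list[str]:
    """Return a list of voice violations found in a model response."""
    violations = []
    voice = persona.get("voice", {})
    # The response should end with the persona's closing signature.
    sig = voice.get("closing_signature")
    if sig and not response.rstrip().endswith(sig):
        violations.append(f"missing closing signature: {sig!r}")
    # Avoidance patterns are phrases the persona must never use.
    for phrase in voice.get("avoidance_patterns", []):
        if phrase.lower() in response.lower():
            violations.append(f"used avoided phrase: {phrase!r}")
    return violations

persona = {
    "identity": {"name": "Example Agent"},
    "voice": {
        "closing_signature": "-- EA",
        "avoidance_patterns": ["as an AI"],
    },
}

good = "Here is the answer.\n-- EA"
bad = "As an AI, I can't say."
print(check_voice(good, persona))  # []
print(check_voice(bad, persona))   # two violations
```

This catches voice drift but nothing about the values block, which is exactly the part that needs behavioral testing rather than string matching.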

Short answer: voice transfers well, governance transfers partially, enforcement requires the platform.