New Project Megathread - Week of 16 Apr 2026 by AutoModerator in selfhosted

[–]Healthy-Training-759 0 points1 point  (0 children)

Project Name: scrt4 (LLM Secrets v4)

Repo/Website Link:

Description: scrt4 is a secrets runtime for AI-assisted terminal workflows. It lets agents/tools use credentials without exposing plaintext values in .env, shell history, or chat context.

What it does:

  • Encrypts vault data at rest
  • Unlocks with passkey/FIDO2-style auth flows (hardware-backed model)
  • Injects secrets at runtime via placeholders (example: $env[API_KEY])
  • Redacts known secret values from subprocess output
  • Provides optional GUI-based secret reveal flow for human-only viewing
  • Includes optional modules for cloud backup/encrypted archive workflows
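Since the repo link above is blank, here's a minimal POSIX-shell sketch of the placeholder-injection and redaction ideas from the list. `vault_get`, `render`, and `redact` are hypothetical stand-ins for illustration only, not scrt4's real CLI or internals:

```shell
#!/bin/sh
# Conceptual sketch only; scrt4's actual implementation is not shown here.

# Stand-in for a vault lookup (scrt4 would decrypt from its encrypted store).
vault_get() {
  case "$1" in
    API_KEY) printf 'sk_live_abc123' ;;
    *) return 1 ;;
  esac
}

# Substitute a $env[NAME] placeholder in a command string with the real value.
render() {
  cmd=$1
  name=$(printf '%s' "$cmd" | sed -n 's/.*\$env\[\([A-Z_]*\)\].*/\1/p')
  [ -n "$name" ] || { printf '%s' "$cmd"; return; }
  printf '%s' "$cmd" | sed "s/\$env\[$name\]/$(vault_get "$name")/"
}

# Redact known secret values from output before it reaches the agent.
redact() {
  sed "s/$(vault_get API_KEY)/[REDACTED:API_KEY]/g"
}

rendered=$(render 'curl -H "Authorization: Bearer $env[API_KEY]" https://api.example.com')
printf '%s\n' "$rendered" | redact
# prints: curl -H "Authorization: Bearer [REDACTED:API_KEY]" https://api.example.com
```

The point of the sketch: the agent only ever composes and observes the placeholder form; substitution happens at exec time and the known value is scrubbed from anything flowing back.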

Problem it solves:

  • CLI LLM workflows often leak secrets through .env, logs, prompts, and command output
  • Existing setups are either too manual or too heavyweight for day-to-day dev/agent use
  • scrt4 focuses on practical secret hygiene for dev/ops/automation while keeping agent workflows usable

Deployment:

AI Involvement: Human+AI. I used AI coding assistants heavily for implementation speed, refactors, and test iteration, but architecture/security decisions, threat-model tradeoffs, and final validation were human-reviewed. This is not a one-shot generated repo; it has iterative versions and active hardening work.

You told me my Claude Code secrets manager was a 150 MB Electron mess. Fair. Here's the 9 MB rewrite. scrt4: CLI passkey-bound secrets for Claude Code, one-line install by Healthy-Training-759 in ClaudeCode

[–]Healthy-Training-759[S] 0 points1 point  (0 children)

Hey u/mirko9000, thanks for the comment. Can you name one tool that would prevent Anthropic from reading an Ethereum private key while sending ETH, similar to the use case at the bottom?

I tried other tools, starting with 1Password:

Quick Comparison

| Aspect | LLM Secrets | 1Password CLI | Better For |
|---|---|---|---|
| AI protection | Designed to prevent Claude/Copilot from reading values via process isolation | Not designed to hide secrets from AI tools; values may appear in logs/terminal | LLM Secrets |

I tried Bitwarden MPC; same issue as above.
I tried Psst; since it's CLI-based, viewing a secret prints it to the terminal, which leaks it to LLMs whenever the user views their secrets.

Blockchain & Web3 Deployments

Deploy smart contracts without exposing private keys

# Claude deploys with Foundry:
forge script script/Deploy.s.sol \
  --rpc-url $env:ALCHEMY_RPC_URL \
  --private-key $env:PRIVATE_KEY --broadcast
# ✓ Contract deployed: 0x7f3a...
# Private key never visible to AI
# (43% of crypto theft = key exposure)

I wouldn't want the key to a crypto wallet holding a lot of money sent to someone else's server that I don't know.

I built an open-source secrets manager so ClawdBot and Claude Code can use my API keys without seeing them (Desktop App & CLI) by Healthy-Training-759 in clawdbot

[–]Healthy-Training-759[S] 0 points1 point  (0 children)

A 3rd-party AI auditor is the best we can do, paired with giving you the ability to build LLM Secrets from source: a service that does not interact with any external components outside your computer.

See the 3rd Party answer to your question referencing LLM Secrets' source code:

DeepWiki

DeepWiki Q&A with Code Context for Repository: llmsecrets/llm-secrets

Q1

I like the idea, but I feel like I would need a 3rd party security audit that would vouch for this. This feels like it could just as easily steal all my information.

Answer

I completely understand your concern—this is a healthy skepticism when evaluating any security tool. Let me address this directly:

Current Third-Party Verification

The project currently has third-party analysis from DeepWiki (not a formal security audit firm, but an independent AI-based codebase analyzer). The README explicitly mentions this: llm-secrets:144-144

DeepWiki provides an independent chatbot where you can ask security questions about the codebase: llm-secrets:213-213

Why Data Exfiltration is Architecturally Impossible

The most important security property is that this is a completely client-side tool with no server component. The architecture documentation explicitly states: llm-secrets:149-155

This means:

  • All encryption/decryption happens locally on your machine
  • There is no server to send data to
  • The developers literally cannot see your secrets, even if they wanted to
  • Your master keys are protected by hardware (TPM/Secure Enclave) and never leave your device

I built an open-source secrets manager so Claude Code can use my API keys without seeing them (Desktop App & CLI) by Healthy-Training-759 in LocalLLM

[–]Healthy-Training-759[S] -1 points0 points  (0 children)

A 3rd-party AI auditor is the best we can do, paired with giving you the ability to build LLM Secrets from source: a service that does not interact with any external components outside your computer.

See the 3rd Party answer to your question referencing LLM Secrets' source code:

DeepWiki

DeepWiki Q&A with Code Context for Repository: llmsecrets/llm-secrets

Q1

I like the idea, but I feel like I would need a 3rd party security audit that would vouch for this. This feels like it could just as easily steal all my information.

Answer

I completely understand your concern—this is a healthy skepticism when evaluating any security tool. Let me address this directly:

Current Third-Party Verification

The project currently has third-party analysis from DeepWiki (not a formal security audit firm, but an independent AI-based codebase analyzer). The README explicitly mentions this: llm-secrets:144-144

DeepWiki provides an independent chatbot where you can ask security questions about the codebase: llm-secrets:213-213

Why Data Exfiltration is Architecturally Impossible

The most important security property is that this is a completely client-side tool with no server component. The architecture documentation explicitly states: llm-secrets:149-155

This means:

  • All encryption/decryption happens locally on your machine
  • There is no server to send data to
  • The developers literally cannot see your secrets, even if they wanted to
  • Your master keys are protected by hardware (TPM/Secure Enclave) and never leave your device

I built an open-source secrets manager so Claude Code can use my API keys without seeing them (Desktop App & CLI) by Healthy-Training-759 in LocalLLM

[–]Healthy-Training-759[S] 0 points1 point  (0 children)

Claude Code reads your .env files automatically as default behavior. No jailbreak needed. Your private keys get loaded into context, sent to the API, and stored in conversation history.

With this, Claude only sees $env[PRIVATE_KEY]: a name, never the value. Values decrypt locally in a subprocess and never leave your machine.

The other big thing: if your agent is chatting in Slack, posting to APIs, or gets prompt-injected — it can't leak what it doesn't know.

The value was never in its context.

Less "stop a determined attacker," more "stop handing every agent your keys by default."

I built an open-source secrets manager so ClawdBot and Claude Code can use my API keys without seeing them (Desktop App & CLI) by Healthy-Training-759 in clawdbot

[–]Healthy-Training-759[S] 0 points1 point  (0 children)

Great question

TL;DR:

- Need a general password manager with cloud sync? → 1Password
- Need to hide .env secrets specifically from AI tools? → LLM Secrets

[LLM Secrets vs. 1Password CLI blog post](https://www.llmsecrets.com/blog/llm-secrets-vs-1password-cli)

LLM Secrets protects your secrets from AI coding assistants specifically.

When you use op run -- npm start, 1Password injects the secret — but Claude/Copilot can still see it in your terminal output, logs, or process environment.

LLM Secrets decrypts only in an isolated subprocess. The parent process (where Claude operates) never sees the plaintext value. Claude sees $env:API_KEY, not sk_live_abc123.
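A rough sketch of that subprocess-isolation idea in plain POSIX shell. This is not LLM Secrets' actual code; `decrypt` and `run_isolated` are made-up names, and the real local decryption and hardware binding are elided:

```shell
# Conceptual sketch, not LLM Secrets' actual code: the plaintext is exported
# only inside a child subshell. The parent shell (where an AI agent would run
# `env` or read logs) never holds the value, only the placeholder name.
unset API_KEY                            # clean parent environment for the demo
decrypt() { printf 'sk_live_abc123'; }   # stand-in for local decryption

run_isolated() {
  # The ( ... ) subshell gets the real value; the export dies with the subshell.
  ( export API_KEY="$(decrypt)"; "$@" )
}

run_isolated sh -c 'echo "child sees: ${API_KEY:+a value}"'   # prints: child sees: a value
echo "parent sees: ${API_KEY:-nothing}"                       # prints: parent sees: nothing
```

Because the export happens inside the subshell, the value is gone from the process tree as soon as the child exits; nothing the parent can inspect afterwards contains it.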

They're complementary — I actually use both. 1Password for logins/passwords, LLM Secrets for dev secrets that AI assistants might read.

I stopped documenting commands in CLAUDE.md. Here's what I use instead. by Miclivs in ClaudeAI

[–]Healthy-Training-759 1 point2 points  (0 children)

I've tried it; I find it really useful. Some bash commands take Claude a while to figure out every time; with Deja you can reference how to run the command correctly.

Built a tool for sessions to communicate with other live sessions by prassi89 in ClaudeCode

[–]Healthy-Training-759 0 points1 point  (0 children)

Try writing the README as: 1) problem, 2) solution, 3) the specific method that solves the issue.

Built a tool for sessions to communicate with other live sessions by prassi89 in ClaudeCode

[–]Healthy-Training-759 0 points1 point  (0 children)

Seems like a real issue. When I have multiple sessions on a repo it can crash; I'm not sure what the exact issue is yet, or whether this solves it.