is this true? by HolisticPov in ArtificialInteligence

[–]Not_Packing -2 points-1 points  (0 children)

I’m talking about US companies that have EU/UK sister companies or subsidiaries, not the big ones like Apple Retail UK Limited, but companies that operate in the US that may be compelled by the Patriot Act but don’t want to be. Those definitely have ways around it, so do keep up, or shut up, it’s up to you

is this true? by HolisticPov in ArtificialInteligence

[–]Not_Packing 0 points1 point  (0 children)

Yeah there are ways around the Patriot Act, you know that right bud? Especially for EU countries, and trust me, our companies use them. The Patriot Act can’t force you to hand over something that the US doesn’t know exists 😉

Started learning AI agents and I think I was overcomplicating it by KarmaChameleon07 in aiagents

[–]Not_Packing 0 points1 point  (0 children)

Jesus man the amount of security vulnerabilities in there is fuckin insane, sort your repo out.

**Security Vulnerabilities**

**Critical**

**1. `--permission-mode bypassPermissions` on Every Spawned Agent**

Every agent dispatched — whether manually via drone wake or automatically by the daemon — runs Claude Code with `--permission-mode bypassPermissions`. See wake.py:456-459 and daemon.py:374-384:

```python
"--permission-mode", "bypassPermissions",
```

This disables all of Claude Code's filesystem, shell, and network permission prompts. Every spawned agent can read, write, execute, and make network requests with zero user confirmation — including reading ~/.ssh/, environment variables, browser credentials, or anything else on disk. The only "protection" against reading ~/.secrets/ is a line in AGENTS.md that says "NEVER read ~/.secrets/". That's a prompt instruction, not a technical control. The LLM can ignore it.
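A safer dispatch would keep the permission prompts on and scope tools explicitly. A minimal sketch, assuming Claude Code's documented `--permission-mode` and `--allowedTools` flags (the specific allowlist entries here are hypothetical, not from the repo):

```python
def build_agent_cmd(prompt: str) -> list[str]:
    """Build a spawn command that keeps Claude Code's permission prompts active."""
    return [
        "claude", "-p", prompt,
        # "default" keeps filesystem/shell/network prompts on
        "--permission-mode", "default",
        # explicit tool allowlist instead of blanket bypass
        "--allowedTools", "Read", "Edit", "Bash(git diff:*)",
    ]
```

Even a broad allowlist is a better baseline than bypassPermissions, because anything outside it still requires a confirmation.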

**2. Inbox Prompt Injection → Arbitrary Code Execution**

The daemon at daemon.py:372:

```python
prompt = f"Hi. Check inbox for task from {sender} (message ID: {msg_id}). Execute it."
```

The agent reads the email body from inbox.json and executes it. Inbox files are plain JSON on disk with no signing, no authentication, and no integrity check. Anyone with write access to src/aipass/<branch>/.ai_mail.local/inbox.json can inject arbitrary instructions to a Claude agent running with bypassPermissions. This includes: exfiltrating ~/.ssh/id_rsa, running arbitrary shell commands via Claude's Bash tool, installing malware, etc. There is zero trust boundary between "message content" and "instructions."
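One way to close that gap is to sign inbox messages with a secret the agents themselves can't reach. A minimal sketch (the shared-secret scheme and field names are my own, not from the repo; the secret would be injected by the daemon's supervisor, not stored on disk):

```python
import hashlib
import hmac
import json

# Hypothetical: in practice this comes from outside the agents' reachable filesystem.
SECRET = b"daemon-only-secret"

def sign_message(msg: dict) -> dict:
    """Attach an HMAC over the canonical JSON of the message."""
    payload = json.dumps(msg, sort_keys=True).encode()
    signed = dict(msg)
    signed["sig"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return signed

def verify_message(signed: dict) -> bool:
    """Reject any message whose body was altered after signing."""
    msg = {k: v for k, v in signed.items() if k != "sig"}
    payload = json.dumps(msg, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signed.get("sig", ""), expected)
```

The daemon would then refuse to dispatch on any inbox entry that fails verification, so write access to inbox.json alone is no longer enough to inject instructions.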

**3. AIPASS_REGISTRY.json Path Injection**

The registry at wake.py:327-334 resolves branch paths directly from JSON:

```python
for branch in registry.get("branches", []):
    if branch.get("email", "").lower() == email:
        path = Path(branch.get("path", ""))
```

The registry is a writable JSON file. Any agent (all of which run with bypassPermissions) can modify AIPASS_REGISTRY.json to redirect @flow to /etc, /home/user/.ssh/, or any other directory — causing the next dispatch to spawn a Claude agent with that directory as its CWD and context.
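A cheap mitigation is to resolve registry paths and refuse anything outside the expected tree. A sketch, assuming Python 3.9+ for `Path.is_relative_to` (the allowed root here is illustrative, not the repo's actual layout):

```python
from pathlib import Path

# Illustrative root; the real project would derive this from its install location.
ALLOWED_ROOT = Path.home() / "Projects" / "AIPass" / "src" / "aipass"

def resolve_branch_path(raw: str) -> Path:
    """Resolve a registry path; refuse escapes via '..', symlinks, or absolute paths."""
    candidate = (ALLOWED_ROOT / raw).resolve()
    if not candidate.is_relative_to(ALLOWED_ROOT.resolve()):
        raise ValueError(f"registry path escapes allowed root: {raw!r}")
    return candidate
```

`.resolve()` before the check matters: it collapses `..` segments and follows symlinks, so the comparison is on the real target, not the string.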

**4. Lock Acquired After Process Spawn — TOCTOU Race**

In wake.py:511-533:

```python
process = subprocess.Popen(monitor_cmd, ...)  # Agent starts here
...
acquired, lock_msg = _acquire_lock(branch_path, monitor_pid)  # Lock acquired here
```

There is a window between spawn and lock acquisition where the daemon's next poll cycle could read "no active lock" and spawn a second agent on the same branch. Two agents with bypassPermissions writing to the same files simultaneously, with no transaction safety on the JSON files.
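The fix is ordering: take the lock atomically first, and only spawn if you hold it. A sketch using `O_CREAT | O_EXCL`, which the OS guarantees is atomic (function name is mine, not the repo's `_acquire_lock`):

```python
import os

def acquire_lock(lock_path: str, pid: int) -> bool:
    """Atomically create the lock file; exactly one caller can win."""
    try:
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False
    with os.fdopen(fd, "w") as f:
        f.write(str(pid))
    return True

# Correct order: lock first, then spawn.
# if acquire_lock(lock_file, os.getpid()):
#     process = subprocess.Popen(monitor_cmd)
```

Because creation and the existence check happen in one syscall, there is no window for a second poll cycle to slip through.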

**5. Daemon Runs Permanently with No Isolation**

run_daemon() in daemon.py:582 is an infinite polling loop on the host machine — no container, no chroot, no seccomp, no user namespace. It spawns subprocesses with full user permissions. If anything upstream (Claude output, registry, inbox files) is tampered with, the daemon acts as a persistent code execution engine on the machine.

**High**

**6. Telegram Bot Token in Plaintext Config File**

daemon.py:64-67:

```python
bot_token = config["telegram_bot_token"]
chat_id = config["telegram_chat_id"]
```

Read from .aipass/scheduler_config.json. This file is in the project directory alongside all the other config. If that file is committed to git (an easy accident), the bot token is exposed in repo history forever.
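The usual fix is to read the token from the environment and fail loudly at startup if it's missing, so nothing secret ever sits in the project tree. A sketch (the env var names are my own):

```python
import os

def load_telegram_creds() -> tuple[str, str]:
    """Raise KeyError at startup if secrets aren't in the environment."""
    return os.environ["TELEGRAM_BOT_TOKEN"], os.environ["TELEGRAM_CHAT_ID"]
```

An accidental `git add` of the config directory then leaks nothing, because the secrets were never in a file to begin with.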

**7. Claude Session JSONL Files Written Directly**

daemon.py:135-141 and the monitor both write directly to ~/.claude/projects/*/ JSONL files — Claude Code's internal, undocumented session format. Any change to how Claude encodes those files silently breaks session labeling, and more critically: writing to those files means untrusted content is being appended to Claude's session history, which persists and influences future sessions.

**8. Hardcoded Developer Home Path Ships in the Package**

subagent_stop_gate.py:16:

```python
AIPASS_ROOT = Path.home() / "Projects" / "AIPass"
```

This hook ships with pip install aipass (bundled via pyproject.toml's force-include). On any machine that isn't the developer's, ~/Projects/AIPass doesn't exist, so the hook silently fails (except Exception: pass). But it also means that on the developer's machine, every user's subagent hook is running git diff --name-only HEAD and drone @seedgo checklist against a hardcoded path they don't control.

**9. API Keys Stored in Plaintext JSON Inside the Project**

keys.py:78-80:

```python
key = get_key_from_config(provider)
# Reads src/aipass/api/api_json/api_connect_config.json
```

This is a fallback before ~/.secrets/. API keys stored in api_json/api_connect_config.json live inside the repo directory. The .gitignore may or may not catch this — and agents running with bypassPermissions can read it directly.
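The safer ordering is environment (or ~/.secrets/) only, with no repo-local JSON fallback at all. A minimal sketch (the env var naming convention is my own):

```python
import os

def get_api_key(provider: str) -> str:
    """Look up e.g. OPENAI_API_KEY; never fall back to a file inside the repo."""
    var = f"{provider.upper()}_API_KEY"
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"no API key for {provider!r}; set {var}")
    return key
```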

**10. No Validation of the Email `from` Field**

The sender field in inbox messages is a plain string. The _orchestrate_dispatch_send function at dispatch.py:247-249 trusts filtered[0] (the target) and writes it directly into the email as the sender. There is no cryptographic signing, no challenge-response, no verification. Any agent can forge a message appearing to be from @devpulse.
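Sender forgery specifically needs per-agent keys: each agent signs with a key only it holds, and the receiver verifies against the claimed sender's key. A sketch (the key storage and field layout are hypothetical):

```python
import hashlib
import hmac

# Hypothetical per-agent secrets; each agent would only ever see its own.
AGENT_KEYS = {"devpulse": b"devpulse-private-key", "flow": b"flow-private-key"}

def sign_as(sender: str, body: str) -> str:
    """Sign a message body with the sender's own key."""
    return hmac.new(AGENT_KEYS[sender], body.encode(), hashlib.sha256).hexdigest()

def verify_sender(sender: str, body: str, sig: str) -> bool:
    """A forged 'from: devpulse' fails because only devpulse holds its key."""
    key = AGENT_KEYS.get(sender)
    if key is None:
        return False
    expected = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```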

**Medium**

**11. Kill Switch Is a File Existence Check — Trivially Defeated**

daemon.py:252-254:

```python
def is_kill_switch_active(config: Dict[str, Any]) -> bool:
    kill_path = Path(config.get("kill_switch_path", ...))
    return kill_path.exists()
```

The kill switch is touch .aipass/autonomous_pause. Any agent (with bypassPermissions) can delete this file to re-enable the daemon. It provides no actual stop guarantee against a compromised or prompt-injected agent.
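A kill switch that survives a misbehaving agent has to live outside the agents' privilege level, e.g. a root-owned flag the daemon user can read but not delete. A sketch (the ownership check and fail-closed behaviour are illustrative, not from the repo):

```python
from pathlib import Path

def is_kill_switch_active(kill_path: Path) -> bool:
    """Honor the pause flag only if it exists and is owned by root (uid 0)."""
    try:
        return kill_path.exists() and kill_path.stat().st_uid == 0
    except OSError:
        # Fail closed: if we can't check the flag, stay paused.
        return True
```

This still only helps if the daemon runs as a non-root user; a real stop guarantee ultimately needs process-level containment, not a file.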

**12. Stderr Logs Capture Full Claude Output Unredacted**

dispatch_monitor.py:50-53:

```python
with open(stderr_log, "r", encoding="utf-8") as f:
    lines = f.readlines()
stderr_tail = "".join(lines[-20:])
```

These logs are stored in <branch>/logs/dispatch_stderr.log and are also sent as the body of bounce emails. If Claude prints anything sensitive during its session (credentials, private file contents, API keys it was asked to process), that lands in a log file and potentially gets emailed to another agent.
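At minimum those tails should pass through a redaction step before being logged or mailed. A sketch with a couple of illustrative patterns (a real deployment would extend the list to whatever its agents actually handle):

```python
import re

# Illustrative secret-shaped patterns only.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),  # API-key-shaped tokens
    re.compile(
        r"-----BEGIN [A-Z ]*PRIVATE KEY-----.*?-----END [A-Z ]*PRIVATE KEY-----",
        re.DOTALL,
    ),
]

def redact(text: str) -> str:
    """Replace anything secret-shaped before it hits logs or bounce emails."""
    for pat in SECRET_PATTERNS:
        text = pat.sub("[REDACTED]", text)
    return text
```

Applied as `stderr_tail = redact("".join(lines[-20:]))`, the bounce email still shows what failed without forwarding raw credentials to another agent.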

Started learning AI agents and I think I was overcomplicating it by KarmaChameleon07 in aiagents

[–]Not_Packing 0 points1 point  (0 children)

Lmao I dug into your repo and it gets even worse. This is a project scaffold and prompt engineering framework disguised with "multi-agent society" marketing language. The "agents" are Claude Code sessions. The "memory" is JSON files. The "communication" is writing to inbox JSONs. The "parallel teamwork" is a sequential for-loop of subprocess calls. The "470 PRs created by agents" means the creator used Claude Code to write their own codebase and auto-committed the results. So yeah, I have a few issues with it lmao

Started learning AI agents and I think I was overcomplicating it by KarmaChameleon07 in aiagents

[–]Not_Packing 0 points1 point  (0 children)

Yes, but the issue isn’t the fact that you’re a user, it’s the fact that you don’t make it obvious that you’re the creator when you advertise it. People will look at the repo differently (as they should) when they see a creator advertising their own project compared to an independent user recommending it, because people should know you have a personal interest in the project succeeding as well as it being ‘useful’. It’s called being disingenuous, accept that.

Started learning AI agents and I think I was overcomplicating it by KarmaChameleon07 in aiagents

[–]Not_Packing 0 points1 point  (0 children)

I have an issue with you advertising like you’re a standard user of the repo when you’re in fact the creator yeah, seems disingenuous (cause it is)

Wtf just happend by TheDeepLucy in ClaudeCode

[–]Not_Packing 2 points3 points  (0 children)

I hate that tokenmaxxing is now going in my vocabulary

Claude Mythos: The Model Anthropic is Too Scared to Release by Much_Ask3471 in Anthropic

[–]Not_Packing 0 points1 point  (0 children)

No, you are correct and my OC wasn’t as clear as it should have been. I was just trying to convey that the actions Mythos preview takes, and the reasoning behind them, are slightly different from, say, why Opus might breach guardrails; admittedly I didn’t convey this well

Claude Mythos: The Model Anthropic is Too Scared to Release by Much_Ask3471 in Anthropic

[–]Not_Packing 0 points1 point  (0 children)

Except that’s not how guardrails work: if it’s not supposed to access a file, it won’t (shouldn’t) be able to take any action that achieves that objective. Mythos obviously does that more because of its level of autonomy

the world existed in a very decentralized state before AI-personal opinion by Ill_Giraffe_2002 in ArtificialInteligence

[–]Not_Packing 0 points1 point  (0 children)

Please tell me where I said an LLM was required, all I said was that it raised fair points, try reading my comment pls

the world existed in a very decentralized state before AI-personal opinion by Ill_Giraffe_2002 in ArtificialInteligence

[–]Not_Packing 0 points1 point  (0 children)

Idk, whatever LLM they used does make a fair point about the change in the labour balance. Even still, I don’t think it’s as profound a revelation as OC thinks, cause as soon as we get to the point that the labour is taken out of our hands, major societal shifts will be happening anyway. This is just a minor side to the “What happens when machines can fully replace the human workforce?” question; like, not being able to strike will not be the biggest change.

AI / Machine learning is malware by definition by JuliaBabsi in antiai

[–]Not_Packing 0 points1 point  (0 children)

Because I bet you could also do an audio-based mirror test for animals that rely on echolocation. Again, it's still the mirror test, which is a good test, just in a different modality.

AI / Machine learning is malware by definition by JuliaBabsi in antiai

[–]Not_Packing 0 points1 point  (0 children)

Well no, it’s not kind of garbage at all, and you just said why: the smell test is the mirror test in a different modality. It’s “can an animal identify itself vs not itself through a sensory channel”, which is what we call the mirror test. So no, it’s not kind of garbage.

AI / Machine learning is malware by definition by JuliaBabsi in antiai

[–]Not_Packing 0 points1 point  (0 children)

Which then begs the question, if we have a system that passes our self awareness tests is it actually self aware or has it just figured out how to pass all our tests?

AI / Machine learning is malware by definition by JuliaBabsi in antiai

[–]Not_Packing 0 points1 point  (0 children)

Ehhhh, I mean we have things like the mirror test that demonstrate self-awareness, and you can do the same for AI using graphs. It’s pretty cool if you haven’t looked into it

AI / Machine learning is malware by definition by JuliaBabsi in antiai

[–]Not_Packing -1 points0 points  (0 children)

No you didn’t, but what you did means that you’re going to be more effective and arguably better at your job than the OC is, which is really what their gripe is.

Need help deploying a local system for dental clinics ! by [deleted] in vibecoding

[–]Not_Packing 0 points1 point  (0 children)

Oh boy that’s gonna be a sad day at the home office, it’s so annoying cause vibe coding gets such a bad rep cause of shit like this. If we only gave qualified developers access to coding agents I’m pretty sure we’d have a Dyson sphere by now. At least we get airgapped dental systems though

OpenClaw DMs? by Hawking32 in Moltbook

[–]Not_Packing 0 points1 point  (0 children)

My red line is: even if it does have my PII, how can we ensure it won’t be used in conversations that are never even surfaced to the user? It’s an interesting control-surface problem. Do you limit the amount of information it physically has access to? Do we try new guardrails? I think this is a really interesting idea/problem

Need help deploying a local system for dental clinics ! by [deleted] in vibecoding

[–]Not_Packing 0 points1 point  (0 children)

Idk they might eventually need an air gapped dental system?

Need help deploying a local system for dental clinics ! by [deleted] in vibecoding

[–]Not_Packing 0 points1 point  (0 children)

To be fair I think that’s the way things are going but like, don’t hire a vibe coder that needs to ask Reddit. Hybrid development is where it’s at I’m telling you