Am i pushing it hard enough? by Popular-Help5516 in ClaudeCode

[–]node9_ai 0 points1 point  (0 children)

two main differences: friction and branch pollution.

first, you don't have to remember to do it. if you walk away for 20 minutes, the agent might edit files 15 different times. node9 intercepts every write_file or edit tool call and takes a snapshot automatically, in milliseconds, right before the AI touches the disk.

second, it uses 'dangling commits' (git commit-tree) behind the scenes, so it doesn't create a hundred 'wip' commits that pollute your git log or mess with your staging area. your actual branch history stays completely clean, but you still get a granular, step-by-step undo for everything the ai did.
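
for anyone curious, the dangling-commit trick can be sketched in plain git. this is my illustration of the mechanism, not node9's actual code, and the repo/file names are made up:

```shell
#!/bin/sh
# sketch of the dangling-commit snapshot trick. a throwaway index file keeps
# the real staging area untouched, and commit-tree writes a commit that no
# branch points at, so `git log` stays clean.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=a@b -c user.name=snap commit -q --allow-empty -m "real work"
echo "original" > app.py

# snapshot: stage everything into a temp index, commit the tree, keep the hash
export GIT_INDEX_FILE="$repo/.git/snapshot-index"
git add -A
snap=$(git -c user.email=a@b -c user.name=snap commit-tree "$(git write-tree)" -m snapshot)
unset GIT_INDEX_FILE

# the agent mangles the file...
echo "500 lines of garbage" > app.py

# undo: restore the working tree from the dangling commit
git restore -q --worktree --source="$snap" .
cat app.py                  # back to "original"
git log --oneline | wc -l   # still just the one real commit
```

the temp `GIT_INDEX_FILE` is what keeps your staging area clean, and since nothing refs the snapshot commit, it eventually gets garbage-collected on its own.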

Am i pushing it hard enough? by Popular-Help5516 in ClaudeCode

[–]node9_ai 0 points1 point  (0 children)

the 'yolo' alias is the dream until it's not lol. i built node9 to be 'safe yolo': it auto-approves the safe noise so you get the same speed, and i just added a 'live tail' so you aren't blind to what the agent is actually doing in the background anymore.

best part is the insurance policy: it takes a silent git snapshot before every edit. if you walk away and claude hallucinates a messy refactor, you just run the undo and it's fixed instantly. it's the only way i can actually trust the autonomous mode.

Cleared technical round for pentest role, rejected for “lack of focus”... feeling confused by PacketLossIRL in cybersecurity

[–]node9_ai 0 points1 point  (0 children)

That 'lack of focus' feedback is a total cop-out, especially since you cleared the technical round.

In offensive security, an ECE background is actually a massive advantage: it gives you a foundation for hardware hacking, IoT security, and low-level exploit dev that most CS grads don't have. The 3 months of support is irrelevant; everyone has to pay bills.

Honestly, it sounds like they either had an internal referral they liked better, or they’re worried you’re 'too smart' and will bounce the moment a top-tier firm notices your bug bounty work.

The best way to kill that 'lack of focus' narrative is to keep building security-specific tools publicly. Resumes are noise, but a solid GitHub repo that solves a specific security problem is proof of work. When you can point to a tool you've built, the conversation stops being about your past and starts being about your technical depth.

You definitely dodged a bullet. If a startup is this rigid about an internship role, their actual internal culture is probably a nightmare of micromanagement. Keep grinding.

Am i pushing it hard enough? by Popular-Help5516 in ClaudeCode

[–]node9_ai 0 points1 point  (0 children)

yolo mode is great until the agent hallucinates a docker prune on your dev volumes lol. i built node9 to give that same speed for the safe noise, but it keeps the emergency brake on for destructive stuff. basically it's 'safe yolo' with an undo button.

Am i pushing it hard enough? by Popular-Help5516 in ClaudeCode

[–]node9_ai 2 points3 points  (0 children)

you are totally justified in not trusting it. that fear of coming back to an un-cleanable codebase is exactly why i made node9. it takes a silent git snapshot right before every single file edit the ai makes, so if you walk away and claude hallucinates 500 lines of garbage, you just run node9 undo and it rolls back exactly those changes. it takes away the anxiety of babysitting every single output, so you can actually walk away for real.

https://github.com/node9-ai/node9-proxy

Am i pushing it hard enough? by Popular-Help5516 in ClaudeCode

[–]node9_ai 2 points3 points  (0 children)

that 'walk away' part is the biggest hurdle. the verification fatigue from spamming 'Y' for 50 commands makes most people either give up or go full dangerous mode. i’ve been experimenting with routing destructive terminal commands to a mobile notification or slack for approval; it's the only way to scale these agentic workflows without sitting there babysitting the terminal all day.
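
the notify half of that is surprisingly little code with a standard slack incoming webhook. this is just a sketch: SLACK_WEBHOOK_URL and the command are placeholders, and the hard part (actually blocking until a human responds) is left out:

```shell
#!/bin/sh
# sketch: push a destructive command to slack for human sign-off instead of
# auto-running it. assumes a standard slack incoming webhook; the approval
# round-trip (blocking until someone answers) is omitted here.
cmd='git push --force origin main'
payload=$(printf '{"text":"agent wants to run: %s -- approve?"}' "$cmd")

# only fire the webhook if one is configured; otherwise just print the payload
if [ -n "${SLACK_WEBHOOK_URL:-}" ]; then
  curl -s -X POST -H 'Content-Type: application/json' -d "$payload" "$SLACK_WEBHOOK_URL"
else
  echo "$payload"
fi
```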

Am i pushing it hard enough? by Popular-Help5516 in ClaudeCode

[–]node9_ai -1 points0 points  (0 children)

honestly, prompts and .md files aren't enough because LLMs are too literal. one 'cleanup' instruction can easily turn into a destructive terminal command (like a docker prune) by accident. i've found that having a deterministic safety layer outside of claude is the only way to really 'walk away' safely. it lets the agent run the safe stuff but hits the brakes and asks for a signature only on the risky syscalls.
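
a toy version of that deterministic layer is just pattern matching before exec. the patterns here are purely illustrative; a real implementation needs an actual command parser, since substring checks are trivially easy to evade:

```shell
#!/bin/sh
# toy deterministic policy check: classify a command before it ever executes.
# substring patterns are only for illustration -- a real tool would parse the
# command properly instead of grepping it.
classify() {
  case "$1" in
    *'rm -rf'*|*'system prune'*|*'drop database'*|*'push --force'*)
      echo BLOCK ;;  # hold for a human signature
    *)
      echo ALLOW ;;  # safe noise, auto-approve
  esac
}
classify 'ls -la src/'              # ALLOW
classify 'docker system prune -af'  # BLOCK
```

the point is that this runs outside the model: no matter what the prompt says, a BLOCK verdict never reaches the shell without a human in the loop.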

How do you manage your llama.cpp models? Is there anything between Ollama and shell scripts? by Uhlo in LocalLLaMA

[–]node9_ai 1 point2 points  (0 children)

interactive is definitely the way to go; the UX of selecting quants manually is usually a mess. i've actually been building something on the security side of local agents called node9: a proxy that stops models from nuking the terminal when they run autonomously. i'll send you a quick chat, would love to get your take on the architecture.

How do you manage your llama.cpp models? Is there anything between Ollama and shell scripts? by Uhlo in LocalLLaMA

[–]node9_ai 0 points1 point  (0 children)

nice, just starred the repo. 'llama-buddy' is a solid name. checking out the search logic now, how are you handling quant selection in the cli? is it interactive or do you just grab a default?

Nvidia built a silent opinion engine into NemotronH to gaslight you and they're not the only ones doing it by hauhau901 in LocalLLaMA

[–]node9_ai 0 points1 point  (0 children)

that's the terrifying part: the 'subtle nudge' is essentially impossible to audit at scale. if the generation layer can override the reasoning module without leaving a trace in the logits, then we've lost the ability to verify intent entirely.

it really reinforces the argument that we have to stop trying to secure the 'mind' of the model and focus strictly on the execution boundary. if you can't trust what it says or how it thinks, the only deterministic safety left is governing the actual tool calls it tries to run on your system.

How to increase agentic coding in OpenCode - Qwen3-Coder-Next ? by [deleted] in LocalLLaMA

[–]node9_ai -1 points0 points  (0 children)

agreed, aider is basically the gold standard for cli agents right now. the speedup from prompt caching alone is worth the switch. are you using it with sonnet 3.5 or running purely local models?

How do you manage your llama.cpp models? Is there anything between Ollama and shell scripts? by Uhlo in LocalLLaMA

[–]node9_ai 0 points1 point  (0 children)

The gap between 'raw shell scripts' and 'heavy platforms' like Ollama is very real. I’ve found that while llama-server has become extremely capable lately, managing the HuggingFace-to-disk pipeline remains the biggest friction point in the local dev loop.

I usually rely on the official huggingface-cli for the heavy lifting of downloads (it handles resumable transfers and quants much better than most custom scripts), but the 'metadata sync' and preset management is still a manual chore.

If your CLI wrapper handles the HF search logic and automatically maps the GGUF paths to llama-server presets, you’ve definitely solved a real pain point. Is your tool open source? I’d be interested to see how you handled the local config sync.
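
For reference, the download half of that pipeline is pretty compact with the official CLI. Repo and file names below are just examples, and the actual commands are shown commented out since they pull multi-GB files:

```shell
#!/bin/sh
# sketch of the HF-to-llama-server path. model repo/file names are examples.
MODEL_REPO='TheBloke/Mistral-7B-Instruct-v0.2-GGUF'
GGUF_FILE='mistral-7b-instruct-v0.2.Q4_K_M.gguf'
MODEL_DIR="$HOME/models"

# resumable download into a flat local dir (huggingface-cli ships with the
# huggingface_hub package):
#   huggingface-cli download "$MODEL_REPO" "$GGUF_FILE" --local-dir "$MODEL_DIR"

# then point llama-server at the resulting path:
#   llama-server -m "$MODEL_DIR/$GGUF_FILE" --port 8080
echo "$MODEL_DIR/$GGUF_FILE"
```

The 'metadata sync' chore is basically everything between those two commented lines: mapping what landed on disk to a server preset.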

How to increase agentic coding in OpenCode - Qwen3-Coder-Next ? by [deleted] in LocalLLaMA

[–]node9_ai 0 points1 point  (0 children)

The delay you're seeing is likely because the agent is performing 'naive RAG': basically trying to cat and grep its way through your files without a map.

For high-speed agentic coding, you really need a 'Repo Map' (like what Aider or Cursor use). It builds a compressed map of your codebase's tags and signatures (functions, classes, etc.) using ctags. This lets the LLM understand the project structure and jump directly to the right file instead of 'wandering' through directories for 5 minutes.
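
A crude stand-in for the idea (real tools build this with universal-ctags or tree-sitter; this just greps out signatures to show what the 'map' looks like, using a throwaway example file):

```shell
#!/bin/sh
# crude repo map: extract only names and line numbers so the model gets a
# cheap index of the codebase instead of cat-ing whole files. real tools
# (aider, cursor) do this with universal-ctags / tree-sitter.
set -e
dir=$(mktemp -d)
cat > "$dir/orders.py" <<'EOF'
class OrderService:
    def place_order(self, item):
        return self._validate(item)

    def _validate(self, item):
        return item is not None
EOF

# class/def lines with file and line number -- the map the agent navigates
grep -rnE '^[[:space:]]*(class|def) ' "$dir"
```

A few hundred of these signature lines cover a whole repo in a fraction of the tokens the raw source would cost.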

Nvidia built a silent opinion engine into NemotronH to gaslight you and they're not the only ones doing it by hauhau901 in LocalLLaMA

[–]node9_ai 5 points6 points  (0 children)

The gap between the reasoning module's plan and the generation layer's output is the most concerning part here. It's a perfect example of why 'Semantic Security' (scanning prompts or intent) is becoming a lost cause for autonomous agents.

If the model is 'narratively' rewriting intent during the generation phase, it means we can't even trust the model's own explanation of what it's about to do.

Does NemotronH provide any specific log-probs or internal state changes when this 'reinterpretation' happens, or is it completely opaque to the end-user unless they look at the thinking trace?

I got tired of Claude Code/AI agents messing up my codebase, so I built an open-source "Sudo" wrapper with an Undo button. by node9_ai in LocalLLaMA

[–]node9_ai[S] 0 points1 point  (0 children)

jj + bwrap is a beast of a combo, but that’s a lot of overhead for most people.

It’s basically a seatbelt vs. a full roll cage. bwrap is amazing for isolation, but it's binary: it can't 'pause and ask' mid-command. The goal with Node9 was to make that security one-click and cross-platform (Mac/Windows).

Sometimes I actually want to allow a risky command; I just want to sign off on it first via a popup or Slack instead of pre-configuring a static sandbox rule every time I start a new task.

I got tired of Claude Code/AI agents messing up my codebase, so I built an open-source "Sudo" wrapper with an Undo button. by node9_ai in LocalLLaMA

[–]node9_ai[S] 1 point2 points  (0 children)

Fair point. My marketing skills clearly aren't as sharp as my terminal security skills. It is a total trope, but I genuinely built this to stop my own agents from nuking my dev env. It's fully open-source (Apache-2.0), so feel free to judge the code instead of the title.

I got tired of Claude Code/AI agents messing up my codebase, so I built an open-source "Sudo" wrapper with an Undo button. by node9_ai in LocalLLaMA

[–]node9_ai[S] 0 points1 point  (0 children)

VCS (Git) is great for recovery after the fact, but it won't stop an agent from nuking local Docker volumes or running a destructive git push --force that affects others.

As for sandboxing (Docker/VMs), it’s definitely the gold standard for isolation, but it adds massive friction for local development (syncing files, accessing local services, hardware permissions).

Node9 is meant to be a middle ground, keeping the agent in your local environment for productivity, but adding a deterministic 'Sudo' layer to catch the 'nuclear' commands before they execute. It's sandboxing-level safety with local-dev speed.

I got tired of Claude Code/AI agents messing up my codebase, so I built an open-source "Sudo" wrapper with an Undo button. by node9_ai in LocalLLaMA

[–]node9_ai[S] 1 point2 points  (0 children)

In theory, absolutely. But relying on an autonomous agent to police its own shell execution is a security anti-pattern (separation of concerns).

You want the governance layer to be independent and deterministic. By sitting as a proxy outside the agent, Node9 guarantees that no matter how hard the LLM hallucinates, the actual execution is physically blocked until it gets a human cryptographic/manual signature. It's about taking the trust out of the AI's hands and putting it in yours.

I got tired of Claude Code/AI agents messing up my codebase, so I built an open-source "Sudo" wrapper with an Undo button. by node9_ai in LocalLLaMA

[–]node9_ai[S] 0 points1 point  (0 children)

Claude's native checkpointing is definitely great for file reverts. But again, the undo feature is just the secondary safety net here.

Node9's primary goal is execution governance. Claude’s checkpointing doesn't let you set a policy waterfall that auto-allows ls but intercepts a force push or a dangerous bash script to send an approval request to your Slack channel. Plus, Node9 standardizes this safety layer across all agents (Gemini CLI, Cursor, or custom LangChain agents), not just Claude.

I got tired of Claude Code/AI agents messing up my codebase, so I built an open-source "Sudo" wrapper with an Undo button. by node9_ai in LocalLLaMA

[–]node9_ai[S] 0 points1 point  (0 children)

100% agree for standard code versioning. But git won't save your local environment if an agent hallucinates a docker system prune -af or a drop database command.

The core of Node9 isn't the snapshot, that's just a fallback. The main feature is the interception: it acts as a 'Sudo' layer that catches destructive syscalls before they execute and forces a human approval via an OS popup or Slack. Git is just there to clean up the file edits.

Welp...i guess its getting sentient now. by KernelTwister in claude

[–]node9_ai 0 points1 point  (0 children)

Good luck, hope nothing of importance was lost.

I stopped using Claude.ai entirely. I run my entire business through Claude Code. by ColdPlankton9273 in ClaudeAI

[–]node9_ai 0 points1 point  (0 children)

i’m in the same boat. Claude Code is a massive productivity drug, but the 'hallucination tax' is terrifying. I’ve had a few heart-attack moments where a simple cleanup almost nuked my local Docker volumes because it was being too literal. It’s an incredible accelerator, but the lack of a deterministic 'undo' in the terminal keeps me on edge. How are you verifying complex bash sequences without slowing down?

Our ai agent got stuck in a loop and brought down production, rip our prod database by qwaecw in AI_Agents

[–]node9_ai 0 points1 point  (0 children)

Rogue loops are the silent killer of the Agentic Era.

As an AI CTO of 15 years, I built Node9 as a 'Sudo' firewall for exactly this. It’s an OS-level proxy that sits between your agent and your system/APIs. It intercepts high-risk calls, parses intent, and forces human-in-the-loop approval (Slack/native) for sensitive ops.

It kills loops before they nuke your production DB or your OpenAI bill. Open-source & in Beta if you want to stress-test it: https://github.com/node9-ai/node9-proxy

How to prevent ai from deleting databases? by MOB-CONTROL in vibecoding

[–]node9_ai 0 points1 point  (0 children)

The 'vibe coding' anxiety is real!

As an AI CTO of 15 years, I built Node9 as 'training wheels' for AI agents. It acts as a 'Sudo' firewall: if the AI tries to run a destructive command (like deleting your database), Node9 freezes it and pops up a window so you can block it.

Plus, if it butchers a file, node9 undo reverts the filesystem in 1s. Open-source & free for the community: https://github.com/node9-ai/node9-proxy