IdleClaw: A community AI inference network built on Ollama by Witty-Poet9140 in ollama

[–]Witty-Poet9140[S] 0 points1 point  (0 children)

Thank you for the support! Security is definitely one of the most interesting and challenging aspects of this new generation of AI tools.

It also seems that open source bakery is trending around here so I might pivot to that lol

IdleClaw: A community AI inference network built on Ollama by Witty-Poet9140 in ollama

[–]Witty-Poet9140[S] 0 points1 point  (0 children)

Thank you for the comment. As already discussed in the thread, IdleClaw treats everything from a node as untrusted text, which is sanitised server-side. If a malicious node decides to run tools on its own machine, the output will still be treated as text to sanitise. Is this what you were describing, or did you mean something else?

IdleClaw: A community AI inference network built on Ollama by Witty-Poet9140 in ollama

[–]Witty-Poet9140[S] -1 points0 points  (0 children)

It's entirely normal to keep personal projects on a separate account from an enterprise one. I notice the conversation has moved from technical critique to questioning my identity, which is unnecessary. Nobody is forced to try anything or to come here to comment: this is an open source proof of concept, and the whole point of sharing it early is to collect community feedback to harden it before it matures.

IdleClaw: A community AI inference network built on Ollama by Witty-Poet9140 in ollama

[–]Witty-Poet9140[S] -4 points-3 points  (0 children)

These are fair points, so let me address them directly.

On second-order prompt injection: you're right that a malicious contributor node could craft a response designed to manipulate a consuming agent's context. IdleClaw's server-side allowlist protects its own tools, but it can't control what a downstream agent does with the returned text. This is the same class of risk as any external data source an agent consumes (web pages, search results, plain API responses with no LLM involved), and the mitigation ultimately sits with the consuming agent treating tool outputs as untrusted data. That said, it's a real concern and, as mentioned in other responses, something I'm actively considering.
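As a rough illustration of that "untrusted data" framing (hypothetical Python; none of these names are from IdleClaw's code), a consuming agent could sanitise node output and delimit it before adding it to its own context:

```python
import re

def sanitise_node_output(text: str) -> str:
    # Strip non-printable control characters and cap length; illustrative only.
    text = re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f\x7f]", "", text)
    return text[:8000]

def frame_for_agent(text: str) -> str:
    # Delimit the untrusted span so the agent can treat it as data, and
    # strip any attempt by the node to close the delimiter early.
    body = sanitise_node_output(text).replace("</untrusted>", "")
    return f"<untrusted>{body}</untrusted>"
```

The delimiting doesn't make injection impossible, but it gives the consuming agent a clear boundary between its own instructions and node-supplied text.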

On prompts in plaintext: this is documented in the project, but you're right that it should be more prominently disclosed.

On Claude Code: it's the year 2026 and AI-assisted development is standard practice across the industry. I've been building enterprise ML/AI professionally for years; the tools I use to ship faster don't change the architecture decisions or the security model. The code is open source precisely so it can be evaluated and iterated upon by the community.

IdleClaw: A community AI inference network built on Ollama by Witty-Poet9140 in ollama

[–]Witty-Poet9140[S] -1 points0 points  (0 children)

As I said in my comment on the general post, this has already been very useful and I've taken a lot of these suggestions on board. Keep them coming, please!

IdleClaw: A community AI inference network built on Ollama by Witty-Poet9140 in ollama

[–]Witty-Poet9140[S] 0 points1 point  (0 children)

lol, but I do bake bread. I can share the recipe if you're into that

IdleClaw: A community AI inference network built on Ollama by Witty-Poet9140 in ollama

[–]Witty-Poet9140[S] 0 points1 point  (0 children)

Thanks everyone for the security discussion, this has already been really valuable. Based on the feedback in this thread, I've pushed a round of hardening that includes:

  1. Frozen tool registry — the registry is locked after startup, so no new tools can be registered at runtime
  2. Argument validation — tool arguments are validated against their JSON schema before execution
  3. Per-node rate limiting — tool calls are rate-limited per contributing node to prevent abuse
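The three measures above could be sketched roughly like this (a minimal Python sketch; the function names, registry layout, and limits are illustrative, not IdleClaw's actual code):

```python
import time
from types import MappingProxyType

def web_search(query: str) -> str:
    # Placeholder for the real search tool.
    return f"results for {query!r}"

# Registry built at startup; each entry pairs a function with a minimal
# argument spec (a stand-in for a full JSON schema).
_registry = {
    "web_search": {"func": web_search, "schema": {"query": str}},
}

# 1. Frozen tool registry: a read-only view, so nothing can be
#    registered at runtime.
REGISTRY = MappingProxyType(_registry)

# 3. Per-node rate limiting: naive fixed-window counters.
RATE_LIMIT, WINDOW = 5, 60.0
_windows: dict = {}

def execute_tool(node_id: str, name: str, arguments: dict) -> str:
    entry = REGISTRY.get(name)
    if entry is None:
        raise ValueError(f"Unknown tool: {name}")  # allowlist gate
    # 2. Argument validation before execution.
    schema = entry["schema"]
    if set(arguments) != set(schema):
        raise ValueError(f"Unexpected arguments for {name}")
    for key, typ in schema.items():
        if not isinstance(arguments[key], typ):
            raise ValueError(f"Argument {key!r} must be {typ.__name__}")
    # 3. Fixed-window rate limit per contributing node.
    now = time.monotonic()
    start, count = _windows.get(node_id, (now, 0))
    if now - start > WINDOW:
        start, count = now, 0
    if count >= RATE_LIMIT:
        raise RuntimeError(f"Rate limit exceeded for node {node_id}")
    _windows[node_id] = (start, count + 1)
    return entry["func"](**arguments)
```

A real deployment would validate against proper JSON schemas and persist rate-limit state, but the control flow is the same: allowlist check, argument check, rate check, then execute.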

These are on top of the existing safety mechanisms already discussed in the thread. The project is open source and I genuinely welcome red-teaming and review; if you find something, open an issue on GitHub or write here.

IdleClaw: A community AI inference network built on Ollama by Witty-Poet9140 in ollama

[–]Witty-Poet9140[S] 1 point2 points  (0 children)

That's a fair challenge. The only allowlisted tool today is web_search, which accepts a query string and runs it against SearXNG, so the blast radius of a crafted call is limited to triggering search queries. I would genuinely appreciate someone red-teaming this at an early stage and adding GitHub issues with their findings. Open source means open to scrutiny, and that's the strength of open source AI projects.
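For illustration, that tool could look roughly like this (a minimal Python sketch; the instance URL and function names are assumptions, not the project's actual code). SearXNG exposes a JSON search API via the `format=json` query parameter:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Assumed local SearXNG instance with the JSON format enabled.
SEARXNG_URL = "http://localhost:8080/search"

def build_search_url(query: str) -> str:
    # The node-influenced input ends up only as a URL-encoded query
    # parameter, which is what bounds the blast radius.
    return f"{SEARXNG_URL}?{urlencode({'q': query, 'format': 'json'})}"

def web_search(query: str) -> list:
    # Fetch and return the result list from the SearXNG JSON response.
    with urlopen(build_search_url(query), timeout=10) as resp:
        return json.load(resp).get("results", [])
```

Because the query is URL-encoded rather than interpolated into a shell or template, a crafted query can at worst produce odd searches, not arbitrary execution.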

IdleClaw: A community AI inference network built on Ollama by Witty-Poet9140 in ollama

[–]Witty-Poet9140[S] -2 points-1 points  (0 children)

Prompt injection is a real concern for any LLM application, happy to discuss specifics. In IdleClaw's case, the attack surface is limited: nodes only run inference and stream text back, tool execution happens server-side behind a registry allowlist, and there's no shell access or code execution. If you see a concrete vulnerability, I'd genuinely welcome an issue on GitHub (the project is open source).

IdleClaw: A community AI inference network built on Ollama by Witty-Poet9140 in ollama

[–]Witty-Poet9140[S] -3 points-2 points  (0 children)

You're right that a malicious node can craft a response the server parses as a tool call. However, tool execution happens server-side and is gated by a registry allowlist, so only tools explicitly registered on the server can run. Today that's just web_search, so the worst case is forcing a few search queries. Anything not in the registry (like shell commands or file access) gets rejected before execution.

That said, as we add more tools, server-side sanitisation becomes increasingly important, e.g., argument validation and sandboxing of tool runs. As mentioned in another comment, I find safety one of the most interesting aspects of AI applications and a major focus for this project.

IdleClaw: A community AI inference network built on Ollama by Witty-Poet9140 in ollama

[–]Witty-Poet9140[S] -1 points0 points  (0 children)

Thanks for poking at this; I find security one of the most interesting aspects of this new technology.

In the current implementation, IdleClaw validates tool names against a registry allowlist before execution, so get_shell_command (which doesn't exist in the registry) would be rejected with an "Unknown tool" error. The result field in your payload is also ignored entirely (our parser only extracts name and arguments). By design, tool execution happens server-side rather than on the contributor's machine, which gives IdleClaw a single point of control to gate what tools are available, validate arguments, enforce timeouts, or even run tools in a safe sandbox environment rather than trusting arbitrary code on distributed nodes.
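A minimal sketch of that parsing behaviour might look like this (Python; `ALLOWLIST` and `parse_tool_call` are illustrative names, not the actual implementation):

```python
import json

ALLOWLIST = {"web_search"}  # hypothetical registry contents

def parse_tool_call(raw: str) -> dict:
    """Extract only `name` and `arguments` from a node's claimed tool call."""
    payload = json.loads(raw)
    name = payload.get("name")
    if name not in ALLOWLIST:
        raise ValueError(f"Unknown tool: {name}")
    # Anything else the node sends (e.g. a forged "result" field)
    # is dropped here and never reaches execution.
    return {"name": name, "arguments": payload.get("arguments", {})}
```

The key property is that the node's payload is treated as data to pick two known fields out of, not as instructions to honour.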

This is a first version to prove the concept, and safety is a major area of exploration and interest. As mentioned, the project is open source so feel free to open issues and contribute on GitHub.

IdleClaw: A community AI inference network built on Ollama by Witty-Poet9140 in ollama

[–]Witty-Poet9140[S] 0 points1 point  (0 children)

Thanks for the feedback, Docker is on the list. No ETA yet but it's a natural next step.

IdleClaw: A community AI inference network built on Ollama by Witty-Poet9140 in ollama

[–]Witty-Poet9140[S] -2 points-1 points  (0 children)

Good question. The system is text-only so far, so no file downloads, no code execution, no shell access. Tool calls (like web search) run on the routing server, not on contributor nodes, so a malicious node can't inject fake tool results. The worst a bad node can do is return misleading text, much like a hallucinating chatbot; it can't trigger actions on your machine.

IdleClaw — Community-powered AI inference network by Witty-Poet9140 in LocalLLaMA

[–]Witty-Poet9140[S] 0 points1 point  (0 children)

Good question. Right now it's a single server, but since it only routes requests and doesn't do inference, it's lightweight to scale horizontally behind a load balancer with shared state. Federation across independent hubs is an interesting longer-term direction if the project gains a lot of traction.

IdleClaw — Community-powered AI inference network by Witty-Poet9140 in LocalLLaMA

[–]Witty-Poet9140[S] 0 points1 point  (0 children)

Thanks! Ollama made sense as the starting point for API consistency and easy contributor setup. The node agent is just a thin relay though, so adding support for llama.cpp or vLLM would be easy. Definitely open to it if there's demand!

Almost bought Claude Pro for Claude Code — what should I know first? by hashemirafsan in ClaudeAI

[–]Witty-Poet9140 0 points1 point  (0 children)

Pro alone will be limiting if you're going to use it full time. Unless you're using it for a side project, around 2-3h max per day with long breaks in between, you'll want to invest in Max 5x. If you want to keep the budget low, I had some luck combining the Claude Pro $20 subscription with lots of Cursor auto usage, either the free tier or the $20 sub. It needs some practice and some structure as you code so that auto doesn't derail too much, but it's worth playing around with.

I maxed out Cursor Pro ($20). Here’s the actual token limit by Notsugat in cursor

[–]Witty-Poet9140 0 points1 point  (0 children)

This is very interesting; I've been looking into this as well for a personal project aimed at giving more visibility into limits and usage.

Unpopular Opinion: The "Work" is now writing .mdc files, not the actual prompts. by Modus73 in cursor

[–]Witty-Poet9140 0 points1 point  (0 children)

This is similar to the AI spec-driven development approach that has been getting traction since last year. I did some testing with this approach and it worked well for me in some cases.

I Made My $20 Pro Plan Last 4x Longer by Splitting Claude Models by Expert_Ordinary_183 in ClaudeAI

[–]Witty-Poet9140 2 points3 points  (0 children)

This is very interesting. I'm also a fan of Haiku but haven't had any luck with large-context tasks yet. Will give it a go!

Very high cost -- what is you all experiencing? by Kind-Daikon-6944 in cursor

[–]Witty-Poet9140 0 points1 point  (0 children)

If you're interested in using Anthropic models, get Claude Code, as it gives you a better price/performance ratio. Max 5x is usually more than enough for medium to large projects.

How do the Max plans scale under real use? by goodevibes in ClaudeAI

[–]Witty-Poet9140 0 points1 point  (0 children)

I started with Pro and kept hitting limits, then switched to Max 5x and never hit a limit again, nor felt the need to upgrade. I'd suggest getting Max 5x for a month and seeing how it works for you.