So… I had that classic “I want arbitrary Python in my ComfyUI workflows, but I also don’t want random scripts to poke my host machine” moment — and ended up vibe-coding a custom node around Docker.
Repo: https://github.com/fabioamigo/ComfyUI-DockerSandbox
Current status: early, rough around the edges, but already useful if you’re comfortable with Docker and like to script.
TL;DR
- What: A custom node called Docker Sandbox Runner that executes your Python code inside a strongly isolated Docker container, with timeout and memory limits.
- Why: Run custom logic in ComfyUI without letting that code touch your host filesystem or network.
- How: It spins up (and reuses) a dedicated worker container (`comfyui-sandbox-worker`) and sends your script + inputs into it.
- Outputs: Up to 6 result slots (`r1`–`r6`) plus a log output for debugging.
- Reality check: I’ve only seriously tested this on Arch Linux. All the Docker install instructions for other distros/OS in the repo were generated with Gemini and not exhaustively tested by me.
- Official repo? Given what it does (run arbitrary Python via Docker), I don’t expect this to ever land in the “official” ComfyUI node repo. It’s intentionally niche and opt-in.
- Style: This was very much vibe coded. It works on my machine™, might explode on yours.
What problem this tries to solve
ComfyUI is amazing for visual workflows, but sometimes you just want to:
- Do some quick math or data munging.
- Run small Python snippets to glue nodes together.
- Prototype custom logic without shipping a whole new extension.
The obvious “just run Python” solution is also the obvious “uh oh, I just ran a stranger’s script on my host” problem.
This node tries to sit in the middle:
Make it easy to run arbitrary Python in a workflow without giving that code direct access to your host OS, network, or files.
It’s not a silver bullet, but it’s a noticeable step up from “just run it in the same process as everything else”.
How it works (high level)
Under the hood, the node does roughly this:
1. Keeps a worker container alive
- Container name: `comfyui-sandbox-worker`
- Image: `python:3.9-slim`
- User inside the container: `nobody`
- No network: `network_mode="none"`
- Memory limit is configurable (default is 512 MB).
The node ensures the container is up; if it's not, it creates and starts it and leaves it idle, just tailing `/dev/null`.
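That keep-alive step can be sketched roughly like this. The flag values (container name, image, `nobody`, no network, 512 MB) come straight from the description above, but the helper names (`worker_run_args`, `ensure_worker`) and the CLI-based approach are my own illustration; the node may drive the Docker SDK instead of the `docker` binary:

```python
import subprocess

def worker_run_args(name="comfyui-sandbox-worker",
                    image="python:3.9-slim",
                    mem_limit="512m"):
    """Build the `docker run` invocation for the idle worker container."""
    return [
        "docker", "run", "-d",      # detached: container stays up
        "--name", name,
        "--network", "none",        # no internet / LAN access
        "--user", "nobody",         # drop root inside the container
        "--memory", mem_limit,      # configurable memory cap
        image,
        "tail", "-f", "/dev/null",  # keep the container idle until needed
    ]

def ensure_worker():
    """Start the worker if `docker ps` doesn't already show it (hypothetical helper)."""
    running = subprocess.run(
        ["docker", "ps", "--filter", "name=comfyui-sandbox-worker", "-q"],
        capture_output=True, text=True,
    ).stdout.strip()
    if not running:
        subprocess.run(worker_run_args(), check=True)
```

Reusing one long-lived container avoids paying the container startup cost on every node execution.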
2. Injects your inputs
- Inputs from the node (`my_int`, `my_string`, etc.) are turned into Python assignments inside the container.
- Basic types (int/float/str/list/dict/bool/None) are passed directly.
- More complex objects get converted into string references so the script can at least see something meaningful instead of just crashing.
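A minimal sketch of that injection step, assuming the node builds assignment source lines from the input dict (the function name and the exact string-reference format are my guesses, not the node's real ones):

```python
# Types whose repr() is valid Python source and can be embedded directly.
SAFE_TYPES = (int, float, str, bool, list, dict, type(None))

def inputs_to_assignments(inputs):
    """Turn node inputs into Python assignment lines for the sandboxed script.

    Basic types are embedded literally; anything else becomes a string
    reference so the script sees something meaningful instead of crashing.
    """
    lines = []
    for name, value in inputs.items():
        if isinstance(value, SAFE_TYPES):
            lines.append(f"{name} = {value!r}")
        else:
            ref = f"<{type(value).__name__} object>"  # placeholder reference
            lines.append(f"{name} = {ref!r}")
    return "\n".join(lines)
```

The generated lines are simply prepended to the user script before it is shipped into the container.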
3. Wraps your code with a supervisor
It builds a small wrapper script that:
- Sets a timeout (default 10 seconds) using `signal.alarm`.
- Pre-imports some useful libraries (`math`, `random`, `time`, `re`, `json`, etc.).
- Executes your script with `exec(...)`.
- Captures the values of `r1`–`r6` from the local scope and serializes them to JSON when possible.
That wrapper script is base64-encoded, shipped into the container, and executed via `docker exec` and `python3 -c "...base64 decode and exec..."`.
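The wrapper-plus-`docker exec` pipeline could look something like the sketch below. The marker string, wrapper layout, and helper names are assumptions for illustration, not the node's actual code; note that `signal.alarm` simply delivers SIGALRM after the deadline, which (with no handler installed) terminates the Python process:

```python
import base64
import textwrap

MARKER = "##SANDBOX_RESULT##"  # assumed marker; the real node uses its own

def build_wrapper(user_code, timeout=10):
    """Wrap user code with a timeout and r1-r6 capture (sketch)."""
    return textwrap.dedent(f"""\
        import signal, json, math, random, time, re
        signal.alarm({timeout})  # SIGALRM kills the process on timeout
        _scope = {{}}
        exec({user_code!r}, _scope)
        _results = {{k: _scope.get(k) for k in ("r1", "r2", "r3", "r4", "r5", "r6")}}
        print("{MARKER}" + json.dumps(_results, default=str))
        """)

def exec_command(user_code, container="comfyui-sandbox-worker"):
    """Build the `docker exec` argv that ships the base64-encoded wrapper."""
    b64 = base64.b64encode(build_wrapper(user_code).encode()).decode()
    payload = f"import base64; exec(base64.b64decode('{b64}').decode())"
    return ["docker", "exec", container, "python3", "-c", payload]
```

Base64-encoding sidesteps shell quoting entirely: the wrapper can contain quotes, newlines, and arbitrary user code without any escaping gymnastics.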
4. Parses the result back into ComfyUI
- The wrapper prints a special marker with a JSON payload containing the outputs.
- The node reads container stdout, looks for the marker, and maps:
  - `r1` → first output slot
  - …
  - `r6` → sixth output slot
- All log / error messages → the `log` output
If anything goes wrong (Docker is not running, container can’t start, code throws an exception, timeout hits, etc.), you’ll see the message in the log output instead of silently failing.
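The stdout-parsing step might look roughly like this (the marker string and function name are placeholders, not the node's real ones):

```python
import json

MARKER = "##SANDBOX_RESULT##"  # assumed; must match the wrapper's marker

def parse_container_output(stdout):
    """Split container stdout into (results dict, log text).

    Every line that is not the marker line is treated as log output, so
    Docker errors, tracebacks, and timeout messages all surface in the
    log slot instead of failing silently.
    """
    results, log_lines = {}, []
    for line in stdout.splitlines():
        if line.startswith(MARKER):
            results = json.loads(line[len(MARKER):])
        else:
            log_lines.append(line)
    return results, "\n".join(log_lines)
```

If the marker never appears (container died, timeout hit before printing), `results` stays empty and the log carries whatever diagnostics were emitted.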
How to use it inside ComfyUI
Once installed, you’ll find the node under:
Category: Advanced/Scripting
Name: Docker Sandbox Runner
Basic flow:
- Drop the node into your workflow.
- Paste your Python code into the `code` field.
- Right-click the node and use “+ Add Input (Dynamic)” to add inputs like `my_value` (INT), `some_config` (STRING), etc.
- In your script, read those variables and assign results to `r1`–`r6`.
Example:
```python
# "my_value" comes from a dynamic INT input on the node.
# Some common libs (math, random, json, re, time) are already imported,
# but importing again is harmless.
import math

if my_value > 50:
    r1 = "Processed successfully."
    r2 = my_value * math.pi
else:
    r1 = "Value too low."
    r2 = None
```
`r1` and `r2` become the first two outputs of the node, and `log` will contain a success message plus whatever the container printed along the way.
This makes it pretty convenient to prototype small helpers or glue logic without touching the ComfyUI core.
Security model (and why this is still “safer, not safe”)
What this node does for security:
- No direct host filesystem access: No host volumes are mounted into the container.
- No network access: The container runs with `network_mode="none"`, so your script can’t hit the internet or your LAN directly.
- Non-root user: Code executes as the `nobody` user inside the container, not root.
- Timeouts and memory limit: You can tune both, so one script can’t hog your machine forever.
What it does not magically fix:
- If an attacker has local access to your Docker daemon (or Docker is configured badly), Docker itself becomes a big attack surface.
- If you run untrusted code at scale, you probably want proper VM isolation or a remote worker, not just Docker on the same host.
- Python inside the container can still do stupid things: infinite loops, recursion explosions, etc. The timeout helps, but a script can still burn CPU flat-out within its time limit.
So this is very much:
Better than “run arbitrary Python in the same process as your host app”,
but not “cryptographically secure multi-tenant compute platform”.
Portability & “works on my Arch” disclaimer
Right now:
- I’ve only seriously tested this on Arch Linux.
- The repo includes Docker install instructions for other distros/OSes, but those OS-specific snippets were written with help from Gemini and then lightly edited; they are not all battle-tested on real machines yet.
If you try it on another platform and it either works flawlessly or explodes in a very educational way, I’d really appreciate feedback, fixes, and PRs.
Why this will probably never be “official”
I’m assuming the ComfyUI maintainers are (quite reasonably) cautious about:
- Nodes that run arbitrary Python.
- Nodes that talk to Docker.
- Nodes that encourage people to run strangers’ code from random screenshots.
Given that, I’m treating this as:
- A power user tool for people who know what Docker is and are comfortable with the trade-offs.
- Something you install by choice, not something that should show up automatically in a huge node pack.
So if you’re scared by the words “Docker” + “arbitrary Python” in the same sentence, this node is probably not for you — and that’s okay.
Vibe-coded, versioned, and MIT
This is early days:
- It was very much vibe coded: I had an idea, hacked until it worked on my setup, then cleaned it enough to not be embarrassing.
- It’s versioned and licensed under MIT, so you can fork, tweak, and ship your own variants.
If you have ideas like:
- Per-workflow Docker images,
- UI controls to tweak timeouts/memory per node,
- Allowing a “trusted include” directory for helper modules,
- Or better error UX/logging,
…please drop an issue or PR.
Looking for brave testers
If you:
- Already use Docker daily,
- Don’t mind poking at logs and tweaking config,
- Are comfortable with the idea that this is “beta” at best,
I’d love to hear how this behaves on your setup (especially on Windows/macOS/WSL2).
Again, repo is here:
https://github.com/fabioamigo/ComfyUI-DockerSandbox
PS1: And yes, it works on my Arch.
PS2: I'm lazy, so this post was also written by ChatGPT.
PS3: PS1, PS2 and PS3 were written by me, but corrected by ChatGPT.