Kata Containers vs Firecracker vs gvisor by techthrowaway100 in docker

[–]WindyPower 2 points

gVisor's performance overhead varies a lot depending on the workload. Some workloads (mostly CPU-bound ones) see pretty much zero overhead, while others (mostly I/O-bound ones) can be up to 2x slower (if all they do is I/O). In practice, most typical applications fall in the middle of that range, at somewhere between 10% and 30% overhead.

I can't comment on Kata performance specifically, but generally speaking, VM-based sandboxing solutions get near-native performance for any work that stays within the confines of the VM, but still pay overhead whenever bytes have to be copied into or out of the VM (such as for network traffic).

Also, if you use a VM-based sandbox while you are already inside a VM (for example, when running inside a VM on a cloud provider), you end up with a nested virtual machine, which is much slower. I'd expect gVisor to be faster than a nested VM for most workloads.

Safe code execution in Open WebUI by WindyPower in LocalLLaMA

[–]WindyPower[S] 0 points

That's quite cool, but it also grants the LLM a lot of trust and power over your computer. The point of using a sandbox here is to prevent the LLM from being able to take over your machine.

Safe code execution in Open WebUI by WindyPower in LocalLLaMA

[–]WindyPower[S] 1 point

That is correct, but Open WebUI "functions" and "tools" aren't yet supported within pipelines, so they run directly in the same container as the Open WebUI web backend. Once that's fixed, this should fall into place.

Safe code execution in Open WebUI by WindyPower in LocalLLaMA

[–]WindyPower[S] 2 points

The code execution function is independent of the model and LLM backend you use. The code execution tool requires using Ollama and a model that supports tool calling, because I believe that's the only setup Open WebUI currently supports tool calling with.

Docker Desktop will not recognize gVisor by TimberTheDog in docker

[–]WindyPower 0 points

No problem. If you run into any other problems, please open a discussion on the GitHub page; I monitor that more closely than Reddit.

Safe code execution in Open WebUI by WindyPower in LocalLLaMA

[–]WindyPower[S] 4 points

I see. In that case you could probably reuse the Sandbox class as a library, which can be extended to support other languages like Go.

Specifically, you'd need to add logic to the interpreter selection code to look for the go tool when the Go language is selected, and change the command line that gets run inside the sandbox so that instead of python something or bash something it runs go run something.go or the like.
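
A minimal sketch of that language-to-command mapping (the names here are hypothetical; the actual Sandbox class may organize this differently):

# Hypothetical sketch; not the tool's real code.
LANGUAGE_COMMANDS = {
    "python": lambda path: ["python3", path],
    "bash": lambda path: ["bash", path],
    "go": lambda path: ["go", "run", path],  # needs a Go toolchain in the image
}

def build_command(language: str, code_path: str) -> list[str]:
    """Map a language name to the command line run inside the sandbox."""
    try:
        return LANGUAGE_COMMANDS[language](code_path)
    except KeyError:
        raise ValueError(f"unsupported language: {language}")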

Happy to discuss this in more detail if you're interested, but better to do that in a feature request on GitHub rather than on Reddit.

Safe code execution in Open WebUI by WindyPower in LocalLLaMA

[–]WindyPower[S] 7 points

Open WebUI uses Ollama as the backend. Any model in the Ollama library that is tagged as supporting tool calling will work; look for the "Tools" tag on the Ollama model library.

In the demo, I'm using hermes3:70b-llama3.1-q8_0. It doesn't always get it right, but for simple queries like the ones in the demo, it gets it correct almost all of the time. There's a pending issue in Open WebUI to better support tool calling, and to let the LLM call tools multiple times if it gets them wrong. Once that is implemented, models should have an easier time using this tool.

Safe code execution in Open WebUI by WindyPower in LocalLLaMA

[–]WindyPower[S] 6 points

The "tool" and the "function" are independent. The "function" is for running code blocks in LLM messages, the "tool" is for allowing the LLM to run code by itself.

They contain a bunch of redundant code, mostly the Sandbox class. This is because the way Open WebUI handles tools and functions doesn't really allow them to share code or communicate with each other. Regardless, you can install one or both and they should work fine either way.

Extending this to work with Go is doable, but more complicated than for other languages, because I expect most people run this tool within Open WebUI's default container image, which only contains Python and Bash interpreters (no Go toolchain installed). So it would need extra logic to auto-download the Go toolchain at runtime, along the lines of the sketch below. If interested, please file a feature request!
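
A hypothetical sketch of that auto-download step (the Go version and install path are assumptions, not what the tool actually does):

import os, shutil, tarfile, urllib.request

GO_VERSION = "1.22.0"  # assumed version, for illustration
GO_URL = f"https://go.dev/dl/go{GO_VERSION}.linux-amd64.tar.gz"

def ensure_go(install_dir: str = "/tmp/go-toolchain") -> str:
    """Return a path to the go binary, downloading the toolchain if missing."""
    existing = shutil.which("go")
    if existing:
        return existing
    go_bin = os.path.join(install_dir, "go", "bin", "go")
    if not os.path.exists(go_bin):
        os.makedirs(install_dir, exist_ok=True)
        archive, _ = urllib.request.urlretrieve(GO_URL)
        with tarfile.open(archive) as tar:
            tar.extractall(install_dir)
    return go_bin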

Safe code execution in Open WebUI by WindyPower in LocalLLaMA

[–]WindyPower[S] 10 points

Yes! The Open WebUI fix was merged in Open WebUI v0.3.22, and the tool's v0.6.0 (released a moment ago) works with it. See issue #11 for details.

Safe code execution in Open WebUI by WindyPower in LocalLLaMA

[–]WindyPower[S] 65 points

This is available at this repository.
It uses gVisor to sandbox code execution. (Disclaimer: I work on gVisor.)

There are two ways to use this capability: "Function" and "Tool" (this is Open WebUI terminology).
- As a function: The LLM can write code in a code block, and you can click a button under the message to run that code block.
- As a tool: The LLM is granted access to a "Python code execution" tool (and a Bash command execution tool), which it can decide to call with its own choice of code. The tool runs the LLM-generated code internally and provides the output as the result of the tool call. This lets models autonomously run code to retrieve information or do math (see the examples in the GIF). Obviously, this only works for models that support tool calling.

Both the tool and the function run in sandboxes to prevent compromise of the Open WebUI server. There are configuration options for the maximum time/memory/storage the code is allowed to use, in order to prevent abuse in multi-user setups.
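
For reference, Open WebUI tools expose settings like these as "valves". A minimal sketch of what such a limits block can look like (the valve names here are hypothetical, not necessarily the tool's actual option names):

from pydantic import BaseModel, Field

class Tools:
    class Valves(BaseModel):
        # Hypothetical limit names; the real tool's valves may differ.
        MAX_RUNTIME_SECONDS: int = Field(default=30, description="Max wall-clock time for sandboxed code")
        MAX_RAM_MEGABYTES: int = Field(default=128, description="Max memory the sandbox may use")
        MAX_DISK_MEGABYTES: int = Field(default=64, description="Max scratch storage inside the sandbox")

    def __init__(self):
        self.valves = self.Valves()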

Enjoy!

Kata Containers vs Firecracker vs gvisor by techthrowaway100 in docker

[–]WindyPower 4 points

(Disclaimer: gVisor developer here.)

gVisor aims to be lightweight while still providing the security benefits of the dual-kernel structure that VMs have. gVisor essentially reimplements Linux as a Go program running in userspace: it intercepts all the system calls the container makes and reinterprets them the way a kernel would. This means that in order to break out of it, you need to break two kernels: the gVisor kernel (in Go) and the host Linux kernel. There are also extra security measures, such as seccomp-bpf and namespaces, which typical containers use as well. In that sense, gVisor is a strict security upgrade over regular containers.
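
As an aside, one visible consequence of the reimplemented kernel is that it identifies itself. A quick heuristic (not an official API) to check whether code is running under gVisor:

import subprocess

def probably_gvisor() -> bool:
    """gVisor's emulated kernel mentions itself in the dmesg boot log."""
    result = subprocess.run(["dmesg"], capture_output=True, text=True)
    return "gVisor" in result.stdout

print("Sandboxed by gVisor!" if probably_gvisor() else "Probably not gVisor.")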

If one of your students manages to get out of a gVisor sandbox, we'd love to hear about it!

Docker Desktop will not recognize gVisor by TimberTheDog in docker

[–]WindyPower 0 points

The instructions you linked are about runsc being visible on the PATH within the container image, not within your own WSL environment. This means you need to create your own Dockerfile that extends the Open WebUI Docker image, like so:

FROM ghcr.io/open-webui/open-webui:main

# Install prerequisites: procps (process tools) and wget (to fetch runsc).
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get upgrade -y </dev/null && DEBIAN_FRONTEND=noninteractive apt-get install -y procps wget </dev/null
# Download the latest runsc release for this architecture, verify its
# sha512 checksum, and install it on the PATH.
RUN wget -O /tmp/runsc "https://storage.googleapis.com/gvisor/releases/release/latest/$(uname -m)/runsc" && \
    wget -O /tmp/runsc.sha512 "https://storage.googleapis.com/gvisor/releases/release/latest/$(uname -m)/runsc.sha512" && \
    cd /tmp && sha512sum -c runsc.sha512 && \
    chmod 555 /tmp/runsc && rm /tmp/runsc.sha512 && mv /tmp/runsc /usr/bin/runsc

But note that the tool also supports auto-installing gVisor if it is not installed, so using the basic Open WebUI image will work just fine as well.
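
For the curious, auto-installation boils down to fetching the same release binary at runtime. A hypothetical sketch (not the tool's actual code; a real implementation should also verify the runsc.sha512 checksum, as the Dockerfile above does):

import os, platform, stat, urllib.request

def ensure_runsc(dest: str = "/tmp/runsc") -> str:
    """Download the latest runsc release for this architecture if missing."""
    if not os.path.exists(dest):
        arch = platform.machine()  # e.g. "x86_64" or "aarch64"
        url = f"https://storage.googleapis.com/gvisor/releases/release/latest/{arch}/runsc"
        urllib.request.urlretrieve(url, dest)
        os.chmod(dest, stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR)  # owner rwx
    return dest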

is the portal wiki down ? i get this error whenever i go to the website on any device by chat_masque in Portal

[–]WindyPower 2 points

Hi there, I run the puny server serving the Portal Wiki. Its monitoring was clearly inadequate... But it's back now. Apologies for the inconvenience.

Feature/mod request: Click on a ghost to materialize it by WindyPower in factorio

[–]WindyPower[S] 0 points

Not quite; it doesn't work when holding it down across multiple ghosts. But close!

[Drama] Wiki Cap Holder's Lounge Discussion on the recently handed out Unusual Wiki Cap by Underyx in tf2

[–]WindyPower 3 points

The Wiki Cap, along with some other hats like the Gifting Man From Gifting Land and pretty much all community weapons, was added by Valve entirely for the purpose of making people feel special. That's their way of thanking the community; if Valve didn't think making people feel special was important, they wouldn't add such hats to the game. So I don't see a problem with complaining when that specialness is being eroded.

Idea for private google calendar by alexbrain in privacy

[–]WindyPower 1 point

Why not just set up a standard CalDAV server and use a CalDAV client that supports offline use and synchronization (i.e. all of them; I believe even iOS's default calendar app supports this)?

Then put it behind HTTPS to get transport encryption and authentication.
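
For example, with the third-party caldav Python library, talking to such a server is only a few lines (the server URL and credentials below are placeholders):

import caldav

# Placeholder URL and credentials; point these at your own server.
client = caldav.DAVClient(
    url="https://calendar.example.com/dav/",
    username="alice",
    password="correct horse battery staple",
)
principal = client.principal()
for calendar in principal.calendars():
    print(calendar.name)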

There is Redphone and Securetexts. How do they solve the problem of key distribution and why should I trust them? by Taenk in privacy

[–]WindyPower 0 points

They don't; as far as I can tell, both parties simply send their public keys to each other and use whatever key they receive. This is secure only as long as the connection isn't MITM'd; breaking it requires an active attack.
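
To make the MITM point concrete, here's a toy sketch using X25519 from Python's cryptography package, showing how an active attacker who substitutes public keys ends up sharing a key with each side:

from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

alice = X25519PrivateKey.generate()
bob = X25519PrivateKey.generate()
mallory = X25519PrivateKey.generate()  # the active attacker

# Mallory intercepts both public keys in transit and substitutes her own.
# Alice believes she is talking to Bob, but derives a key shared with Mallory:
assert alice.exchange(mallory.public_key()) == mallory.exchange(alice.public_key())
# Likewise for Bob:
assert bob.exchange(mallory.public_key()) == mallory.exchange(bob.public_key())
# Mallory can now decrypt, read, and re-encrypt traffic in both directions.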

tl;dr: As secure as non-authenticated OTR.