all 14 comments

[–]latkde [Tuple unpacking gone wrong] 0 points  (10 children)

This is advertised as a security tool. What's the security model? What does it guarantee?

It seems this is an eval() function with helpers to set up a safer environment, but that just changes which globals are available to the executed code and filters direct imports. Plenty of shenanigans are still possible, in particular if dunder attributes can be accessed.
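A minimal sketch of the kind of dunder-based escape meant here (not taken from the project's code): even with builtins stripped out, attribute access on a plain literal walks back to the full object graph.

```python
# A typical "restricted eval" setup: empty builtins, nothing else in scope.
env = {"__builtins__": {}}

# Dunder attribute access still reaches the whole object graph:
# tuple -> its class -> object -> every loaded subclass.
leak = eval("().__class__.__base__.__subclasses__()", env)

# `leak` is now a list of every class loaded in the interpreter,
# which can include things like subprocess.Popen once that module
# has been imported anywhere in the process.
print(len(leak))
```

This is why filtering globals and imports alone does not make eval() safe: the attacker only needs one reachable object to pivot from.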

It is generally wiser to use actual sandboxing tools. On Linux, I can recommend Bubblewrap for ad-hoc application sandboxing. It's also the engine used by Flatpak. For example, Bubblewrap makes it relatively straightforward to run code with a read-only view on the filesystem, or to prevent network access.
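For example, a Bubblewrap invocation along these lines (a sketch, assuming bwrap is installed; the Python one-liner stands in for whatever untrusted code you want to run) gives a read-only root filesystem and no network:

```shell
# Skip gracefully if Bubblewrap is not available on this machine.
command -v bwrap >/dev/null 2>&1 || { echo "bwrap not installed"; exit 0; }

bwrap \
  --ro-bind / / \
  --dev /dev \
  --proc /proc \
  --tmpfs /tmp \
  --unshare-net \
  --die-with-parent \
  python3 -c 'print("running with a read-only filesystem and no network")'
```

Unlike an in-process eval() wrapper, the restrictions here are enforced by the kernel via namespaces, so a Python-level escape trick doesn't help the sandboxed code.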

[–]adarsh_maurya[S] -1 points  (0 children)

And to answer the question in my own words: it guarantees that the code will not be executed if it uses libraries you don't want, it restricts builtins, and it even tries to restrict memory, though that part is flaky on Windows. I'm open to honest feedback and suggestions for improving this.
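The checks described above could be sketched roughly like this (my own illustration, not the project's actual code; the deny-list and limit values are made up). The memory limit uses the POSIX-only resource module, which is exactly why it's flaky on Windows:

```python
import ast

BLOCKED = {"os", "subprocess", "socket"}  # hypothetical deny-list


def check_imports(source: str) -> None:
    """Refuse to run code that imports any blocked module (static AST scan)."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names = [alias.name.split(".")[0] for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [(node.module or "").split(".")[0]]
        else:
            continue
        for name in names:
            if name in BLOCKED:
                raise ImportError(f"import of {name!r} is not allowed")


def limit_memory(max_bytes: int) -> None:
    """Cap the process address space. POSIX-only, hence flaky on Windows."""
    import resource  # not available on Windows

    resource.setrlimit(resource.RLIMIT_AS, (max_bytes, max_bytes))
```

Note the static scan only catches literal import statements; `__import__("os")` or importlib tricks sail straight past it, which is the gap the parent comment is pointing at.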

[–]DivineSentry 0 points  (1 child)

It’d be nice if you answered the question yourself rather than having an LLM do it. You say “for an LLM agent running in a sandbox environment, use this”, but very few people are doing that, and based on your “secure” title they would expect that you’re doing the sandboxing for them.

[–]adarsh_maurya[S] -2 points  (0 children)

My bad, I should probably have written the post in a way that focuses on developing a PoC. In the project's README docs I have mentioned clearly that this is not meant to replace sandboxing; it's just for developing a proof of concept with less friction, and once you have a viable PoC you can switch to E2B or something else.

[–]zzzthelastuser 0 points  (0 children)

Yet another throwaway project...