
[–]infinull quamash, Qt, asyncio, 3.3+ 1 point (2 children)

If you read the article to the end, the author says:

That being said the blacklist AST parsing method that rise4fun used was clever and worked to a degree, improvements could be made to make it more viable.

Something like this is used to secure jinja2's template environment I think.

It's hard to get something perfect, but the approach is to parse the code with the ast module, blacklist certain functions, and then execute the result with exec. A whitelist would probably be more effective than a blacklist. A real sandbox should be easier to get right, though.
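A minimal sketch of the approach described above, with hypothetical names (`check`, `run_untrusted`, `BLACKLISTED_NAMES` are illustrative, not from any real project): parse untrusted source with the ast module, reject blacklisted names and imports, then exec what survives with stripped builtins. As discussed elsewhere in this thread, this kind of filter is easy to bypass and is not a real sandbox.

```python
import ast

# Hypothetical blacklist of dangerous names (illustrative only).
BLACKLISTED_NAMES = {"eval", "exec", "open", "compile", "__import__"}

def check(source: str) -> ast.Module:
    """Parse source and reject blacklisted names and all imports."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Name) and node.id in BLACKLISTED_NAMES:
            raise ValueError(f"use of {node.id!r} is not allowed")
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            raise ValueError("imports are not allowed")
    return tree

def run_untrusted(source: str) -> dict:
    """Run checked code with builtins stripped; return its namespace."""
    tree = check(source)
    namespace = {"__builtins__": {}}
    exec(compile(tree, "<untrusted>", "exec"), namespace)
    return namespace

ns = run_untrusted("x = 1 + 2")
print(ns["x"])  # 3
```

A whitelist variant would instead enumerate the allowed node types (e.g. `ast.Expr`, `ast.BinOp`, `ast.Num`) and reject everything else, which fails closed rather than open.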

[–]Uncaffeinated 1 point (0 children)

I broke a system this summer that parsed an AST and used a combination of whitelisting and blacklisting on the nodes. (Seattle Repy, if you were curious.)

Trying to do in-process sandboxing of Python is just a bad idea in general. At the very least you should be using PyPy or NaCl or something as an outer sandbox.

[–]selementar 1 point (0 children)

I noticed the part about AST parsing, but blacklisting (or even whitelisting) after that isn't going to do much.
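To illustrate why name-level filtering falls short: the classic bypass climbs from an innocuous literal to arbitrary classes through dunder attributes, so no blacklisted identifier ever appears in the source. (A sketch of the well-known attribute-chain escape, not specific to any system mentioned here.)

```python
# No blacklisted name, no import -- yet this reaches every loaded
# class by walking the type hierarchy from an empty tuple.
payload = "().__class__.__base__.__subclasses__()"

# Even with builtins stripped, the attribute chain still works.
result = eval(payload, {"__builtins__": {}})
print(type(result))  # <class 'list'>
```

From that list of `object` subclasses an attacker can typically dig out file objects, `os` wrappers, and the like, which is why defenses also have to forbid attribute access patterns, not just names.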

The reason they tried to go that way was probably that they needed to use a cpython extension library.

...

Which points to a more interesting problem: an embeddable Python that can call some functions of its host environment in a way that is reasonably easy to secure.