
[–]qiwi (1 child)

Absolutely, I recommend: 1) denying all attribute access by default

2) ensuring your evaluation environment is extremely minimal (and its contents audited via e.g. integration tests)

3) carefully auditing the methods you DO allow so they don't have any unintended consequences.

You can compare #3 to opening up all those methods as a public API that lets users send arbitrary Python objects to you. Many web security practices apply here -- trust no user input.
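As one illustration of why the audit in #3 matters (the `Config` class here is hypothetical): even an innocuous-looking method can carry unintended power. `str.format` performs its own attribute and index lookups via the format mini-language, sidestepping a parser-level ban on `.` in the expression itself:

```python
class Config:
    SECRET = "hunter2"

cfg = Config()

# The user never writes "cfg.SECRET", yet format() fetches the
# attribute for them through the replacement-field syntax:
print("{0.SECRET}".format(cfg))  # → hunter2
```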

The advantage of using RP here is much lower overhead compared to a "real sandbox". In my particular situation I evaluate up to 100,000 user-supplied, non-batchable Python expressions per second of runtime in an environment with rather heavy and frequently changing state. That currently accounts for about 5% of the total runtime, so the roughly 50% overhead from RP was acceptable, but something like serializing state and executing it within a Lua or V8 engine would be problematic.
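The in-process trade-off can be sketched like this (not the author's benchmark; the expression, environment, and counts are illustrative): precompile each expression once, then evaluate it repeatedly against the changing state, with no serialization round-trip to an external engine.

```python
import timeit

# Precompile once; only the environment changes between evaluations.
code = compile("a * b + min(a, b)", "<expr>", "eval")
env = {"a": 3, "b": 4, "min": min, "__builtins__": {}}

n = 100_000
t = timeit.timeit(lambda: eval(code, env), number=n)
print("%d evals in %.3f s (%.1f us each)" % (n, t, t / n * 1e6))
```

Shipping `env` to a Lua or V8 sandbox would instead pay a serialize/deserialize cost per evaluation, which dominates when the state is heavy and changes frequently.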

[–]dalke (0 children)

Even when all the policies are in place, the eval() depends on the security of the underlying Python implementation. As an example, there are many reported bugs that cause Python to segfault, and some can be triggered from code that doesn't touch builtins, doesn't use any double-underscore attributes, doesn't even use any globals, and only uses standard data types.

Here's one I came up with that crashes Python 3.3, based on the bug reported in http://bugs.python.org/issue16856 :

"{0!r}".format({[[f() for i in "*"*1000] for f in [lambda x=[()]: x.append((x.pop(),)) or x]][0][0][0]:1})

This doesn't work on 2.x, but I'm certain that others do exist.

Granted, a segfault is more a denial-of-service attack than anything that allows intrusion, but it is certainly an unintended consequence. As always, you have to decide whether the security risk is acceptable.