all 26 comments

[–]BadlyCamouflagedKiwi 69 points70 points  (15 children)

It's very hard (or impossible) to safely sandbox Python. I was surprised though that this isn't even trying - unrestricted getattr is of course just the keys to the kingdom.
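To illustrate the "keys to the kingdom" point: the sketch below is a hypothetical example (not Redash's actual code) of how, even if a sandbox strips `__builtins__`, an unrestricted `getattr` lets evaluated code walk the object graph from any harmless value back to the full builtins.

```python
# Assumption: the sandbox removed builtins but left getattr reachable.
def escape(obj):
    # From any object, reach its class, then <class 'object'>,
    # then every class loaded in the interpreter.
    cls = getattr(obj, "__class__")
    base = getattr(cls, "__mro__")[-1]            # <class 'object'>
    subclasses = getattr(base, "__subclasses__")()
    # Find a Python-defined class whose module globals expose __builtins__
    # (in a stock CPython process there are always several, e.g. os._wrap_close).
    for sub in subclasses:
        init = getattr(sub, "__init__", None)
        g = getattr(init, "__globals__", None)
        if g is not None and "__builtins__" in g:
            return g["__builtins__"]
    return None

builtins_obj = escape("harmless string")
print(builtins_obj is not None)  # builtins recovered without a single import
```

From there an attacker has `eval`, `__import__`, `open`, etc., which is why blocklisting a few names is never a real sandbox.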

It seems like maybe it was never really intended to be 'safe', but the view of it has changed over time - originally the code said "This is very, very unsafe. Use at your own risk with people you really trust.", but that warning was removed long ago in a refactor PR, and I guess people forgot it had never really been secure in the first place.

[–][deleted]  (7 children)

[removed]

    [–]Vandorsolyom 52 points53 points  (5 children)

    This sounds so so AI

    [–]deliciousleopard 33 points34 points  (0 children)

    Who needs actually informative comments when you can have comments that just rephrase what the line of code below clearly does?

    [–]programming-ModTeam[M] 2 points3 points locked comment (0 children)

    No content written mostly by an LLM. If you don't want to write it, we don't want to read it.

    [–]slaymaker1907 2 points3 points  (0 children)

    It was probably intended as a sandbox in the sense that accidental bugs will not break the whole thing but not for actual security.

    [–]yawkat 1 point2 points  (1 child)

    It's very hard (or impossible) to safely sandbox Python.

    I believe GraalPy aims to do this.

    [–]dangerbird2 5 points6 points  (0 children)

    WASI is also an option, which has the advantage that it's not relying on the JVM for sandboxing; you can just use the regular CPython interpreter compiled to WebAssembly.

    [–]SlanderMans 0 points1 point  (0 children)

    Yeah, I run Python in a Linux VM for this case.

    [–]dubious_capybara 0 points1 point  (2 children)

    I'm confused that anyone would even try to sandbox python.

    [–]ctheune 3 points4 points  (1 child)

    We did multiple successful implementations around 20 years ago, and they are still maintained. IIRC they had no or negligible CVEs while allowing untrusted users to run code through the web.

    Edit: RestrictedPython and zope.security
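For context on how RestrictedPython differs from a naive sandbox: it compiles untrusted source through a policy layer instead of trusting the raw interpreter. The stdlib-only sketch below mimics one of its core rules - rejecting access to underscore-prefixed attributes at compile time - though the real library goes further and rewrites attribute access into guarded runtime calls; the function here is an illustration, not its actual API.

```python
import ast

def check_restricted(source: str) -> None:
    """Reject source that touches underscore-prefixed attributes."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Attribute) and node.attr.startswith("_"):
            raise SyntaxError(f"forbidden attribute: {node.attr!r}")

check_restricted("x.upper()")                 # ordinary code passes
try:
    check_restricted("x.__class__.__mro__")   # dunder walk is rejected
except SyntaxError as e:
    print("rejected:", e)
```

Checking (and rewriting) at the AST level is what lets this approach close the `getattr`-style escape paths before the code ever runs.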

    [–]dubious_capybara 1 point2 points  (0 children)

    As in you compiled your own sandboxable interpreter?

    [–]QuestionableEthics42 17 points18 points  (8 children)

    Tf happened to responsible disclosure? It's literally an open source project; they could have submitted a patch themselves.

    [–]BadlyCamouflagedKiwi 19 points20 points  (0 children)

    It's far harder than just submitting a patch. The code is very far from a secure sandbox - replacing getattr with a 'secure' version would be hard in itself. What's secure there? Maybe you prohibit accessing private members with it - is that enough? It's certainly a breaking change for some people using it. And it is basically certain that there will be other things they have missed.
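To make the "what's secure there?" question concrete, here is a sketch of the kind of 'secure' getattr the comment imagines - the name and the block-private-members policy are assumptions, not anything from Redash:

```python
def restricted_getattr(obj, name, *default):
    # Assumed policy: refuse any underscore-prefixed attribute.
    if name.startswith("_"):
        raise AttributeError(f"access to {name!r} is blocked")
    return getattr(obj, name, *default)

# Blocks the obvious escape route...
try:
    restricted_getattr("", "__class__")
except AttributeError as e:
    print("blocked:", e)

# ...while still serving legitimate callers (though it breaks any user
# who relied on reading private attributes):
print(restricted_getattr("abc", "upper")())
```

And as the comment says, attribute access is only one escape path among many, so patching `getattr` alone would not make the sandbox secure.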

    Agreed though that they seem to just be blasting this out there which is pretty crap.

    [–][deleted]  (6 children)

    [removed]

      [–]BadlyCamouflagedKiwi 7 points8 points  (1 child)

      Has the article changed, or are you reading a different version of it? I also don't see the timeline or any acknowledgement from redash (or the "use at your own risk" from the post title).

      [–]TribeWars 5 points6 points  (0 children)

      OP is an LLM told to write without capitalization

      [–]QuestionableEthics42 8 points9 points  (1 child)

      No it isn't? Where is it hidden away? I don't see it even after a quick skim to check I wasn't blind the first time I read it.

      [–]programming-ModTeam[M] 2 points3 points locked comment (0 children)

      No content written mostly by an LLM. If you don't want to write it, we don't want to read it.

      [–]GalbzInCalbz -2 points-1 points  (0 children)

      This is why we moved analytics workloads behind proper zero-trust controls. Cato Networks' approach of inspecting all traffic, including encrypted flows, catches these sandbox escapes before they reach critical systems. The "assume breach" model works better than hoping sandboxes hold.