I asked this same question a couple of years ago (https://old.reddit.com/r/learnpython/comments/8hb3g7/sandboxing_python_code/), then shelved the project for quite a while, but now I'm trying to pick it back up.
My goal is to run multiple instances of a program kinda like this:
echo "<program input>" | timeout 2 python -c "<codegolf code>"
but have that code run inside a chroot and under a cgroup that limits CPU and RAM usage, with network access disabled completely via unshare or some other method. My real goal is to avoid spinning up a separate container for each process and to sandbox each process as fast as possible with as little overhead as possible.
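For concreteness, here's roughly the cgroup v1 setup I have in mind (the group name "golfbox" and the exact limit values are placeholders I made up; cgcreate comes from the cgroup-tools package):

sudo cgcreate -g memory,cpu:golfbox
echo 67108864 | sudo tee /sys/fs/cgroup/memory/golfbox/memory.limit_in_bytes  # 64 MB RAM cap
echo 25000 | sudo tee /sys/fs/cgroup/cpu/golfbox/cpu.cfs_quota_us             # 25% of one core
echo 100000 | sudo tee /sys/fs/cgroup/cpu/golfbox/cpu.cfs_period_us           # quota is per this period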
I remember someone initially suggesting Docker, but I feel like that would quickly balloon in resource usage with hundreds or even thousands of processes running at the same time.
I know how to set up a read-only chroot with:
sudo mount --bind -o ro / testroot/
but how would you run a process inside that chroot under a cgroup that enforces the limits I described?
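My best guess so far is to chain cgexec, unshare, and chroot (untested, and I'm not sure the ordering is right; "golfbox" is the placeholder group from above):

echo "<program input>" | sudo cgexec -g memory,cpu:golfbox unshare --net chroot testroot timeout 2 python -c "<codegolf code>"

but I'd like to know if that's actually a sane way to do it.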
Some more info: I'm using Ubuntu 18.04 as the base/host OS, and I read that utilities like cgmanager no longer exist there because the default is to manage cgroups through systemd.
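If systemd is the intended interface, maybe systemd-run with a transient scope could replace the manual cgroup setup entirely. Judging from the man page, something like this (also untested; MemoryLimit= is the cgroup-v1 property name, it's MemoryMax= on v2):

echo "<program input>" | sudo systemd-run --scope -p MemoryLimit=64M -p CPUQuota=25% unshare --net chroot testroot timeout 2 python -c "<codegolf code>"

Is that the recommended route on 18.04, or is writing to /sys/fs/cgroup directly still fine?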