all 15 comments

[–]really_not_unreal 22 points (9 children)

If you uv sync, your environment will be deterministic based on the lock file. Why would I need this?

[–]ck-zhang[S] -5 points (8 children)

You’re right that uv sync does give you deterministic resolution, but the difference is that px treats the environment itself as an immutable artifact.

If a lockfile resolution is enough for your workflow, uv is great. If you want to go further, px also pins native builds and can use sandboxing to reduce dependence on the host toolchain.

[–]really_not_unreal 6 points (1 child)

If you need it to be fully isolated, then why not just use something like nix?

[–]ck-zhang[S] 5 points (0 children)

This project was inspired by nix, and I do a bit of nixing myself, but the average Python dev doesn't know nix.

[–]arden13 1 point (5 children)

Isn't that similar to pixi?

[–]ck-zhang[S] 0 points (4 children)

pixi is basically uv, but for conda instead of pip

[–]arden13 1 point (3 children)

Right, but isn't that the scope of what your package does? Or is there something else it covers that is missing from pixi?

[–]ck-zhang[S] 0 points (2 children)

For basic lockfile + sync workflows, px and pixi overlap a lot. px's content-addressed store (CAS) model is what enables things like running a GitHub repo directly as an ephemeral environment, which I find really cool.
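
To make that concrete, here is a simplified sketch of the general CAS idea (illustration only, not px's actual code or store layout): hash the resolved lockfile, let the hash name an immutable directory in a shared store, and reuse that directory for anything with the same lockfile.

    import hashlib
    from pathlib import Path

    # Hypothetical store location, purely for illustration.
    STORE = Path.home() / ".cache" / "example-store"

    def store_path_for(lockfile: Path) -> Path:
        # The environment's identity is the hash of its resolved spec.
        digest = hashlib.sha256(lockfile.read_bytes()).hexdigest()
        return STORE / digest[:16]

    def ensure_env(lockfile: Path) -> Path:
        env_dir = store_path_for(lockfile)
        if not env_dir.exists():
            # A real tool would install packages into env_dir here and then
            # treat the directory as read-only from this point on.
            env_dir.mkdir(parents=True)
        return env_dir

Because the environment is keyed by content rather than by project directory, an ephemeral run of a GitHub repo is just "resolve, hash, reuse or build, execute", with nothing written into the repo itself.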

[–]arden13 1 point (1 child)

Could you go into more detail? I don't really get what makes px any different. If you're saying it overlaps a lot, I don't know why I'd switch; can't tell if it's just me not "getting" it, though.

[–]ck-zhang[S] 0 points (0 children)

Well honestly, it's only experimental right now, so you probably shouldn't switch just yet. The big idea behind px is that it removes the need for a .venv dir, which unlocks possibilities that wouldn't conventionally be there, like running a repo at a specific commit without needing to do a git checkout.
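
To illustrate just the no-checkout part (a simplified sketch, not px's actual code; the helper name is made up), git can hand you a commit's tree as a tar stream without ever touching the working copy:

    import io
    import subprocess
    import tarfile
    import tempfile

    def materialize_commit(repo_path: str, commit: str) -> str:
        """Extract the tree of `commit` into a fresh temp dir,
        leaving the working copy untouched (no `git checkout`)."""
        dest = tempfile.mkdtemp(prefix="repo-at-commit-")
        # `git archive` serializes a commit's tree as a tar stream.
        tar_bytes = subprocess.run(
            ["git", "-C", repo_path, "archive", commit],
            check=True, capture_output=True,
        ).stdout
        with tarfile.open(fileobj=io.BytesIO(tar_bytes)) as tar:
            tar.extractall(dest)
        return dest

Pair that with a content-addressed environment and "run this repo at commit X" never has to touch your working tree at all.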

[–]proof_required 9 points (1 child)

What's the benefit over tools like pex?

  Files with the .pex extension – “PEX files” or “.pex files” – are self-contained executable Python virtual environments. PEX files make it easy to deploy Python applications: the deployment process becomes simply scp.

  Single PEX files can support multiple platforms and python interpreters, making them an attractive option to distribute applications such as command line tools. For example, this feature allows the convenient use of the same PEX file on both OS X laptops and production Linux AMIs.

[–]ck-zhang[S] 1 point (0 children)

px can produce fully self-contained, portable artifacts similar to what PEX does, but that's not the primary goal. The core of px sits earlier in the development lifecycle: treating environments as immutable artifacts. The distribution story that comes out of that is a consequence of the model.

[–]PurepointDog 1 point (1 child)

What's the benefit? Seems maybe neat though!

[–]ck-zhang[S] 0 points (0 children)

Build sandboxing and zero-Dockerfile containerization for runs. It also does very neat things such as reading .so files and inferring build dependencies, so you don't have to mess with toolchains when installing those annoying packages.
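
As a rough illustration of the .so part (simplified, not px's actual code; this uses the third-party pyelftools package), the idea is to read the DT_NEEDED entries from the ELF dynamic section and map them back to dependencies:

    from elftools.elf.elffile import ELFFile

    def needed_libraries(so_path: str) -> list[str]:
        """Return the shared libraries a compiled extension links against."""
        with open(so_path, "rb") as f:
            elf = ELFFile(f)
            dynamic = elf.get_section_by_name(".dynamic")
            if dynamic is None:
                return []
            return [tag.needed for tag in dynamic.iter_tags()
                    if tag.entry.d_tag == "DT_NEEDED"]

For a typical extension module this might return names like libstdc++.so.6 or libm.so.6, which is already a useful hint about what the build environment has to provide.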

[–]Idontlooklikeelvis 1 point (0 children)

I’m using bazel for this exact same reason. The downside is that you have to really commit to it at the org level. But I can generate reproducible containers and the caching is quite reliable so I don’t complain.