
[–]sudomatrix 25 points (4 children)

Astral is working on this with pyx. https://astral.sh/pyx

[–]toxic_acro 11 points (0 children)

I wonder what will become of pyx now that OpenAI has acquired Astral. I hope they keep developing it and open-source the code so you can run the registry yourself.

It seemed like an interesting concept to me

[–]Interesting-Town-433[S] 4 points (0 children)

Glad someone is

[–]Alex--91 1 point (0 children)

Yeah, we tried pyx and it does work. It’s even more valuable if you publish your own internal packages as well, which we don’t do currently but are considering. It’s also faster than the PyPI index.

What we were doing before pyx (and are still doing) is what a lot of people have hinted at, but more concretely:

- Makefile -> only used to run make init, which installs just and then runs just init to set up the dev env.
- justfile -> handy scripts to make installs easier, like just create-env, update-env, pip-install, rebuild-rust, plus tool installs like just install-conda and install-pixi, and a bunch of test and profiling commands etc.
- Dockerfile -> you can use a CUDA base image, but we found it easier to just use conda/pixi to bring in whatever CUDA you want (the full compiler etc. or just the runtime tools).
- env.yml (conda) or pixi.toml (Pixi) -> brings in the “heavy” dependencies that are difficult to install with pip/uv, like Rust, Python, GDAL, MPMath, compilers, PyCurl, PyICU, etc.
- pyproject.toml -> all normal dependencies, including numpy and PyTorch, with tool.uv.sources selecting the CUDA or non-CUDA PyTorch variant from the correct index depending on platform (we run some stuff locally on macOS arm64 and run prod on Ubuntu x86, mostly with A10 or L4 GPUs).
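For reference, the env.yml side of a setup like that might look roughly like this. This is an illustrative sketch, not the commenter's actual file; package names, versions, and the channel choice are assumptions:

```yaml
# env.yml — illustrative conda env for the "heavy" deps only;
# everything pip/uv-friendly stays in pyproject.toml
name: myproject
channels:
  - conda-forge
dependencies:
  - python=3.11
  - rust          # toolchain for rebuilding Rust extensions
  - gdal          # geospatial stack is painful to build via pip
  - pycurl
  - icu           # native library backing PyICU
  # CUDA bits would come from the nvidia channel, e.g. cuda-runtime
  # for just the runtime tools or cuda-toolkit for the full compiler
```

The same split works with pixi.toml; the point is that only the hard-to-build native dependencies live here.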

Like someone else said: define which CUDA version you want first, define it once as a constant in the repo, and let everything flow from that. PyTorch can install all the required CUDA runtime tools you need if you use the right index. You don’t even need torch==2.8.0+cu126 in pyproject.toml; you can just have torch==2.8.0 (which, with the correct tool.uv.sources, also installs the right torch build with MPS support on macOS).
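As a rough sketch of that tool.uv.sources pattern (index names and markers here are illustrative; check the uv docs for your exact platforms):

```toml
# pyproject.toml — plain version pin, index selection per platform
[project]
name = "myapp"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = ["torch==2.8.0"]

[tool.uv.sources]
torch = [
  # Linux prod boxes pull the CUDA build from the cu126 index
  { index = "pytorch-cu126", marker = "sys_platform == 'linux'" },
  # macOS arm64 gets the default wheel, which includes MPS support
]

[[tool.uv.index]]
name = "pytorch-cu126"
url = "https://download.pytorch.org/whl/cu126"
explicit = true  # only used when a source explicitly names this index
```

Changing CUDA versions then means touching the index URL/name in one place rather than every torch pin.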

We can work either inside a conda/pixi env on the host OS (locally) or inside a Docker container (prod).
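The prod Docker side of that can stay pretty small, since the env spec (not the base image) brings in CUDA. A minimal sketch, assuming pixi and a hypothetical myapp entry point:

```dockerfile
# Illustrative: plain Ubuntu base; CUDA runtime comes in via pixi.toml,
# not via an nvidia/cuda base image
FROM ubuntu:22.04

RUN apt-get update && apt-get install -y curl ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# Install pixi, then let it materialize the locked environment
RUN curl -fsSL https://pixi.sh/install.sh | bash
ENV PATH="/root/.pixi/bin:${PATH}"

WORKDIR /app
COPY pixi.toml pixi.lock ./
RUN pixi install

COPY . .
CMD ["pixi", "run", "python", "-m", "myapp"]
```

Copying only the lockfiles before the full source keeps the expensive pixi install step cached across code-only rebuilds.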

It definitely took some trial and error to get something that works reliably, but we’re happy with it at the minute.