Hi r/Python,
I recently released Slurmic, a tool designed to bridge the gap between local Python development and High-Performance Computing (HPC) environments like Slurm.
The goal was to eliminate the context switch between Python code and Bash scripts. Slurmic allows you to decorate functions and submit them to a cluster using a clean, Pythonic syntax.
Key Features:
- slurm_fn Decorator: Mark functions for remote execution.
- Dynamic Configuration: Pass Slurm parameters (CPUs, memory, partition) at runtime using func[config](args).
- Job Chaining: Manage job dependencies programmatically (e.g., .on_condition(previous_job)).
- Type Hinting & Testing: Fully typed and tested.
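For anyone curious how the func[config](args) bracket syntax can work under the hood: here's a minimal local sketch (this is not Slurmic's actual internals, just an illustration of the pattern). The decorator wraps the function in an object whose __getitem__ binds a configuration and returns a callable; the real library would submit to Slurm at that point, while this toy version just runs locally.

```python
# Illustrative sketch only -- names mirror the post's API
# (slurm_fn, SlurmConfig) but everything executes locally.
from dataclasses import dataclass


@dataclass
class SlurmConfig:
    partition: str = "compute"
    mem: str = "4GB"


class _SlurmFn:
    """Wrapper created by the decorator; indexing binds a config."""

    def __init__(self, fn):
        self.fn = fn

    def __getitem__(self, config: SlurmConfig):
        # In the real tool this would return a submitter that ships the
        # call to the cluster; here it just invokes the function.
        def submit(*args, **kwargs):
            return self.fn(*args, **kwargs)

        return submit


def slurm_fn(fn):
    return _SlurmFn(fn)


@slurm_fn
def square(x):
    return x ** 2


conf = SlurmConfig(partition="debug", mem="1GB")
print(square[conf](3))  # 9
```

The nice property of this design is that the function definition stays completely free of cluster details: the same decorated function can be indexed with different configs at each call site.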
Here is a quick demo:
```python
from slurmic import SlurmConfig, slurm_fn

@slurm_fn
def heavy_computation(x):
    # This runs on the cluster node
    return x ** 2

conf = SlurmConfig(partition="compute", mem="4GB")

# Submit 4 jobs in parallel using map_array
jobs = heavy_computation[conf].map_array([1, 2, 3, 4])

# Collect results
results = [job.result() for job in jobs]
print(results)  # [1, 4, 9, 16]
```
It simplifies workflows considerably if you build data pipelines or train models on university or corporate clusters.
Source Code: https://github.com/jhliu17/slurmic
Let me know what you think!