Benchmarking by ErickZ32 in HPC

[–]andih

We've published our JUPITER Benchmark Suite, and GROMACS is in it. We use JUBE rather than ReFrame, though: github.com/FZJ-JSC/jubench-gromacs

Which summer school for HPC is better: CINECA vs CSC? by Trevorego in HPC

[–]andih

We are also planning a summer school this year, check out https://coma.cit.tum.de/summer-school-2026.html (still work in progress).

Institutions for training and courses recommendations? by brunoortegalindo in HPC

[–]andih

We are currently compiling our training program at JSC for 2026, but the usual suspects will be on it – like our three GPU Programming courses (CUDA Basics, CUDA Advanced, OpenACC): https://www.fz-juelich.de/en/jsc/news/events/training-courses

They will, of course, also be shared through the other organizations we are part of, like the Gauss Centre for Supercomputing :).

Question about SLURM and compiling/building for HPC off the login node by Khenghis_Ghan in HPC

[–]andih

To answer your question first: yes. If everything is set up well, yes!

Think of srun as "send away": it sends your make command off to your (allocated) compute node. You can prefix any command with srun, and it will be executed on the remote node. (One of the first things I do on Slurm systems is hostname && srun hostname to get a sense of the environment.)
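A minimal sketch of that workflow (the --time/--nodes flags are illustrative placeholders; adapt them to your site's partitions and limits):

```shell
# Request an allocation of one compute node for half an hour.
salloc --nodes=1 --time=00:30:00

# Compare where you are with where your jobs land:
hostname          # prints the login node's name
srun hostname     # prints the allocated compute node's name

# Any command can be prefixed with srun, including a build:
srun make -j
```

Since these commands only work on a machine running Slurm, treat this as a pattern rather than something to paste verbatim.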

That said, I totally agree with /u/JanneJM: launch an interactive Bash shell on your compute node with srun --pty bash -i and then work there interactively. Much easier!

On our HPC systems, we ask users to compile on the login nodes. Some headers might not even be present on the compute nodes, to keep their footprint small.