Release Update - 07 March 2024
New and Improved
Valid Slurm account now required due to the enablement of Fairshare (see Slurm: Reference Guide)
Implemented a default setting for per-job requested memory-per-CPU (3500 MB per CPU)
Fixes
N/A
Explanation
Last week we implemented some configuration changes to the HPC cluster workload manager (Slurm), aiming to improve the eRI HPC experience and bring it closer in line with the other Slurm clusters that NeSI operates (e.g. Mahuika).
...
The enablement of Fairshare also means that all jobs must now be associated with a Slurm account, e.g. #SBATCH --account=2024-mjb-sandbox
For more information on Slurm commands for eRI, see Slurm: Reference Guide.
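As a rough illustration, a minimal batch script under the new settings might look like the sketch below. The account name is the example quoted above, and the job name, partition-free setup, time, and CPU values are placeholders, not eRI defaults; the --mem-per-cpu line simply shows how to state the new 3500 MB default explicitly (or override it) and can be omitted if the default is what you want.

    #!/bin/bash -e
    #SBATCH --job-name=example            # illustrative job name
    #SBATCH --account=2024-mjb-sandbox    # required: your project's Slurm account (example value)
    #SBATCH --time=00:10:00               # illustrative wall-time limit
    #SBATCH --cpus-per-task=2             # illustrative CPU request
    #SBATCH --mem-per-cpu=3500MB          # matches the new default; adjust to override it

    srun hostname    # replace with your actual workload

Jobs submitted without a valid --account will be rejected, so existing submission scripts should be updated accordingly.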
...