How to use and get the best out of Slurm, Slurm scripts, and efficient use of the cluster.
Slurm
Jobs on eRI are submitted in the form of a batch script containing the code you want to run and a header of information needed by our job scheduler Slurm.
Creating a batch script
Create a new file and open it with nano myjob.sl. The following should be considered the minimum required for a job to start.
#!/bin/bash -e
#SBATCH --job-name=SerialJob        # job name (shows up in the queue)
#SBATCH --account=2024-mjb-sandbox  # project to record usage against
#SBATCH --time=00:01:00             # Walltime (days-HH:MM:SS)
#SBATCH --mem=512MB                 # Memory in MB or GB

pwd # Prints working directory
Copy in the above text, then save and exit the text editor with 'ctrl + x'.
Note: the first line, #!/bin/bash, is expected by Slurm.
Note: if you are a member of multiple accounts, you should add the line #SBATCH --account=<projectcode> to specify which one to use.
Submitting
Jobs are submitted to the scheduler using:
sbatch myjob.sl
You should receive output similar to:
Submitted batch job 1748836
sbatch can also take command-line arguments similar to those set in the script header through #SBATCH directives.
You can find more details on its use in the Slurm Documentation.
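For example, an option given on the command line overrides the matching #SBATCH directive in the script, so the following submits myjob.sl with a 5 minute walltime regardless of the --time line in the file:

sbatch --time=00:05:00 myjob.sl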
Job Queue
The currently queued jobs can be checked using
squeue
You can filter to just your jobs by adding the flag
squeue -u <userid>@agresearch.co.nz
squeue -u matt.bixley@agresearch.co.nz
You can also filter to just your jobs using
squeue --me
You can find more details on its use in the Slurm Documentation.
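These filters can be combined; for example, to show only your pending jobs on the compute partition:

squeue --me -p compute -t PENDING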
You can check all jobs submitted by you in the past day using:
sacct
Or since a specified date using:
sacct -S YYYY-MM-DD
Each job will show as multiple lines, one line for the parent job and then additional lines for each job step.
Tips
sacct -X Only show parent processes.
sacct --state=PENDING/RUNNING/FAILED/CANCELLED/TIMEOUT Filter jobs by state.
You can find more details on its use in the Slurm Documentation.
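These options can be combined with sacct's --format flag to choose the columns shown; for example, to list your failed parent jobs since the start of 2024:

sacct -X -S 2024-01-01 --state=FAILED --format=JobID,JobName,Partition,Elapsed,State,ExitCode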
Interactive Jobs
You can create an interactive session on the compute nodes (requesting CPUs, memory and time) for testing code and resource usage, rather than using the login node, which can result in system slowdown and blockages for other users.
srun --cpus-per-task 2 --account 2024-mjb-sandbox --mem 6G -p compute --time 01:00:00 --pty bash
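Once the session starts you are working on a compute node; run your test commands there and exit when finished so the resources are released. For example:

hostname   # confirm you are on a compute node rather than the login node
exit       # end the interactive session and release the allocation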
Job Efficiency
How did my job run, and did I waste resources? Wasted resources mean other users are potentially blocked and/or your priority is lowered. Check a completed job with:

seff <JOBID>
Below is a low memory-efficiency example: 256 GB was requested for 3 days, but only 25 GB was used. Four of these jobs would fill an entire node while using only 128 of its 256 CPUs. If 30 GB were requested instead, eight such jobs could run on the same node.
login-0 ~ $ seff 391751_28
Job ID: 394314
Array Job ID: 391751_28
Cluster: eri
User/Group: bixleym@agresearch.co.nz/bixleym@agresearch.co.nz
State: COMPLETED (exit code 0)
Nodes: 1
Cores per node: 32
CPU Utilized: 79-07:10:55
CPU Efficiency: 76.80% of 103-06:03:12 core-walltime
Job Wall-clock time: 3-05:26:21
Memory Utilized: 25.34 GB
Memory Efficiency: 9.90% of 256.00 GB
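seff reads the same accounting data as sacct, so you can also inspect memory use per job step; a sketch using standard sacct fields and the job ID from the example above:

sacct -j 394314 --format=JobID,Elapsed,TotalCPU,ReqMem,MaxRSS,State

MaxRSS is a good guide for setting --mem in your next submission.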
Additional Slurm Commands
A complete list of Slurm commands can be found here, or by entering man slurm into a terminal
| Command | Description |
| --- | --- |
| sbatch submit.sl | Submits the Slurm script submit.sl. |
| squeue | Displays the entire queue. |
| squeue --me | Displays your queued jobs. |
| squeue -p compute | Displays queued jobs on the compute partition. |
| sacct | Displays all the jobs run by you that day. |
| sacct -S 2024-01-01 | Displays all the jobs run by you since the 1st Jan 2024. |
| sacct -j 123456 | Displays job 123456. |
| scancel 123456 | Cancels job 123456. |
| scancel -u <userid> | Cancels all your jobs. |
| sshare | Shows the Fair Share scores for all projects of which you are a member. |
| sinfo | Shows the current state of the Slurm partitions. |
sbatch options
A complete list of sbatch options can be found here, or by running “man sbatch”
Options can be provided on the command line or in the batch file as an #SBATCH directive. The option name and value can be separated using an '=' sign, e.g. #SBATCH --account=nesi99999, or a space, e.g. #SBATCH --account nesi99999, but not both!
General options
| Option | Description |
| --- | --- |
| --job-name=MyJob | The name that will appear when using squeue or sacct. |
| --account=2024-mjb-sandbox | The account that usage will be recorded for. |
| --time=DD-HH:MM:SS | Job max walltime. |
| --mem=512MB | Memory required per node. |
| --partition=compute | Specifies the job partition. |
| --output=%j_output.out | Standard output file. |
| --mail-user=<userid>@agresearch.co.nz | Address to send mail notifications to. |
| --mail-type=ALL | Will send a mail notification at job BEGIN, END and FAIL. |
| --mail-type=TIME_LIMIT_80 | Will send a mail notification at 80% walltime. |
| --no-requeue | Will stop the job being requeued in the case of node failure. |
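As a sketch, a job header combining several of these general options might look like the following (the mail address is a placeholder):

#!/bin/bash -e
#SBATCH --job-name=MyJob
#SBATCH --account=2024-mjb-sandbox
#SBATCH --time=02:00:00
#SBATCH --mem=4GB
#SBATCH --partition=compute
#SBATCH --output=MyJob_%j.out
#SBATCH --mail-user=<userid>@agresearch.co.nz
#SBATCH --mail-type=END,FAIL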
Parallel options
| Option | Description |
| --- | --- |
| --nodes=2 | Will request tasks be run across 2 nodes. |
| --ntasks=2 | Will start 2 MPI tasks. |
| --ntasks-per-node=1 | Will start 1 task per requested node. |
| --cpus-per-task=10 | Will request 10 logical CPUs per task. See Hyperthreading. |
| --mem-per-cpu=512MB | Memory per logical CPU. |
| --array=1-5 | Will submit the job 5 times, each with a different SLURM_ARRAY_TASK_ID (1, 2, 3, 4, 5). |
| --array=0-4 | Will submit the job 5 times, each with a different SLURM_ARRAY_TASK_ID (0, 1, 2, 3, 4). |
| --array=1-100%10 | Will submit jobs 1 through to 100, but run no more than 10 at once. |
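For example, a minimal array job script using --array and SLURM_ARRAY_TASK_ID (the input file naming is an assumption for illustration):

#!/bin/bash -e
#SBATCH --job-name=ArrayJob
#SBATCH --account=2024-mjb-sandbox
#SBATCH --time=00:10:00
#SBATCH --mem=512MB
#SBATCH --array=1-5

# Each array task gets a different SLURM_ARRAY_TASK_ID and so a different (hypothetical) input file
echo "Processing input_${SLURM_ARRAY_TASK_ID}.txt on $(hostname)"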
Other
| Option | Description |
| --- | --- |
| --qos=debug | Adding this line gives your job a very high priority. Limited to one job at a time, max 15 minutes. |
| --profile=task | Allows generation of a .h5 file containing job profile information. |
| --dependency=afterok:123456789 | Will only start after job 123456789 has completed successfully. |
| --hint=nomultithread | Disables hyperthreading; be aware that this will significantly change how your job is defined. |
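A common use of --dependency is chaining two scripts so the second only runs if the first succeeds. --parsable makes sbatch print just the job ID so it can be captured; first.sl and second.sl are placeholder script names:

# submit the first job and capture its ID
first_id=$(sbatch --parsable first.sl)
# submit the second job, to start only once the first completes successfully
sbatch --dependency=afterok:${first_id} second.sl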
Tip
Many options have a short and long form, e.g. #SBATCH --job-name=MyJob and #SBATCH -J MyJob.
echo "Completed task ${SLURM_ARRAY_TASK_ID} / ${SLURM_ARRAY_TASK_COUNT} successfully"
Tokens
These are predefined variables that can be used in sbatch directives such as the log file name.
| Token | Description |
| --- | --- |
| %x | Job name. |
| %u | User name. |
| %j | Job ID. |
| %a | Job array index. |
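For example, these tokens can be used to give each job its own log files named after the job name and ID:

#SBATCH --output=%x_%j.out
#SBATCH --error=%x_%j.err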
Environment variables
Common examples.
| Variable | Description |
| --- | --- |
| $SLURM_JOB_ID | Useful for naming output files that won't clash. |
| $SLURM_JOB_NAME | Name of the job. |
| $SLURM_ARRAY_TASK_ID | The current index of your array job. |
| $SLURM_CPUS_PER_TASK | Useful as an input for multi-threaded functions. |
| $SLURM_NTASKS | Useful as an input for MPI functions. |
| $SLURM_SUBMIT_DIR | Directory where the job was submitted from. |
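For example, a multi-threaded job can pass the allocated CPU count straight to the program instead of hard-coding it; this sketch assumes an OpenMP program that reads OMP_NUM_THREADS:

#!/bin/bash -e
#SBATCH --job-name=ThreadedJob
#SBATCH --account=2024-mjb-sandbox
#SBATCH --time=01:00:00
#SBATCH --mem=4GB
#SBATCH --cpus-per-task=8

# Use the Slurm-allocated CPU count as the thread count
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK}"
echo "Running ${SLURM_JOB_NAME} (job ${SLURM_JOB_ID}) from ${SLURM_SUBMIT_DIR} with ${SLURM_CPUS_PER_TASK} CPUs"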
Tip
In order to decrease the chance of a variable being misinterpreted, you should use the syntax ${NAME_OF_VARIABLE} and quote it inside double-quoted strings where possible, e.g.
echo "Completed task ${SLURM_ARRAY_TASK_ID} / ${SLURM_ARRAY_TASK_COUNT} successfully"
Job Output
When the job completes, or in some cases earlier, two files will be added to the directory in which you were working when you submitted the job:
slurm-[jobid].out
containing standard output.
slurm-[jobid].err
containing standard error.
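You can watch these files while the job is still running, e.g. to follow the standard output of the job submitted earlier (job ID 1748836):

tail -f slurm-1748836.out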