Job Arrays
A job array is a compact representation of many jobs, called tasks. Tasks share the same jobscript and have the same values for all attributes and resources, with the following exceptions:
- each task has a unique index, $SLURM_ARRAY_TASK_ID,
- job identifiers of tasks differ only by their indices,
- the state of tasks can differ.
All tasks within a job array have the same scheduling priority and schedule as independent jobs. An entire job array is submitted through a single sbatch command and may be managed by the squeue, scancel, and scontrol commands as a single job.
Shared Jobscript
All tasks in a job array use the same jobscript. Each task runs its own instance of the jobscript. The instances execute different work, controlled by the $SLURM_ARRAY_TASK_ID variable.
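For illustration only (the echo lines and the printf file-name pattern are hypothetical, not part of the example that follows), an instance can derive its piece of work from the task index:
# Slurm sets these variables for every task of a job array
echo "array job: $SLURM_ARRAY_JOB_ID, task: $SLURM_ARRAY_TASK_ID"
# derive a per-task input file name from the index
INPUT=$(printf "file%03d" "$SLURM_ARRAY_TASK_ID")
echo "this instance would process $INPUT"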
Example:
Assume we have 900 input files, the name of each beginning with "file" (e.g. file001, ..., file900). We would like to process each of these input files with the myprog.x executable, each as a separate, single-node job running 128 threads.
First, we create a tasklist file, listing all tasks - all input files in our example:
$ find . -name 'file*' > tasklist
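The line order in the tasklist determines which task index maps to which input file. Since find does not guarantee any particular ordering, it may be safer to sort the list and verify its length (a suggested variation, not part of the original recipe):
$ find . -name 'file*' | sort > tasklist
$ wc -l tasklist
900 tasklist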
Then we create a jobscript:
#!/bin/bash
#SBATCH -p qcpu
#SBATCH -A PROJECT-ID
#SBATCH --nodes 1 --ntasks-per-node 1 --cpus-per-task 128
#SBATCH -t 02:00:00
#SBATCH -o /dev/null
# change to scratch directory
SCRDIR=/scratch/project/$SLURM_JOB_ACCOUNT/$SLURM_JOB_USER/$SLURM_JOB_ID
mkdir -p $SCRDIR
cd $SCRDIR || exit
# get individual task from tasklist with index from SLURM JOB ARRAY
TASK=$(sed -n "${SLURM_ARRAY_TASK_ID}p" $SLURM_SUBMIT_DIR/tasklist)
# copy input file and executable to scratch
cp $SLURM_SUBMIT_DIR/$TASK input
cp $SLURM_SUBMIT_DIR/myprog.x .
# execute the calculation
./myprog.x < input > output
# copy output file to submit directory
cp output $SLURM_SUBMIT_DIR/$TASK.out
In this example, the submit directory contains the 900 input files, the myprog.x executable, and the jobscript file. As an input for each run, we take the filename of the input file from the created tasklist file. We copy the input file to the scratch directory /scratch/project/$SLURM_JOB_ACCOUNT/$SLURM_JOB_USER/$SLURM_JOB_ID, execute myprog.x, and copy the output file back to the submit directory, under the $TASK.out name. The myprog.x executable runs on one node only and must use threads to run in parallel. Be aware that if myprog.x is not multithreaded or multi-process (MPI), then all the jobs run as single-thread programs, wasting node resources.
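If myprog.x is an OpenMP program, one common way to make it use all allocated cores is to set the thread count from the Slurm allocation before the run (a sketch, assuming myprog.x honors OMP_NUM_THREADS):
# in the jobscript, before executing the calculation
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./myprog.x < input > output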
Submitting Job Array
To submit the job array, use the sbatch --array command. The 900 tasks of the example above may be submitted like this:
$ sbatch -J JOBNAME --array 1-900 ./jobscript
In this example, we submit a job array of 900 tasks. Each task will run on one full node and is assumed to take less than 2 hours (note the #SBATCH directives at the beginning of the jobscript file; do not forget to set your valid PROJECT-ID and desired queue).
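If all 900 tasks should not run at the same time (for example, to limit the load on a shared filesystem), the array may be throttled: the %N suffix of the --array expression limits the number of simultaneously running tasks. For instance:
$ sbatch -J JOBNAME --array 1-900%100 ./jobscript
It is also good practice to test the jobscript on a single task first, e.g. --array 1, before submitting the full range.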
Managing Job Array
Check the status of the job array using the squeue --me command, alternatively squeue --me --array.
$ squeue --me --long
JOBID PARTITION NAME USER STATE TIME TIME_LIMI NODES NODELIST(REASON)
2499924_[1-900] qcpu myarray user PENDING 0:00 02:00:00 1 (Resources)
Check the status of the individual tasks using the squeue command.
$ squeue -j 2499924 --long
JOBID PARTITION NAME USER STATE TIME TIME_LIMI NODES NODELIST(REASON)
2499924_1 qcpu myarray user PENDING 0:00 02:00:00 1 (Resources)
. . . . . . . . . . .
. . . . . . . . . . .
2499924_900 qcpu myarray user PENDING 0:00 02:00:00 1 (Resources)
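Individual tasks can be addressed in the JOBID_TASKID form (using the job ID from the listing above for illustration):
$ squeue -j 2499924_900 --long
$ scontrol show job 2499924_900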
Delete the entire job array. Running tasks will be killed, queued tasks will be deleted.
$ scancel 2499924
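scancel also accepts the JOBID_TASKID form, so single tasks or ranges of tasks can be cancelled without affecting the rest of the array (illustrative, using the same job ID):
$ scancel 2499924_900
$ scancel 2499924_[1-100]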
For more information on job arrays, see the SLURM guide.