
Batch Jobs

  • Enqueueing of job scripts is done via: sbatch job_script (a short submit-and-monitor example follows below).
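A typical submit-and-monitor cycle, as a minimal sketch (job_script stands for your script file, e.g. the header sketched below; <jobid> is the ID printed by sbatch on submission):

sbatch job_script    # prints: Submitted batch job <jobid>
squeue -u $USER      # list your queued and running jobs
scancel <jobid>      # cancel a job, if necessary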

  • Please DO NOT use the parameter --ntasks-per-core= unless you really know what you are doing. Erroneous usage of this flag (especially in combination with the parameter --cpus-per-task) can lead to catastrophic errors in your jobs! A safer alternative is sketched below.
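If your tasks need more than one CPU each (e.g. threaded programs), a safer pattern is to combine -n with --cpus-per-task and leave --ntasks-per-core at its default. A minimal sketch (the task and CPU counts are placeholders, not recommendations):

#SBATCH -n 4                     # 4 tasks in total
#SBATCH --cpus-per-task=2        # 2 CPUs (threads) per task
#SBATCH --hint=nomultithread     # pin each task to whole physical cores

# make the per-task CPU count visible to OpenMP programs
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK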

slurm_jobscript_header.sh
#!/bin/bash --login
 
###################################################################################################
# WARNING: Please adapt all relevant parameters so that they fit the requirements of your job(s). #
# Questions and Remarks welcome to Sebastian Kraus                                                #
###################################################################################################
 
# %x: job name; %j: job id; %N: node; %t: task id; %a: array id (and others)
# #SBATCH -o %x.%j.%N.out  # for debugging purposes: redirect SLURM's stdout (please see man 1 sbatch for explanation of replacement symbols)
# #SBATCH -e %x.%j.%N.err  # for debugging purposes: redirect SLURM's stderr (please see man 1 sbatch for explanation of replacement symbols)
 
# #SBATCH -D /path/to/workdir   # if needed, change the job's working directory (default: the submission directory; note that shell variables such as $PWD are NOT expanded in #SBATCH lines)
 
#SBATCH -J jobname              # job name
#SBATCH -n 1                    # total number of tasks/cores for your job
#SBATCH --hint=nomultithread    # IMPORTANT: hyper-threading is enabled on the nodes; this switches it off and assigns a whole physical core to each task
# #SBATCH --ntasks-per-node=1   # number of tasks per node
#SBATCH -N 1                    # number of nodes
# #SBATCH -p smp                # partition the job gets scheduled to: standard (default), smp, gpu (uncomment if you want your job to run on hosts in the SMP partition)
#SBATCH --time=00:15:00         # job run (wall clock) time in HH:MM:SS
#SBATCH --mem=4GB               # amount of resident main memory PER NODE(!)
# #SBATCH --mem-per-cpu=1GB     # amount of resident main memory PER CORE(!) (set only, if needed)
 
# #SBATCH --gres=gpu:tesla:1    # GPU resources (only with gpu partition!)
 
# #SBATCH --mail-type=END       # if you want to receive notifications about job status (cf. man 1 sbatch)
# #SBATCH --mail-user=username
 
 
# and now your job definition ;-)
 
module add [your_modules]
 
[your_commands]
  • SLURM defines lots of environment variables starting with SLURM_ - e.g. SLURM_JOB_NUM_NODES, SLURM_JOB_ID, SLURM_NTASKS_PER_NODE - which you can access during the job run (see also man 1 sbatch and the sketch below).
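For example, such variables can label your output or set program parameters at runtime. A minimal sketch for the command part of a job script (mpirun and [your_commands] stand in for your actual launcher and program):

echo "job ${SLURM_JOB_ID} running on ${SLURM_JOB_NUM_NODES} node(s)"
echo "tasks per node: ${SLURM_NTASKS_PER_NODE:-not set}"
mpirun -np "$SLURM_NTASKS" [your_commands]    # SLURM_NTASKS holds the total task count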

  • Beware that SLURM_NTASKS_PER_NODE may contain a compressed list expression rather than a single number (when the task count differs between nodes) and can only easily be parsed in the case of a fixed number of tasks per node! A parsing sketch follows below.
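A minimal bash sketch for expanding such a compressed expression (e.g. 2(x3),1, meaning three nodes with 2 tasks each plus one node with 1 task) into one value per node; the function name is ours, and the count(xreps) notation is the format described in man 1 sbatch:

expand_tasks_per_node() {
    local spec="$1" entry
    local -a entries
    IFS=',' read -ra entries <<< "$spec"
    for entry in "${entries[@]}"; do
        if [[ "$entry" =~ ^([0-9]+)\(x([0-9]+)\)$ ]]; then
            # "2(x3)" means: 3 consecutive nodes with 2 tasks each
            for ((i = 0; i < BASH_REMATCH[2]; i++)); do
                echo "${BASH_REMATCH[1]}"
            done
        else
            echo "$entry"    # plain number: one node with that many tasks
        fi
    done
}

expand_tasks_per_node "2(x3),1"    # prints 2, 2, 2, 1 - one value per line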


date of revision: 07-24-2019 © kraus
