Tags: slurm, snakemake, hpc

How do I set up Snakemake on SLURM properly?


The admins temporarily blocked my account because my jobs were running on the login node. I suspect I am not setting up the SLURM profile or invoking Snakemake correctly, because I have a few other problems as well: I do not know how to monitor jobs submitted through Snakemake on the cluster, and I cannot get the workflow to submit only one job at a time.

The command I use is:

snakemake --use-conda --cores 40 -j1

And the profile config file reads:

cluster:
  mkdir -p logs/{rule} &&
  sbatch
    --partition={resources.partition}
    --qos={resources.qos}
    --cpus-per-task={threads}
    --mem={resources.mem_mb}
    --job-name=smk-{rule}-{wildcards}
    --output=logs/{rule}/{rule}-{wildcards}-%j.out
    --error=logs/{rule}/{rule}-{wildcards}-%j.err
    --account=account
    --ntasks=1
    --nodes=1
    --time={resources.runtime}
    --parsable
default-resources:
  - partition=el7taskp
  - qos=sbatch
  - mem_mb=100000
  - tmpdir=/users/user/tmp
  - runtime=2880
restart-times: 3
max-jobs-per-second: 10
max-status-checks-per-second: 1
# local-cores: 40
latency-wait: 60
jobs: 1
keep-going: True
rerun-incomplete: True
printshellcmds: True
scheduler: greedy
use-conda: True
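
For context on how this profile is consumed: Snakemake fills the placeholders in the cluster: string per job, taking {rule} and {wildcards} from the job, {threads} from the rule's threads: directive, and {resources.*} from the rule's resources:, falling back to default-resources for anything the rule does not set. A simplified, hypothetical rule, purely as an illustration of that mapping (the rule name, paths, and ref.fa are not from the actual workflow):

rule align:
    # hypothetical example rule, for illustration only
    input: "reads/{sample}.fastq"
    output: "aligned/{sample}.sam"
    threads: 8
    resources:
        mem_mb=16000,
        runtime=120      # minutes; overrides the 2880 default above
    shell:
        "bwa mem -t {threads} ref.fa {input} > {output}"   # ref.fa is a placeholder reference

would be submitted roughly as sbatch --partition=el7taskp --qos=sbatch --cpus-per-task=8 --mem=16000 --time=120 ..., i.e. rule-level values win over default-resources.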

Am I simply not using the SLURM profile at all, and therefore running Snakemake locally on the login node?


Solution

  • You need to actually include the profile in your snakemake call, i.e.

    snakemake --profile slurm
    

    See the Snakemake documentation on profiles for details.

    If you don't, then you are running Snakemake locally on the login node, which is exactly what got you blocked; a minimal sketch of the profile layout follows below.
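
    As a minimal sketch of the layout this implies (assuming the profile directory is named slurm and the file from the question is saved as config.yaml; the exact location is your choice): --profile slurm is resolved either as a path relative to the working directory or as a directory under ~/.config/snakemake, and the profile file inside it has to be named config.yaml:

        mkdir -p ~/.config/snakemake/slurm
        cp config.yaml ~/.config/snakemake/slurm/config.yaml   # assuming the profile above is saved as config.yaml
        snakemake --profile slurm

    With the profile in place you can usually drop --use-conda --cores 40 -j1 from the command line, since use-conda: True and jobs: 1 are already set in the profile; jobs: 1 is what limits Snakemake to one cluster job at a time. The submitted jobs can then be followed with the usual SLURM tools, e.g. squeue -u $USER while they are queued or running and sacct -u $USER afterwards.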