A Brief Introduction to Snakemake

Eric C. Anderson

Conservation Genomics Workshop, Wednesday August 30, 2023

What the Heck is Snakemake?

  • A Python-based “Workflow Management System”
  • Allows you to define a complex (bioinformatic) workflow as a series of steps that involve input files and output files.
  • It identifies the dependencies between the steps and then runs all the steps needed to create a requested output file.
  • This greatly simplifies the orchestration of bioinformatics, and makes it much easier to find and re-run failed jobs.
  • Incredibly valuable for reproducible research:
    • Not just so others can reproduce your results
    • Also useful for you to quickly run your workflow on different clusters, etc.

That sounds pretty jargony!

  • We will illustrate with an example
  • Hopefully it will pique the curiosity of some

Our Small Example: GATK Best Practices “Light”

flowchart TD
  A(fastq files from 3 samples: our raw data) --> B(Trim the reads: trimmomatic)
  B --> C(Map the reads to a reference genome: bwa mem)
  C --> D(Mark PCR and optical duplicates: MarkDuplicates)
  D --> E(Make gVCF files for each sample: HaplotypeCaller)
  E --> F(Load gVCFs into Genomic DB: GenomicsDBImport)
  F --> G(Create VCFs from Genomic DB: GenotypeGVCFs)

A mini data set that only takes about 1.5 minutes to run through the major steps of a GATK-like variant calling workflow

  • Heavily subsampled Chinook salmon reads.
  • Three paired-end fastqs, and data only from three or four chromosomes.
  • We will trim it, map it, mark duplicates, then make one gVCF file per individual.
  • Then, to save time, we call variants only on one chromosome: CM031202.1.

Setting up our workspaces

It should be simple to set this up. The code below does the following:

  • cd into the Snakemake-Example directory inside your home directory
  • Activate the snakemake-example conda environment.
  • Test to make sure that you have snakemake on your PATH
# From the home directory of your ConGen server
cd ~/Snakemake-Example

conda activate snakemake-example

# To make sure it is working, print the help information
# for snakemake
snakemake --help

Initial Configuration of our work directory

  • We can use the Unix tree utility to see what the Snakemake-Example directory contains.
  • Within the Snakemake-Example directory, type tree at the command line. This shows:
    • A README.md with installation instructions
    • A Snakefile. Much more about that later.
    • A directory data with three pairs of FASTQ files
    • A directory env that has information to install necessary software with conda
    • A directory resources that contains:
      • adapters: a subdirectory with info for trimming Illumina adapters
      • genome.fasta: a FASTA file with the reference genome
--% tree
.
├── README.md
├── Snakefile
├── data
│   ├── A_R1.fastq.gz
│   ├── A_R2.fastq.gz
│   ├── B_R1.fastq.gz
│   ├── B_R2.fastq.gz
│   ├── C_R1.fastq.gz
│   └── C_R2.fastq.gz
├── env
│   └── snakemake-example.yml
└── resources
    ├── adapters
    │   ├── NexteraPE-PE.fa
    │   ├── TruSeq2-PE.fa
    │   ├── TruSeq2-SE.fa
    │   ├── TruSeq3-PE-2.fa
    │   ├── TruSeq3-PE.fa
    │   └── TruSeq3-SE.fa
    └── genome.fasta

4 directories, 16 files

How would you tackle this in a Unix way?

Consider the first two “steps”

flowchart TD
  H(fastq files from 3 samples: our raw data) --> I(Trim the reads: trimmomatic)
  I --> J(Map the reads to a reference genome: bwa mem)

Some pseudo-shell code

# cycle over fastqs and do the trimming
for S in A B C; do
  trimmomatic PE data/${S}_R1.fastq.gz data/${S}_R2.fastq.gz \
    trimmed/${S}_R1.fastq.gz trimmed/${S}_R1_unpaired.fastq.gz \
    trimmed/${S}_R2.fastq.gz trimmed/${S}_R2_unpaired.fastq.gz \
    other-arguments-etc...
done 


# cycle over trimmed fastqs and do the mapping
for S in A B C; do
  bwa mem resources/genome.fasta \
    trimmed/${S}_R1.fastq.gz trimmed/${S}_R2.fastq.gz
done

What are some issues here?

  1. Ah crap! I forgot to index genome.fasta! (A sketch of the needed indexing commands is below.)
  2. This does not run the jobs in parallel!
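
A fix for #1 is just remembering to build the indexes by hand before mapping and calling variants. A rough sketch (these same commands show up later as Snakemake rules):

# index the reference for bwa mem (creates .amb, .ann, .bwt, .pac, .sa)
bwa index resources/genome.fasta

# faidx index and sequence dictionary, needed later by GATK
samtools faidx resources/genome.fasta
samtools dict resources/genome.fasta > resources/genome.dict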

Possible solutions for #2?

You can get things done in parallel using SLURM’s sbatch (which you probably need to use anyway).

Going about doing this with SLURM (a sketch…)

Consider the first two “steps”

flowchart TD
  H(fastq files from 3 samples: our raw data) --> I(Trim the reads: trimmomatic)
  I --> J(Map the reads to a reference genome: bwa mem)

Some pseudo-shell code

# cycle over fastqs and dispatch each trimming job to SLURM
for S in A B C; do
  sbatch my-trim-script.sh $S
done 

# ONCE ALL THE TRIMMING IS DONE...
# cycle over trimmed fastqs and dispatch each mapping job to SLURM
for S in A B C; do
  sbatch my-map-script.sh $S
done
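
For concreteness, my-trim-script.sh here stands in for a hypothetical sbatch script. A minimal sketch (the #SBATCH resource requests are placeholders) might look like:

#!/bin/bash
#SBATCH --time=01:00:00
#SBATCH --mem=4G
# my-trim-script.sh (hypothetical): trim one sample, named by the first argument
S=$1
trimmomatic PE data/${S}_R1.fastq.gz data/${S}_R2.fastq.gz \
  trimmed/${S}_R1.fastq.gz trimmed/${S}_R1_unpaired.fastq.gz \
  trimmed/${S}_R2.fastq.gz trimmed/${S}_R2_unpaired.fastq.gz \
  other-arguments-etc...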

What is not-so-great about this?

  1. I have to wait for all the jobs of each step to finish
  2. I have to explicitly start each “next” step.
  3. If some jobs of a step fail, it is a PITA to go back and figure out which ones failed.
  4. The dependence between the outputs of the trimming step and the inputs of the mapping step is implicit, buried in file paths within the scripts, rather than explicit.

The Advantages of Snakemake

  • The dependence between input and output files is explicit
  • This lets snakemake identify every single job that must be run—and the order they must be run in—for the entire workflow (all the steps)
  • This maximizes the number of jobs that can be run at once.
  • The necessary steps are determined by starting from the ultimate outputs that are desired or requested…
  • …then working backward through the dependencies to identify which jobs must be run to eventually get the ultimate output.
  • This greatly simplifies the problem of re-running any jobs that might have failed for reasons “known only to the cluster.”

Snakemake is a program that interprets a set of rules stored in a Snakefile

Some explanations:

  • Rule blocks: the fundamental unit
  • Correspond to “steps” in the workflow
  • Keyword “rule” + name + colon
  • Indenting like Python/YAML
  • Typically includes sub-blocks of input, output, and shell
  • (Also params, log, benchmark, conda, etc.)
Snakefile
SAMPLES = ['A', 'B', 'C']


rule genome_faidx:
  input:
    "resources/genome.fasta",
  output:
    "resources/genome.fasta.fai",
  log:
    "results/logs/genome_faidx.log",
  shell:
    "samtools faidx {input} 2> {log} "


rule genome_dict:
  input:
    "resources/genome.fasta",
  output:
    "resources/genome.dict",
  log:
    "results/logs/genome_dict.log",
  shell:
    "samtools dict {input} > {output} 2> {log} "


rule bwa_index:
  input:
    "resources/genome.fasta"
  output:
    multiext("resources/genome.fasta", ".amb", ".ann", ".bwt", ".pac", ".sa"),
  log:
    "results/logs/bwa_index/bwa_index.log"
  shell:
    "bwa index {input} 2> {log} "




rule trim_reads:
  input:
    r1="data/{sample}_R1.fastq.gz",
    r2="data/{sample}_R2.fastq.gz",
  output:
    r1="results/trimmed/{sample}_R1.fastq.gz",
    r1_unp="results/trimmed/{sample}_R1.unpaired.fastq.gz",
    r2="results/trimmed/{sample}_R2.fastq.gz",
    r2_unp="results/trimmed/{sample}_R2.unpaired.fastq.gz",
  log:
    "results/logs/trim_reads/{sample}.log"
  shell:
    " trimmomatic PE {input} {output} "
    "  ILLUMINACLIP:resources/adapters/TruSeq3-PE-2.fa:2:30:10  "
    "  LEADING:3  "
    "  TRAILING:3  "
    "  SLIDINGWINDOW:4:15  "
    "  MINLEN:36 2> {log} "




rule map_reads:
  input:
    r1="results/trimmed/{sample}_R1.fastq.gz",
    r2="results/trimmed/{sample}_R2.fastq.gz",
    genome="resources/genome.fasta",
    idx=multiext("resources/genome.fasta", ".amb", ".ann", ".bwt", ".pac", ".sa")
  output:
    "results/bam/{sample}.bam"
  log:
    "results/logs/map_reads/{sample}.log"
  params:
    RG="-R '@RG\\tID:{sample}\\tSM:{sample}\\tPL:ILLUMINA' "
  shell:
    " (bwa mem {params.RG} {input.genome} {input.r1} {input.r2} | "
    " samtools view -u | "
    " samtools sort - > {output}) 2> {log} "




rule mark_duplicates:
  input:
    "results/bam/{sample}.bam"
  output:
    bam="results/mkdup/{sample}.bam",
    bai="results/mkdup/{sample}.bai",
    metrics="results/qc/mkdup_metrics/{sample}.metrics"
  log:
    "results/logs/mark_duplicates/{sample}.log"
  shell:
    " gatk MarkDuplicates  "
    "  --CREATE_INDEX "
    "  -I {input} "
    "  -O {output.bam} "
    "  -M {output.metrics} 2> {log} "




rule make_gvcfs:
  input:
    bam="results/mkdup/{sample}.bam",
    bai="results/mkdup/{sample}.bai",
    ref="resources/genome.fasta",
    idx="resources/genome.dict",
    fai="resources/genome.fasta.fai"
  output:
    gvcf="results/gvcf/{sample}.g.vcf.gz",
    idx="results/gvcf/{sample}.g.vcf.gz.tbi",
  params:
    java_opts="-Xmx4g"
  log:
    "results/logs/make_gvcfs/{sample}.log"
  shell:
    " gatk --java-options \"{params.java_opts}\" HaplotypeCaller "
    " -R {input.ref} "
    " -I {input.bam} "
    " -O {output.gvcf} "
    " -L CM031202.1    "            # just one small-ish "chromosome" for speed
    " --native-pair-hmm-threads 1 " # this is just for this small example
    " -ERC GVCF > {log} 2> {log} "




rule import_genomics_db:
  input:
    gvcfs=expand("results/gvcf/{s}.g.vcf.gz", s=SAMPLES)
  output:
    gdb=directory("results/genomics_db/CM031202.1")
  log:
    "results/logs/import_genomics_db/log.txt"
  shell:
    " VS=$(for i in {input.gvcfs}; do echo -V $i; done); "  # make a string like -V file1 -V file2
    " gatk --java-options \"-Xmx4g\" GenomicsDBImport "
    "  $VS  "
    "  --genomicsdb-workspace-path {output.gdb} "
    "  -L  CM031202.1 2> {log} "




rule vcf_from_gdb:
  input:
    gdb="results/genomics_db/CM031202.1",
    ref="resources/genome.fasta",
    fai="resources/genome.fasta.fai",
    idx="resources/genome.dict",
  output:
    vcf="results/vcf/all.vcf"
  log:
    "results/logs/vcf_from_gdb/log.txt"
  shell:
    " gatk --java-options \"-Xmx4g\" GenotypeGVCFs "
    "  -R {input.ref}  "
    "  -V gendb://{input.gdb} "
    "  -O {output.vcf} 2> {log} "

A closer look at a simple rule

(Screen grab from Sublime Text, which has great highlighting for Snakemake)

The rule:

  • Requires the input file resources/genome.fasta
  • Produces the output file resources/genome.dict
  • Writes to a log file in results/logs/genome_dict.log
  • Uses the shell code samtools dict {input} > {output} 2> {log} to get the job done
  • What are those purple bits, {input}, {output}, and {log}, in the shell code?
  • That is the syntax snakemake uses to substitute the values in the output, input, or log blocks (or other blocks…) into the Unix shell command.
  • Big Note: Output and log information is not written automatically to the output file and log file, nor is input taken automatically from the input file. You have to dictate that behavior by what you write in the shell block!
  • Thus, when this rule runs, the shell command executed will be:
samtools dict resources/genome.fasta > resources/genome.dict 2> results/logs/genome_dict.log 

We “drive” Snakemake by requesting the creation of output files

These output files are sometimes referred to as “targets”

  • snakemake looks for and uses the Snakefile in the current working directory.
  • Option -n tells snakemake to do a “dry-run” (just say what you would do, but don’t do it!)
  • Option -p tells snakemake to print the shell commands of the rules.
  • Those two options can be combined: -np
  • And we request resources/genome.dict as a target by just putting it on the command line:
Paste this into your shell
snakemake -np resources/genome.dict
  • And the output you get from that should look like:
What the output should look like
Building DAG of jobs...
Job stats:
job            count    min threads    max threads
-----------  -------  -------------  -------------
genome_dict        1              1              1
total              1              1              1


[Fri Sep  2 09:34:41 2022]
rule genome_dict:
    input: resources/genome.fasta
    output: resources/genome.dict
    log: results/logs/genome_dict.log
    jobid: 0
    resources: tmpdir=/var/folders/xg/mz_qt7q54yv_hwzvhskwx2c00000gp/T

samtools dict resources/genome.fasta > resources/genome.dict 2> results/logs/genome_dict.log
Job stats:
job            count    min threads    max threads
-----------  -------  -------------  -------------
genome_dict        1              1              1
total              1              1              1

This was a dry-run (flag -n). The order of jobs does not reflect the order of execution.

Let’s actually run that!

  • Remove the -np option and add --cores 1 to tell snakemake to run the requested jobs on one core
Paste this into your shell
snakemake --cores 1 resources/genome.dict
  • The output you get looks like what you saw before, but in this case the requested output file has been created.
  • And a log capturing stderr (if any) was created:
Paste this into your shell to see all the files
tree .

The output shows those two new files that were created

Output should look like this:
.
├── README.md
├── Snakefile
├── data
│   ├── A_R1.fastq.gz
│   ├── A_R2.fastq.gz
│   ├── B_R1.fastq.gz
│   ├── B_R2.fastq.gz
│   ├── C_R1.fastq.gz
│   └── C_R2.fastq.gz
├── env
│   └── snakemake-example.yml
├── resources
│   ├── adapters
│   │   ├── NexteraPE-PE.fa
│   │   ├── TruSeq2-PE.fa
│   │   ├── TruSeq2-SE.fa
│   │   ├── TruSeq3-PE-2.fa
│   │   ├── TruSeq3-PE.fa
│   │   └── TruSeq3-SE.fa
│   ├── genome.dict           <--- THIS IS A NEW FILE!
│   └── genome.fasta
└── results
    └── logs
        └── genome_dict.log   <--- THIS IS A NEW FILE!

Once a target file is created or updated, Snakemake knows it

  • If you request the file resources/genome.dict from Snakemake now, it tells you that the file is there and does not need updating.
Paste this into your shell
snakemake -np resources/genome.dict
  • Because resources/genome.dict already exists (and none of its dependencies have been updated since it was created) Snakemake tells you this:
Expected output from Snakemake
Building DAG of jobs...
Nothing to be done (all requested files are present and up to date).
  • This helps you to not remake output files that don’t need remaking!

Wildcards: How Snakemake manages replication

Wildcards allow running multiple instances of the same rule on different input files by simple pattern matching

  • If we request from Snakemake the file
    results/trimmed/A_R1.fastq.gz,
  • then, Snakemake recognizes that this matches the output of rule trim_reads with the wildcard {sample} replaced by A.
  • And Snakemake propagates the value A of the wildcard {sample} to the input block (the fully substituted command is shown after this list).
  • Thus Snakemake knows that to create
    results/trimmed/A_R1.fastq.gz
    it needs the input files:
    • data/A_R1.fastq.gz
    • data/A_R2.fastq.gz
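
Putting that together, the shell command Snakemake runs for sample A is essentially the trim_reads command with the wildcard filled in (line breaks added here just for readability):

trimmomatic PE data/A_R1.fastq.gz data/A_R2.fastq.gz \
  results/trimmed/A_R1.fastq.gz results/trimmed/A_R1.unpaired.fastq.gz \
  results/trimmed/A_R2.fastq.gz results/trimmed/A_R2.unpaired.fastq.gz \
  ILLUMINACLIP:resources/adapters/TruSeq3-PE-2.fa:2:30:10 LEADING:3 TRAILING:3 \
  SLIDINGWINDOW:4:15 MINLEN:36 2> results/logs/trim_reads/A.log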

Try requesting those trimmed fastq files

  • See what snakemake would do when you ask for results/trimmed/A_R1.fastq.gz.
Paste this into your shell
snakemake -np results/trimmed/A_R1.fastq.gz
  • Note that you can request files from more than one sample:
Paste this into your shell
snakemake -np results/trimmed/A_R1.fastq.gz results/trimmed/B_R1.fastq.gz results/trimmed/C_R1.fastq.gz  
  • Then, go ahead and run that last one, instructing Snakemake to use three cores
Paste this into your shell
snakemake --cores 3 results/trimmed/A_R1.fastq.gz results/trimmed/B_R1.fastq.gz results/trimmed/C_R1.fastq.gz  

Note that it will go ahead and start all those jobs independently, and concurrently, because they do not depend on one another. This is how Snakemake manages and maximizes parallelism.

Chains of file dependencies

  • If a required input file for a rule does not exist, Snakemake searches the outputs of all the other rules in the Snakefile for a rule that can produce that file.
  • It then schedules all the necessary rules to run.
  • This means that an entire workflow with thousands of jobs can be triggered by requesting a single output file (a small example follows).
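
For instance, asking for a single bam file lets Snakemake figure out the whole chain on its own. A dry run shows which of trim_reads, bwa_index, and map_reads still need to run (depending on what has already been built):

Paste this into your shell
snakemake -np results/bam/A.bam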

Short Group Activity

  • Trace the rules needed if we request the file results/vcf/all.vcf.
Snakefile
SAMPLES = ['A', 'B', 'C']


rule genome_faidx:
  input:
    "resources/genome.fasta",
  output:
    "resources/genome.fasta.fai",
  log:
    "results/logs/genome_faidx.log",
  shell:
    "samtools faidx {input} 2> {log} "


rule genome_dict:
  input:
    "resources/genome.fasta",
  output:
    "resources/genome.dict",
  log:
    "results/logs/genome_dict.log",
  shell:
    "samtools dict {input} > {output} 2> {log} "


rule bwa_index:
  input:
    "resources/genome.fasta"
  output:
    multiext("resources/genome.fasta", ".amb", ".ann", ".bwt", ".pac", ".sa"),
  log:
    "results/logs/bwa_index/bwa_index.log"
  shell:
    "bwa index {input} 2> {log} "




rule trim_reads:
  input:
    r1="data/{sample}_R1.fastq.gz",
    r2="data/{sample}_R2.fastq.gz",
  output:
    r1="results/trimmed/{sample}_R1.fastq.gz",
    r1_unp="results/trimmed/{sample}_R1.unpaired.fastq.gz",
    r2="results/trimmed/{sample}_R2.fastq.gz",
    r2_unp="results/trimmed/{sample}_R2.unpaired.fastq.gz",
  log:
    "results/logs/trim_reads/{sample}.log"
  shell:
    " trimmomatic PE {input} {output} "
    "  ILLUMINACLIP:resources/adapters/TruSeq3-PE-2.fa:2:30:10  "
    "  LEADING:3  "
    "  TRAILING:3  "
    "  SLIDINGWINDOW:4:15  "
    "  MINLEN:36 2> {log} "




rule map_reads:
  input:
    r1="results/trimmed/{sample}_R1.fastq.gz",
    r2="results/trimmed/{sample}_R2.fastq.gz",
    genome="resources/genome.fasta",
    idx=multiext("resources/genome.fasta", ".amb", ".ann", ".bwt", ".pac", ".sa")
  output:
    "results/bam/{sample}.bam"
  log:
    "results/logs/map_reads/{sample}.log"
  params:
    RG="-R '@RG\\tID:{sample}\\tSM:{sample}\\tPL:ILLUMINA' "
  shell:
    " (bwa mem {params.RG} {input.genome} {input.r1} {input.r2} | "
    " samtools view -u | "
    " samtools sort - > {output}) 2> {log} "




rule mark_duplicates:
  input:
    "results/bam/{sample}.bam"
  output:
    bam="results/mkdup/{sample}.bam",
    bai="results/mkdup/{sample}.bai",
    metrics="results/qc/mkdup_metrics/{sample}.metrics"
  log:
    "results/logs/mark_duplicates/{sample}.log"
  shell:
    " gatk MarkDuplicates  "
    "  --CREATE_INDEX "
    "  -I {input} "
    "  -O {output.bam} "
    "  -M {output.metrics} 2> {log} "




rule make_gvcfs:
  input:
    bam="results/mkdup/{sample}.bam",
    bai="results/mkdup/{sample}.bai",
    ref="resources/genome.fasta",
    idx="resources/genome.dict",
    fai="resources/genome.fasta.fai"
  output:
    gvcf="results/gvcf/{sample}.g.vcf.gz",
    idx="results/gvcf/{sample}.g.vcf.gz.tbi",
  params:
    java_opts="-Xmx4g"
  log:
    "results/logs/make_gvcfs/{sample}.log"
  shell:
    " gatk --java-options \"{params.java_opts}\" HaplotypeCaller "
    " -R {input.ref} "
    " -I {input.bam} "
    " -O {output.gvcf} "
    " -L CM031202.1    "            # just one small-ish "chromosome" for speed
    " --native-pair-hmm-threads 1 " # this is just for this small example
    " -ERC GVCF > {log} 2> {log} "




rule import_genomics_db:
  input:
    gvcfs=expand("results/gvcf/{s}.g.vcf.gz", s=SAMPLES)
  output:
    gdb=directory("results/genomics_db/CM031202.1")
  log:
    "results/logs/import_genomics_db/log.txt"
  shell:
    " VS=$(for i in {input.gvcfs}; do echo -V $i; done); "  # make a string like -V file1 -V file2
    " gatk --java-options \"-Xmx4g\" GenomicsDBImport "
    "  $VS  "
    "  --genomicsdb-workspace-path {output.gdb} "
    "  -L  CM031202.1 2> {log} "




rule vcf_from_gdb:
  input:
    gdb="results/genomics_db/CM031202.1",
    ref="resources/genome.fasta",
    fai="resources/genome.fasta.fai",
    idx="resources/genome.dict",
  output:
    vcf="results/vcf/all.vcf"
  log:
    "results/logs/vcf_from_gdb/log.txt"
  shell:
    " gatk --java-options \"-Xmx4g\" GenotypeGVCFs "
    "  -R {input.ref}  "
    "  -V gendb://{input.gdb} "
    "  -O {output.vcf} 2> {log} "

Helpful notes:

  • expand("results/gvcf/{s}.g.vcf.gz", s=SAMPLES) is a list of files:
[results/gvcf/A.g.vcf.gz, results/gvcf/B.g.vcf.gz, results/gvcf/C.g.vcf.gz]
  • multiext("resources/genome.fasta", ".amb", ".ann", ".bwt", ".pac", ".sa") is a list of files:
[resources/genome.fasta.amb, resources/genome.fasta.ann, resources/genome.fasta.bwt, resources/genome.fasta.pac, resources/genome.fasta.sa]

Let’s request results/vcf/all.vcf from Snakemake

  • Let’s start with a dry run:
Paste this into your shell
snakemake -np results/vcf/all.vcf  
  • After we look at that, and discuss, let’s actually run it, using 2 cores:
Paste this into your shell
snakemake -p --cores 2 results/vcf/all.vcf  

That should take a minute or two.

  • If you try to run the workflow again, Snakemake tells you that you do not need to, because everything is up to date. Try running the above line again:
Paste this into your shell
snakemake -p --cores 2 results/vcf/all.vcf  

If any inputs change, Snakemake will re-run the rules that depend on the new input

  • Imagine that the sequencing center calls to say that there has been a terrible mistake and they are sending us new (and correct) versions of the data for sample C: C_R1.fastq.gz and C_R2.fastq.gz
  • Snakemake uses file modification dates to check if any inputs have been updated after target outputs have been created.
  • So we can simulate new fastq files for sample C by using the touch command to update the fastq file modification dates:
Paste this into your shell
touch data/C_R1.fastq.gz data/C_R2.fastq.gz
  • Now, when we run Snakemake again, it tells us we have to run more jobs, but only the ones that depend on data from sample C. Do a dry run to check that:
Paste this into your shell
snakemake -np results/vcf/all.vcf
  • Check that it will not re-run the trimming, mapping, and gvcf-making steps for samples A and B, which are already done.

Snakemake makes it very easy to re-run failed jobs

  • Clusters and computers occasionally fail (sometimes for no apparent reason)
  • If this happens in a large, traditionally managed (Unix script) workflow, finding and re-running the failures can be hard.
  • Example: 7 birds out of 192 fail on HaplotypeCaller because those jobs got sent to nodes without AVX acceleration.
  • Five years ago, setting up custom scripts to re-run just those 7 birds could cost me an hour—about as much time as it takes me now to set up an entire workflow with Snakemake.
  • On the next slide we are going to create a job failure to see how easy it is to re-run jobs that failed with Snakemake.

Simulating a job failure as an example

  • First, let’s remove the entire results directory, so that we have to re-run most of our workflow.
Paste this into your shell
rm -rf results
  • Now, we are going to corrupt the read-2 fastq file for sample A (but keeping a copy of the original)
Paste this into your shell
cp data/A_R2.fastq.gz data/A_R2.fastq.gz-ORIG
echo "GARBAGE_DATA" | gzip -c > data/A_R2.fastq.gz
  • Now, do a dry-run, requesting results/vcf/all.vcf
Paste this into your shell
snakemake -np results/vcf/all.vcf

The output ends by telling us that 14 jobs will be run:

End of the expected output
Job stats:
job                   count    min threads    max threads
------------------  -------  -------------  -------------
import_genomics_db        1              1              1
make_gvcfs                3              1              1
map_reads                 3              1              1
mark_duplicates           3              1              1
trim_reads                3              1              1
vcf_from_gdb              1              1              1
total                    14              1              1
  • Now, run it with 2 cores and give it the --keep-going option, which means that even if an error occurs on one job, all the other jobs that don’t depend on outputs from the failed job will still get run.
Paste this into your shell
snakemake --cores 2 --keep-going results/vcf/all.vcf
  • Snakemake tells us that 8 of the 14 jobs were successful but at least one job failed:
Snakemake's concluding comments:
8 of 14 steps (57%) done
Exiting because a job execution failed. Look above for error message
BUG: Out of jobs ready to be started, but not all files built yet. Please check https://github.com/snakemake/snakemake/issues/823 for more information.
Remaining jobs:
 - make_gvcfs: results/gvcf/A.g.vcf.gz, results/gvcf/A.g.vcf.gz.tbi
 - mark_duplicates: results/mkdup/A.bam, results/mkdup/A.bai, results/qc/mkdup_metrics/A.metrics
 - trim_reads: results/trimmed/A_R1.fastq.gz, results/trimmed/A_R1.unpaired.fastq.gz, results/trimmed/A_R2.fastq.gz, results/trimmed/A_R2.unpaired.fastq.gz
 - vcf_from_gdb: results/vcf/all.vcf
 - import_genomics_db: results/genomics_db/CM031202.1
 - map_reads: results/bam/A.bam

Cool! It tells us explicitly which jobs remain to be run. And they are exactly the ones that depend on outputs from sample A.

Here is a related, fun tip: Snakemake writes the log of every real (i.e., without the -n option) run into a log file in .snakemake/log. Try this:

ls .snakemake/log

If you want to get the log from the most recent run you can throw down some Unix:

ls -l .snakemake/log/*.log | tail -n 1 | awk '{print $NF}'

You can use that to find any lines containing Error, which tell you more about the job that failed, e.g.:

grep Error $(ls -l .snakemake/log/*.log | tail -n 1 | awk '{print $NF}')

Re-running failed jobs is as simple as just re-starting Snakemake

  • First off, after a failure like that, we can always immediately do a dry-run to see what Snakemake must still do to finish out the workflow:
Paste this into your shell
snakemake -np results/vcf/all.vcf

It is only going to require 6 more jobs to produce results/vcf/all.vcf:

This is the end of the dry-run output
Job stats:
job                   count    min threads    max threads
------------------  -------  -------------  -------------
import_genomics_db        1              1              1
make_gvcfs                1              1              1
map_reads                 1              1              1
mark_duplicates           1              1              1
trim_reads                1              1              1
vcf_from_gdb              1              1              1
total                     6              1              1
  • We see that it still has to do 1 trim job, which is the one that failed.

  • If we noticed that data/A_R2.fastq.gz was corrupted, we could replace it with the uncorrupted version:

Paste this into your shell
cp data/A_R2.fastq.gz-ORIG data/A_R2.fastq.gz
  • Then, start it back up with 2 cores:
Paste this into your shell
snakemake --cores 2 results/vcf/all.vcf

Now that sample A is not corrupted, it finishes. Yay! That was easy.

Snakemake encourages (requires?) that your outputs all reside in a consistent directory structure

(And a side note: Snakemake automatically creates all the directories needed to store its output files)

  • Check out all the outputs of our workflow in an easy-to-understand directory structure within results:
Paste this into your shell
# only drill down three directory levels (-L 3)
tree -L 3 results

Here is what the result looks like:

The tree listing of the full results of the workflow
results
├── bam
│   ├── A.bam
│   ├── B.bam
│   └── C.bam
├── genomics_db
│   └── CM031202.1
│       ├── CM031202.1$1$6000000
│       ├── __tiledb_workspace.tdb
│       ├── callset.json
│       ├── vcfheader.vcf
│       └── vidmap.json
├── gvcf
│   ├── A.g.vcf.gz
│   ├── A.g.vcf.gz.tbi
│   ├── B.g.vcf.gz
│   ├── B.g.vcf.gz.tbi
│   ├── C.g.vcf.gz
│   └── C.g.vcf.gz.tbi
├── logs
│   ├── bwa_index
│   │   └── bwa_index.log
│   ├── genome_dict.log
│   ├── genome_faidx.log
│   ├── import_genomics_db
│   │   └── log.txt
│   ├── make_gvcfs
│   │   ├── A.log
│   │   ├── B.log
│   │   └── C.log
│   ├── map_reads
│   │   ├── A.log
│   │   ├── B.log
│   │   └── C.log
│   ├── mark_duplicates
│   │   ├── A.log
│   │   ├── B.log
│   │   └── C.log
│   ├── trim_reads
│   │   ├── A.log
│   │   ├── B.log
│   │   └── C.log
│   └── vcf_from_gdb
│       └── log.txt
├── mkdup
│   ├── A.bai
│   ├── A.bam
│   ├── B.bai
│   ├── B.bam
│   ├── C.bai
│   └── C.bam
├── qc
│   └── mkdup_metrics
│       ├── A.metrics
│       ├── B.metrics
│       └── C.metrics
├── trimmed
│   ├── A_R1.fastq.gz
│   ├── A_R1.unpaired.fastq.gz
│   ├── A_R2.fastq.gz
│   ├── A_R2.unpaired.fastq.gz
│   ├── B_R1.fastq.gz
│   ├── B_R1.unpaired.fastq.gz
│   ├── B_R2.fastq.gz
│   ├── B_R2.unpaired.fastq.gz
│   ├── C_R1.fastq.gz
│   ├── C_R1.unpaired.fastq.gz
│   ├── C_R2.fastq.gz
│   └── C_R2.unpaired.fastq.gz
└── vcf
    ├── all.vcf
    └── all.vcf.idx

Snakemake eye-candy—visualizing the workflow dependencies

Using the --dag option, like this:

Paste this into your shell
snakemake --dag results/vcf/all.vcf | dot -Tsvg > dag.svg

Makes a directed acyclic graph (DAG) of the workflow. If you view it, it looks like this:

Snakemake eye-candy—filegraphs

Using the --filegraph option, like this:

Paste this into your shell
snakemake --filegraph results/vcf/all.vcf | dot -Tsvg > filegraph.svg

Makes a graph (DAG) of the files involved in the workflow. If you view it, it looks like this:

Here is a more complex workflow graph

The dag for a run of mega-non-model-wgs-snakeflow that was processed by my R package SnakemakeDagR

Letting Snakemake interface with SLURM

The easiest way to let Snakemake dispatch jobs via the SLURM scheduler is through Snakemake’s cluster option, supplied in a Snakemake profile.

A Snakemake profile is a YAML file in which you can record command line options (and their arguments) for Snakemake.

There is an officially supported Snakemake profile for SLURM, but I am partial to the Unix-based (as opposed to Python-based) approach to SLURM profiles for Snakemake described at: https://github.com/jdblischak/smk-simple-slurm.
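
However you build it, a profile is used by pointing Snakemake at the directory holding its config.yaml with the --profile option. A sketch, using the Alpine profile path shown on the next slide (that profile is tuned for my lcWGS workflow, not for this small example):

# read command line options (cluster, jobs, cores, etc.) from
# hpcc-profiles/slurm/alpine/config.yaml
snakemake --profile hpcc-profiles/slurm/alpine <requested target files>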

A Snakemake profile for SLURM on Alpine

Alpine is the new NSF-funded cluster in Colorado

  • A Snakemake profile is simply a collection of command line arguments stored in a YAML file.

  • The set-resources and set-threads blocks are specific to my lcWGS workflow

Contents of hpcc-profiles/slurm/alpine/config.yaml
cluster:
  mkdir -p results/slurm_logs/{rule} &&
  sbatch
    --partition=amilan,csu
    --cpus-per-task={threads}
    --mem={resources.mem_mb}
    --time={resources.time}
    --job-name=smk-{rule}-{wildcards}
    --output=results/slurm_logs/{rule}/{rule}-{wildcards}-%j.out
    --error=results/slurm_logs/{rule}/{rule}-{wildcards}-%j.err
    --parsable
default-resources:
  - time="08:00:00"
  - mem_mb=3740
  - tmpdir="results/snake-tmp"
restart-times: 0
max-jobs-per-second: 10
max-status-checks-per-second: 50
local-cores: 1
latency-wait: 60
cores: 2400
jobs: 950
keep-going: True
rerun-incomplete: True
printshellcmds: True
use-conda: True
rerun-trigger: mtime
cluster-status: status-sacct-robust.sh
cluster-cancel: scancel
cluster-cancel-nargs: 4000


set-threads:
  map_reads: 4
  realigner_target_creator: 4
  genomics_db_import_chromosomes: 2
  genomics_db_import_scaffold_groups: 2
  genomics_db2vcf_scattered: 2
set-resources:
  map_reads:
    mem_mb: 14960
    time: "23:59:59"
  make_gvcf_sections:
    mem_mb: 3600
    time: "23:59:59"
  genomics_db_import_chromosomes:
    mem_mb: 7480
    time: "23:59:59"
  genomics_db_import_scaffold_groups:
    mem_mb: 11000
    time: "23:59:59"
  genomics_db2vcf_scattered:
    mem_mb: 11000
    time: "23:59:59"
  multiqc_dir:
    mem_mb: 37000
  bwa_index:
    mem_mb: 37000
  realigner_target_creator:
    mem_mb: 14960

We’ve only scratched the surface

  • You can specify fine-grained conda environments for each rule (see the sketch after this list)
  • Python code is allowed in most places in the Snakefile
  • Input functions can be quite useful (or absolutely essential)
  • You can benchmark every job instance of a rule, which records the resources used (time, memory, etc.)
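
As a small taste of the first three points, here is a hypothetical rule (not part of the example Snakefile) that uses an input function and a rule-specific conda environment. The envs/fastqc.yaml file and the rule itself are made up for illustration, and the conda directive only takes effect when Snakemake is run with --use-conda:

def fastq_for_sample(wildcards):
  # an "input function": compute the input paths from the wildcards at run time
  return [
    f"data/{wildcards.sample}_R1.fastq.gz",
    f"data/{wildcards.sample}_R2.fastq.gz",
  ]

rule fastqc:
  input:
    fastq_for_sample
  output:
    directory("results/qc/fastqc/{sample}")
  conda:
    "envs/fastqc.yaml"
  log:
    "results/logs/fastqc/{sample}.log"
  shell:
    " mkdir -p {output} && fastqc -o {output} {input} 2> {log} "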

Benchmarking jobs

This is super easy. You add to each rule a path for a benchmark file. For example:

rule map_reads:
  input:
    r1="results/trimmed/{sample}_R1.fastq.gz",
    r2="results/trimmed/{sample}_R2.fastq.gz",
    genome="resources/genome.fasta",
    idx=multiext("resources/genome.fasta", ".amb", ".ann", ".bwt", ".pac", ".sa")
  output:
    "results/bam/{sample}.bam"
  log:
    "results/logs/map_reads/{sample}.log"
  benchmark:
    "results/benchmarks/map_reads/{sample}.bmk"
  params:
    RG="-R '@RG\\tID:{sample}\\tSM:{sample}\\tPL:ILLUMINA' "
  shell:
    " (bwa mem {params.RG} {input.genome} {input.r1} {input.r2} | "
    " samtools view -u | "
    " samtools sort - > {output}) 2> {log} "

Then, each time that rule gets run (with a different wildcard) you get a little file like this:

s   h:m:s   max_rss max_vms max_uss max_pss io_in   io_out  mean_load   cpu_time
164.7159    0:02:44 293.05  3523.55 263.11  269.15  1740.09 0.12    76.83   127.05
  • This tells you how many seconds it ran (s), the max RAM use (max_rss), amount of data read from disk and written to disk, total amount of CPU time, etc.
  • Super helpful
  • Easy to extract

Benchmark Example

Jobs processing 43 rockfish on the NMFS on-premises cluster (SEDNA) vs in the cloud on Microsoft Azure (AZHOP)

Benchmark Example

Super easy to extract that for all the rules with a little Unix and R
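
For example, a rough Unix-only sketch (assuming benchmark files live under results/benchmarks/<rule>/<wildcard>.bmk, as in the map_reads rule above, and writing to a file name I made up):

# grab the data row (line 2) of every benchmark file and prefix it with its path,
# which encodes the rule name and the wildcard value
for f in results/benchmarks/*/*.bmk; do
  awk -v f="$f" 'NR == 2 {print f "\t" $0}' "$f"
done > all-benchmarks.tsv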

Where to from here?

You might be interested in having a look at a workflow I wrote for whole genome sequencing of non-model organisms: https://github.com/eriqande/mega-non-model-wgs-snakeflow.

This provides a complete BWA-GATK workflow including an arbitrary number of “bootstrapped-BQSR” rounds (which turn out to be pretty useless…)

Setting this up for yourself

To use Snakemake on your own server/cluster/laptop you should:

  1. Install Mambaforge as described here
  2. Create a new conda environment for snakemake as described here
# i.e.:
mamba create -c conda-forge -c bioconda -n snakemake snakemake
  3. Then:
conda activate snakemake

Some notes: mamba is far faster (and better in almost all cases) than conda. It is also practically required for using snakemake conda blocks.

Final Thoughts

  • Learning snakemake may require a bit of an investment, BUT…
  • For anyone doing a lot of bioinformatic processing of sequence data it is quite a sound investment.

One final step on the command line:

conda deactivate