Conservation Genomics Workshop, Wednesday August 30, 2023
What the Heck is Snakemake?
A Python-based “Workflow Management System”
Allows you to define a complex (bioinformatic) workflow as a series of steps that involve input files and output files.
It identifies the dependencies between the steps and then runs all the steps needed to create a requested output file.
This greatly simplifies the orchestration of bioinformatics, and makes it much easier to find and re-run failed jobs.
Incredibly valuable for reproducible research:
Not just so others can reproduce your results
Also useful for you to quickly run your workflow on different clusters, etc.
That sounds pretty jargony!
Illustrate with an example
Hope that it piques the curiosity of some
Our Small Example: GATK Best Practices “Light”
flowchart TD
A(fastq files from 3 samples: our raw data) --> B(Trim the reads: trimmomatic)
B --> C(Map the reads to a reference genome: bwa mem)
C --> D(Mark PCR and optical duplicates: MarkDuplicates)
D --> E(Make gVCF files for each sample: HaplotypeCaller)
E --> F(Load gVCFs into Genomic DB: GenomicsDBImport)
F --> G(Create VCFs from Genomic DB: GenotypeGVCFs)
A mini data set that only takes about 1.5 minutes to run through the major steps of a GATK-like variant calling workflow
Heavily subsampled Chinook salmon reads.
Three paired-end fastqs, and data only from three or four chromosomes.
We will trim it, map it, mark duplicates, then make one gVCF file per individual.
Then, to save time, we call variants only on one chromosome: CM031202.1.
Setting up our workspaces
It should be simple to set this up. The code at right does the following:
cd into the Snakemake-Example directory inside your home directory
Activate the snakemake-example conda environment.
Test to make sure that you have snakemake on your PATH
# From the home directory of your ConGen server
cd ~/Snakemake-Example
conda activate snakemake-example
# To make sure it is working, print the help information
# for snakemake
snakemake --help
Initial Configuration of our work directory
We can use the Unix tree utility to see what the Snakemake-Example directory contains.
Within the Snakemake-Example directory, type tree at the command line. This shows:
A README.md with installation instructions
A Snakefile. Much more about that later.
A directory data with three pairs of FASTQ files
A directory env that has information to install necessary software with conda
A directory resources that contains two items:
adapters: info for trimming Illumina adapters
genome.fasta: a FASTA file with the reference genome
flowchart TD
H(fastq files from 3 samples: our raw data) --> I(Trim the reads: trimmomatic)
I --> J(Map the reads to a reference genome: bwa mem)
Some pseudo-shell code
# cycle over fastqs and do the trimming
for S in A B C; do
  trimmomatic PE data/${S}_R1.fastq.gz data/${S}_R2.fastq.gz \
    trimmed/${S}_R1.fastq.gz trimmed/${S}_R1_unpaired.fastq.gz \
    trimmed/${S}_R2.fastq.gz trimmed/${S}_R2_unpaired.fastq.gz \
    other-arguments-etc...
done

# cycle over trimmed fastqs and do the mapping
for S in A B C; do
  bwa mem resources/genome.fasta \
    trimmed/${S}_R1.fastq.gz trimmed/${S}_R2.fastq.gz
done
What are some issues here?
Ah crap! I forgot to index genome.fasta!
This does not run the jobs in parallel!
Possible solutions for the second issue?
You can get things done in parallel using SLURM’s sbatch (which you probably need to use anyway).
Going about doing this with SLURM (a sketch…)
Consider the first two “steps”
flowchart TD
H(fastq files from 3 samples: our raw data) --> I(Trim the reads: trimmomatic)
I --> J(Map the reads to a reference genome: bwa mem)
Some pseudo-shell code
# cycle over fastqs and dispatch each trimming job to SLURM
for S in A B C; do
  sbatch my-trim-script.sh $S
done

# ONCE ALL THE TRIMMING IS DONE...
# cycle over trimmed fastqs and dispatch each mapping job to SLURM
for S in A B C; do
  sbatch my-map-script.sh $S
done
What is not-so-great about this?
I have to wait for all the jobs of each step to finish
I have to explicitly start each “next” step.
If some jobs of a step fail, it is a PITA to go back and figure out which ones failed.
The dependence between the outputs of the trimming step and the mapping step is implicit, based on file paths buried in the scripts, rather than explicit.
The Advantages of Snakemake
The dependence between input and output files is explicit
This lets snakemake identify every single job that must be run—and the order they must be run in—for the entire workflow (all the steps)
This maximizes the number of jobs that can be run at once.
The necessary steps are determined by starting from the ultimate outputs that are desired or requested…
…then working backward through the dependencies to identify which jobs must be run to eventually get the ultimate output.
This greatly simplifies the problem of re-running any jobs that might have failed for reasons “known only to the cluster.”
Snakemake is a program that interprets a set of rules stored in a Snakefile
Some explanations:
Rule blocks: the fundamental unit
Correspond to “steps” in the workflow
Keyword “rule” + name + colon
Indenting like Python/YAML
Typically includes sub-blocks of input, output, and shell
(Screen grab from Sublime Text, which has great syntax highlighting for Snakemake; a sketch of the rule appears below.)
The rule:
Requires the input file resources/genome.fasta
Produces the output file resources/genome.dict
Writes to a log file in results/logs/genome_dict.log
Uses the shell code samtools dict {input} > {output} 2> {log} to get the job done
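The screen grab itself is not reproduced here, but based on the description above the rule looks roughly like this (a sketch; the actual Snakefile may differ in minor details):

rule genome_dict:
    input:
        "resources/genome.fasta"
    output:
        "resources/genome.dict"
    log:
        "results/logs/genome_dict.log"
    shell:
        "samtools dict {input} > {output} 2> {log}"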
What are those purple bits, {input}, {output}, and {log}, in the shell code?
That is the syntax snakemake uses to substitute the values in the output, input, or log blocks (or other blocks…) into the Unix shell command.
Big Note: Output and log information is not written automatically to the output file and log file, nor is input taken automatically from the input file. You have to dictate that behavior by what you write in the shell block!
Thus, when this rule runs, the shell command executed will be:
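samtools dict resources/genome.fasta > resources/genome.dict 2> results/logs/genome_dict.log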
We “drive” Snakemake by requesting the creation of output files
These output files are sometimes referred to as “targets”
snakemake looks for and uses the Snakefile in the current working directory.
Option -n tells snakemake to do a "dry-run": just say what you would do, but don't do it!
Option -p tells snakemake to print the shell commands of the rules.
Those two options can be combined: -np
And we request resources/genome.dict as a target by just putting it on the command line:
Paste this into your shell
snakemake -np resources/genome.dict
And the output you got from that should look like:
What the output should look like
Building DAG of jobs...
Job stats:
job            count    min threads    max threads
-----------  -------  -------------  -------------
genome_dict        1              1              1
total              1              1              1

[Fri Sep  2 09:34:41 2022]
rule genome_dict:
    input: resources/genome.fasta
    output: resources/genome.dict
    log: results/logs/genome_dict.log
    jobid: 0
    resources: tmpdir=/var/folders/xg/mz_qt7q54yv_hwzvhskwx2c00000gp/T

samtools dict resources/genome.fasta > resources/genome.dict 2> results/logs/genome_dict.log

Job stats:
job            count    min threads    max threads
-----------  -------  -------------  -------------
genome_dict        1              1              1
total              1              1              1

This was a dry-run (flag -n). The order of jobs does not reflect the order of execution.
Let’s actually run that!
Remove the -np option and add --cores 1 to tell snakemake to run the requested jobs on one core
Paste this into your shell
snakemake --cores 1 resources/genome.dict
The output you get looks like what you saw before, but in this case the requested output file has been created.
And a log capturing stderr (if any) was created:
Paste this into your shell to see all the files
tree .
The output shows those two new files that were created
Output should look like this:
.
├── README.md
├── Snakefile
├── data
│   ├── A_R1.fastq.gz
│   ├── A_R2.fastq.gz
│   ├── B_R1.fastq.gz
│   ├── B_R2.fastq.gz
│   ├── C_R1.fastq.gz
│   └── C_R2.fastq.gz
├── env
│   └── snakemake-example.yml
├── resources
│   ├── adapters
│   │   ├── NexteraPE-PE.fa
│   │   ├── TruSeq2-PE.fa
│   │   ├── TruSeq2-SE.fa
│   │   ├── TruSeq3-PE-2.fa
│   │   ├── TruSeq3-PE.fa
│   │   └── TruSeq3-SE.fa
│   ├── genome.dict        <--- THIS IS A NEW FILE!
│   └── genome.fasta
└── results
    └── logs
        └── genome_dict.log        <--- THIS IS A NEW FILE!
Once a target file is created or updated, Snakemake knows it
If you request the file resources/genome.dict from Snakemake now, it tells you that the file is there and does not need updating.
Paste this into your shell
snakemake -np resources/genome.dict
Because resources/genome.dict already exists (and none of its dependencies have been updated since it was created) Snakemake tells you this:
Expected output from Snakemake
Building DAG of jobs...
Nothing to be done (all requested files are present and up to date).
This helps you to not remake output files that don’t need remaking!
Wildcards: How Snakemake manages replication
Wildcards allow running multiple instances of the same rule on different input files by simple pattern matching
If we request from Snakemake the file results/trimmed/A_R1.fastq.gz,
then Snakemake recognizes that this matches the output of rule trim_reads (sketched below) with the wildcard {sample} replaced by A.
And Snakemake propagates the value A of the wildcard {sample} to the input block.
Thus Snakemake knows that to create results/trimmed/A_R1.fastq.gz
it needs the input files:
data/A_R1.fastq.gz
data/A_R2.fastq.gz
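For reference, here is a sketch of what rule trim_reads could look like. The named inputs and outputs (r1, r2, etc.) and the trimmomatic arguments are illustrative; the real rule in the Snakefile has more options. The key point is that the wildcard {sample} appears in both the input and output paths:

rule trim_reads:
    input:
        r1="data/{sample}_R1.fastq.gz",
        r2="data/{sample}_R2.fastq.gz",
    output:
        r1="results/trimmed/{sample}_R1.fastq.gz",
        r1_unp="results/trimmed/{sample}_R1.unpaired.fastq.gz",
        r2="results/trimmed/{sample}_R2.fastq.gz",
        r2_unp="results/trimmed/{sample}_R2.unpaired.fastq.gz",
    shell:
        "trimmomatic PE {input.r1} {input.r2} "
        "{output.r1} {output.r1_unp} {output.r2} {output.r2_unp} "
        "ILLUMINACLIP:resources/adapters/TruSeq3-PE-2.fa:2:30:10"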
Try requesting those trimmed fastq files
See what snakemake would do when you ask for results/trimmed/A_R1.fastq.gz.
Paste this into your shell
snakemake -np results/trimmed/A_R1.fastq.gz
Note that you can request files from more than one sample:
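For example, requesting the trimmed R1 files for all three samples at once:

snakemake -np results/trimmed/A_R1.fastq.gz results/trimmed/B_R1.fastq.gz results/trimmed/C_R1.fastq.gz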
Note that it will go ahead and start all those jobs independently, and concurrently, because they do not depend on one another. This is how Snakemake manages and maximizes parallelism.
Chains of file dependencies
If Snakemake does not find a required input file for a rule that provides a requested output, it searches through the outputs of all the other rules in the Snakefile to find a rule that might provide the required input file as one of its outputs.
It then schedules all the necessary rules to run.
This means that an entire workflow with thousands of jobs can be triggered by requesting a single output file.
Short Group Activity
Trace the rules needed if we request the file results/vcf/all.vcf.
After we look at that, and discuss, let’s actually run it, using 2 cores:
Paste this into your shell
snakemake -p --cores 2 results/vcf/all.vcf
That should take a minute or two.
If you try to run the workflow again, Snakemake tells you that you do not need to, because everything is up to date. Try running the above line again:
Paste this into your shell
snakemake -p --cores 2 results/vcf/all.vcf
If any inputs change, Snakemake will re-run the rules that depend on the new input
Imagine that the sequencing center calls to say that there has been a terrible mistake and they are sending new (and correct) versions of the data for sample C: C_R1.fastq.gz and C_R2.fastq.gz
Snakemake uses file modification dates to check if any inputs have been updated after target outputs have been created.
So we can simulate new fastq files for sample C by using the touch command to update the fastq file modification dates:
Paste this into your shell
touch data/C_R1.fastq.gz data/C_R2.fastq.gz
Now, when we run Snakemake again, it tells us we have to run more jobs, but only the ones that depend on data from sample C. Do a dry run to check that:
Paste this into your shell
snakemake -np results/vcf/all.vcf
Check that it will not re-run the trimming, mapping, and gvcf-making steps for samples A and B, which are already done.
Snakemake makes it very easy to re-run failed jobs
Clusters and computers occasionally fail (sometimes for no apparent reason)
If this happens in a large, traditionally managed (Unix script) workflow, finding and re-running the failures can be hard.
Example: 7 birds out of 192 fail on HaplotypeCaller because those jobs got sent to nodes without AVX acceleration.
Five years ago, setting up custom scripts to re-run just those 7 birds could cost me an hour—about as much time as it takes me now to set up an entire workflow with Snakemake.
On the next slide we are going to create a job failure to see how easy it is to re-run jobs that failed with Snakemake.
Simulating a job failure as an example
First, let’s remove the entire results directory, so that we have to re-run most of our workflow.
Paste this into your shell
rm -rf results
Now, we are going to corrupt the read-2 fastq file for sample A (while keeping a copy of the original):
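One way to do that (a sketch; the exact commands used in the workshop may differ) is:

# keep a copy of the original read-2 file for sample A
cp data/A_R2.fastq.gz data/A_R2.fastq.gz.orig
# then overwrite the original with something that is not valid gzipped FASTQ
echo "this is not FASTQ data" > data/A_R2.fastq.gz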
Now, run it with 2 cores and give it the --keep-going option, which means that even if one job fails, all the other jobs that do not depend on outputs from the failed job will still get run.
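For example, using the same target as before:

snakemake -p --cores 2 --keep-going results/vcf/all.vcf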
Snakemake tells us that 8 of the 14 jobs were successful but at least one job failed:
Snakemake's concluding comments:
8 of 14 steps (57%) done
Exiting because a job execution failed. Look above for error message
BUG: Out of jobs ready to be started, but not all files built yet. Please check https://github.com/snakemake/snakemake/issues/823 for more information.
Remaining jobs:
 - make_gvcfs: results/gvcf/A.g.vcf.gz, results/gvcf/A.g.vcf.gz.tbi
 - mark_duplicates: results/mkdup/A.bam, results/mkdup/A.bai, results/qc/mkdup_metrics/A.metrics
 - trim_reads: results/trimmed/A_R1.fastq.gz, results/trimmed/A_R1.unpaired.fastq.gz, results/trimmed/A_R2.fastq.gz, results/trimmed/A_R2.unpaired.fastq.gz
 - vcf_from_gdb: results/vcf/all.vcf
 - import_genomics_db: results/genomics_db/CM031202.1
 - map_reads: results/bam/A.bam
Cool! It tells us explicitly which jobs remain to be run. And they are exactly the ones that depend on outputs from sample A.
Here is a related, fun tip: Snakemake writes the log of every real (i.e., without the -n option) run into a log file in .snakemake/log. Try this:
ls .snakemake/log
If you want to get the log from the most recent run you can throw down some Unix:
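# one way to do it: list the log files newest-first and show the most recent one
cat .snakemake/log/$(ls -t .snakemake/log | head -n 1)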
The easiest way to let Snakemake dispatch jobs via the SLURM scheduler is with the Snakemake --cluster option, supplied through a Snakemake profile.
A Snakemake profile is a YAML file in which you can record command line options (and their arguments) for Snakemake.
There is an officially supported Snakemake profile for SLURM, but I am partial to the Unix-based (as opposed to Python-based) approach to SLURM profiles for Snakemake described at: https://github.com/jdblischak/smk-simple-slurm.
A Snakemake profile for SLURM on Alpine
Alpine is the new NSF-funded cluster in Colorado
A Snakemake profile is simply a collection of command line arguments stored in a YAML file.
The set-resources and threads blocks are specific to my lcWGS workflow
Contents of hpcc-profiles/slurm/alpine/config.yaml
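The actual file is not reproduced here, but a minimal sketch in the smk-simple-slurm style might look something like the following. The partition name, default memory and time, and log paths are assumptions for illustration, and the real file also contains the set-resources and threads blocks mentioned above:

# sketch of a simple-slurm-style profile config.yaml (values are illustrative)
cluster:
  mkdir -p results/slurm_logs/{rule} &&
  sbatch
    --partition={resources.partition}
    --cpus-per-task={threads}
    --mem={resources.mem_mb}
    --time={resources.time}
    --job-name=smk-{rule}-{wildcards}
    --output=results/slurm_logs/{rule}/{rule}-{wildcards}-%j.out
default-resources:
  - partition=amilan
  - mem_mb=4000
  - time="04:00:00"
jobs: 500
latency-wait: 60
keep-going: True
rerun-incomplete: True
printshellcmds: True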
Each benchmark file tells you how many seconds the job ran (s), the max RAM use (max_rss), the amount of data read from and written to disk, the total CPU time, etc. (see the sketch after this list)
Super helpful
Easy to extract
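These numbers come from Snakemake's benchmark directive. As a sketch (the benchmark file path here is just an example), adding a benchmark line to a rule makes Snakemake write a one-row TSV of run time and resource use every time a job from that rule runs:

rule genome_dict:
    input:
        "resources/genome.fasta"
    output:
        "resources/genome.dict"
    log:
        "results/logs/genome_dict.log"
    # the benchmark file gets columns like s, max_rss, io_in, io_out, cpu_time
    benchmark:
        "results/benchmarks/genome_dict.tsv"
    shell:
        "samtools dict {input} > {output} 2> {log}"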
Benchmark Example
Jobs processing 43 rockfish on the NMFS on-premises cluster (SEDNA) vs. in the cloud on Microsoft Azure (AZHOP)
Benchmark Example
Super easy to extract that for all the rules with a little Unix and R
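As a sketch of the Unix part, assuming the benchmark files were all written as TSVs under results/benchmarks/ (the actual paths in the workflow may differ), something like this collects them into one table that can then be read into R:

# keep one header line, then append each file's data row tagged with its filename
awk 'FNR == 1 && NR == 1 {print "file\t" $0} FNR > 1 {print FILENAME "\t" $0}' \
    results/benchmarks/*.tsv > all-benchmarks.tsv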