diff --git a/develop/config/index.html b/develop/config/index.html index 00a37e5..5ed96ac 100644 --- a/develop/config/index.html +++ b/develop/config/index.html @@ -909,6 +909,13 @@
qualimap:
Perform Qualimap bamqc on bam files for general quality stats
(true
/false
)ibs_ref_bias:
Enable reference bias calculation. For each sample, one read
+ is randomly sampled at each position and compared to the reference base.
+ These are summarized as the proportion of the genome that is identical by
+ state to the reference for each sample to quantify reference bias. This is
+ done for all filter sets as well as for all sites without site filtering.
+ If transition removal or other arguments are passed to ANGSD, they are
+ included here. (true
/false
)damageprofiler:
Estimate post-mortem DNA damage on historical samples
with Damageprofiler (true
/false
) NOTE: This just adds Damageprofiler output alongside the MapDamage output that is produced by default.
PopGLen is aimed at enabling users to run population genomic analyses on their data within a genotype likelihood framework in an automated and reproducible fashion. Genotype likelihood based analyses avoid genotype calling, instead performing analyses on the likelihoods of each possible genotype, incorporating uncertainty about the true genotype into the analysis. This makes them especially suited for datasets with low coverage or that vary in coverage.
This pipeline was developed in large part to make my own analyses easier. I work with many species being mapped to their own references within the same project. I developed this pipeline so that I could ensure standardized processing for datasets within the same project and to automate the many steps that go into performing these analyses. As it needed to fit many datasets, it is generalizable and customizable through a single configuration file and uses a common workflow utilized by ANGSD users, so it is available for others to use, should it suit their needs.
Questions? Feature requests? Just ask!I'm glad to answer questions on the GitHub Issues page for the project, as well as take suggestions for features or improvements!
"},{"location":"#pipeline-summary","title":"Pipeline Summary","text":"The pipeline aims to follow the general path many users will use when working with ANGSD and other GL based tools. Raw equencing data is processed into BAM files (with optional configuration for historical degraded samples) or BAM files are provided directly. From there several quality control reports are generated to help determine what samples are included. The pipeline then builds a 'sites' file to perform analyses with. This sites file is made from several user-configured filters, intersecting all and outputing a list of sites for the analyses to be performed on across all samples. This can also be extended by user-provided filter lists (e.g. to limit to neutral sites, genic regions, etc.).
After samples have been processed, quality control reports generated, and the sites file produced, the pipeline can continue to the analyses.
These can all be enabled and run independently, and the pipeline will generate genotype likelihood input files using ANGSD and share them across analyses as appropriate, deleting temporary intermediate files when they are no longer needed.
At any point after a successful completion of a portion of the pipeline, a report can be made that contains tables and figures summarizing the results for the currently enabled parts of the pipeline.
If you're interested in using this, head to the Getting Started page!
"},{"location":"config/","title":"Configuring the workflow","text":"Running the workflow requires configuring three files: config.yaml
, samples.tsv
, and units.tsv
. config.yaml
is used to configure the analyses, samples.tsv
categorizes your samples into groups, and units.tsv
connects sample names to their input data files. The workflow will use config/config.yaml
automatically, but you can name this whatever you want (useful for separating datasets in the same working directory) and point to it when running snakemake with --configfile <path>
.
samples.tsv
","text":"This file contains your sample list, and has four tab separated columns:
sample\tpopulation\ttime\tdepth\nhist1\tHjelmseryd\thistorical\tlow\nhist2\tHjelmseryd\thistorical\tlow\nhist3\tHjelmseryd\thistorical\tlow\nmod1\tGotafors\tmodern\thigh\nmod2\tGotafors\tmodern\thigh\nmod3\tGotafors\tmodern\thigh\n
sample
contains the ID of a sample. It is best if it only contains alphanumeric characters.
population
contains the population the sample comes from and will be used to group samples for population-level analyses.
time
sets whether a sample should be treated as fresh DNA or historical DNA in the sequence processing workflow. Doesn't change anything if you're starting with bam files.
depth
puts the sample in a sequencing depth category. Used for filtering - if enabled in the configuration, extreme depth filters will be performed for depth categories individually.
units.tsv
","text":"This file connects your samples to input files and has a potential for eight tab separated columns:
sample\tunit\tlib\tplatform\tfq1\tfq2\tbam\tsra\nhist1\tBHVN22DSX2.2\thist1\tILLUMINA\tdata/fastq/hist1.r1.fastq.gz\tdata/fastq/hist1.r2.fastq.gz\t\nhist1\tBHVN22DSX2.3\thist1\tILLUMINA\tdata/fastq/hist1.unit2.r1.fastq.gz\tdata/fastq/hist1.unit2.r2.fastq.gz\t\nhist2\tBHVN22DSX2.2\thist2\tILLUMINA\tdata/fastq/hist2.r1.fastq.gz\tdata/fastq/hist2.r2.fastq.gz\t\nhist3\tBHVN22DSX2.2\thist2\tILLUMINA\tdata/fastq/hist3.r1.fastq.gz\tdata/fastq/hist3.r2.fastq.gz\t\nmod1\tAHW5NGDSX2.3\tmod1\tILLUMINA\tdata/fastq/mod1.r1.fastq.gz\tdata/fastq/mod1.r2.fastq.gz\t\nmod2\tAHW5NGDSX2.3\tmod2\tILLUMINA\t\t\tdata/bam/mod2.bam\nmod3\tAHW5NGDSX2.3\tmod3\tILLUMINA\tdata/fastq/mod3.r1.fastq.gz\tdata/fastq/mod3.r2.fastq.gz\t\nSAMN13218652\tSRR10398077\tSAMN13218652\tILLUMINA\t\t\t\tSRR10398077\n
sample
contains the ID of a sample. Must be same as in samples.tsv
and may be listed multiple times when inputting multiple sequencing runs/libraries.unit
contains the sequencing unit, i.e. the sequencing lane barcode and lane number. This is used in the PU and (part of) the ID read groups. If you don't have multiple sequencing lanes per sample, this won't impact anything. Doesn't do anything when using bam input.lib
contains the name of the library identifier for the entry. Fills in the LB and (part of) the ID read groups and is used for PCR duplicate removal. Best practice is for the combination of unit
and lib
to be unique per line. An easy way to do this is to use the Illumina library identifier or another unique library identifier, or simply combine a generic name with the sample name (sample1A, sample1B, etc.). Doesn't do anything when using bam input.platform
is used to fill the PL read group. Commonly this is just 'ILLUMINA'. Doesn't do anything when using bam input.fq1
and fq2
provide the paths (absolute, or relative to the working directory) to the raw sequencing files corresponding to the metadata in the previous columns.bam
provides the path (absolute, or relative to the working directory) of a pre-processed bam file. Only one bam file should be provided per sample in the units file.sra
provides the NCBI SRA accession number for a set of paired end fastq files that will be downloaded to be processed. If a sample has multiple runs you would like to include, each run should be its own line in the units sheet, just as separate sequencing runs would be.Mixing samples with different starting points
It is possible to have different samples start from different inputs (i.e. some from bam, others from fastq, others from SRA). It is best to provide only fq1
+fq2
, bam
, or sra
for each sample to be clear where each sample starts. If multiple are provided for the same sample, the bam file will override fastq or SRA entries, and the fastq will override SRA entries. Note that this means it is not currently possible to have multiple starting points for the same sample (i.e. FASTQ reads that would be processed then merged into an existing BAM).
config.yaml
contains the configuration for the workflow; this is where you specify which analyses, filters, and options you want. Below I describe the configuration options. The config.yaml
in this repository serves as a template, but includes some 'default' parameters that may be good starting points for some users. If --configfile
is not specified in the snakemake command, the workflow will default to config/config.yaml
.
Required configuration of the 'dataset'.
samples:
An absolute or relative path from the working directory to the samples.tsv
file.units:
An absolute or relative path from the working directory to the units.tsv
file.dataset:
A name for this dataset run - essentially, an identifier for a batch of samples to be analysed together with the same configuration.Here, dataset means a set of samples and configurations that the workflow will be run with. Each dataset should have its own samples.tsv
and config.yaml
, but the same units.tsv
can be used for multiple if you prefer. Essentially, the dataset identifier keeps your outputs organized into projects, so that the same BAM files can be used in multiple datasets without having to be remade.
So, say you have dataset1_samples.tsv
and dataset2_samples.tsv
, with corresponding dataset1_config.yaml
and dataset2_config.yaml
. The sample files contain different samples, though some are shared between the datasets. The workflow for dataset1 can be run, and then dataset2 can be run. When dataset2 runs, it will map new samples, but won't re-map samples processed in dataset1. Each will perform downstream analyses independently with its own sample set and configuration file, storing these results in dataset-specific folders.
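As a minimal sketch, using the dataset1 naming from the example above and placeholder paths (adjust these to your own files), this part of config.yaml might look like:
samples: config/dataset1_samples.tsv\nunits: config/units.tsv\ndataset: dataset1\n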
Required configuration of the reference.
chunk_size:
A size in bp (integer). Your reference will be analyzed in 'chunks' of contigs of this size to parallelize processing. This size should be larger than the largest contig in your genome. A larger number means fewer jobs that run longer. A smaller number means more jobs that run shorter. The best fit will depend on the reference and the compute resources you have available. Leaving this blank will not divide the reference up into chunks (but this isn't optimized yet, so it will do a couple unnecessary steps).
reference:
name:
A name for your reference genome, will go in the file names.fasta:
A path to the reference fasta file (currently only supports uncompressed fasta files)mito:
Mitochondrial contig name(s), will be removed from analysis. Should be listed within brackets []sex-linked:
Sex-linked contig name(s), will be removed from analysis. Should be listed within brackets []exclude:
Additional contig name(s) to exclude from analysis. Should be listed within brackets []min_size:
A size in bp (integer). All contigs below this size will be excluded from analysis.
ancestral:
A path to a fasta file containing the ancestral states in your reference genome. This is optional, and is used to polarize allele frequencies in SAF files to ancestral/derived. If you leave this empty, the reference genome itself will be used as ancestral, and you should be sure the [params
] [realSFS
] [fold
] is set to 1
. If you put a fasta here, you can set that to 0
.
Reference genomes should be uncompressed, and contig names should be clear and concise. Currently, there are some issues parsing contig names with underscores, so please change these in your reference before running the pipeline. Alphanumeric characters, as well as .
in contig names have been tested to work so far; other symbols have not been tested.
Potentially the ability to use bgzipped genomes will be added; I just need to check that it works with all underlying tools. Currently, it will definitely not work, as calculating chunks is hard-coded to work on an uncompressed genome.
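Putting the reference options above together, a sketch might look like the following (the contig names, sizes, and paths are placeholders, and the exact nesting, including where ancestral sits, should be checked against the template config.yaml):
chunk_size: 30000000\nreference:\n  name: ref1\n  fasta: resources/ref1.fa\n  mito: [chrM]\n  sex-linked: [chrZ]\n  exclude: []\n  min_size: 10000\nancestral: resources/ancestral_states.fa\n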
"},{"location":"config/#sample-set-configuration","title":"Sample Set Configuration","text":"exclude_ind:
Sample name(s) that will be excluded from the workflow. Should be a list in []. Putting a #
in front of the sample in the sample list also works. Mainly used to drop samples with poor quality after initial processing.excl_pca-admix:
Sample name(s) that will be excluded only from PCA and Admixture analyses. Useful for close relatives that violate the assumptions of these analyses, but that you want in others. Should be a list in []. If you want relatives out of all downstream analyses, not just PCA/Admix, put them in exclude_ind
instead. Note this will trigger a re-run for relatedness analyses, but you can just disable them now as they've already been run.Here, you will define which analyses you will perform. It is useful to start with only a few, and add more in subsequent workflow runs, just to ensure you catch errors before you use compute time running all analyses. Most are set with (true
/false
) or a value, described below. Modifications to the settings for each analysis are set in the next section.
populations:
A list of populations found in your sample list to limit population analyses to. Might be useful if you want to perform individual analyses on some samples but not include them in any population level analyses. Leave blank ([]
) if you want population level analyses on all the populations defined in your samples.tsv
file.
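For example, using the sample table above, these sample set options might be filled in as follows (hist2 is just an illustrative exclusion; leave the lists empty to include everything, and check the template config.yaml for exact placement):
exclude_ind: []\nexcl_pca-admix: [hist2]\npopulations: []\n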
analyses:
mapping:
historical_only_collapsed:
Historical samples are expected to have fragmented DNA. For this reason, overlapping (i.e. shorter, usually <270bp) read pairs are collapsed in this workflow for historical samples. Setting this option to true
will map only these collapsed reads, and is recommended to primarily target endogenous content. However, in the event you want to map both the collapsed and uncollapsed reads, you can set this to false
. (true
/false
)historical_collapsed_aligner:
Aligner used to map collapsed historical sample reads. aln
is recommended for this, but this option is here in case you would like to select mem
for this. Uncollapsed historical reads will be mapped with mem
if historical_only_collapsed
is set to false
, regardless of what is put here. (aln
/mem
)pileup-mappability:
Filter out sites with low 'pileup mappability', which describes how uniquely fragments of a given size can map to the reference (true
/false
)repeatmasker:
(NOTE: Only one of the four options should be filled/true)bed:
Supply a path to a bed file that contains regions with repeats. This is for those who want to filter out repetitive content, but don't need to run Repeatmodeler or masker in the workflow because it has already been done for the genome you're using. Be sure the contig names in the bed file match those in the reference supplied. GFF or other filetypes that work with bedtools subtract
may also work, but haven't been tested.local_lib:
Filter repeats by masking with an already made library you have locally (such as ones downloaded for Darwin Tree of Life genomes). Should be file path, not a URL.dfam_lib:
Filter repeats using a library available from dfam. Should be a taxonomic group name.build_lib:
Use RepeatModeler to build a library of repeats from the reference itself, then filter them from analysis (true
/false
).extreme_depth:
Filter out sites with extremely high or low global sequencing depth. Set the parameters for this filtering in the params
section of the yaml. (true
/false
)dataset_missing_data:
A floating point value between 0 and 1. Sites with data for fewer than this proportion of individuals across the whole dataset will be filtered out in all analyses using the filtered sites file. (This is only needed if you need to ensure all your populations are using exactly the same sites, which I find may result in coverage biases in results, especially heterozygosity. Unless you explicitly need to ensure all groups and analyses use the same sites, I would leave this blank, instead using the [params
][angsd
][minind_pop
] to set a minimum individual threshold for each analysis, allowing analyses to maximize sites per group/sample. This is how most papers do it.)population_missing_data:
A floating point value between 0 and 1. Sites with data for fewer than this proportion of individuals in any population will be filtered out in all populations using the filtered sites file. (This is only needed if you need to ensure all your populations are using exactly the same sites, which I find may result in coverage biases in results, especially heterozygosity. Unless you explicitly need to ensure all groups and analyses use the same sites, I would leave this blank, instead using the [params
][angsd
][minind_pop
] to set a minimum individual threshold for each analysis, allowing analyses to maximize sites per group/sample. This is how most papers do it.)qualimap:
Perform Qualimap bamqc on bam files for general quality stats (true
/false
)damageprofiler:
Estimate post-mortem DNA damage on historical samples with Damageprofiler (true
/false
) NOTE: This just adds Damageprofiler output alongside the MapDamage output that is produced by default.mapdamage_rescale:
Rescale base quality scores using MapDamage2 to help account for post-mortem damage in analyses (if you only want to assess damage, use damageprofiler instead; they return the same results) (true
/false
) docsestimate_ld:
Estimate pairwise linkage disequilibrium between sites with ngsLD for each population and the whole dataset. Note: only set this if you want to generate the LD estimates for use in downstream analyses outside this workflow. Other analyses within this workflow that require LD estimates (LD decay/pruning) will function properly regardless of the setting here. (true
/false
)ld_decay:
Use ngsLD to plot LD decay curves for each population and for the dataset as a whole (true
/false
)pca_pcangsd:
Perform Principal Component Analysis with PCAngsd. Currently requires at least 4 samples to finish, as it will by default try to plot PCs 1-4. (true
/false
)admix_ngsadmix:
Perform admixture analysis with NGSadmix (true
/false
)relatedness:
Can be performed multiple ways; set any combination of the three options below. Note that I've mostly incorporated these with the R0/R1/KING kinship methods in Waples et al. 2019, Mol. Ecol. in mind. These options differ slightly in how they implement this method, and will give slightly more/less accurate estimates of kinship depending on your reference's relationship to your samples. ibsrelate_ibs
uses the probabilities of all possible genotypes, so should be the most accurate regardless, but can use a lot of memory and take a long time with many samples. ibsrelate_sfs
is a bit more efficient, as it does things in a pairwise fashion in parallel, but may be biased if the segregating alleles in your populations are not represented in the reference. ngsrelate
uses several methods, one of which is similar to ibsrelate_sfs
, but may be less accurate due to incorporating less data. In my experience, NGSrelate is suitable to identify up to third degree relatives in the dataset, but only if some uncertainty in the exact relationship is acceptable (i.e. you don't need to know the difference between, say, parent/offspring and full sibling pairs, or between second degree and third degree relatives). IBSrelate_sfs can get you greater accuracy, but may erroneously inflate kinship if your dataset has many alleles not represented in your reference. If you notice, for instance, a large number of third degree relatives (KING ~0.03 - 0.07) in your dataset that is not expected, it may be worth trying the IBS based method (ibsrelate_ibs
).ngsrelate:
Co-estimate inbreeding and pairwise relatedness with NGSrelate (true
/false
)ibsrelate_ibs:
Estimate pairwise relatedness with the IBS based method from Waples et al. 2019, Mol. Ecol.. This can use a lot of memory, as it has genotype likelihoods for all sites from all samples loaded into memory, so it is done per 'chunk', which still takes a lot of time and memory. (true
/false
)ibsrelate_sfs:
Estimate pairwise relatedness with the SFS based method from Waples et al. 2019, Mol. Ecol.. Enabling this can greatly increase the time needed to build the workflow DAG if you have many samples. As a form of this method is implemented in NGSrelate, it may be more efficient to only enable that. (true
/false
)1dsfs:
Generates a one dimensional site frequency spectrum for all populations in the sample list. Automatically enabled if thetas_angsd
is enabled. (true
/false
)1dsfs_boot:
Generates N bootstrap replicates of the 1D site frequency spectrum for each population. N is determined from the sfsboot
setting below (true
/false
)2dsfs:
Generates a two dimensional site frequency spectrum for all unique populations pairings in the sample list. Automatically enabled if fst_angsd
is enabled. (true
/false
)2dsfs_boot:
Generates N bootstrap replicates of the 2D site frequency spectrum for each population pair. N is determined from the sfsboot
setting below (true
/false
)thetas_angsd:
Estimate pi, theta, and Tajima's D for each population in windows across the genome using ANGSD (true
/false
)heterozygosity_angsd:
Estimate individual genome-wide heterozygosity using ANGSD. Calculates confidence intervals from bootstraps. (true
/false
)fst_angsd:
Estimate pairwise $F_{ST}$ using ANGSD. Set one or both of the below options. Estimates both globally and in windows across the genome.populations:
Pairwise $F_{ST}$ is calculated between all possible population pairs (true
/false
)individuals:
Pairwise $F_{ST}$ is calculated between all possible pairs of individuals. NOTE: This can be really intensive on the DAG building process, so I don't recommend enabling unless you're certain you want this (true
/false
)inbreeding_ngsf-hmm:
Estimates inbreeding coefficients and runs of homozygosity using ngsF-HMM. Output is converted into an inbreeding measure $F_{ROH}$, which describes the proportion of the genome in runs of homozygosity over a certain length. (true
/false
)ibs_matrix:
Estimate pairwise identity by state distance between all samples using ANGSD. (true
/false
)As this workflow is aimed at low coverage samples, it's likely there will be considerable variance in sample depth. For this reason, it may be good to subsample all your samples to a similar depth to examine whether variation in depth is influencing results. To do this, set an integer value here to subsample all your samples down to and run specific analyses. This subsampling can be done in reference to the unfiltered sequencing depth, the mapping and base quality filtered sequencing depth, or the filtered sites sequencing depth. The latter is recommended, as this will ensure that sequencing depth is made uniform at the analysis stage, as it is these filtered sites that analyses are performed for.
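To illustrate, here is a trimmed sketch combining a few of the analysis toggles above with the subsampling keys described below (the values are placeholders and only a subset of options is shown; the exact set of keys and their nesting should follow the template config.yaml):
analyses:\n  qualimap: true\n  estimate_ld: false\n  pca_pcangsd: true\n  admix_ngsadmix: true\n  thetas_angsd: true\n  heterozygosity_angsd: true\nsubsample_dp: 4\nsubsample_by: sitefilt\nredo_depth_filts: false\ndrop_samples: []\nsubsample_analyses:\n  heterozygosity_angsd: true\n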
subsample_dp:
A mean depth to subsample your reads to. This will be done per sample, subsampling from all of that sample's reads. If a sample already has the same, or lower, depth than this number, it will just be used as is in the analysis. (INT)subsample_by:
This determines how the 'full' sequencing depth of a sample is calculated to determine the amount of subsampling needed to reach the target depth. This should be one of three options: (1) \"unfilt\"
will treat the sequencing depth of all (unfiltered) reads and sites as the 'full' depth; (2) \"mapqbaseq\"
will filter out reads that don't pass the configured mapping or base quality, then calculate depth across all sites as the 'full' depth; (3) \"sitefilt\"
will filter reads just as \"mapqbaseq\"
does, but will only calculate the 'full' depth from sites passing the sites filter. As the main goal of subsampling is to make depth uniform for analyses, this latter option is preferred, as it will most accurately bring the depth of the samples to the target depth for analyses. (\"unfilt\"
/\"mapqbaseq\"
/\"sitefilt\"
)redo_depth_filts
: If subsample_by
is set to \"unfilt\"
or \"mapqbaseq\"
, then it is possible to recalculate extreme depth filters for the subsampled dataset. Enable this to do so; otherwise, the depth filters from the full depth bams will be used. If subsample_by
is set to \"sitefilt\"
this will have no effect, as the subsampling is already in reference to a set site list. (true
/false
)drop_samples
: When performing depth subsampling, you may want to leave some samples out that you kept in your 'full' dataset. These can be listed here and they will be removed from ALL depth subsampled analyses. A use case for this might be if you have a couple samples that are below your targeted subsample depth, and you don't want to include them. (list of strings: []
)subsample_analyses:
Individually enable analyses to be performed with the subsampled data. These are the same as the ones above in the analyses section. Enabling an analysis here will only run it for the subsampled data; if you want to run it for the full data as well, you need to enable it in the analyses section too. (true
/false
)By default, this workflow will perform all analyses requested in the above section on all sites that pass the filters set in the above section. These outputs will contain allsites-filts
in the filename and in the report. However, many times, it is useful to perform an analysis on different subsets of sites, for instance, to compare results for genic vs. intergenic regions, neutral sites, exons vs. introns, etc. Here, users can set an arbitrary number of additional filters using BED files. For each BED file supplied, the contents will be intersected with the sites passing the filters set in the above section, and all analyses will be performed additionally using those sites.
For instance, given a BED file containing putatively neutral sites, one could set the following:
filter_beds:\n neutral-sites: \"resources/neutral_sites.bed\"\n
In this case, for each requested analysis, in addition to the allsites-filts
output, a neutral-filts
(named after the key assigned to the BED file in config.yaml
) output will also be generated, containing the results for sites within the specified BED file that passed any set filters.
More than one BED file can be set, up to an arbitrary number:
filter_beds:\n neutral: \"resources/neutral_sites.bed\"\n intergenic: \"resources/intergenic_sites.bed\"\n introns: \"resources/introns.bed\"\n
It may also sometimes be desirable to skip analyses on allsites-filts
, say if you are trying to only generate diversity estimates or generate SFS for a set of neutral sites you supply.
To skip running any analyses for allsites-filts
and only perform them for the BED files you supply, you can set only_filter_beds: true
in the config file. This may also be useful in the event you have a set of already filtered sites, and want to run the workflow on those, ignoring any of the built in filter options by setting them to false
.
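For example, to skip allsites-filts entirely and run only on a supplied set of neutral sites (using the path from the example above), the two options might be combined like this:
filter_beds:\n  neutral: \"resources/neutral_sites.bed\"\nonly_filter_beds: true\n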
These are software specific settings that can be user configured in the workflow. If you are missing a configurable setting you need, open up an issue or a pull request and I'll gladly put it in.
mapQ:
Phred-scaled mapping quality filter. Reads below this threshold will be filtered out. (integer)baseQ:
Phred-scaled base quality filter. Bases below this threshold will be filtered out. (integer)
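As a sketch, assuming these thresholds sit where the template config.yaml places them (the values 30 and 20 are purely illustrative):
mapQ: 30\nbaseQ: 20\n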
params:
clipoverlap:
clip_user_provided_bams:
Determines whether overlapping read pairs will be clipped in BAM files supplied by users. This is useful as many variant callers will account for overlapping reads in their processing, but ANGSD will double count overlapping reads. If BAMs were prepped without this in mind, it can be good to apply before running through ANGSD. However, it essentially creates a BAM file of nearly equal size for every sample, so it may be nice to turn off if you don't care for this correction or have already applied it on the BAMs you supply. (true
/false
)genmap:
Parameters for pileup mappability analysis, see GenMap's documentation for more details.K:
E:
map_thresh:
A threshold mappability score. Each site gets an average mappability score taken by averaging the mappability of all K-mers that would overlap it. A score of 1 means all K-mers are uniquely mappable, allowing for e
mismatches. This is done via a custom script, and may eventually be replaced by the SNPable method, which is more common. (integer/float, 0-1)extreme_depth_filt:
Parameters for excluding sites based on extremely high and/or low global depth. The final sites list will contain only sites that pass the filters for all categories requested (i.e. the whole dataset and/or the depth categories set in samples.tsv).method:
Whether you will generate extreme thresholds as a multiple of the median global depth (\"median\"
) or as percentiles of the global depth distribution (\"percentile\"
)bounds:
The bounds of the depth cutoff, defined as a numeric list. For the median method, the values will be multiplied by the median of the distribution to set the thresholds (i.e. [0.5,1.5]
would generate a lower threshold at 0.5*median and an upper at 1.5*median). For the percentile method, these define the lower and upper percentiles to filter out (i.e. [0.01,0.99] would remove the lower and upper 1% of the depth distributions). ([FLOAT, FLOAT]
)filt-on-dataset:
Whether to perform this filter on the dataset as a whole (you may want to set this to false if your dataset's global depth distribution is multi-modal). (true
/false
)filt-on-depth-classes:
Whether to perform this filter on the depth classes defined in the samples.tsv file. This will generate a global depth distribution for samples in the same category, and perform the filtering on these distributions independently. Then, the sites that pass for all the classes will be included. (true
/false
)fastp:
extra:
Additional options to pass to fastp trimming. (string)min_overlap_hist:
Minimum overlap to collapse historical reads. Default in fastp is 30. This effectively overrides the --length_required
option if it is larger than that. (INT)bwa_aln:
extra:
Additional options to pass to bwa aln for mapping of historical sample reads. (string)samtools:
subsampling_seed:
Seed to use when subsampling bams to lower depth. \"$RANDOM\"
can be used to set a random seed, or any integer can be used to set a consistent seed. (string or int)picard:
MarkDuplicates:
Additional options to pass to Picard MarkDuplicates. --REMOVE_DUPLICATES true
is recommended. (string)angsd:
General options in ANGSD, relevant doc pages are linkedgl_model:
Genotype likelihood model to use in calculation (-GL
option in ANGSD, docs)maxdepth:
When calculating individual depth, sites with depth higher than this will be binned to this value. Should be fine for most to leave at 1000
. (integer, docs)mindepthind:
Individuals with sequencing depth below this value at a position will be treated as having no data at that position by ANGSD. ANGSD defaults to 1 for this. Note that this can be separately set for individual heterozygosity estimates with mindepthind_heterozygosity
below. (integer, -setMinDepthInd
option in ANGSD) (INT)minind_dataset:
Used to fill the -minInd
option for any dataset wide ANGSD outputs (like Beagles for PCA/Admix). Should be a floating point value between 0 and 1 describing what proportion of the dataset must have data at a site to include it in the output. (FLOAT)minind_pop:
Used to fill the -minInd
option for any population level ANGSD outputs (like SAFs or Beagles for ngsF-HMM). Should be a floating point value between 0 and 1 describing what proportion of the population must have data at a site to include it in the output. (FLOAT)rmtrans:
Removes transitions using ANGSD, effectively excluding them from downstream analyses. This is useful for limiting the impact of DNA damage on analyses, and will automatically set the appropriate ANGSD flags (i.e. using -noTrans 1
for SAF files and -rmTrans 1
for Beagle files.) (true
/false
)extra:
Additional options to pass to ANGSD during genotype likelihood calculation at all times. This is primarily useful for adding BAM input filters. Note that -remove_bads
and -only_proper_pairs
are enabled by default, so they only need to be included if you want to turn them off or explicitly ensure they are enabled. I've also found that for some datasets, -C 50
and -baq 1
can create a strong relationship between sample depth and detected diversity, effectively removing the benefits of ANGSD for low/variable depth data. I recommend that these aren't included unless you know you need them. Since the workflow uses bwa to map, -uniqueOnly 1
doesn't do anything if your minimum mapping quality is > 0. Mapping and base quality thresholds are also not needed; the workflow will use the ones defined above automatically. If you prefer to correct for historical damage by trimming the ends of reads, this is where you'd want to put -trim INT
. (string, docs)extra_saf:
Same as extra
, but only used when making SAF files (used for SFS, thetas, Fst, IBSrelate, heterozygosity; includes invariable sites). Doesn't require options already in extra
or defined via other params in the YAML (such as notrans
, minind
, GL
, etc.) (string)extra_beagle:
Same as extra
, but only used when making Beagle and Maf files (used for PCA, Admix, ngsF-HMM, doIBS, ngsrelate; includes only variable sites). Doesn't require options already in extra
or defined via other params in the YAML (such as rmtrans
, minind
, GL
, etc.) (string)snp_pval:
The p-value to use for calling SNPs (float or string, docs)domajorminor:
Method for inferring the major and minor alleles. Set to 1 to infer from the genotype likelihoods, see documentation for other options. 1
, 2
, and 4
can be set without any additional configuration. 5
must also have an ancestral reference provided in the config, otherwise it will be the same as 4
. 3
is currently not possible, but please open an issue if you have a use case; I'd like to add it, but would need some input on how it is used. (int)domaf:
Method for inferring minor allele frequencies. Set to 1
to infer from genotype likelihoods using a known major and minor from the domajorminor
setting above. See docs for other options. I have not tested much beyond 1
and 8
, please open an issue if you have problems. (int)min_maf:
The minimum minor allele frequency required to call a SNP. This is set when generating the beagle file, so will filter SNPs for PCAngsd, NGSadmix, ngsF-HMM, and NGSrelate. If you would like each tool to handle filtering for maf on its own you can set this to -1
(disabled). (float, docs)mindepthind_heterozygosity:
When estimating individual heterozygosity, sites with sequencing depth lower than this value will be dropped. (integer, -setMinDepthInd
option in ANGSD) (int)ngsld:
Settings for ngsLD (docs)max_kb_dist_est-ld:
For the LD estimates generated when setting estimate_ld: true
above, set the maximum distance between sites in kb that LD will be estimated for (--max_kb_dist
in ngsLD, integer)rnd_sample_est-ld:
For the LD estimates generated when setting estimate_ld: true
above, randomly sample this proportion of pairwise linkage estimates rather than estimating all (--rnd_sample
in ngsLD, float)max_kb_dist_decay:
The same as max_kb_dist_est-ld:
, but used when estimating LD decay when setting ld_decay: true
above (integer)rnd_sample_decay:
The same as rnd_sample_est-ld:
, but used when estimating LD decay when setting ld_decay: true
above (float)fit_LDdecay_extra:
Additional plotting arguments to pass to fit_LDdecay.R
when estimating LD decay (string)fit_LDdecay_n_correction:
When estimating LD decay, should the sample size corrected r^2 model be used? (true
/false
, true
is the equivalent of passing a sample size to fit_LDdecay.R
in ngsLD using --n_ind
)max_kb_dist_pruning_dataset:
The same as max_kb_dist_est-ld:
, but used when linkage pruning SNPs as inputs for PCAngsd, NGSadmix, and NGSrelate analyses. Pruning is performed on the whole dataset. Any positions above this distance will be assumed to be in linkage equilibrium during the pruning process. (integer)pruning_min-weight_dataset:
The minimum r^2 to assume two positions are in linkage disequilibrium when pruning for PCAngsd, NGSadmix, and NGSrelate analyses. (float)ngsf-hmm:
Settings for ngsF-HMMestimate_in_pops:
Set to true
to run ngsF-HMM separately for each population in your dataset. Set to false
to run for whole dataset at once. ngsF-HMM assumes Hardy-Weinberg Equilibrium (aside from inbreeding) in the input data, so select the option that most reflects this in your data. (true
/false
)prune:
Whether or not to prune SNPs for LD before running the analysis. ngsF-HMM assumes independent sites, so it is preferred to set this to true
to satisfy that expectation. (true
/false
)max_kb_dist_pruning_pop:
The maximum distance between sites in kb that will be treated as in LD when pruning for the ngsF-HMM input. (INT)pruning_min-weight_pop:
The minimum r^2 to assume two positions are in linkage disequilibrium when pruning for the ngsF-HMM input. Note that this will likely be substantially higher for individual populations than for the whole dataset, as background LD should be higher when no substructure is present. (float)min_roh_length:
Minimum ROH size in base pairs to include in inbreeding coefficient calculation. Set if short ROH might be considered low confidence for your data. (integer)roh_bins:
A list of integers that describe the size classes in base pairs you would like to partition the inbreeding coefficient by. This can help visualize how much of the coefficient comes from ROH of certain size classes (and thus, ages). List should be in ascending order and the first entry should be greater than min_roh_length
. The first bin will group ROH between min_roh_length
and the first entry, subsequent bins will group ROH with sizes between adjacent entries in the list, and the final bin will group all ROH larger than the final entry in the list. (list)realSFS:
Settings for realSFSfold:
Whether or not to fold the produced SFS. Set to 1 if you have not provided an ancestral-state reference (0 or 1, docs)sfsboot:
Determines the number of bootstrap replicates to use when requesting bootstrapped SFS. This is used for both 1dsfs and 2dsfs (this is very easy to separate; open an issue if desired). Automatically used for heterozygosity analysis to calculate confidence intervals. (integer)fst:
Settings for $F_{ST}$ calculation in ANGSDwhichFst:
Determines which $F_{ST}$ estimator is used by ANGSD: 0 is the default Reynolds 1983 estimator and 1 is the Bhatia 2013 estimator. The latter is preferable for small or uneven sample sizes (0 or 1, docs)win_size:
Window size in bp for sliding window analysis (integer)win_step:
Window step size in bp for sliding window analysis (integer)thetas:
Settings for pi, theta, and Tajima's D estimationwin_size:
Window size in bp for sliding window analysis (integer)win_step:
Window step size in bp for sliding window analysis (integer)minsites:
Minimum number of sites for a window to be included in the report plot. This does not remove windows from the actual output, just from the report plot.ngsadmix:
Settings for admixture analysis with NGSadmix. This analysis is performed for a set of K groupings, and each K has several replicates performed. Replicates will continue until a set of N highest likelihood replicates converge, or the number of replicates reaches an upper threshold set here. Defaults for reps
, minreps
, thresh
, and conv
can be left as default for most.kvalues:
A list of values of K to fit the data to (list of integers)reps:
The maximum number of replicates to perform per K. Default is 100. (integer)minreps:
The minimum number of replicates to perform, even if replicates have converged. Default is 20. (integer)thresh:
The convergence threshold - the top replicates must all be within this value of log-likelihood units to consider the run converged. Default is 2. (integer)conv:
The number of top replicates to include in convergence assessment. Default is 3. (integer)extra:
Additional arguments to pass to NGSadmix (for instance, increasing -maxiter
). (string, docs)ibs:
Settings for identity by state calculation with ANGSD-doIBS:
Whether to use a random (1) or consensus (2) base in IBS distance calculation (docs)Note
A tutorial is in progress, but not yet available. The pipeline can still be used by following the rest of the guide.
A tutorial is available with a small(ish) dataset where biologically meaningful results can be produced. This can help get an understanding of a good workflow to use different modules. You can also follow along with your own data and just skip analyses you don't want. If you prefer to just jump in instead, below describes how to quickly get a new project up and running.
"},{"location":"getting-started/#requirements","title":"Requirements","text":"This pipeline can be run on Linux systems with Conda and Apptainer/Singularity installed. All other dependencies will be handled with the workflow, and thus, sufficient storage space is needed for these installations (~10GB, but this needs verification). It can be run on a local workstation with sufficient resources and storage space (dataset dependent), but is aimed at execution on high performance computing systems with job queuing systems.
Data-wise, you'll need a reference genome (uncompressed) and some sequencing data for your samples. The latter can be either raw fastq files, bam alignments to the reference, or accession numbers for already published fastq files.
"},{"location":"getting-started/#deploying-the-workflow","title":"Deploying the workflow","text":"The pipeline can be deployed in two ways: (1) using Snakedeploy which will deploy the pipeline as a module (recommended); (2) clone the repository at the version/branch you prefer (recommended if you will change any workflow code).
Both methods require a Snakemake environment to run the pipeline in.
"},{"location":"getting-started/#preparing-the-environment","title":"Preparing the environment","text":"First, create an environment for Snakemake, including Snakedeploy if you intend to deploy that way:
mamba create -c conda-forge -c bioconda --name snakemake snakemake snakedeploy\n
If you already have a Snakemake environment, you can use that, so long as it has snakemake
(not just snakemake-minimal
) installed. Snakemake versions >=7.25 are likely to work, but most testing is on 7.32.4. It is compatible with Snakemake v8, but you may need to install additional plugins for cluster execution due to the new executor plugin system. See the Snakemake docs for what additional executor plugin you might need to enable cluster execution for your system.
Activate the Snakemake environment:
conda activate snakemake\n
"},{"location":"getting-started/#deploying-with-snakedeploy","title":"Deploying with Snakedeploy","text":"Make your working directory:
mkdir -p /path/to/work-dir\ncd /path/to/work-dir\n
And deploy the workflow, using the tag for the version you want to deploy:
snakedeploy deploy-workflow https://github.com/zjnolen/PopGLen . --tag v0.2.0\n
This will generate a simple Snakefile in a workflow
folder that loads the pipeline as a module. It will also download the template config.yaml
, samples.tsv
, and units.tsv
in the config
folder.
Go to the folder you would like your working directory to be created in and clone the GitHub repo:
git clone https://github.com/zjnolen/PopGLen.git\n
If you would like, you can change the name of the directory:
mv PopGLen work-dir-name\n
Move into the working directory (PopGLen
or work-dir-name
if you changed it) and check out the version you would like to use:
git checkout v0.2.0\n
This can also be used to checkout specific branches or commits.
"},{"location":"getting-started/#configuring-the-workflow","title":"Configuring the workflow","text":"Now you are ready to configure the workflow, see the documentation for that here.
"},{"location":"high-memory-rules/","title":"Rules using large amounts of RAM","text":"NOTE: This is a work in progress list. Trying to figure out what
The biggest challenge with using this pipeline with other datasets is ensuring RAM is properly allocated. Many rules require very little RAM, and so the default allocations that come on your cluster per thread will likely do fine. However, some rules require considerably more RAM. These are:
"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Welcome to the documentation for PopGLen","text":"PopGLen is aimed at enabling users to run population genomic analyses on their data within a genotype likelihood framework in an automated and reproducible fashion. Genotype likelihood based analyses avoid genotype calling, instead performing analyses on the likelihoods of each possible genotype, incorporating uncertainty about the true genotype into the analysis. This makes them especially suited for datasets with low coverage or that vary in coverage.
This pipeline was developed in large part to make my own analyses easier. I work with many species being mapped to their own references within the same project. I developed this pipeline so that I could ensure standardized processing for datasets within the same project and to automate the many steps that go into performing these analyses. As it needed to fit many datasets, it is generalizable and customizable through a single configuration file and uses a common workflow utilized by ANGSD users, so it is available for others to use, should it suit their needs.
Questions? Feature requests? Just ask!I'm glad to answer questions on the GitHub Issues page for the project, as well as take suggestions for features or improvements!
"},{"location":"#pipeline-summary","title":"Pipeline Summary","text":"The pipeline aims to follow the general path many users will use when working with ANGSD and other GL based tools. Raw equencing data is processed into BAM files (with optional configuration for historical degraded samples) or BAM files are provided directly. From there several quality control reports are generated to help determine what samples are included. The pipeline then builds a 'sites' file to perform analyses with. This sites file is made from several user-configured filters, intersecting all and outputing a list of sites for the analyses to be performed on across all samples. This can also be extended by user-provided filter lists (e.g. to limit to neutral sites, genic regions, etc.).
After samples have been processed, quality control reports produced, and the sites file has been produced, the pipeline can continue to the analyses.
These all can be enabled and processed independently, and the pipeline will generate genotype likelihood input files using ANGSD and share them across analyses as appropriate, deleting temporary intermediate files when they are no longer needed.
At any point after a successful completion of a portion of the pipeline, a report can be made that contains tables and figures summarizing the results for the currently enabled parts of the pipeline.
If you're interested in using this, head to the Getting Started page!
"},{"location":"config/","title":"Configuring the workflow","text":"Running the workflow requires configuring three files: config.yaml
, samples.tsv
, and units.tsv
. config.yaml
is used to configure the analyses, samples.tsv
categorizes your samples into groups, and units.tsv
connects sample names to their input data files. The workflow will use config/config.yaml
automatically, but you can name this whatever you want (good for separating datasets in same working directory) and point to it when running snakemake with --configfile <path>
.
samples.tsv
","text":"This file contains your sample list, and has four tab separated columns:
sample\tpopulation\ttime\tdepth\nhist1\tHjelmseryd\thistorical\tlow\nhist2\tHjelmseryd\thistorical\tlow\nhist3\tHjelmseryd\thistorical\tlow\nmod1\tGotafors\tmodern\thigh\nmod2\tGotafors\tmodern\thigh\nmod3\tGotafors\tmodern\thigh\n
sample
contains the ID of a sample. It is best if it only contains alphanumeric characters.
population
contains the population the sample comes from and will be used to group samples for population-level analyses.
time
sets whether a sample should be treated as fresh DNA or historical DNA in the sequence processing workflow. Doesn't change anything if you're starting with bam files.
depth
puts the sample in a sequencing depth category. Used for filtering - if enabled in the configuration, extreme depth filters will be performed for depth categories individually.
units.tsv
","text":"This file connects your samples to input files and has a potential for eight tab separated columns:
sample\tunit\tlib\tplatform\tfq1\tfq2\tbam\tsra\nhist1\tBHVN22DSX2.2\thist1\tILLUMINA\tdata/fastq/hist1.r1.fastq.gz\tdata/fastq/hist1.r2.fastq.gz\t\nhist1\tBHVN22DSX2.3\thist1\tILLUMINA\tdata/fastq/hist1.unit2.r1.fastq.gz\tdata/fastq/hist1.unit2.r2.fastq.gz\t\nhist2\tBHVN22DSX2.2\thist2\tILLUMINA\tdata/fastq/hist2.r1.fastq.gz\tdata/fastq/hist2.r2.fastq.gz\t\nhist3\tBHVN22DSX2.2\thist2\tILLUMINA\tdata/fastq/hist3.r1.fastq.gz\tdata/fastq/hist3.r2.fastq.gz\t\nmod1\tAHW5NGDSX2.3\tmod1\tILLUMINA\tdata/fastq/mod1.r1.fastq.gz\tdata/fastq/mod1.r2.fastq.gz\t\nmod2\tAHW5NGDSX2.3\tmod2\tILLUMINA\t\t\tdata/bam/mod2.bam\nmod3\tAHW5NGDSX2.3\tmod3\tILLUMINA\tdata/fastq/mod3.r1.fastq.gz\tdata/fastq/mod3.r2.fastq.gz\t\nSAMN13218652\tSRR10398077\tSAMN13218652\tILLUMINA\t\t\t\tSRR10398077\n
sample
contains the ID of a sample. Must be same as in samples.tsv
and may be listed multiple times when inputting multiple sequencing runs/libraries.unit
contains the sequencing unit, i.e. the sequencing lane barcode and lane number. This is used in the PU and (part of) the ID read groups. If you don't have multiple sequencing lanes per samples, this won't impact anything. Doesn't do anything when using bam input.lib
contains the name of the library identifier for the entry. Fills in the LB and (part of) the ID read groups and is used for PCR duplicate removal. Best practice would be to have the combination of unit
and lib
to be unique per line. An easy way to use this is to use the Illumina library identifier or another unique library identifier, or simply combine a generic name with the sample name (sample1A, sample1B, etc.). Doesn't do anything when using bam input.platform
is used to fill the PL read group. Commonly is just 'ILLUMINA'. Doesn't do anything when using bam input.fq1
and fq2
provides the absolute or relative to the working directory paths to the raw sequencing files corresponding to the metadata in the previous columns.bam
provides the absolute or relative to the working directory path of pre-processed bam files. Only one bam files should be provided per sample in the units file.sra
provides the NCBI SRA accession number for a set of paired end fastq files that will be downloaded to be processed. If a sample has multiple runs you would like to include, each run should be its own line in the units sheet, just as separate sequencing runs would be.Mixing samples with different starting points
It is possible to have different samples start from different inputs (i.e. some from bam, others from fastq, others from SRA). It is best to provide only fq1
+fq2
, bam
, or sra
for each sample to be clear where each sample starts. If multiple are provided for the same sample, the bam file will override fastq or SRA entries, and the fastq will override SRA entries. Note that this means it is not currently possible to have multiple starting points for the same sample (i.e. FASTQ reads that would be processed then merged into an existing BAM).
config.yaml
contains the configuration for the workflow, this is where you will put what analyses, filters, and options you want. Below I describe the configuration options. The config.yaml
in this repository serves as a template, but includes some 'default' parameters that may be good starting points for some users. If --configfile
is not specified in the snakemake command, the workflow will default to config/config.yaml
.
Required configuration of the 'dataset'.
samples:
An absolute or relative path from the working directory to the samples.tsv
file.units:
An absolute or relative path from the working directory to the units.tsv
file.dataset:
A name for this dataset run - essentially, an identifier for a batch of samples to be analysed together with the same configuration.Here, dataset means a set of samples and configurations that the workflow will be run with. Each dataset should have its own samples.tsv
and config.yaml
, but the same units.tsv
can be used for multiple if you prefer. Essentially, what the dataset identifier does is keeps your outputs organized into projects, so that the same BAM files can be used in multiple datasets without having to be remade.
So, say you have dataset1_samples.tsv
and dataset2_samples.tsv
, with corresponding dataset1_config.tsv
and dataset2_config.yaml
. The sample files contain different samples, though some are shared between the datasets. The workflow for dataset1 can be run, and then dataset2 can be run. When dataset2 runs, it map new samples, but won't re-map samples processed in dataset1. Each will perform downstream analyses independently with their sample set and configuration files, storing these results in dataset specific folders.
Required configuration of the reference.
chunk_size:
A size in bp (integer). Your reference will be analyzed in 'chunks' of contigs of this size to parallelize processing. This size should be larger than the largest contig in your genome. A larger number means fewer jobs that run longer. A smaller number means more jobs that run shorter. The best fit will depend on the reference and the compute resources you have available. Leaving this blank will not divide the reference up into chunks (but this isn't optimized yet, so it will do a couple unnecessary steps).
reference:
name:
A name for your reference genome, will go in the file names.fasta:
A path to the reference fasta file (currently only supports uncompressed fasta files)mito:
Mitochondrial contig name(s), will be removed from analysis. Should be listed within brackets []sex-linked:
Sex-linked contig name(s), will be removed from analysis. Should be listed within brackets []exclude:
Additional contig name(s) to exclude from analysis. Should be listed within brackets []min_size:
A size in bp (integer). All contigs below this size will be excluded from analysis.
ancestral:
A path to a fasta file containing the ancestral states in your reference genome. This is optional, and is used to polarize allele frequencies in SAF files to ancestral/derived. If you leave this empty, the reference genome itself will be used as ancestral, and you should be sure the [params
] [realSFS
] [fold
] is set to 1
. If you put a fasta here, you can set that to 0
.
Reference genomes should be uncompressed, and contig names should be clear and concise. Currently, there are some issues parsing contig names with underscores, so please change these in your reference before running the pipeline. Alphanumeric characters, as well as .
in contig names have been tested to work so far, other symbols have not been tested.
Potentially the ability to use bgzipped genomes will be added, I just need to check that it works with all underlying tools. Currently, it will for sure not work, as calculating chunks is hard-coded to work on an uncompressed genome.
"},{"location":"config/#sample-set-configuration","title":"Sample Set Configuration","text":"exclude_ind:
Sample name(s) that will be excluded from the workflow. Should be a list in []. Putting a #
in front of the sample in the sample list also works. Mainly used to drop samples with poor quality after initial processing.excl_pca-admix:
Sample name(s) that will be excluded only from PCA and Admixture analyses. Useful for close relatives that violate the assumptions of these analyses, but that you want in others. Should be a list in []. If you want relatives out of all downstream analyses, not just PCA/Admix, put them in exclude_ind
instead. Note this will trigger a re-run for relatedness analyses, but you can just disable them now as they've already been run.Here, you will define which analyses you will perform. It is useful to start with only a few, and add more in subsequent workflow runs, just to ensure you catch errors before you use compute time running all analyses. Most are set with (true
/false
) or a value, described below. Modifications to the settings for each analysis are set in the next section.
populations:
A list of populations found in your sample list to limit population analyses to. Might be useful if you want to perform individual analyses on some samples but not include them in any population level analyses. Leave blank ([]
) if you want population level analyses on all the populations defined in your samples.tsv
file.
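Putting the sample set options together, a minimal sketch might look like the following (sample and population names are placeholders, shown at the top level of the config; check the template config.yaml for exact placement):
exclude_ind: [\"hist_sample_03\"]\nexcl_pca-admix: [\"mod_sample_12\"]\npopulations: []\n
Here one poor-quality sample is dropped from the whole workflow, one close relative is kept in most analyses but excluded from PCA/Admixture, and the empty populations list runs population-level analyses on every population defined in samples.tsv.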
analyses:
mapping:
historical_only_collapsed:
Historical samples are expected to have fragmented DNA. For this reason, overlapping (i.e. shorter, usually <270bp) read pairs are collapsed in this workflow for historical samples. Setting this option to true
will map only these collapsed reads, and is recommended to target primarily endogenous content. However, in the event you want to map both the collapsed and uncollapsed reads, you can set this to false
. (true
/false
)historical_collapsed_aligner:
Aligner used to map collapsed historical sample reads. aln
is recommended for this, but the option is here in case you would like to select mem
for this. Uncollapsed historical reads will be mapped with mem
if historical_only_collapsed
is set to false
, regardless of what is put here. (aln
/mem
)pileup-mappability:
Filter out sites with low 'pileup mappability', which describes how uniquely fragments of a given size can map to the reference (true
/false
)repeatmasker:
(NOTE: Only one of the four options should be filled/true)bed:
Supply a path to a bed file that contains regions with repeats. This is for those who want to filter out repetitive content, but don't need to run Repeatmodeler or masker in the workflow because it has already been done for the genome you're using. Be sure the contig names in the bed file match those in the reference supplied. GFF or other filetypes that work with bedtools subtract
may also work, but haven't been tested.local_lib:
Filter repeats by masking with an already made library you have locally (such as ones downloaded for Darwin Tree of Life genomes). Should be file path, not a URL.dfam_lib:
Filter repeats using a library available from dfam. Should be a taxonomic group name.build_lib:
Use RepeatModeler to build a library of repeats from the reference itself, then filter them from analysis (true
/false
).extreme_depth:
Filter out sites with extremely high or low global sequencing depth. Set the parameters for this filtering in the params
section of the yaml. (true
/false
)dataset_missing_data:
A floating point value between 0 and 1. Sites with data for fewer than this proportion of individuals across the whole dataset will be filtered out in all analyses using the filtered sites file. (This is only needed if you need to ensure all your populations are using exactly the same sites, which I find may result in coverage biases in results, especially heterozygosity. Unless you explicitly need to ensure all groups and analyses use the same sites, I would leave this blank, instead using the [params
][angsd
][minind_pop
] to set a minimum individual threshold for each analysis, allowing analyses to maximize sites per group/sample. This is how most papers do it.)population_missing_data:
A floating point value between 0 and 1. Sites with data for fewer than this proportion of individuals in any population will be filtered out in all populations using the filtered sites file. (This is only needed if you need to ensure all your populations are using exactly the same sites, which I find may result in coverage biases in results, especially heterozygosity. Unless you explicitly need to ensure all groups and analyses use the same sites, I would leave this blank, instead using the [params
][angsd
][minind_pop
] to set a minimum individual threshold for each analysis, allowing analyses to maximize sites per group/sample. This is how most papers do it.)qualimap:
Perform Qualimap bamqc on bam files for general quality stats (true
/false
)ibs_ref_bias:
Enable reference bias calculation. For each sample, one read is randomly sampled at each position and compared to the reference base. These are summarized as the proportion of the genome that is identical by state to the reference for each sample to quantify reference bias. This is done for all filter sets as well as for all sites without site filtering. If transition removal or other arguments are passed to ANGSD, they are included here. (true
/false
)damageprofiler:
Estimate post-mortem DNA damage on historical samples with Damageprofiler (true
/false
) NOTE: This just adds the addition of Damageprofiler to the already default output of MapDamage.mapdamage_rescale:
Rescale base quality scores using MapDamage2 to help account for post-mortem damage in analyses (if you only want to assess damage, use damageprofiler instead, they return the same results) (true
/false
) docsestimate_ld:
Estimate pairwise linkage disequilibrium between sites with ngsLD for each population and the whole dataset. Note: only set this if you want to generate the LD estimates for use in downstream analyses outside this workflow. Other analyses within this workflow that require LD estimates (LD decay/pruning) will function properly regardless of the setting here. (true
/false
)ld_decay:
Use ngsLD to plot LD decay curves for each population and for the dataset as a whole (true
/false
)pca_pcangsd:
Perform Principal Component Analysis with PCAngsd. Currently requires at least 4 samples to finish, as it will by default try to plot PCs1-4. (true
/false
)admix_ngsadmix:
Perform admixture analysis with NGSadmix (true
/false
)relatedness:
Can be performed multiple ways; set any combination of the three options below. Note that I've mostly incorporated these with the R0/R1/KING kinship methods in Waples et al. 2019, Mol. Ecol. in mind. The options differ slightly in how they implement this method, and will give slightly more or less accurate estimates of kinship depending on your reference's relationship to your samples. ibsrelate_ibs
uses the probabilities of all possible genotypes, so should be the most accurate regardless, but can use a lot of memory and take a long time with many samples. ibsrelate_sfs
is a bit more efficient, as it does things in a pairwise fashion in parallel, but may be biased if the segregating alleles in your populations are not represented in the reference. ngsrelate
uses several methods, one of which is similar to ibsrelate_sfs
, but may be less accurate due to incorporating less data. In my experience, NGSrelate is suitable for identifying up to third degree relatives in the dataset, but only if the exact relationship can remain somewhat uncertain (i.e. you don't need to know the difference between, say, parent/offspring and full sibling pairs, or between second and third degree relatives). IBSrelate_sfs can give you greater accuracy, but may erroneously inflate kinship if your dataset has many alleles not represented in your reference. If you notice, for instance, an unexpectedly large number of third degree relatives (KING ~0.03 - 0.07) in your dataset, it may be worth trying the IBS based method (ibsrelate_ibs
).ngsrelate:
Co-estimate inbreeding and pairwise relatedness with NGSrelate (true
/false
)ibsrelate_ibs:
Estimate pairwise relatedness with the IBS based method from Waples et al. 2019, Mol. Ecol.. This can use a lot of memory, as it has genotype likelihoods for all sites from all samples loaded into memory, so it is done per 'chunk', which still takes a lot of time and memory. (true
/false
)ibsrelate_sfs:
Estimate pairwise relatedness with the SFS based method from Waples et al. 2019, Mol. Ecol.. Enabling this can greatly increase the time needed to build the workflow DAG if you have many samples. As a form of this method is implemented in NGSrelate, it may be more efficient to only enable that. (true
/false
)1dsfs:
Generates a one dimensional site frequency spectrum for all populations in the sample list. Automatically enabled if thetas_angsd
is enabled. (true
/false
)1dsfs_boot:
Generates N bootstrap replicates of the 1D site frequency spectrum for each population. N is determined from the sfsboot
setting below (true
/false
)2dsfs:
Generates a two dimensional site frequency spectrum for all unique population pairings in the sample list. Automatically enabled if fst_angsd
is enabled. (true
/false
)2dsfs_boot:
Generates N bootstrap replicates of the 2D site frequency spectrum for each population pair. N is determined from the sfsboot
setting below (true
/false
)thetas_angsd:
Estimate pi, theta, and Tajima's D for each population in windows across the genome using ANGSD (true
/false
)heterozygosity_angsd:
Estimate individual genome-wide heterozygosity using ANGSD. Calculates confidence intervals from bootstraps. (true
/false
)fst_angsd:
Estimate pairwise $F_{ST}$ using ANGSD. Set one or both of the below options. Estimates both globally and in windows across the genome.populations:
Pairwise $F_{ST}$ is calculated between all possible population pairs (true
/false
)individuals:
Pairwise $F_{ST}$ is calculated between all possible pairs of individuals. NOTE: This can be really intensive on the DAG building process, so I don't recommend enabling it unless you're certain you want this (true
/false
)inbreeding_ngsf-hmm:
Estimates inbreeding coefficients and runs of homozygosity using ngsF-HMM. Output is converted into an inbreeding measure $F_{ROH}$, which describes the proportion of the genome in runs of homozygosity over a certain length. (true
/false
)ibs_matrix:
Estimate pairwise identity by state distance between all samples using ANGSD. (true
/false
)As this workflow is aimed at low coverage samples, it's likely there will be considerable variance in sample depth. For this reason, it may be good to subsample all your samples to a similar depth to examine whether variation in depth is influencing results. To do this, set an integer value here to subsample all your samples down to, and enable the specific analyses to run on the subsampled data. This subsampling can be done in reference to the unfiltered sequencing depth, the mapping and base quality filtered sequencing depth, or the filtered sites sequencing depth. The latter is recommended, as it will ensure that sequencing depth is made uniform at the analysis stage, since it is these filtered sites that analyses are performed on.
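As a sketch, a subsampling setup following the recommendation above might look like this (the target depth and dropped sample name are placeholders; the individual options are described below, and exact nesting should be checked against the template config.yaml):
subsample_dp: 4\nsubsample_by: \"sitefilt\"\nredo_depth_filts: false\ndrop_samples: [\"mod_sample_07\"]\nsubsample_analyses:\n heterozygosity_angsd: true\n thetas_angsd: false\n
With this, all samples would be subsampled to a mean depth of 4x relative to the filtered sites, and only individual heterozygosity would be re-estimated on the subsampled data.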
subsample_dp:
A mean depth to subsample your reads to. This will be done per sample, and subsample from all the reads. If a sample already has the same, or lower, depth than this number, it will just be used as is in the analysis. (INT)subsample_by:
This determines how the 'full' sequencing depth of a sample is calculated to determine the amount of subsampling needed to reach the target depth. This should be one of three options: (1) \"unfilt\"
will treat the sequencing depth of all (unfiltered) reads and sites as the 'full' depth; (2) \"mapqbaseq\"
will filter out reads that don't pass the configured mapping or base quality, then calculate depth across all sites as the 'full' depth, (3) \"sitefilt\"
will filter reads just as \"mapqbaseq\"
does, but will only calculate the 'full' depth from sites passing the sites filter. As the main goal of subsampling is to make depth uniform for analyses, this latter option is preferred, as it will most accurately bring the depth of the samples to the target depth for analyses. (\"unfilt\"
/\"mapqbaseq\"
/\"sitefilt\"
)redo_depth_filts
: If subsample_by
is set to \"unfilt\"
or \"mapqbaseq\"
, then it is possible to recalculate extreme depth filters for the subsampled dataset. Enable this to do so; otherwise, the depth filters from the full depth bams will be used. If subsample_by
is set to \"sitefilt\"
this will have no effect, as the subsampling is already in reference to a set site list. (true
/false
)drop_samples
: When performing depth subsampling, you may want to leave some samples out that you kept in your 'full' dataset. These can be listed here and they will be removed from ALL depth subsampled analyses. A use case for this might be if you have a couple samples that are below your targeted subsample depth, and you don't want to include them. (list of strings: []
)subsample_analyses:
Individually enable analyses to be performed with the subsampled data. These are the same as the ones above in the analyses section. Enabling here will only run the analysis for the subsampled data, if you want to run it for the full data as well, you need to enable it in the analyses section as well. (true
/false
)By default, this workflow will perform all analyses requested in the above section on all sites that pass the filters set in the above section. These outputs will contain allsites-filts
in the filename and in the report. However, many times, it is useful to perform an analysis on different subsets of sites, for instance, to compare results for genic vs. intergenic regions, neutral sites, exons vs. introns, etc. Here, users can set an arbitrary number of additional filters using BED files. For each BED file supplied, the contents will be intersected with the sites passing the filters set in the above section, and all analyses will be performed additionally using those sites.
For instance, given a BED file containing putatively neutral sites, one could set the following:
filter_beds:\n neutral-sites: \"resources/neutral_sites.bed\"\n
In this case, for each requested analysis, in addition to the allsites-filts
output, a neutral-filts
(named after the key assigned to the BED file in config.yaml
) output will also be generated, containing the results for sites within the specified BED file that passed any set filters.
More than one BED file can be set, up to an arbitrary number:
filter_beds:\n neutral: \"resources/neutral_sites.bed\"\n intergenic: \"resources/intergenic_sites.bed\"\n introns: \"resources/introns.bed\"\n
It may also sometimes be desirable to skip analyses on allsites-filts
, say if you are trying to only generate diversity estimates or generate SFS for a set of neutral sites you supply.
To skip running any analyses for allsites-filts
and only perform them for the BED files you supply, you can set only_filter_beds: true
in the config file. This may also be useful in the event you have a set of already filtered sites, and want to run the workflow on those, ignoring any of the built in filter options by setting them to false
.
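For example, to run analyses only on a supplied set of putatively neutral sites and skip the allsites-filts outputs entirely, the two options can be combined as follows (the BED path is a placeholder):
only_filter_beds: true\nfilter_beds:\n neutral: \"resources/neutral_sites.bed\"\n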
These are software specific settings that can be user configured in the workflow. If you are missing a configurable setting you need, open up an issue or a pull request and I'll gladly put it in.
mapQ:
Phred-scaled mapping quality filter. Reads below this threshold will be filtered out. (integer)baseQ:
Phred-scaled base quality filter. Bases below this threshold will be filtered out. (integer)
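As a hypothetical sketch, typical Phred thresholds might look like the following (30 and 20 are common choices, not defaults of this workflow; check the template config.yaml for where these keys sit):
mapQ: 30\nbaseQ: 20\n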
params:
clipoverlap:
clip_user_provided_bams:
Determines whether overlapping read pairs will be clipped in BAM files supplied by users. This is useful as many variant callers will account for overlapping reads in their processing, but ANGSD will double count overlapping reads. If BAMs were prepped without this in mind, it can be good to apply before running through ANGSD. However, it essentially creates a BAM file of nearly equal size for every sample, so it may be nice to turn off if you don't care for this correction or have already applied it on the BAMs you supply. (true
/false
)genmap:
Parameters for pileup mappability analysis, see GenMap's documentation for more details.K:
E:
map_thresh:
A threshold mappability score. Each site gets an average mappability score taken by averaging the mappability of all K-mers that would overlap it. A score of 1 means all K-mers are uniquely mappable, allowing for e
mismatches. This is done via a custom script, and may eventually be replaced by the SNPable method, which is more common. (integer/float, 0-1)extreme_depth_filt:
Parameters for excluding sites based on extremely high and/or low global depth. The final sites list will contain only sites that pass the filters for all categories requested (i.e. the whole dataset and/or the depth categories set in samples.tsv).method:
Whether you will generate extreme thresholds as a multiple of the median global depth (\"median\"
) or as percentiles of the global depth distribution (\"percentile\"
)bounds:
The bounds of the depth cutoff, defined as a numeric list. For the median method, the values will be multiplied by the median of the distribution to set the thresholds (i.e. [0.5,1.5]
would generate a lower threshold at 0.5*median and an upper at 1.5*median). For the percentile method, these define the lower and upper percentiles to filter out (i.e. [0.01,0.99] would remove the lower and upper 1% of the depth distribution). ([FLOAT, FLOAT]
)filt-on-dataset:
Whether to perform this filter on the dataset as a whole (may want to set to false if your dataset global depth distribution is multi-modal). (true
/false
)filt-on-depth-classes:
Whether to perform this filter on the depth classes defined in the samples.tsv file. This will generate a global depth distribution for samples in the same category, and perform the filtering on these distributions independently. Then, the sites that pass for all the classes will be included. (true
/false
)fastp:
extra:
Additional options to pass to fastp trimming. (string)min_overlap_hist:
Minimum overlap to collapse historical reads. Default in fastp is 30. This effectively overrides the --length_required
option if it is larger than that. (INT)bwa_aln:
extra:
Additional options to pass to bwa aln for mapping of historical sample reads. (string)samtools:
subsampling_seed:
Seed to use when subsampling bams to lower depth. \"$RANDOM\"
can be used to set a random seed, or any integer can be used to set a consistent seed. (string or int)picard:
MarkDuplicates:
Additional options to pass to Picard MarkDuplicates. --REMOVE_DUPLICATES true
is recommended. (string)angsd:
General options in ANGSD, relevant doc pages are linkedgl_model:
Genotype likelihood model to use in calculation (-GL
option in ANGSD, docs)maxdepth:
When calculating individual depth, sites with depth higher than this will be binned to this value. Should be fine for most to leave at 1000
. (integer, docs)mindepthind:
Individuals with sequencing depth below this value at a position will be treated as having no data at that position by ANGSD. ANGSD defaults to 1 for this. Note that this can be separately set for individual heterozygosity estimates with mindepthind_heterozygosity
below. (integer, -setMinDepthInd
option in ANGSD) (INT)minind_dataset:
Used to fill the -minInd
option for any dataset wide ANGSD outputs (like Beagles for PCA/Admix). Should be a floating point value between 0 and 1 describing what proportion of the dataset must have data at a site to include it in the output. (FLOAT)minind_pop:
Used to fill the -minInd
option for any population level ANGSD outputs (like SAFs or Beagles for ngsF-HMM). Should be a floating point value between 0 and 1 describing what proportion of the population must have data at a site to include it in the output. (FLOAT)rmtrans:
Removes transitions using ANGSD, effectively removing them from downstream analyses. This is useful for removing DNA damage from analyses, and will automatically set the appropriate ANGSD flags (i.e. using -noTrans 1
for SAF files and -rmTrans 1
for Beagle files.) (true
/false
)extra:
Additional options to pass to ANGSD during genotype likelihood calculation at all times. This is primarily useful for adding BAM input filters. Note that --remove_bads
and -only_proper_pairs
are enabled by default, so they only need to be included if you want to turn them off or explicitly ensure they are enabled. I've also found that for some datasets, -C 50
and -baq 1
can create a strong relationship between sample depth and detected diversity, effectively removing the benefits of ANGSD for low/variable depth data. I recommend that these aren't included unless you know you need them. Since the workflow uses bwa to map, -uniqueOnly 1
doesn't do anything if your minimum mapping quality is > 0. Mapping and base quality thresholds are also not needed; the ones defined above will be used automatically. If you prefer to correct for historical damage by trimming the ends of reads, this is where you'd want to put -trim INT
. (string, docs)extra_saf:
Same as extra
, but only used when making SAF files (used for SFS, thetas, Fst, IBSrelate, heterozygosity includes invariable sites). Doesn't require options already in extra
or defined via other params in the YAML (such as notrans
, minind
, GL
, etc.) (string)extra_beagle:
Same as extra
, but only used when making Beagle and Maf files (used for PCA, Admix, ngsF-HMM, doIBS, ngsrelate, includes only variable sites). Doesn't require options already in extra
or defined via other params in the YAML (such as rmtrans
, minind
, GL
, etc.) (string)snp_pval:
The p-value to use for calling SNPs (float or string, docs)domajorminor:
Method for inferring the major and minor alleles. Set to 1 to infer from the genotype likelihoods, see documentation for other options. 1
, 2
, and 4
can be set without any additional configuration. 5
must also have an ancestral reference provided in the config, otherwise it will be the same as 4
. 3
is currently not possible, but please open an issue if you have a use case, I'd like to add it, but would need some input on how it is used. (int)domaf:
Method for inferring minor allele frequencies. Set to 1
to infer from genotype likelihoods using a known major and minor from the domajorminor
setting above. See docs for other options. I have not tested much beyond 1
and 8
, please open an issue if you have problems. (int)min_maf:
The minimum minor allele frequency required to call a SNP. This is set when generating the beagle file, so will filter SNPs for PCAngsd, NGSadmix, ngsF-HMM, and NGSrelate. If you would like each tool to handle filtering for maf on its own you can set this to -1
(disabled). (float, docs)mindepthind_heterozygosity:
When estimating individual heterozygosity, sites with sequencing depth lower than this value will be dropped. (integer, -setMinDepthInd
option in ANGSD) (int)ngsld:
Settings for ngsLD (docs)max_kb_dist_est-ld:
For the LD estimates generated when setting estimate_ld: true
above, set the maximum distance between sites in kb that LD will be estimated for (--max_kb_dist
in ngsLD, integer)rnd_sample_est-ld:
For the LD estimates generated when setting estimate_ld: true
above, randomly sample this proportion of pairwise linkage estimates rather than estimating all (--rnd_sample
in ngsLD, float)max_kb_dist_decay:
The same as max_kb_dist_est-ld:
, but used when estimating LD decay when setting ld_decay: true
above (integer)rnd_sample_decay:
The same as rnd_sample_est-ld:
, but used when estimating LD decay when setting ld_decay: true
above (float)fit_LDdecay_extra:
Additional plotting arguments to pass to fit_LDdecay.R
when estimating LD decay (string)fit_LDdecay_n_correction:
When estimating LD decay, should the sample size corrected r^2 model be used? (true
/false
, true
is the equivalent of passing a sample size to fit_LDdecay.R
in ngsLD using --n_ind
)max_kb_dist_pruning_dataset:
The same as max_kb_dist_est-ld:
, but used when linkage pruning SNPs as inputs for PCAngsd, NGSadmix, and NGSrelate analyses. Pruning is performed on the whole dataset. Any positions above this distance will be assumed to be in linkage equilibrium during the pruning process. (integer)pruning_min-weight_dataset:
The minimum r^2 to assume two positions are in linkage disequilibrium when pruning for PCAngsd, NGSadmix, and NGSrelate analyses. (float)ngsf-hmm:
Settings for ngsF-HMMestimate_in_pops:
Set to true
to run ngsF-HMM separately for each population in your dataset. Set to false
to run for whole dataset at once. ngsF-HMM assumes Hardy-Weinberg Equilibrium (aside from inbreeding) in the input data, so select the option that most reflects this in your data. (true
/false
)prune:
Whether or not to prune SNPs for LD before running the analysis. ngsF-HMM assumes independent sites, so it is preferred to set this to true
to satisfy that expectation. (true
/false
)max_kb_dist_pruning_pop:
The maximum distance between sites in kb that will be treated as in LD when pruning for the ngsF-HMM input. (INT)pruning_min-weight_pop:
The minimum r^2 to assume two positions are in linkage disequilibrium when pruning for the ngsF-HMM input. Note that this will likely be substantially higher for individual populations than for the whole dataset, as background LD should be higher when no substructure is present. (float)min_roh_length:
Minimum ROH size in base pairs to include in inbreeding coefficient calculation. Set if short ROH might be considered low confidence for your data. (integer)roh_bins:
A list of integers that describe the size classes in base pairs you would like to partition the inbreeding coefficient by. This can help visualize how much of the coefficient comes from ROH of certain size classes (and thus, ages). List should be in ascending order and the first entry should be greater than min_roh_length
. The first bin will group ROH between min_roh_length
and the first entry, subsequent bins will group ROH with sizes between adjacent entries in the list, and the final bin will group all ROH larger than the final entry in the list. (list)realSFS:
Settings for realSFSfold:
Whether or not to fold the produced SFS. Set to 1 if you have not provided an ancestral-state reference (0 or 1, docs)sfsboot:
Determines number of bootstrap replicates to use when requesting bootstrapped SFS. Is used for both 1dsfs and 2dsfs (this is very easy to separate, open an issue if desired). Automatically used for heterozygosity analysis to calculate confidence intervals. (integer)fst:
Settings for $F_{ST}$ calculation in ANGSDwhichFst:
Determines which $F_{ST}$ estimator is used by ANGSD: 0 is the default Reynolds 1983 estimator and 1 is the Bhatia 2013 estimator. The latter is preferable for small or uneven sample sizes (0 or 1, docs)win_size:
Window size in bp for sliding window analysis (integer)win_step:
Window step size in bp for sliding window analysis (integer)thetas:
Settings for pi, theta, and Tajima's D estimationwin_size:
Window size in bp for sliding window analysis (integer)win_step:
Window step size in bp for sliding window analysis (integer)minsites:
Minimum number of sites for a window to be included in the report plot. This does not remove windows from the actual output, just from the report plot.ngsadmix:
Settings for admixture analysis with NGSadmix. This analysis is performed for a set of K groupings, and each K has several replicates performed. Replicates will continue until a set of N highest likelihood replicates converge, or the number of replicates reaches an upper threshold set here. Defaults for reps
, minreps
, thresh
, and conv
can be left as default for most.kvalues:
A list of values of K to fit the data to (list of integers)reps:
The maximum number of replicates to perform per K. Default is 100. (integer)minreps:
The minimum number of replicates to perform, even if replicates have converged. Default is 20. (integer)thresh:
The convergence threshold - the top replicates must all be within this value of log-likelihood units to consider the run converged. Default is 2. (integer)conv:
The number of top replicates to include in convergence assessment. Default is 3. (integer)extra:
Additional arguments to pass to NGSadmix (for instance, increasing -maxiter
). (string, docs)ibs:
Settings for identity by state calculation with ANGSD-doIBS:
Whether to use a random (1) or consensus (2) base in IBS distance calculation (docs)Note
A tutorial is in progress, but not yet available. The pipeline can still be used by following the rest of the guide.
A tutorial is available with a small(ish) dataset where biologically meaningful results can be produced. This can help you get an understanding of a good workflow for using the different modules. You can also follow along with your own data and just skip analyses you don't want. If you prefer to jump right in instead, the sections below describe how to quickly get a new project up and running.
"},{"location":"getting-started/#requirements","title":"Requirements","text":"This pipeline can be run on Linux systems with Conda and Apptainer/Singularity installed. All other dependencies will be handled with the workflow, and thus, sufficient storage space is needed for these installations (~10GB, but this needs verification). It can be run on a local workstation with sufficient resources and storage space (dataset dependent), but is aimed at execution on high performance computing systems with job queuing systems.
Data-wise, you'll need a reference genome (uncompressed) and some sequencing data for your samples. The latter can be either raw fastq files, bam alignments to the reference, or accession numbers for already published fastq files.
"},{"location":"getting-started/#deploying-the-workflow","title":"Deploying the workflow","text":"The pipeline can be deployed in two ways: (1) using Snakedeploy which will deploy the pipeline as a module (recommended); (2) clone the repository at the version/branch you prefer (recommended if you will change any workflow code).
Both methods require a Snakemake environment to run the pipeline in.
"},{"location":"getting-started/#preparing-the-environment","title":"Preparing the environment","text":"First, create an environment for Snakemake, including Snakedeploy if you intend to deploy that way:
mamba create -c conda-forge -c bioconda --name snakemake snakemake snakedeploy\n
If you already have a Snakemake environment, you can use that, so long as it has snakemake
(not just snakemake-minimal
) installed. Snakemake versions >=7.25 are likely to work, but most testing is on 7.32.4. It is compatible with Snakemake v8, but you may need to install additional plugins for cluster execution due to the new executor plugin system. See the Snakemake docs for what additional executor plugin you might need to enable cluster execution for your system.
Activate the Snakemake environment:
conda activate snakemake\n
"},{"location":"getting-started/#deploying-with-snakedeploy","title":"Deploying with Snakedeploy","text":"Make your working directory:
mkdir -p /path/to/work-dir\ncd /path/to/work-dir\n
And deploy the workflow, using the tag for the version you want to deploy:
snakedeploy deploy-workflow https://github.com/zjnolen/PopGLen . --tag v0.2.0\n
This will generate a simple Snakefile in a workflow
folder that loads the pipeline as a module. It will also download the template config.yaml
, samples.tsv
, and units.tsv
in the config
folder.
Go to the folder you would like your working directory to be created in and clone the GitHub repo:
git clone https://github.com/zjnolen/PopGLen.git\n
If you would like, you can change the name of the directory:
mv PopGLen work-dir-name\n
Move into the working directory (PopGLen
or work-dir-name
if you changed it) and checkout the version you would like to use:
git checkout v0.2.0\n
This can also be used to checkout specific branches or commits.
"},{"location":"getting-started/#configuring-the-workflow","title":"Configuring the workflow","text":"Now you are ready to configure the workflow, see the documentation for that here.
"},{"location":"high-memory-rules/","title":"Rules using large amounts of RAM","text":"NOTE: This is a work in progress list. Trying to figure out what
The biggest challenge with using this pipeline with other datasets is ensuring RAM is properly allocated. Many rules require very little RAM, and so the default allocations that come on your cluster per thread will likely do fine. However, some rules require considerably more RAM. These are:
"}]} \ No newline at end of file diff --git a/develop/sitemap.xml.gz b/develop/sitemap.xml.gz index 2654789..ef9f5b2 100644 Binary files a/develop/sitemap.xml.gz and b/develop/sitemap.xml.gz differ