Merge pull request #34 from uclahs-cds/cz-update-docker-nonroot
Fixed the issue that docker was running as root
zhuchcn authored Jan 13, 2021
2 parents 690169b + fe7d7fa commit b6d3576
Showing 3 changed files with 12 additions and 8 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -115,7 +115,7 @@ After marking dup BAM files, the BAM files are then indexed by utilizing Picard
| `temp_dir` | yes | path | Absolute path to the directory where Nextflow's intermediate files are saved. |
| `save_intermediate_files` | yes | boolean | Save intermediate files. If set to true, the unmerged, unsorted, and duplicate-unmarked BAM files are saved in addition to the final BAM. |
| `cache_intermediate_pipeline_steps` | yes | boolean | Enable caching to resume the pipeline from the end of the last successfully completed process when a run fails (if true, the default submission script must be modified). |
| `max_number_of_parallel_jobs` | yes | int | The maximum number of jobs or steps of the pipeline that can be ran in parallel. |
| `max_number_of_parallel_jobs` | no | int | The maximum number of jobs or steps of the pipeline that can be run in parallel. Default is 1. Be very cautious about setting this to any value larger than 1, as it may cause out-of-memory errors; it is mainly helpful when running on a large-memory compute node. |
| `bwa_mem_number_of_cpus` | no | int | Number of cores to use for BWA-MEM2. If not set, this is calculated to ensure at least 2.5 GB of memory per core. |
| `blcds_registered_dataset_input` | yes | boolean | Input FASTQs are from the Boutros Lab data registry. |
| `blcds_registered_dataset_output` | yes | boolean | Enable saving the final files, including the BAM and BAM index, and the log directory to the Boutros Lab data registry. |
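
For reference, a minimal sketch of how these options might appear in a Nextflow params block (parameter names come from the table above; the path and values are placeholders, not recommended defaults):

params {
    // required options
    temp_dir = "/path/to/scratch"                 // absolute path for Nextflow's intermediate files
    save_intermediate_files = false               // keep only the final BAM unless set to true
    cache_intermediate_pipeline_steps = false     // allow resuming after the last completed process
    blcds_registered_dataset_input = false        // input FASTQs are not from the Boutros Lab data registry
    blcds_registered_dataset_output = false       // do not save outputs to the data registry

    // optional resource tuning
    max_number_of_parallel_jobs = 1               // raise above 1 only on a large-memory node
    // bwa_mem_number_of_cpus = 60                // uncomment to override the computed CPU count
}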
3 changes: 0 additions & 3 deletions pipeline/config/align-DNA.config
@@ -19,9 +19,6 @@ params {
save_intermediate_files = false
cache_intermediate_pipeline_steps = false

// resource configuration for entire pipeline
max_number_of_parallel_jobs = 1

// uncomment to manually set ncpus for bwa-mem2
// bwa_mem_number_of_cpus = 60

15 changes: 11 additions & 4 deletions pipeline/config/methods.config
@@ -4,15 +4,21 @@ manifest {
name = "align-DNA"
author = "Benjamin Carlin"
description = "alignment pipeline for paired fastqs DNA samples"
version = "6.0.1"
version = "6.0.2"
}

params {
// resource configuration for entire pipeline
max_number_of_parallel_jobs = 1

// tools and their versions
bwa_version = "bwa-mem2-2.1"
}

docker {
enabled = true
sudo = false
runOptions = "-u \$(id -u):\$(id -g)"
}

methods {
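
For context, the docker scope added above is the substance of this fix: Nextflow appends runOptions to the docker run command it generates, so every task container starts with the submitting user's UID and GID instead of root. An annotated copy of the block (the comments are explanatory additions, not part of the commit):

docker {
    enabled = true
    sudo = false                           // do not prefix the generated docker command with sudo
    // Each task is launched roughly as: docker run -u $(id -u):$(id -g) ... <container image>
    // The $(id -u) and $(id -g) substitutions happen in the shell on the execution host,
    // which is why the dollar signs are escaped inside the Groovy string.
    runOptions = "-u \$(id -u):\$(id -g)"
}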
@@ -32,14 +38,15 @@ methods {
def pattern = ~/^(?<baseDir>(?<mntDir>\/\w+)\/data\/(?<diseaseId>\w+)\/(?<datasetId>\w+)\/(?<patientId>\w+)\/(?<sampleId>[A-Za-z0-9-]+)\/(?<analyte>.+)\/(?<technology>.+))\/raw\/FASTQ\/.+$/

// First check if all input fastq files are from the same sample_id
base_dirs = fastqs.each {
// TODO: figure out why .each {} does not work any more.
def base_dirs = []
fastqs.each {
def matcher = it =~ pattern
if (!matcher.matches()) {
throw new Exception("The input path ${it} isn't a valid blcds-registered path.")
}
return matcher.group("baseDir")
base_dirs.push(matcher.group("baseDir"))
}
.unique(false)

if (base_dirs.size() > 1) {
throw new Exception(
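A note on the base_dirs change above: Groovy's each always returns the original collection and discards the closure's return value, which is the likely reason the earlier fastqs.each { ... return matcher.group("baseDir") } pattern did not yield base directories (see the TODO in the diff). A sketch of how the same collect-and-check could be written more idiomatically, assuming the same fastqs list and pattern as in the diff (the .unique(false) call and the second exception message are illustrative, since parts of the surrounding code are collapsed here):

def base_dirs = fastqs.collect {
    def matcher = it =~ pattern
    if (!matcher.matches()) {
        throw new Exception("The input path ${it} isn't a valid blcds-registered path.")
    }
    matcher.group("baseDir")      // collect keeps the closure's return value
}.unique(false)                   // non-mutating unique; more than one entry means mixed samples

if (base_dirs.size() > 1) {
    throw new Exception("Input FASTQs must all come from the same sample; found: ${base_dirs}")
}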
