Some questions/problems concerning the program pipeline #1
Comments
Hi ulah,
Thanks for your interest in the pipeline. I'm not quite sure why you get this error; which samtools version do you use? Thanks for pointing out the rigidity in the BAM naming, I will fix that.
Best
Hi Ilgar,
Thanks for your fast response. To 1) Nice to know that you can provide your own genome file; I hadn't assumed that from the documentation of the script.
Concerning the error message:
$ pip freeze
Best,
Hi Urs,
The reason we are interested in multi-mapped reads is that some miRNAs and tRNAs are expressed from multiple loci, so their reads are expected to be multi-mapped and we didn't want to lose them. In our experience, including the multi-mapped reads didn't introduce much bias. We used a weighted approach to include multi-mapped reads, with the trick of only considering multi-mapped regions that are annotated (GENCODE, miRBase, GtRNAdb etc.). So if a read maps to 10 different coordinates in the genome and only 3 of them are in annotated regions, then the read contributes 1/3 to the final count of each of those annotated regions (e.g. a small RNA). I hope this was clear; let me know if you need further help.
Best
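A minimal Python sketch of that weighting scheme (not the pipeline's actual code): `lookup` is a hypothetical stand-in for an interval query against the GENCODE/miRBase/GtRNAdb annotations, and the BAM is assumed to contain every alignment of each multi-mapped read.

```python
from collections import defaultdict

import pysam

def count_multimappers(bam_path, lookup):
    """Weighted counting of multi-mapped reads over annotated features.

    `lookup(chrom, start, end)` is a hypothetical stand-in for an
    interval query against the annotations; it returns a feature
    name or None.
    """
    hits = defaultdict(list)  # read name -> annotated features it overlaps
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for aln in bam:  # assumes the BAM keeps every alignment of a read
            feature = lookup(aln.reference_name, aln.reference_start,
                             aln.reference_end)
            if feature is not None:
                hits[aln.query_name].append(feature)

    counts = defaultdict(float)
    for features in hits.values():
        # A read aligning to 10 loci of which only 3 are annotated
        # contributes 1/3 to each of those 3 features.
        for feature in features:
            counts[feature] += 1.0 / len(features)
    return counts
```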
Hi Ilgar,
I see... but shouldn't it contribute 1/10 to the final count of the annotated region in your example? I think the likelihood that the read originates from one of the other regions cannot be ignored, as you may have all sorts of off-target contamination.
Unfortunately, my samtools error remains. It looks to me like the following code in "remove_softclipped_reads_parallelized_v2.py" is the critical part (the pysam.sort / pysam.index calls shown in the traceback below). Could you confirm that it is safe to comment these lines out? What should the final name of the sorted, indexed BAM file then look like (currently it is SRR3495785_unique.bam)?
Best,
Hi Ilgar,
Here is what I did: (2) (3) (4)
It would be great if you could help me here, as I'd like to use your pipeline/data for an ongoing project.
Best,
Hi Ilgar & Urs, The issue with running the remove_softclipped_reads_parallelized_v2.py script, can be fixed by changing the following: Delete the +".bam" in line 62 The further scripts however rely on the .bam ending of the clipped files, so insert the +".bam" again 23 bam_out_sorted = ".".join(outbamTmp.split(".")[:-1]) + ".bam" At least that seems to done the trick for me. Kind Regards |
Hi eyay,
there are some things in the posted pipeline that I do not understand or that do not work for me, and it would be great if you could tell me what I might be doing wrong.
1) Prerequisite "GenomeFetch.py": Looking briefly into the script, it doesn't seem to support hg38, which you were using for the mapping. What do you need the script for?
2) Remove UMI from reads: What is the difference between your script (remove_umi_from_fastq_v4.py) and the one from the umi_tools package (umi_tools extract)? The parallelization doesn't give me any speed advantage on the data. Also, your output is an uncompressed fq; why not compress it, which umi_tools extract supports?
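For context, a generic sketch of what UMI extraction does; this is the usual pattern, not necessarily what remove_umi_from_fastq_v4.py does internally, and `umi_len=8` is a placeholder for whatever the protocol uses. Reading and writing gzipped FASTQ illustrates the compression point:

```python
import gzip

def extract_umis(fastq_in, fastq_out, umi_len=8):
    """Move the first `umi_len` bases of each read into the read name."""
    with gzip.open(fastq_in, "rt") as fin, gzip.open(fastq_out, "wt") as fout:
        while True:
            header = fin.readline().rstrip("\n")
            if not header:
                break
            seq = fin.readline().rstrip("\n")
            plus = fin.readline().rstrip("\n")
            qual = fin.readline().rstrip("\n")
            umi = seq[:umi_len]
            # umi_tools-style naming: the UMI is appended to the read id
            # after an underscore, before any description field.
            name, _, desc = header.partition(" ")
            new_header = "%s_%s" % (name, umi)
            if desc:
                new_header += " " + desc
            fout.write("\n".join([new_header, seq[umi_len:],
                                  plus, qual[umi_len:]]) + "\n")
```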
It also seems that you are using multi-mapped reads in later steps of the analysis, aren't you? If so, why can you use those, and why didn't you change STAR's outFilterMultimapNmax parameter, given that everything with more than 20 multi-hits will be flagged as unmapped?
3-4) STAR mapping and clipped-read removal: I'm missing a lot of file-name conversions here. The STAR command creates generic SAM output files, but the clipped-read-removal step requires BAM files split into unique and multi mappings with a very rigid naming convention. Though I could create files that satisfy the name requirements (I split them based on the NH field and compress and sort them afterwards; see the sketch after the traceback below), I am now getting an error I do not understand:
```
python /usr/local/smallseq/src/remove_softclipped_reads_parallelized_v2.py -i 01_Mapping -o 01_Mapping_clipped -p 6 -n 0 -t 3
Traceback (most recent call last):
File "/usr/local/smallseq/src/remove_softclipped_reads_parallelized_v2.py", line 93, in
Parallel(n_jobs=o.numCPU)(delayed(remove_clipped_reads)(sample) for sample in samplenames_with_fullpath)
File "/usr/local/lib/python2.7/dist-packages/joblib/parallel.py", line 768, in call
self.retrieve()
File "/usr/local/lib/python2.7/dist-packages/joblib/parallel.py", line 719, in retrieve
raise exception
joblib.my_exceptions.JoblibSamtoolsError: JoblibSamtoolsError
Multiprocessing exception:
...........................................................................
/usr/local/smallseq/src/remove_softclipped_reads_parallelized_v2.py in <module>()
88 samplenames_with_fullpath.append(multi_bam)
89
90 path_outbam = os.path.join(o.outstardir, sample)
91 if not os.path.exists(path_outbam): safe_mkdir(path_outbam)
92
---> 93 Parallel(n_jobs=o.numCPU)(delayed(remove_clipped_reads)(sample) for sample in samplenames_with_fullpath)
94
95
96
97
...........................................................................
/usr/local/lib/python2.7/dist-packages/joblib/parallel.py in __call__(self=Parallel(n_jobs=6), iterable=<generator object <genexpr>>)
763 if pre_dispatch == "all" or n_jobs == 1:
764 # The iterable was consumed all at once by the above for loop.
765 # No need to wait for async callbacks to trigger to
766 # consumption.
767 self._iterating = False
--> 768 self.retrieve()
self.retrieve = <bound method Parallel.retrieve of Parallel(n_jobs=6)>
769 # Make sure that we get a last message telling us we are done
770 elapsed_time = time.time() - self._start_time
771 self._print('Done %3i out of %3i | elapsed: %s finished',
772 (len(self._output), len(self._output),
Sub-process traceback:
SamtoolsError Fri Nov 25 10:47:38 2016
PID: 12961 Python 2.7.6: /usr/bin/python
...........................................................................
/usr/local/lib/python2.7/dist-packages/joblib/parallel.py in __call__(self=<joblib.parallel.BatchedCalls object>)
126 def __init__(self, iterator_slice):
127 self.items = list(iterator_slice)
128 self._size = len(self.items)
129
130 def __call__(self):
--> 131 return [func(*args, **kwargs) for func, args, kwargs in self.items]
func = <function remove_clipped_reads>
args = ('01_Mapping/SRR3495785/SRR3495785_unique.bam',)
kwargs = {}
self.items = [(<function remove_clipped_reads>, ('01_Mapping/SRR3495785/SRR3495785_unique.bam',), {})]
132
133 def __len__(self):
134 return self._size
135
...........................................................................
/usr/local/smallseq/src/remove_softclipped_reads_parallelized_v2.py in remove_clipped_reads(inbam='01_Mapping/SRR3495785/SRR3495785_unique.bam')
56
57 outbam.write(read)
58 reads_after_clip_removal += 1
59 outbam.close()
60 #sort and index the final bam file
---> 61 pysam.sort(bam_out, bam_out_sorted)
bam_out = '01_Mapping_clipped/SRR3495785/SRR3495785_unique_tmp.bam'
bam_out_sorted = '01_Mapping_clipped/SRR3495785/SRR3495785_unique'
62 pysam.index(bam_out_sorted+".bam", template=inbamPysamObj)
63
64 print reads_after_clip_removal, total_reads
65 print "%s percentage of removed reads = %.2f" %(p[-1], 100*(1 -reads_after_clip_removal/total_reads))
...........................................................................
/usr/local/lib/python2.7/dist-packages/pysam/utils.py in __call__(self=<pysam.utils.PysamDispatcher object>, *args=('01_Mapping_clipped/SRR3495785/SRR3495785_unique_tmp.bam', '01_Mapping_clipped/SRR3495785/SRR3495785_unique'), **kwargs={})
70 "%s returned with error %i: "
71 "stdout=%s, stderr=%s" %
72 (self.collection,
73 retval,
74 stdout,
---> 75 stderr))
stderr = '[bam_sort] Use -T PREFIX / -o FILE to specify te... Reference sequence FASTA FILE [null]\n'
76
77 self.stderr = stderr
78
79 # call parser for stdout:
SamtoolsError: 'samtools returned with error 1: stdout=, stderr=[bam_sort] Use -T PREFIX / -o FILE to specify temporary and final output files\nUsage: samtools sort [options...] [in.bam]\nOptions:\n -l INT Set compression level, from 0 (uncompressed) to 9 (best)\n -m INT Set maximum memory per thread; suffix K/M/G recognized [768M]\n -n Sort by read name\n -o FILE Write final output to FILE rather than standard output\n -T PREFIX Write temporary files to PREFIX.nnnn.bam\n -@, --threads INT\n Set number of sorting and compression threads [1]\n --input-fmt-option OPT[=VAL]\n Specify a single input file format option in the form\n of OPTION or OPTION=VALUE\n -O, --output-fmt FORMAT[,OPT[=VAL]]...\n Specify output format (SAM, BAM, CRAM)\n --output-fmt-option OPT[=VAL]\n Specify a single output file format option in the form\n of OPTION or OPTION=VALUE\n --reference FILE\n Reference sequence FASTA FILE [null]\n'
```
Why is samtools sort called again? And why is it looking for a reference FASTA when a header is supplied?
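A minimal pysam sketch of the NH-based split mentioned above (NH == 1 goes to the unique BAM, NH > 1 to the multi BAM); paths are illustrative:

```python
import pysam

def split_by_nh(in_path, unique_path, multi_path):
    """Split an alignment file on the NH tag: NH == 1 -> unique BAM,
    NH > 1 -> multi BAM. Unmapped reads are dropped."""
    with pysam.AlignmentFile(in_path) as inp, \
         pysam.AlignmentFile(unique_path, "wb", template=inp) as uniq, \
         pysam.AlignmentFile(multi_path, "wb", template=inp) as multi:
        for aln in inp:
            if aln.is_unmapped:
                continue
            (uniq if aln.get_tag("NH") == 1 else multi).write(aln)

# Illustrative paths; compression is implicit in "wb", and sorting
# and indexing would still follow as separate steps.
split_by_nh("Aligned.out.sam",
            "01_Mapping/SRR3495785/SRR3495785_unique.bam",
            "01_Mapping/SRR3495785/SRR3495785_multi.bam")
```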
Any help is much appreciated.
Best regards