
Run HaploC-tools with SLURM


Hi-C data processing is typically computationally intensive, so HaploC-tools is designed mainly to run on a cluster with multiple CPU cores. We provide a one-line command that runs the full HaploC pipeline and the downstream analysis.

Example usage on a demo dataset with default settings:

HaploC-tools/bin/haploc_slurm.sh -d demo_data
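
Assuming haploc_slurm.sh submits its processing steps as SLURM jobs (as the script name and page title suggest), progress can be followed with standard SLURM utilities. These commands are part of SLURM itself, not of HaploC-tools, and the job ID below is purely hypothetical:

# list your pending and running jobs
squeue -u "$USER"

# inspect elapsed time and peak memory of a finished job (job ID is hypothetical)
sacct -j 1234567 --format=JobID,JobName,Elapsed,MaxRSS,State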

When necessary, the memory, CPU-core, and wall-time settings should be adjusted to the size of the Hi-C dataset and the capacity of your cluster. The default memory and CPU settings have been tested on Hi-C datasets with 500M to 1500M reads.
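
For reference, SLURM resource requests are normally expressed through sbatch directives such as the ones below. The values shown are illustrative assumptions only, not HaploC-tools defaults, and where these settings live inside HaploC-tools is not covered here; treat this as a generic sketch of what to change:

#SBATCH --cpus-per-task=16   # CPU cores per step; scale with read count
#SBATCH --mem=64G            # memory request; increase for deeper datasets
#SBATCH --time=48:00:00      # wall-time limit; increase for datasets beyond ~1500M reads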

Next steps