Generate data
To compare mutant lines against baseline specimens, we first need to map the data into the same space as the population average. Once this has been completed, we will have the following data that can be analysed by the stats pipeline:
- Intensity images that can be compared at the voxel level.
- Jacobian determinants that describe local expansion/shrinkage during the registration process relative to the population average.
- If a label map is given, it will be inverted back onto the inputs so that organ volumes for each specimen can be calculated.
Create a project root directory for the mutant data. It should contain a sub-directory for each line to be analysed, in which that line's input volumes are placed. For example:
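A possible layout might look like this (line and specimen names are placeholders, and NRRD is used only as an illustrative file format):

mutants_input_root_dir/
    line_1/
        specimen_1.nrrd
        specimen_2.nrrd
    line_2/
        specimen_1.nrrd
        specimen_2.nrrd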

Run the job_runner.py script with the following options:
-c The config file, which specifies how to perform the registration and is explained on the registration page
-r The input root directory created as above
Example
$ cd lama
$ scripts/job_runner.py -c config_path.yaml -r mutants_input_root_dir
Create a directory structured in the same way as for the mutants, but treat the baseline specimens as a single line named baseline.
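For example, a layout such as the following (specimen names are placeholders):

baselines_input_root_dir/
    baseline/
        wt_specimen_1.nrrd
        wt_specimen_2.nrrd
        wt_specimen_3.nrrd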

Run the baselines
$ cd lama
$ scripts/job_runner.py -c config_path.yaml -r baselines_input_root_dir
As well as running each image registration on multiple threads (for example by setting threads: 8 in the config), LAMA has a simple way of distributing tasks across different machines.
job_runner.py creates a job file called lama_jobs.csv in the root directory containing the input folders.
This file tracks the status of the specimens to process. If you run another instance of job_runner.py on the same dataset, it will find this jobs file and process any jobs that have a status of to_run.
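For example, assuming the input root directory sits on storage that is visible to every machine, the same command can simply be launched on each one, and lama_jobs.csv coordinates which specimens each instance picks up:

# on machine 1
$ scripts/job_runner.py -c config_path.yaml -r mutants_input_root_dir

# on machine 2
$ scripts/job_runner.py -c config_path.yaml -r mutants_input_root_dir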
Run the stats pipeline
- For intensity, Jacobian and organ volume analysis, see the stats pipeline page.
- If there is enough data (there is no hard threshold, so use a rough judgement), organ volume analysis can be done using a permutation-based method; a minimal sketch of the idea follows this list.
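As an illustration of the underlying idea only (this is not LAMA's implementation, and the volume values below are made up), a permutation test compares the observed mutant-versus-baseline difference in an organ's volume against a null distribution built by shuffling the group labels:

# Minimal sketch of a permutation test on organ volumes (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

baseline_vols = np.array([1.02, 0.98, 1.05, 0.99, 1.01, 1.03])  # one volume per baseline specimen
mutant_vols = np.array([0.90, 0.88, 0.93])                       # one volume per mutant specimen

observed = mutant_vols.mean() - baseline_vols.mean()

pooled = np.concatenate([baseline_vols, mutant_vols])
n_mut = len(mutant_vols)
n_perms = 10000

# Build the null distribution by repeatedly relabelling specimens at random.
null = np.empty(n_perms)
for i in range(n_perms):
    perm = rng.permutation(pooled)
    null[i] = perm[:n_mut].mean() - perm[n_mut:].mean()

# Two-sided p-value: how often a random labelling gives a difference at least
# as extreme as the observed one.
p_value = (np.abs(null) >= abs(observed)).mean()
print(f"observed difference: {observed:.3f}, p = {p_value:.4f}")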