Trained on LION data
- Clone the repository
- Install the prerequisites from environment.yml
- From Releases, download 3dunet_model_2_channel.pth.tar to '/TrainedModels/3DUNet_2_Channel'
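The setup steps above might look like the following shell sketch; the repository URL, directory name, and environment name are placeholders, since none of them are given here:

```shell
# Clone the repository (placeholder URL and directory)
git clone <repo-url>
cd <repo-dir>

# Create and activate the conda environment described by environment.yml
# (the environment name is defined inside environment.yml)
conda env create -f environment.yml
conda activate <env-name>

# Make sure the checkpoint folder exists, then place the file
# 3dunet_model_2_channel.pth.tar (downloaded from the Releases page) inside it
mkdir -p TrainedModels/3DUNet_2_Channel
```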
- Copy the inference dataset into '/Custom_Datasets/<Your_Dataset_Name>'. This directory can follow either of two file structures:
```
Your_Dataset_Name
└── Subjects_Dir
    ├── Subject_01
    │   ├── Subject_01_Fat_fused.nii.gz
    │   └── Subject_01_Water_fused.nii.gz
    ├── ...
    └── Subject_ZZ...
```

```
Your_Dataset_Name
└── Subjects_Dir
    ├── Subject_01
    │   ├── Subject_01_V1
    │   │   ├── Subject_01_V1_Fat_fused.nii.gz
    │   │   └── Subject_01_V1_Water_fused.nii.gz
    │   ├── Subject_01_V2
    │   └── ...
    ├── ...
    └── Subject_ZZ...
```
- Ensure that for each subject, the keywords denoting the Fat, Water, and T2* images remain identical.
- Run /Executables/predict_custom_input.py with the appropriate args:
  - --dataset_name: the name of the dataset, <Your_Dataset_Name>. By default, it is Dataset_Name
  - --fat_keyword: the unique keyword identifying Fat maps. By default, it is Fat_fused
  - --water_keyword: the unique keyword identifying Water maps. By default, it is Water_fused
  - --gpus: the GPU id. By default, it is set to 0
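The flags above suggest a command-line interface along these lines. This is a hedged reconstruction of how `predict_custom_input.py` might parse its arguments, not the repository's actual code:

```python
import argparse

def build_parser():
    # Defaults mirror the ones documented above; the actual script may differ.
    parser = argparse.ArgumentParser(
        description="Run 3DUNet inference on a custom dataset")
    parser.add_argument("--dataset_name", default="Dataset_Name",
                        help="Name of the dataset under /Custom_Datasets")
    parser.add_argument("--fat_keyword", default="Fat_fused",
                        help="Unique keyword identifying Fat maps")
    parser.add_argument("--water_keyword", default="Water_fused",
                        help="Unique keyword identifying Water maps")
    parser.add_argument("--gpus", default="0",
                        help="GPU id to run inference on")
    return parser
```

A typical invocation would then look like `python /Executables/predict_custom_input.py --dataset_name <Your_Dataset_Name> --gpus 0`.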
- The images will be resized to (256, 224, 72) for the 3DUNet and written to '/Custom_Datasets/<Your_Dataset_Name>/Interpolated_Subjects_Dir', with the corresponding segmentations written to '/Custom_Datasets/<Your_Dataset_Name>/Predicted_Masks'
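The fixed-shape resizing step can be illustrated with a minimal nearest-neighbour resampler. The repository presumably uses proper interpolation on the NIfTI volumes, so treat this only as a shape-level sketch:

```python
import numpy as np

TARGET_SHAPE = (256, 224, 72)  # input size expected by the 3DUNet

def resize_volume(volume, target=TARGET_SHAPE):
    """Resample a 3D array to `target` by nearest-neighbour index lookup."""
    # For each axis, pick the source indices closest to a uniform grid
    index_grids = [
        np.round(np.linspace(0, src - 1, dst)).astype(int)
        for src, dst in zip(volume.shape, target)
    ]
    return volume[np.ix_(*index_grids)]
```

Any input volume, regardless of its acquired matrix size, ends up with the (256, 224, 72) shape the network expects.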