Augmentations for Semantic segmentation #806
Elmisiry99 started this conversation in General · 1 comment · 1 reply
-
Hi, @Elmisiry99. Hard question. This is an example of what I've used in the past, in case it can help: https://github.com/fepegar/resseg-ijcars/blob/963e5548fb02c777038ef550c969149377071cfc/datasets.py#L191-L222
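For context, a minimal sketch of a TorchIO augmentation pipeline along those lines; the transform choices and parameter values below are illustrative assumptions, not copied from the linked file:

```python
import torchio as tio

# Illustrative segmentation-friendly pipeline for MRI; every value here is an
# assumption chosen for demonstration, not taken from the linked datasets.py.
training_transform = tio.Compose([
    tio.ToCanonical(),                              # reorient to RAS+
    tio.RandomFlip(axes=('LR',)),                   # random left-right flip
    tio.OneOf({                                     # one spatial deformation
        tio.RandomAffine(scales=0.1, degrees=10): 0.8,
        tio.RandomElasticDeformation(): 0.2,
    }),
    tio.RandomBiasField(coefficients=0.3, p=0.3),   # MRI bias-field artifact
    tio.RandomGamma(log_gamma=(-0.3, 0.3), p=0.3),  # contrast jitter
    tio.RandomNoise(std=(0, 0.05), p=0.25),         # additive Gaussian noise
    tio.ZNormalization(),                           # normalize intensities
])
```

TorchIO applies the spatial transforms to both the image and the label map, while the intensity transforms only touch the image, so a pipeline like this stays consistent for segmentation.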
1 reply
-
Hello @fepegar,
I am working on the Medical Segmentation Decathlon datasets (Hippocampus and Prostate, each one separately) to test a contrastive learning algorithm on unlabeled data using a U-Net. It starts by pretraining the encoder with a global contrastive loss and saving its parameters, which are subsequently loaded into the encoder extended with a specific number of decoder blocks (not the whole U-Net). The encoder's parameters are frozen so that they don't get updated, and the decoder part is then trained with a local contrastive loss. Finally, the encoder's and the decoder's parameters are saved, loaded into a full U-Net, and fine-tuned on a few labeled data.
In your experience, which TorchIO intensity transformations are best suited for augmentation in this task? The data are MRI images.
Thanks a lot in advance.
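A minimal PyTorch sketch of the staged setup described above, assuming placeholder encoder/decoder modules and checkpoint names (they are illustrative, not the actual training code):

```python
import torch
import torch.nn as nn

# Placeholder modules standing in for the U-Net encoder and the added decoder
# blocks; the real architecture and the contrastive losses are not shown here.
encoder = nn.Sequential(nn.Conv3d(1, 16, 3, padding=1), nn.ReLU())
decoder_blocks = nn.Sequential(nn.Conv3d(16, 16, 3, padding=1), nn.ReLU())

# Stage 1: after global contrastive pretraining, save the encoder weights.
torch.save(encoder.state_dict(), 'encoder_pretrained.pt')

# Stage 2: reload the encoder, freeze it, and train only the decoder blocks
# with the local contrastive loss.
encoder.load_state_dict(torch.load('encoder_pretrained.pt'))
for param in encoder.parameters():
    param.requires_grad = False
optimizer = torch.optim.Adam(decoder_blocks.parameters(), lr=1e-4)

# Stage 3: save both parts so they can be loaded into the full U-Net and
# fine-tuned end to end on the few labeled volumes.
torch.save(
    {'encoder': encoder.state_dict(), 'decoder': decoder_blocks.state_dict()},
    'pretrained_parts.pt',
)
```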