Replies: 7 comments 52 replies
-
Hi, @ruby0101. It's hard to diagnose without any code or data, but my guess is your …
-
Hi @fepegar, I just wanted to jump into this thread instead of creating an entirely new one. Do you think there should be some way for the user to configure the tolerance for comparing the consistency of some of the attributes, such as the spacing/affine? I have found that rounding errors occur when files are converted between formats (DICOM <-> NRRD, for example), but they should have almost no practical effect. Of course, in these cases I can simply resample the mask to the image space before, let's say, resampling them both to isotropy or doing some other transform. But this means I am having to resample twice in my preprocessing pipeline rather than just once (to isotropy), which I would prefer to avoid. What are your thoughts? PyRadiomics uses …
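For illustration, this is the kind of tolerance I have in mind (just a sketch; the `atol` value and the comparison itself are my assumptions, not anything TorchIO exposes today):

```python
import numpy as np
import torchio as tio

image = tio.ScalarImage('T1.nrrd')
mask = tio.LabelMap('mask.nrrd')

# Strict equality fails on conversion rounding error for my DICOM<->NRRD pair ...
print(np.array_equal(image.affine, mask.affine))
# ... but an absolute tolerance of ~1e-3 would consider the affines equal
print(np.allclose(image.affine, mask.affine, atol=1e-3))
```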
-
Hi. About the code: a simple transform does not care about different affines:

```python
i1 = tio.LabelMap('mask.nii')
i2 = tio.ScalarImage('T1.nii')
suj = tio.Subject({'t1': i2, 'mask': i1})
t = tio.CropOrPad([32, 32, 32])
```

In this case it just works fine for me (any other tio transform too). It must then be that @mattwarkentin also uses a WeightedSampler:

```python
sampler = tio.data.WeightedSampler(64, 'mask')
for patch in sampler(suj):
    print(patch['index_ini'])
```

which does give me an error:

```
/data/romain/toolbox_python/torchio/torchio/data/sampler/weighted.py in __call__(self, subject, num_patches)
     62         num_patches: Optional[int] = None,
     63     ) -> Generator[Subject, None, None]:
---> 64         subject.check_consistent_space()
     65         if np.any(self.patch_size > subject.spatial_shape):
     66             message = (

/data/romain/toolbox_python/torchio/torchio/data/subject.py in check_consistent_space(self)
    264     def check_consistent_space(self):
    265         self.check_consistent_spatial_shape()
--> 266         self.check_consistent_affine()
    267
    268     def get_images_dict(

/data/romain/toolbox_python/torchio/torchio/data/subject.py in check_consistent_affine(self)
    260                 f'\n{pprint.pformat(image.affine)}'
    261             )
--> 262         raise RuntimeError(message)
    263
    264     def check_consistent_space(self):

RuntimeError: Images "t1" and "mask" do not occupy the same physical space.
```
-
I am surprised this check is not needed elsewhere, but OK. One solution would be to remove it, but I would prefer to add an argument to WeightedSampler to explicitly ask not to do the check:

```python
tio.data.WeightedSampler(64, 'mask', check_affine_consistency=False)
```

Maybe another solution (instead of creating a new transform that makes an affine copy) would be to specify it at creation:

```python
i2 = tio.ScalarImage('T1.nii')
i1 = tio.LabelMap('mask.nii', affine=i2.affine)
suj = tio.Subject({'t1': i2, 'mask': i1})
```

but the affine argument seems not to be taken into account:

```python
In [73]: np.max(np.abs(suj.t1.affine - suj.mask.affine))
Out[73]: 0.00064480513012474
```

I think this argument was only thought of for tensor input …
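In the meantime, a workaround sketch for path inputs (assuming `Image.load()` and that the `affine` attribute is writable in your TorchIO version):

```python
import torchio as tio

i2 = tio.ScalarImage('T1.nii')
i1 = tio.LabelMap('mask.nii')
i1.load()               # read data and affine from disk first
i1.affine = i2.affine   # overwrite the mask affine with the image affine
suj = tio.Subject({'t1': i2, 'mask': i1})
```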
-
@fepegar I'm getting this error trying to implement a cGAN on T1 and T2 image sets. However, CropOrPad does not seem to be working when the subject dataset has multiple scalar image types. Is this possible, or can you only have one scalar image and one mask?
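For reference, a minimal sketch of the setup I mean (file names and target shape are placeholders):

```python
import torchio as tio

# Two scalar images (T1 and T2) in the same subject
subject = tio.Subject(
    t1=tio.ScalarImage('T1.nii'),
    t2=tio.ScalarImage('T2.nii'),
)
transform = tio.CropOrPad((128, 128, 128))
transformed = transform(subject)  # fails when the t1 and t2 shapes differ
```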
-
Well, the way it is implemented, you need to have the same spatial shape if you have multiple images in your subject. But I see your point: it makes sense to allow different shapes (that is why you add this CropOrPad transform, to have the same shape at the end). It is possible to change the code by adding a loop over each subject image to perform a specific crop and/or pad, like it is done in the Resample transform; see the sketch below.
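Something along these lines (a rough sketch, not the actual implementation; wrapping each image in a temporary Subject is just my way of reusing the existing transform):

```python
import torchio as tio

target_shape = (32, 32, 32)
transform = tio.CropOrPad(target_shape)

# Crop or pad each image independently, so images with different
# spatial shapes can coexist in the same subject
for name, image in subject.get_images_dict(intensity_only=False).items():
    cropped = transform(tio.Subject({name: image}))
    subject[name] = cropped[name]
```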
-
Hmm, I think this is a bit delicate. If the images have different shapes, chances are they are not in the same physical space, so their orientation and spacing might be different. @ryancinsight, could you please share a pair of your images so we can understand the issue a bit better?
-
Hi, I am trying to train a U-Net model for tumor segmentation with my MRI T1 image dataset. I just followed the tutorial, but when I call the train function, it gives me the following error:

I have already applied the training_transform from the tutorial,

```python
tio.CropOrPad((48, 60, 48))
```

to the training and validation datasets. But from the error message, it seems like the transformation didn't work properly. The processed sample size is

```python
{'brain': (76, 76, 48), 'mri': (75, 75, 48)}
```

Could you help me solve it, please? What is the problem?
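In case it helps whoever picks this up: a quick way to check whether the transform actually ran on a sample (a sketch; `subject` stands for one item of the transformed dataset, and the shape check method is the one from the traceback earlier in this thread):

```python
import torchio as tio

transform = tio.CropOrPad((48, 60, 48))
transformed = transform(subject)  # subject: one sample from the dataset

# Every image should now have spatial shape (48, 60, 48)
for name, image in transformed.get_images_dict(intensity_only=False).items():
    print(name, image.spatial_shape)

transformed.check_consistent_spatial_shape()  # raises if shapes still differ
```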