
No detection or inverse results #2

Open · JRevati opened this issue Feb 6, 2017 · 13 comments


JRevati commented Feb 6, 2017

Hi Nikki,

I am trying to replicate your model for brain tumour segmentation to explore image analysis tools and algorithms. After carefully implementing your model, I get either completely blank predictions or inverted results (the image is segmented in areas where the tumour is absent). I am using your pre-trained model with the weights downloaded from this repository. Could you please help me improve my results?
The few changes I made are listed here:
1. The BasicModel() used in Segmentation_Models.py seems undefined; I replaced it with SegmentationModel().
2. I downloaded the BRATS2015 dataset as-is. Are there any changes that need to be made to it before use?

Thanks in advance.

@build2create

@JRevati Same problem. Once the model is saved in models/examples.json and we run testing, we get an IOError:
IOError: cannot identify image file <open file '/home/adminsters/Documents/Training/HGG/brats_tcia_pat165_0001/VSD.Brain.XX.O.MR_T1c.40873/VSD.Brain.XX.O.MR_T1c.40873.mha', mode 'rb' at 0x7ff4cae90660>

But the file brats_tcia_pat165_0001/VSD.Brain.XX.O.MR_T1c.40873/VSD.Brain.XX.O.MR_T1c.40873.mha is in the correct location.
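For what it's worth, PIL cannot parse .mha volumes at all (hence "cannot identify image file"), so a sanity check is to read the file with SimpleITK directly; a minimal sketch, assuming SimpleITK is installed (the skimage 'simpleitk' plugin would be another option):

```python
import SimpleITK as sitk

# Read the .mha volume directly; PIL-based readers cannot identify this format.
img = sitk.ReadImage('VSD.Brain.XX.O.MR_T1c.40873.mha')
arr = sitk.GetArrayFromImage(img)   # numpy array of slices, e.g. (155, 240, 240)
print(arr.shape)
```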


JRevati commented Feb 13, 2017

@build2create Can you specify where exactly you get this error? Is it in brain_pipeline.py? In my case the T1c file name gets the suffix "_n" before the .mha extension (after running n4_bias_correction.py). So while building the pipeline it searches for the file VSD.Brain.XX.O.MR_T1c.36175_n.mha instead, per the code (t1_n4 = glob(self.path + '/*T1*/*_n.mha') in brain_pipeline). Did you run and check the output of that script first?
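A quick self-check along those lines (the patient path below is only an example):

```python
from glob import glob

# If n4_bias_correction.py has not been run, this pattern matches nothing,
# because only the N4-corrected files carry the _n suffix.
path = 'Training/HGG/brats_tcia_pat165_0001'   # example patient folder
t1_n4 = glob(path + '/*T1*/*_n.mha')
if not t1_n4:
    print('No *_n.mha files found - run n4_bias_correction.py first')
```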

@build2create

@JRevati @naldeborgh7575 Here is what I did, step by step:
Step 1: Ran n4_bias_correction.py (modified to take the arguments used in brain_pipeline, namely the path etc.) and saved the output to the respective folders with an _n.mha suffix for all T1/T1c volumes.
Step 2: Ran brain_pipeline.py (uncommenting the code for slices) with norm='n4', which saved slices to n4_PNG (see the comments in the code). I did not do the other two normalisations. Next I commented that out and ran the code to save the ground-truth labels. (Doubt 1: where is that Labels/* folder used later on?)
Step 3: Ran Segmentation_Models.py. I replaced train_data = glob('train_data/**') with train_data = glob('n4_PNG/*'). (Doubt 2: is this OK? n4_PNG contains all the N4-normalised images.) The sequential model ran for 10 epochs, each taking approximately 1800 seconds.
Step 4: Swapped the commented and uncommented portions to run the testing phase. Here I first tried the entire folder (i.e. replaced tests = glob('test_data/2_*') with the path to my testing folder); unfortunately that did not work out. So I tried a single image: for an .mha image I now get a ValueError on reshape, (5, 240, 240), though it does run for a PNG image. (Doubt 3: what should the input to tests = glob(?) be?)

The biggest doubt after all this is the use of the Labels/* ground-truth images (I believe the path Original_Data/Training/HGG/**/*more*/**.mha points to the ground truth). Another big doubt: if we go for loading the two-path CNN, Graph() is deprecated according to the latest Keras documentation. Simply helpless at this point. Please help.


JRevati commented Feb 14, 2017

@build2create This is what I think,
Last question first, along with Doubt 2: as you mentioned, Original_Data/Training/HGG/**/*more*/**.mha is the path to the ground truth; it is used to generate the ground-truth labels, which are appended to the strip created from the other four scan modalities, e.g. scans = [flair[0], t1[0], t1[1], t2[0], gt[0]] in brain_pipeline (see the toy sketch below). If you haven't used any of the other normalised forms, I don't think that will cause trouble. The labels are saved in the path you provide to the save_labels() method. Also, Graph is deprecated in later versions, so I used Keras 1.1.1 for that very reason; from 1.2.1 onwards, Graph() will not work. If you plan to train the model on your own, you need to either use the alternatives (e.g. the functional Model API) or downgrade your Keras version.
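As a toy illustration of that scans line (dummy arrays stand in for the real slices; the vertical stacking is my reading of the (5*240, 240) shape mentioned below):

```python
import numpy as np

# Two dummy (240, 240) slices per modality, in place of the real scans:
flair, t1, t2, gt = (np.zeros((2, 240, 240)) for _ in range(4))
scans = [flair[0], t1[0], t1[1], t2[0], gt[0]]   # the line from brain_pipeline
strip = np.concatenate(scans, axis=0)            # five bands stacked vertically
print(strip.shape)                               # (1200, 240), i.e. 5 * 240 rows
```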

Doubt 3: the input to the test glob should be the folder path where you saved the preprocessed test images (processed exactly like the training ones) from the downloaded BRATS testing folder. In my case this works. Note that you will have to run all the pipeline methods on the test images to get the same effect and dimensions. In patch_library.py you will find a comment where it is explicitly mentioned that images should have shape (5*240, 240) for training, which also applies to test images.
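For example, splitting a saved strip back into its five planes (the file name here is made up):

```python
from skimage import io

strip = io.imread('n4_PNG/165_70.png')   # a (1200, 240) grayscale strip
slices = strip.reshape(5, 240, 240)      # one (240, 240) plane per modality/label
```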

Doubt 1: the labels are used to feed y_train (see find_patches(), where you provide the labels folder path) and later to calculate the Dice coefficient, etc.

I haven't used the two-path model, so I cannot comment on it, but I hope the rest helps a bit.


build2create commented Feb 14, 2017

@JRevati Thanks for the reply. Just confirming what you said: "Note that you will have to run all the pipeline methods on the test images to get the same effect and dimensions. In patch_library.py you will find a comment where it is explicitly mentioned that images should have shape (5*240, 240) for training, which also applies to test images." This means we have to convert the test images in the BRATS set to PNG and reshape them to the required dimensions, right?

Another point: the current version of brain_pipeline.py generates n4_PNG (a folder of N4ITK-normalised images; see the comments in the code and the line io.imsave('n4_PNG/{}_{}.png'.format(patient_num, slice_ix), strip)). Here the dimension of each image is 1200x240; was that also the case for you? Did you use the modified code given by @umanghome in the pull requests section?

One last thing: you said you used the pre-trained model ("I am using your pre-trained model with available weights downloaded from this repository"). How did you do that? What are the steps for doing the testing directly, or do we need to train every time before we test?
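I would have guessed something like the following Keras 1.x sketch, using the models/examples.json mentioned above (the weights file name below is a guess on my part), but I am not sure:

```python
from keras.models import model_from_json

# Rebuild the architecture from the saved JSON, then load the published weights.
with open('models/examples.json') as f:
    model = model_from_json(f.read())
model.load_weights('models/examples_weights.hdf5')   # placeholder file name
# predictions = model.predict(X_test)                # X_test: your preprocessed patches
```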

@build2create

@JRevati Please confirm the questions above. I am really stuck here.

@lazypoet

@JRevati I got the same problem when I corrected and ran the code: black images. I think @naldeborgh7575 did not upload the correct model, because this is the same problem I faced.


lazypoet commented Apr 4, 2017

Okay, it seems like there might be a problem with your normalisation if your network is not learning.

@ujjwalbaid0408

Can you post which .py file we need to run first?


Jiyya commented Jul 25, 2017

[Attached image: brain_seg_workflow diagram]
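Roughly, summarising the steps discussed above (this is my reading of the thread, not an official README):

```python
# Run order implied by this thread:
# 1. n4_bias_correction.py   -> writes *_n.mha beside each T1/T1c volume
# 2. brain_pipeline.py       -> writes (1200, 240) PNG strips plus label images
# 3. Segmentation_Models.py  -> trains the CNN (or loads a saved model), then tests
```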

@tiantian-li

@JRevati I am running the code with BRATS2013, but I have run into some problems, so I want to try the BRATS2015 dataset. I can't download it successfully; I sincerely hope you can help me with the dataset.


Jiyya commented Dec 25, 2017 via email

@tiantian-li

@Jiyya I have tried, but I can't download BRATS2015 from http://www.smir.ch. If you have downloaded it successfully, could you please share it with me? Thanks.
