No detection or inverse results #2
@JRevati Same problem. Once the model is saved in models/examples.json and we run the test, we get an IO error, but the file …
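Since the IO error above is reported while loading models/examples.json, one way to isolate the cause is to check the file directly before Keras ever touches it. A minimal sketch (the default path is taken from the comment above; the function name is illustrative, not part of the repo):

```python
import json
import os

def check_model_json(path='models/examples.json'):
    # Distinguish a missing/unreadable file from malformed JSON
    # before handing the path to the model-loading code.
    if not os.path.exists(path):
        return 'missing: ' + path
    try:
        with open(path) as f:
            json.load(f)
    except ValueError as exc:
        return 'malformed JSON: %s' % exc
    return 'ok'
```

If this returns 'ok', the IO error is more likely coming from the weights file or from a path mismatch elsewhere in the test script.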
@build2create Can you specify where exactly you get this error? Is it in …
@JRevati @naldeborgh7575 Basically, this is what I did, in steps. My biggest doubt after all this is the use of the Label/* ground-truth images (I believe that this path …
@build2create This is what I think. Doubt 3: the input to the test glob should be the folder path where you saved the preprocessed test images (produced the same way as the training ones) from the downloaded BRATS_testing folder. In my case this works. Note that you will have to apply all the pipelined methods to the testing images to get the same effect and dimensions. On Doubt 1: the labels are used to feed y_train (ref. …). I haven't used the two-path model, so I can't comment on it, but I hope the rest helps a bit.
@JRevati Thanks for the reply. Just confirming: as you said, "Note that you will have to use all the pipelined methods for testing images to get the same effect and dimensions. In patch_library.py you will find a comment where it is explicitly mentioned that the images should have shape (5*240, 240) for training images, which also applies to test images." This means we have to convert the test images in the BRATS training set to PNG and reshape them to the required dimensions, right? Another point: the current version of brain_pipeline.py generates n4_PNG (a folder of n4ITK-normalized images; see the comments in the code and line …). One last thing: you said you used the pre-trained model ("I am using your pre-trained model with available weights downloaded from this repository"). How did you do that? What are the steps for running the test directly, or do we need to train every time before we test?
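For context, the (5*240, 240) shape discussed above comes from stacking five 240x240 slices vertically into one strip image. A minimal sketch of that stacking, assuming per-slice percentile-clip normalization and a FLAIR/T1/T1c/T2/label ordering (both are assumptions for illustration, not the repo's exact pipeline):

```python
import numpy as np

def normalize_slice(slice_2d):
    # Assumed normalization scheme: clip to the 1st/99th intensity
    # percentiles, then rescale to zero mean and unit variance.
    top = np.percentile(slice_2d, 99)
    bottom = np.percentile(slice_2d, 1)
    clipped = np.clip(slice_2d, bottom, top)
    std = clipped.std()
    if std == 0:
        return clipped
    return (clipped - clipped.mean()) / std

def stack_modalities(flair, t1, t1c, t2, gt):
    # Each argument is one (240, 240) slice; the result is the
    # (5*240, 240) strip that patch_library.py expects. The
    # ground-truth label slice is kept un-normalized at the bottom.
    slices = [normalize_slice(s) for s in (flair, t1, t1c, t2)]
    slices.append(gt)
    return np.concatenate(slices, axis=0)
```

Whatever the exact normalization, the key point from the comment above stands: the test images must go through the identical steps, or the strip shapes and intensity ranges will not match what the network was trained on.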
@JRevati Please confirm the above questions. I am really stuck here.
@JRevati I got the same problem when I corrected and ran the code: black images. I think @naldeborgh7575 did not upload the correct model, because this is the same problem I faced.
Okay, it seems like there might be a problem with your normalization if your network is not learning.
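One quick way to test the normalization hypothesis above is to compare the intensity statistics of the arrays actually fed to the network at train and test time. A small sketch (the tolerance value is an arbitrary assumption, and the function name is illustrative):

```python
import numpy as np

def compare_stats(train_batch, test_batch, tol=0.5):
    # If the test images skipped part of the preprocessing
    # pipeline, their mean/std will differ sharply from the
    # training inputs, and the net tends to predict a single
    # class everywhere (e.g. all-black label maps).
    t_mean, t_std = float(train_batch.mean()), float(train_batch.std())
    s_mean, s_std = float(test_batch.mean()), float(test_batch.std())
    mismatched = abs(t_mean - s_mean) > tol or abs(t_std - s_std) > tol
    return {'train': (t_mean, t_std),
            'test': (s_mean, s_std),
            'mismatched': mismatched}
```

If `mismatched` comes back True, the fix is usually to run the test images through the same normalization step as the training images, not to retrain.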
Can you post which .py file we need to run first? |
@JRevati I am running the code with BRATS2013, but I am hitting some problems, so I want to try the BRATS2015 dataset. I can't download it successfully; I sincerely hope you can help me with the dataset.
Download it from here: www.smir.ch
(replied by email, Friday, December 22, 2017)
@Jiyya I have tried, but I can't download BRATS2015 from http://www.smir.ch. If you have downloaded it successfully, could you share it with me? Thanks.
Hi Nikki,
I am trying to replicate your model for brain tumour segmentation to explore image analysis tools and algorithms. After carefully implementing your model, I got either completely blank predictions or negated results (the image had slices marked in areas where the tumour is absent). I am using your pre-trained model with the available weights downloaded from this repository. Could you please help me improve my results?
A few changes I made are listed here:
1. The BasicModel() used in SegmentationModels.py seems undefined; I replaced it with SegmentationModel().
Thanks in advance.
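To make "completely blank predictions" concrete, it helps to report the fraction of non-background pixels in each predicted label map instead of eyeballing PNGs. A small helper (purely illustrative; it assumes label 0 is background, as is conventional for BRATS label maps):

```python
import numpy as np

def prediction_summary(pred_map):
    # pred_map: 2-D array of predicted class labels, 0 = background.
    # A healthy run should report a small but nonzero tumor fraction
    # on slices that actually contain tumor.
    nonzero = int(np.count_nonzero(pred_map))
    return {'nonzero_pixels': nonzero,
            'tumor_fraction': nonzero / float(pred_map.size),
            'classes_present': sorted(np.unique(pred_map).tolist())}
```

A summary with `nonzero_pixels == 0` on every test slice points at a loading or normalization problem rather than a visualization one.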