Optimizing hyperparameters #19
Hi! Let me bump this. It would be great to get some input on tuning, e.g., bs_size, patch_size, etc.
Hi @sdalumlarsen, thank you for your interest in our method and for using it. Regarding the blind spot size, please refer to my previous comment on issue #18. In summary, we generally follow the default settings for processing data, except for adjustments to the blind spot size and unet_channels. Let me know if you have any further questions.
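For reference, the settings mentioned in this thread could be collected in one place along the lines of the sketch below. This is only a hedged illustration: the parameter names (patch_size, bs_size, unet_channels, batch_size) are taken from the discussion, and all values other than the [61, 128, 128] patch_size are placeholders rather than confirmed defaults of the repository.

```python
# Hypothetical summary of the hyperparameters discussed in this thread.
# Only patch_size = [61, 128, 128] is stated as the default here; the other
# values are illustrative placeholders and may differ from the repository's
# actual defaults.
config = {
    "patch_size": [61, 128, 128],              # [T, Y, X] training patch size (default per this thread)
    "bs_size": 3,                              # blind spot size; tune per dataset (see issue #18)
    "unet_channels": [16, 32, 64, 128, 256],   # network capacity; placeholder values
    "batch_size": 16,                          # placeholder; adjust for GPU memory
}
```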
Hi Steve, thanks a lot - this is helpful. One further thing: we sometimes get low-amplitude patch artefacts in (non-training-set) data. Is this something you have observed? [TR: edit - I assume this is linked to https://github.com//issues/21#issuecomment-2613736317]
Hmm, we've rarely encountered patch artifacts. Your observation seems relevant to that comment in issue #21. If the patch_interval parameter is relatively large, close to the patch_size, patch artifacts may occur, especially when the patch_size is small. As I mentioned in issue #21, I recommend setting the patch_interval to half the patch_size in the x and y axes.
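For concreteness, a minimal sketch of that recommendation, using the [T, Y, X] patch layout from this thread; the temporal stride of 1 is an assumption, not something stated here:

```python
# Sketch of the half-patch-stride recommendation above. Parameter names follow
# this thread; the temporal stride of 1 is an assumption.
patch_size = [61, 128, 128]                                   # [T, Y, X] patch used for training/inference
patch_interval = [1, patch_size[1] // 2, patch_size[2] // 2]  # stride: half the patch in Y and X
print(patch_interval)  # -> [1, 64, 64]
```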
Doing this now. Also: testing different training parameters and a different training set. There could have been some background illumination parameters that did not fit this test data well.
We currently train with the default patch_size of [61, 128, 128]. Our data is 256x256, though. Would you recommend decreasing the training patch size to 64x64 and then setting the inference patch_interval to 32x32?
I think you don't need to decrease the patch_size and patch_interval, as doing so may not resolve the stitching artifact issue. If the current patch artifact is due to a mismatch between training and test data, you could train another model specifically on the dataset where the artifact appears (the test data with the artifact) and check whether training on this data resolves the issue. If the artifact disappears, it may indicate that the problem stems from a large difference between the training and test data.
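One way to see that smaller patches do not reduce the number of seams is a quick tiling count for a 256x256 frame under the two settings discussed above (half-patch stride in both cases). This is plain arithmetic, not the repository's actual stitching code:

```python
# Count how many overlapping xy patches tile a 256x256 frame for the two
# settings discussed above (stride = half the patch in both cases).
def n_patches(frame: int, patch: int, stride: int) -> int:
    """Number of patch positions along one axis so the frame is fully covered."""
    return max(1, -(-(frame - patch) // stride) + 1)  # ceil((frame - patch) / stride) + 1

for patch, stride in [(128, 64), (64, 32)]:
    n = n_patches(256, patch, stride)
    print(f"patch {patch}, stride {stride}: {n} x {n} = {n * n} patches per frame")
# patch 128, stride 64: 3 x 3 = 9 patches per frame
# patch 64, stride 32: 7 x 7 = 49 patches per frame
```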
Hi SUPPORT,
I have used your system extensively on a number of volumetric datasets and I am very pleased with the results. However, I would still like to see if I can improve the denoising. Obviously, some parameters, such as the blind spot size, are VERY dependent on the nature of the data, but I was wondering if the default values for the capacity given by the channel sizes, the depth, and the batch size are a tradeoff between performance and training/inference time, or if they actually represent an approximate optimum for performance in the face of overfitting, etc. This would be for large volumetric datasets with a size of, let's say, (1500, 1500, 10000).
If you would prefer, we can communicate by email as well, I just thought any potential answers could be useful to others.
Thank you for your time and this wonderful tool.