Consider binarizing the ground truth after averaging across contrasts, before training a softseg model #99
Comments
I agree with these points. It would simplify the next steps. I think binarizing the GT at 0.5 makes sense, as it will represent the average of the 6 contrasts without encoding the registration errors. When we trained with nnUNet, we actually did binarize the GT, and nnUNet gave sharper boundaries / adapted better to the shape of the cord than the MONAI model (e.g., for spinal cord compressions). Maybe this will help here too!
Thank you for discussing these important points and summarizing them here. For points 2-5, I have nothing else to add, as I agree with all of them and they quite succinctly describe the issues we've been facing so far! On point 1:
This might have already been the case. The current version of the dataset is dominated by images/contrasts with smaller FoV (i.e. T2*, MTon, MToff, DWI) and the model has only seen blurry/oversegmented GT (due to possible mis-registration errors) during training. As a result, its tendency is to output predictions with (relatively) more voxels outside SC for these contrasts compared to T1w and T2w. As for the solutions, I have one question:
This is definitely true. This also means much quicker analyses on the lifelong learning aspect of the model.
Just to have it documented, here's the comparison between: (i) the soft output of the model trained directly on soft masks ( This gif was discussed in one of our meetings, showing that the model trained with the binarized soft-average GT is better at estimating the partial volume at the boundary for the T2star image. Note the ring of soft values shrinking for the
Closing this, as most of the recently released models were trained with the softseg approach.
Context
I reflected on the excellent discussion we had yesterday, and I came to the conclusion that keeping a soft mask after averaging across all contrasts is quite problematic for several reasons:
Solution
For all these reasons, I am wondering if binarizing the ground truth (with a 0.5 threshold) after averaging would solve many of our problems:
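The proposed preprocessing could be sketched as follows. This is a minimal numpy illustration, not the project's actual code: the function name, array shapes, and the `>=` convention at exactly 0.5 are assumptions.

```python
import numpy as np

def binarize_averaged_gt(soft_masks, threshold=0.5):
    """Average per-contrast soft ground-truth masks, then binarize.

    soft_masks: list of arrays with values in [0, 1], one per contrast,
                all in the same (registered) space.
    Returns a uint8 binary mask.
    """
    # Voxel-wise mean across contrasts (e.g., across the 6 contrasts)
    avg = np.mean(np.stack(soft_masks, axis=0), axis=0)
    # Threshold at 0.5: keep voxels covered by the cord in at least
    # half of the (soft) contrast masks
    return (avg >= threshold).astype(np.uint8)

# Toy example with two 2x2 "contrast" masks
masks = [np.array([[0.1, 0.6], [0.9, 0.4]]),
         np.array([[0.2, 0.8], [0.7, 0.3]])]
binary = binarize_averaged_gt(masks)
# averages [[0.15, 0.7], [0.8, 0.35]] -> binary [[0, 1], [1, 0]]
```

The model would then be trained on `binary` (optionally re-softened by the training pipeline), rather than on the raw soft average that encodes registration blur.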
Related to: