
Improving SC MP2RAGE manual segmentations #268

Open
Nilser3 opened this issue Oct 6, 2023 · 14 comments

Comments

@Nilser3

Nilser3 commented Oct 6, 2023

Description

It has been observed that the manual segmentations of the SC in the MP2RAGE datasets (basel-mp2rage, marseille-3T-mp2rage) have some issues, such as:

[image]

This issue is for improving these SC segmentations.
Here is the first QC for some MP2RAGE subjects: https://amubox.univ-amu.fr/s/FBAfYqcGwGXRGRy

Related issues

#266 #267

@jcohenadad
Member

jcohenadad commented Oct 6, 2023

@Nilser3 as discussed and agreed on (#266 (comment)), the plan was to apply the contrast-agnostic model to these data and then manually correct the output segmentations. Was there a misunderstanding somewhere?

Also, when sharing the QC, please also add the current segmentation so we can compare it with your corrections.

@Nilser3
Author

Nilser3 commented Oct 6, 2023

Understood.
I attach the QC here, where we can see:

  • GT: _label-SC_seg.nii.gz
  • contrast-agnostic model (binary thr 0.5): _pred_bin.nii.gz
  • Nilser correction: _rater2.nii.gz

@jcohenadad
Member

@Nilser3 based on what I am seeing, you corrected "_label-SC_seg.nii.gz", not "_pred_bin.nii.gz". This is different from what is described in #268 (comment).

@Nilser3
Author

Nilser3 commented Oct 6, 2023

Okay.
To do the manual correction, I used both masks (GT and pred_bin).
The main difference is the starting point of the SC (image attached):
[image]

So I'll go back to making corrections based on the "_pred_bin.nii.gz" alone.

@jcohenadad
Member

To do the manual correction, I used both masks (GT and pred_bin).

What do you mean by "I used both"? Did you mean visually, or algorithmically (eg: summation of both masks), or other? Please elaborate.

When flipping back-and-forth between the GT and _rater2, and then between pred_bin and _rater2, I notice that in most slices, the contour of the SC on _rater2 is more similar to GT than pred_bin. Given that the goal of the contrast-agnostic project is to avoid propagating the bias of the previous algo (deepseg_sc, which was applied to GT), we need to be very careful with choosing the starting point of the SC segmentation for manual correction. Tagging @sandrinebedard @naga-karthik @valosekj @plbenveniste who can further clarify if something is unclear in my explanation
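
One way to make this comparison quantitative, rather than only flipping back and forth in the QC, would be to compute the Dice coefficient of _rater2 against both the GT and pred_bin. Below is a minimal sketch using nibabel/numpy; the "sub-XX_..." file names are illustrative placeholders, not actual paths from the dataset:

```python
import nibabel as nib
import numpy as np

def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Illustrative file names following the convention used in this thread.
gt   = nib.load("sub-XX_label-SC_seg.nii.gz").get_fdata() > 0.5
pred = nib.load("sub-XX_pred_bin.nii.gz").get_fdata() > 0.5
r2   = nib.load("sub-XX_rater2.nii.gz").get_fdata() > 0.5

print(f"Dice(rater2, GT)       = {dice(r2, gt):.3f}")
print(f"Dice(rater2, pred_bin) = {dice(r2, pred):.3f}")
```

If Dice(rater2, GT) comes out systematically higher than Dice(rater2, pred_bin), that would confirm the visual impression that the corrections stay closer to the GT contours.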

@Nilser3
Author

Nilser3 commented Oct 6, 2023

What do you mean by "I used both"? Did you mean visually, or algorithmically (eg: summation of both masks), or other? Please elaborate.

Sorry, I was not clear.
Indeed, I summed the two masks (GT and pred_bin), then binarized the result, and finally corrected this new mask manually.
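
For reference, a minimal sketch of that sum-then-binarize step with nibabel/numpy (file names are illustrative); note that summing and binarizing two binary masks is equivalent to taking their voxel-wise union:

```python
import nibabel as nib
import numpy as np

# Load the two existing masks (illustrative file names).
gt_img   = nib.load("sub-XX_label-SC_seg.nii.gz")
pred_img = nib.load("sub-XX_pred_bin.nii.gz")

# Sum, then binarize: any voxel present in either mask is kept,
# i.e. the result is the voxel-wise union of the two segmentations.
summed = gt_img.get_fdata() + pred_img.get_fdata()
merged = (summed > 0).astype(np.uint8)

out = nib.Nifti1Image(merged, gt_img.affine)
out.set_data_dtype(np.uint8)
nib.save(out, "sub-XX_merged_for_correction.nii.gz")
```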

When flipping back-and-forth between the GT and _rater2, and then between pred_bin and _rater2, I notice that in most slices, the contour of the SC on _rater2 is more similar to GT than pred_bin

In general, the pred_bin masks seem eroded compared to the GT; perhaps that is why the corrections appear more similar to the GT.

@jcohenadad
Member

jcohenadad commented Oct 6, 2023

In general, the pred_bin masks seem eroded compared to the GT; perhaps that is why the corrections appear more similar to the GT.

Exactly. And that is precisely the issue. By summing the two SC segmentations, you will keep the bigger one, which might not be the 'most accurate' one.
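
A quick way to see this on a given subject is to compare voxel counts: the union can only be as large as or larger than each input, so the merged mask inherits the outer boundary of the bigger segmentation. A small sketch, again with illustrative file names:

```python
import nibabel as nib
import numpy as np

gt    = nib.load("sub-XX_label-SC_seg.nii.gz").get_fdata() > 0.5
pred  = nib.load("sub-XX_pred_bin.nii.gz").get_fdata() > 0.5
union = np.logical_or(gt, pred)

# The union can only grow: its voxel count is >= that of each input,
# so the merged mask follows the outer boundary of the larger segmentation.
print("voxels GT      :", int(gt.sum()))
print("voxels pred_bin:", int(pred.sum()))
print("voxels union   :", int(union.sum()))
```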

@Nilser3
Author

Nilser3 commented Oct 6, 2023

Thank you! Now it makes more sense to me.
So I will proceed to redo the segmentations, but based only on the pred_bin masks.

@Nilser3
Author

Nilser3 commented Oct 10, 2023

Here is a manual correction, based only on the binarized images (pred_bin) of the contrast-agnostic masks (thr = 0.5001).
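
For completeness, here is how such a binarization at thr = 0.5001 could be done with nibabel/numpy; this is a sketch with illustrative file names, not the exact command used to produce the masks:

```python
import nibabel as nib
import numpy as np

# Threshold the soft prediction at 0.5001: voxels with a value of exactly 0.5
# are excluded, everything at or above 0.5001 is kept (illustrative file names).
soft_img = nib.load("sub-XX_pred.nii.gz")
binary = (soft_img.get_fdata() >= 0.5001).astype(np.uint8)

out = nib.Nifti1Image(binary, soft_img.affine)
out.set_data_dtype(np.uint8)
nib.save(out, "sub-XX_pred_bin.nii.gz")
```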

@jcohenadad
Member

Here is a manual correction, based only on the binarized images (pred_bin) of the contrast-agnostic masks (thr = 0.5001).

You did a great job, Nilser! My only concern is that your manual correction seems to 'over-segment' compared to what the contrast-agnostic model produces. For example:

  • contrast-agnostic: [image]
  • your correction: [image]

I would suggest not altering the outer boundaries of the contrast-agnostic model too much (at the risk of introducing a bias when re-training the model with active learning), and primarily focusing on:

  • Fixing obvious under-segmentation caused by a lesion or artifact: [image]
  • Fixing obvious missing segmentation at the edge of a volume: [image]

Tagging @sandrinebedard @naga-karthik @valosekj @plbenveniste in case they have additional feedback

Also, feel free to upload the ZIP directly in this issue, in case the AMU link breaks in the future

@jcohenadad
Member

Proposed strategy: sct-pipeline/contrast-agnostic-softseg-spinalcord#84

@jcohenadad
Member

jcohenadad commented Dec 14, 2023

@Nilser3
Author

Nilser3 commented Jan 5, 2024

Here is a QC of the manual correction on the entire marseille-3T-mp2rage dataset, based only on the binarized contrast-agnostic masks.

Legend of the QC masks:

  • SC_seg.nii.gz -> sct_deepseg_sc, corrected by rater 1 (Samira)
  • SC_seg_rater2.nii.gz -> contrast-agnostic (version v2.0), binarized with threshold 0.5001, corrected by rater 2 (Nilser), then merged (union) with the MS lesion masks (label-lesion_rater2.nii.gz); see the sketch below
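
A minimal sketch of that union with the lesion mask, using nibabel/numpy and illustrative file names:

```python
import nibabel as nib
import numpy as np

# Merge the corrected SC mask with the MS lesion mask (voxel-wise union),
# so that lesion voxels are included in the final cord segmentation.
sc_img     = nib.load("sub-XX_label-SC_seg_rater2.nii.gz")
lesion_img = nib.load("sub-XX_label-lesion_rater2.nii.gz")

union = np.logical_or(sc_img.get_fdata() > 0.5,
                      lesion_img.get_fdata() > 0.5).astype(np.uint8)

out = nib.Nifti1Image(union, sc_img.affine)
out.set_data_dtype(np.uint8)
nib.save(out, "sub-XX_label-SC_seg_rater2_with_lesion.nii.gz")
```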

@jcohenadad
Member

I suggest we put a hold on the correction of the masks until this issue is settled: sct-pipeline/contrast-agnostic-softseg-spinalcord#99
