optimisation approaches for cutting many cells in close proximity while preserving membrane integrity #5
Hi Sophia,
Thanks for including us in the discussion.
Just one thought: it would be good to know the exact number of contours per sample after filtering, before going to the LMD. We need to make sure that when we upload the contours into the LMD software we aren't surprised to find fewer contours left than we need/want per single sample.
Best
Fabian
----
Dr. rer. nat. Fabian Coscia
Spatial Proteomics Group
Max Delbrück Center for Molecular Medicine
Robert-Rössle-Straße 10
Building 31.2, Room 0221
13125 Berlin
Germany
Hi Fabian,

there is already a function for shape collections called Collection.stats(). You could call it after loading a segmentation, similar to:

```python
sl = SegmentationLoader(config=loader_config, verbose=False)
shape_collection = sl(segmentation,
                      cell_sets,
                      calibration_points)
shape_collection.stats()
```

It will give you information on the number of shapes and the number of vertices:

```
===== Collection Stats =====
Number of shapes: 7
Number of vertices: 4,913
============================
Mean vertices: 702
Min vertices: 599
5% percentile vertices: 617
Median vertices: 687
95% percentile vertices: 811
Max vertices: 839
```

I've used it so far to optimize the compression of shapes. Let me know if this is what you are looking for.

Best,
Georg
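Building on such counts, Fabian's concern above (not being surprised by missing contours after upload) could be addressed with a small pre-flight check. The helper below is a hypothetical sketch, not part of py-lmd: it assumes the contours have already been grouped into a plain `{sample_name: [contour, ...]}` dict and that a minimum count per sample is known.

```python
# Hypothetical pre-flight check before uploading contours to the LMD software.
# `contours_by_sample` and `required_per_sample` are illustrative names, not a
# py-lmd API: the idea is simply to fail loudly if filtering left any sample
# with fewer contours than needed.
def check_contour_counts(contours_by_sample, required_per_sample):
    """Return {sample: contour count}; raise if any sample is below the minimum."""
    counts = {name: len(contours) for name, contours in contours_by_sample.items()}
    short = {name: c for name, c in counts.items() if c < required_per_sample}
    if short:
        raise ValueError(f"samples below {required_per_sample} contours: {short}")
    return counts
```

Calling this right after filtering (and before XML export) would surface the problem while it is still cheap to adjust the filter settings.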
Great, thanks Georg! We will try it out and report.
Best
Fabian
Regarding the cutting strategy @sophiamaedler @GeorgWa: to me it makes more sense to implement the cutting path at the XML export step. I am not sure exactly what loading the segmentation mask means. Is this implementation only changing the order in which the contours are cut, or are some contours simply skipped for the sake of the integrity of the membrane? Best,
Hi @josenimo,
Hey @sophiamaedler, in my mind having an exact number of cells for each group is not essential; our goal is to get enough cells for each group to observe their proteins. Thoughts, @fabsen-87?
As has been discussed offline it would be ideal to develop an optimisation approach to allow for the cutting of cells in close proximity while preserving membrane integrity. I have created this GitHub Issue to discuss in more detail with all interested parties (@fabsen-87, @josenimo, @LisaSchweizer) to ensure that we are implementing a tool that meets everyone's needs.
Based on the feedback I have received so far, a first approach could be to selectively eliminate individual cells from densely segmented slide areas, making sure that the membrane stays intact, that we don't collect any "wrong" membrane areas, and that we don't lose membrane integrity. This would probably be implemented in an iterative fashion using proximity and area filters. We would of course lose some cells, but would in return be able to collect all other cells without risk of contamination or of destroying the sample.
One question that @GeorgWa and I had was at what point in the processing pipeline it would make the most sense from your perspective to implement such a filter:
(1) when loading the segmentation mask or
(2) when actually generating the cutting XML
In addition, if you have any specific requirements that such a tool would need to fulfil, it would be great if you could quickly outline them here.
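For what it's worth, the iterative proximity/area filtering described above could be sketched roughly as follows. This is a pure-Python illustration under stated assumptions, not an existing implementation: contours are lists of (x, y) vertices, contour-to-contour distance is approximated by the closest pair of vertices, and the `min_gap` threshold (in the segmentation's coordinate units) is hypothetical.

```python
# Illustrative sketch of the proposed filter: iteratively drop the smaller of
# any two contours whose outlines come closer than `min_gap`, so the remaining
# cells can be cut without risking the membrane between them.
from itertools import combinations
import math

def polygon_area(contour):
    """Shoelace formula for a closed contour given as [(x, y), ...]."""
    area = 0.0
    n = len(contour)
    for i in range(n):
        x1, y1 = contour[i]
        x2, y2 = contour[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def min_vertex_distance(a, b):
    """Closest vertex-to-vertex distance between two contours (an approximation
    of the true outline-to-outline distance)."""
    return min(math.dist(p, q) for p in a for q in b)

def proximity_filter(contours, min_gap):
    """Repeatedly remove the smaller contour of any pair closer than min_gap."""
    kept = list(contours)
    changed = True
    while changed:
        changed = False
        for a, b in combinations(kept, 2):
            if min_vertex_distance(a, b) < min_gap:
                kept.remove(a if polygon_area(a) < polygon_area(b) else b)
                changed = True
                break  # restart: the pair set changed
    return kept
```

This greedy version answers one of the questions in the thread implicitly: contours are skipped (not just reordered), trading a few cells for membrane integrity; a real implementation would also need the area filter and a smarter choice of which cell to drop.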