To integrate the HoVer-Net model into a segmentation pipeline that processes images as NumPy arrays already loaded in memory, is it possible to use the SlidingWindowInferer() on an image already loaded into Python?
I want to use the pre-trained model to do segmentation only (I am not interested in cell-type classification), because I do not have annotated data for fine-tuning: we have real experimental images that need segmentation, and thus no ground truth.
So far, following the README and the tutorial notebooks, I have been able to either:
Feed a cropped image to the model (cf. code below); however:
I do not know how to post-process the output to get the actual segmentation, as I would with a SlidingWindowInferer();
this would require writing the sliding-window algorithm by hand to process the whole image;
I am not sure which type of normalization I should implement.
Use the SlidingWindowInferer() to process an image stored on disk (cf. code below); however:
the segmentation result is rubbish (maybe related to the following point);
since I am using the pre-trained model, which checkpoint_path am I supposed to provide?
# define the final activations for each model output
out_activations = {"hovernet": "tanh", "type": "softmax", "inst": "softmax"}

# define whether to weight down the predictions at the image boundaries
# typically, models perform the poorest at the image boundaries and with
# overlapping patches this causes issues which can be overcome by down-
# weighting the prediction boundaries
out_boundary_weights = {"hovernet": True, "type": False, "inst": False}

# define the inferer
inferer = csmp.inference.SlidingWindowInferer(
    model=model,
    input_path=f"{image_dir}",
    checkpoint_path=None,
    out_activations=out_activations,
    out_boundary_weights=out_boundary_weights,
    instance_postproc="hovernet",  # THE POST-PROCESSING METHOD
    normalization="percentile",  # same normalization as in training
    patch_size=(256, 256),
    stride=128,
    padding=80,
    batch_size=1,
    device="cpu",  # or "cuda"
)

# inference
inferer.infer()

# result
inferer.out_masks
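For reference, a hand-rolled sliding-window loop over an in-memory array, with overlapping predictions averaged back together, could look like the sketch below. The dummy model here merely stands in for the trained network's forward pass, and the patch size and stride mirror the inferer settings above; this is not the library's actual implementation.

```python
import numpy as np

def sliding_window_predict(image, model_fn, patch_size=256, stride=128):
    """Tile an H x W x C image, run model_fn on each patch, and average
    overlapping predictions back into a full-size score map.
    model_fn is a placeholder for the trained network's forward pass."""
    h, w = image.shape[:2]
    out = np.zeros((h, w), dtype=np.float32)
    weight = np.zeros((h, w), dtype=np.float32)
    for y in range(0, max(h - patch_size, 0) + 1, stride):
        for x in range(0, max(w - patch_size, 0) + 1, stride):
            patch = image[y:y + patch_size, x:x + patch_size]
            pred = model_fn(patch)  # (patch_size, patch_size) score map
            out[y:y + patch_size, x:x + patch_size] += pred
            weight[y:y + patch_size, x:x + patch_size] += 1.0
    return out / np.maximum(weight, 1.0)

# demo with a dummy "model" that just returns the green channel
img = np.random.rand(512, 512, 3).astype(np.float32)
dummy = lambda p: p[..., 1]
mask = sliding_window_predict(img, dummy)
```

Down-weighting predictions near patch borders (as out_boundary_weights does) would replace the constant 1.0 weight with a tapered window.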
In the example, I use the following TCGA-E2-A14V-01Z-00-DX1.png image from the MoNuSeg dataset (available under the CC BY-NC-SA 4.0 license):
Thanks in advance.
I'm currently working on a release in which you can use NumPy arrays or torch tensors as the input to the SlidingWindowInferer, but at the moment there is no support for that. The release will be published within a couple of weeks.
The normalization of the image should always be the same as the one used during training/fine-tuning of the segmentation model.
When you train with Pannuke as in the notebook example linked above, you will also receive cell-type classes as outputs, but if there is no need for them you can just ignore them and focus on the instance segmentation results ('inst_map').
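If the training pipeline used percentile normalization (as normalization="percentile" suggests), a matching preprocessing step might look like the sketch below. The percentile values here are illustrative defaults; the exact values should be checked against the library's source.

```python
import numpy as np

def percentile_normalize(img, low=1.0, high=99.0):
    """Clip to the [low, high] percentiles and rescale to [0, 1].
    The percentile bounds here are illustrative, not the library's."""
    lo, hi = np.percentile(img, [low, high])
    img = np.clip(img.astype(np.float32), lo, hi)
    return (img - lo) / max(hi - lo, 1e-8)

x = np.arange(100, dtype=np.float32).reshape(10, 10)
y = percentile_normalize(x)
```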
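Ignoring the classification head could then be as simple as indexing the output for the instance map. The dict layout below is hypothetical, only meant to illustrate the idea; the real keys and structure of inferer.out_masks should be checked against the library.

```python
import numpy as np

# hypothetical output layout: per-image dicts keyed by output head
out_masks = {
    "sample1": {
        "inst": np.array([[0, 1, 1], [0, 2, 2], [0, 0, 2]]),  # instance labels
        "type": np.array([[0, 3, 3], [0, 1, 1], [0, 0, 1]]),  # cell-type labels
    }
}

# keep only the instance segmentation; drop the cell-type head entirely
inst_map = out_masks["sample1"]["inst"]
n_cells = len(np.unique(inst_map)) - (1 if 0 in inst_map else 0)  # 0 = background
```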