Replies: 6 comments 4 replies
-
Hi @atnassar, it looks like there's a nii volume with a spatial dimension smaller than the ROI crop size you specified. After applying the crop, the resulting tensors have different shapes and cannot be stacked into a batch.
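One way around that is to pad every volume up to the ROI size before the random crop, which is what MONAI transforms like `SpatialPadd` / `ResizeWithPadOrCropd` do. A minimal numpy-only sketch of the padding step (function name and symmetric-padding choice are illustrative, not from the example code):

```python
import numpy as np

def pad_to_min_size(volume, roi_size):
    """Symmetrically zero-pad each spatial axis up to roi_size.

    Axes already >= roi_size are left untouched, so a later crop of
    roi_size is always valid.
    """
    pads = []
    for dim, target in zip(volume.shape, roi_size):
        total = max(target - dim, 0)
        pads.append((total // 2, total - total // 2))
    return np.pad(volume, pads)
```

For example, a `(224, 129, 32)` volume padded with `roi_size=(224, 224, 32)` comes out as `(224, 224, 32)`, which would avoid the shape mismatch in the traceback below.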
-
For a different task, you most likely cannot directly reuse the transforms / configurations designed for prostate on liver, given that they involve different modalities and different organs. You will need to write your own customized learner, similar to this one: https://github.com/NVIDIA/NVFlare/blob/main/examples/prostate/pt/learners/supervised_prostate_learner.py
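One concrete example of that modality difference: the prostate example normalizes MR intensities, while LiTS is CT, where you would typically clip to a Hounsfield-unit window and rescale (the analog of MONAI's `ScaleIntensityRanged`). A numpy sketch, where the HU window `[-21, 189]` is a commonly used liver window and is an assumption, not something taken from this thread:

```python
import numpy as np

def window_ct(volume, a_min=-21.0, a_max=189.0):
    """Clip HU values to [a_min, a_max] and rescale to [0, 1]."""
    clipped = np.clip(volume.astype(np.float32), a_min, a_max)
    return (clipped - a_min) / (a_max - a_min)
```

A customized liver learner would swap a step like this in for the MR normalization, and would also change the number of output channels (background / liver / tumor rather than a binary prostate mask).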
-
We just had the NVFlare Dev Day at GTC https://www.nvidia.com/gtc/session-catalog/?search=SE1991&search=SE1991&tab.scheduledorondemand=1583520458947001NJiE#/session/1638904712976001sY6W |
-
The training transforms are defined here in the prostate example.
-
I see. There's a training configuration file: |
-
Hi @atnassar, was your issue resolved? |
-
Hello,
I am following the steps in the prostate example to implement liver & liver tumor segmentation in a similar way.
I used the LITS dataset (Liver Tumor Segmentation Challenge), which contains 131 NIfTI volumes (512x512) and 131 NIfTI segmentation masks. I split the 131 images and masks across 4 sites and ran the experiments the same way as the prostate example.
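The split described above can be sketched as follows (round-robin assignment of case ids to sites; the helper name is illustrative and the actual prostate example generates JSON datalists instead):

```python
def split_cases(case_ids, n_sites):
    """Round-robin the case ids into n_sites roughly equal lists."""
    return [case_ids[i::n_sites] for i in range(n_sites)]

# 131 LITS cases spread across 4 sites: three sites get 33, one gets 32.
sites = split_cases(list(range(131)), 4)
```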
I ran into the errors below and need your help to resolve them:
```
Intel MKL ERROR: Parameter 4 was incorrect on entry to cblas_dgemm.
pixdim[0] (qfac) should be 1 (default) or -1; setting qfac to 1
LearnerExecutor - ERROR - [run=2, peer=example_project, peer_run=2, task_name=train, task_id=ac3edbb7-7c99-4988-95ff-22e3b42ef483]: learner execute exception: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/anaconda3/envs/nvflare/lib/python3.8/site-packages/monai/data/utils.py", line 274, in list_data_collate
    return default_collate(data)
  File "/home/anaconda3/envs/nvflare/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 74, in default_collate
    return {key: default_collate([d[key] for d in batch]) for key in elem}
  File "/home/anaconda3/envs/nvflare/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 74, in <dictcomp>
    return {key: default_collate([d[key] for d in batch]) for key in elem}
  File "/home/anaconda3/envs/nvflare/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 56, in default_collate
    return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [1, 224, 224, 32] at entry 0 and [1, 224, 129, 32] at entry 2

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/anaconda3/envs/nvflare/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/anaconda3/envs/nvflare/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
    return self.collate_fn(data)
  File "/home/anaconda3/envs/nvflare/lib/python3.8/site-packages/monai/data/utils.py", line 285, in list_data_collate
    raise RuntimeError(re_str) from re
RuntimeError: stack expects each tensor to be equal size, but got [1, 224, 224, 32] at entry 0 and [1, 224, 129, 32] at entry 2

MONAI hint: if your transforms intentionally create images of different shapes, creating your
DataLoader with collate_fn=pad_list_data_collate might solve this problem (check its documentation).
```

**Note:** I think [224, 224, 32] is the ROI size used in the prostate example, but I am not sure what it should be for liver and liver tumor segmentation.