
MultiThreadedAugmenter issues #94

Open
designer00 opened this issue Mar 24, 2022 · 4 comments

@designer00
Problem:

Traceback (most recent call last):
  File "/data/home/zhangyinglin/anaconda3/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/data/home/zhangyinglin/anaconda3/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "/data/home/zhangyinglin/anaconda3/lib/python3.7/site-packages/batchgenerators/dataloading/multi_threaded_augmenter.py", line 36, in producer
    data_loader.set_thread_id(thread_id)
AttributeError: '_MultiProcessingDataLoaderIter' object has no attribute 'set_thread_id'
Exception in thread Thread-2:
Traceback (most recent call last):
  File "/data/home/zhangyinglin/anaconda3/lib/python3.7/threading.py", line 926, in _bootstrap_inner
    self.run()
  File "/data/home/zhangyinglin/anaconda3/lib/python3.7/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/data/home/zhangyinglin/anaconda3/lib/python3.7/site-packages/batchgenerators/dataloading/multi_threaded_augmenter.py", line 92, in results_loop
    raise RuntimeError("Abort event was set. So someone died and we should end this madness. \nIMPORTANT: "
RuntimeError: Abort event was set. So someone died and we should end this madness.
IMPORTANT: This is not the actual error message! Look further up to see what caused the error. Please also check whether your RAM was full

Traceback (most recent call last):
  File "/data/home/zhangyinglin/workspace/pytorch-unet-cornea/transform.py", line 126, in <module>
    multithreaded_generator.next()
  File "/data/home/zhangyinglin/anaconda3/lib/python3.7/site-packages/batchgenerators/dataloading/multi_threaded_augmenter.py", line 182, in next
    return self.__next__()
  File "/data/home/zhangyinglin/anaconda3/lib/python3.7/site-packages/batchgenerators/dataloading/multi_threaded_augmenter.py", line 206, in __next__
    item = self.__get_next_item()
  File "/data/home/zhangyinglin/anaconda3/lib/python3.7/site-packages/batchgenerators/dataloading/multi_threaded_augmenter.py", line 190, in __get_next_item
    raise RuntimeError("MultiThreadedAugmenter.abort_event was set, something went wrong. Maybe one of "
RuntimeError: MultiThreadedAugmenter.abort_event was set, something went wrong. Maybe one of your workers crashed. This is not the actual error message! Look further up your stdout to see what caused the error. Please also check whether your RAM was full

Process finished with exit code 1

Code as follows:

from batchgenerators.transforms.spatial_transforms import SpatialTransform, MirrorTransform
from batchgenerators.dataloading.nondet_multi_threaded_augmenter import NonDetMultiThreadedAugmenter
from batchgenerators.dataloading.multi_threaded_augmenter import MultiThreadedAugmenter
from batchgenerators.dataloading.data_loader import DataLoaderFromDataset
from batchgenerators.transforms.abstract_transforms import Compose
import numpy as np
from utils.dataset import BasicDataset
from torch.utils.data import DataLoader
import os
import matplotlib.pyplot as plt

-------------------------------------------------------

data_path = '/data2/imed-data/zhangyinglin/Other_Dataset/'
data_name = 'Ciliary_split2'
train_dir = os.path.join(data_path, data_name, '', 'train/')
val_dir = os.path.join(data_path, data_name, '', 'val/')
final_weight = os.path.join(data_path, data_name, '', 'weights/final/')
body_weight = os.path.join(data_path, data_name, '', 'weights/body/')
canny_weight = os.path.join(data_path, data_name, '', 'weights/edge/')
dir_mask = os.path.join(data_path, data_name, "", "final_mask/")
canny_mask = os.path.join(data_path, data_name, "", "canny_mask/")
body_mask = os.path.join(data_path, data_name, "", "body_mask/")

img_size = 192
img_scale = 1

-------------------------------------------------------

transform = []

params = {'do_elastic': False,
          'elastic_deform_alpha': (0., 200.),
          'elastic_deform_sigma': (9., 13.),
          'rotation_x': (-30. / 360 * 2. * np.pi, 30. / 360 * 2. * np.pi),
          'rotation_y': (-30. / 360 * 2. * np.pi, 30. / 360 * 2. * np.pi),
          'rotation_z': (-30. / 360 * 2. * np.pi, 30. / 360 * 2. * np.pi),
          'rotation_p_per_axis': 1,
          'do_scaling': True,
          'scale_range': (0.7, 1.4),
          'border_mode_data': 'constant',
          'random_crop': False,
          'p_eldef': 0.2,
          'p_scale': 0.2,
          'p_rot': 0.2,
          'independent_scale_factor_for_each_axis': 1,
          'patch_size': np.asarray([192, 192])
          }

transform.append(SpatialTransform(
    patch_size=params['patch_size'], patch_center_dist_from_border=None,
    do_elastic_deform=False, alpha=params['elastic_deform_alpha'],
    sigma=params['elastic_deform_sigma'],
    do_rotation=True, angle_x=params["rotation_x"], angle_y=params["rotation_y"],
    angle_z=params["rotation_z"], p_rot_per_axis=params["rotation_p_per_axis"],
    do_scale=params["do_scaling"], scale=params["scale_range"],
    border_mode_data=params["border_mode_data"], border_cval_data=0, order_data=3,
    border_mode_seg="constant", border_cval_seg=-1,
    order_seg=1, random_crop=params["random_crop"], data_key='data', label_key='seg',
    p_el_per_sample=params["p_eldef"],
    p_scale_per_sample=params["p_scale"], p_rot_per_sample=params["p_rot"],
    independent_scale_for_each_axis=params["independent_scale_factor_for_each_axis"]
))

if __name__ == '__main__':
    tr_transform = Compose(transform)
    # `train` is the BasicDataset instance; its construction was omitted in the original post
    train_loader = DataLoader(train, batch_size=1, shuffle=True, num_workers=4, pin_memory=True)
    batchgen = iter(train_loader)

    multithreaded_generator = MultiThreadedAugmenter(batchgen, tr_transform, 1, 1, seeds=None)
    multithreaded_generator.next()

@designer00 (Author)

I use MultiThreadedAugmenter to apply the transforms, but I run into the problems above. How can I solve this? I wish the library could be modified to be compatible with PyTorch's transform interface, so that it could be used more easily.
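For what it's worth, the first traceback points at the likely root cause: `MultiThreadedAugmenter`'s producer calls `data_loader.set_thread_id(thread_id)`, a method batchgenerators data loaders provide but a PyTorch `_MultiProcessingDataLoaderIter` (what `iter(train_loader)` returns) does not. One conceivable workaround is an adapter that wraps the PyTorch loader into the shape batchgenerators expects: dict batches with 'data' and 'seg' keys holding numpy arrays, plus the missing method. The class below (`TorchLoaderAdapter` is a made-up name, not part of either library) is a minimal sketch of that idea, not an official API:

```python
import numpy as np


class TorchLoaderAdapter:
    """Hypothetical sketch: wrap an iterable of (image, mask) batches so it
    looks enough like a batchgenerators data loader to be handed to
    MultiThreadedAugmenter (dict batches + set_thread_id)."""

    def __init__(self, loader):
        self.loader = loader
        self._iter = None
        self.thread_id = 0

    def set_thread_id(self, thread_id):
        # This is the method the producer process calls and that a raw
        # PyTorch DataLoader iterator is missing.
        self.thread_id = thread_id

    def __iter__(self):
        self._iter = iter(self.loader)
        return self

    def __next__(self):
        if self._iter is None:
            self._iter = iter(self.loader)
        img, seg = next(self._iter)
        # batchgenerators transforms expect numpy arrays shaped (b, c, x, y)
        return {'data': np.asarray(img), 'seg': np.asarray(seg)}
```

With this, one would pass `TorchLoaderAdapter(train_loader)` (not `iter(train_loader)`) to `MultiThreadedAugmenter`. The cleaner route, though, is probably to skip the PyTorch `DataLoader` entirely and subclass one of batchgenerators' own loader base classes, as its examples do.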

@FabianIsensee (Member)

Hi, can you please post the entire error message? There are parts missing; all I see is the workers reporting that something happened, but the important bit isn't there.
A simple way to debug would be to use SingleThreadedAugmenter during development. It will give clearer error messages. Once that works, you can simply replace it with MultiThreadedAugmenter and it will work.
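The reason this advice helps: a single-threaded augmenter is, in essence, just a loop that pulls a batch dict and applies the composed transform in the calling process, so any exception surfaces with a plain, readable traceback instead of the abort-event indirection seen above. The stripped-down class below illustrates that idea; it is my own minimal reimplementation for explanation, not the batchgenerators source:

```python
class MiniSingleThreadedAugmenter:
    """Minimal sketch of the single-threaded augmenter idea: pull a batch
    dict from the loader, apply the transform in-process, return it.
    Exceptions in the loader or transform propagate directly, which makes
    debugging much easier than with worker processes."""

    def __init__(self, data_loader, transform):
        self.data_loader = data_loader
        self.transform = transform

    def __iter__(self):
        return self

    def __next__(self):
        item = next(self.data_loader)       # a dict like {'data': ..., 'seg': ...}
        if self.transform is not None:
            item = self.transform(**item)   # batchgenerators transforms take **kwargs
        return item
```

Once the pipeline runs cleanly through something like this, swapping in MultiThreadedAugmenter only changes where the work happens, not what the loader and transform must provide.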

@designer00 (Author)

designer00 commented Mar 28, 2022 via email

@FabianIsensee (Member)

does it work now?
