- Download the MUG500+ Dataset: [Data Repo], [Descriptor]
- Unzip and extract the .nrrd files into one folder:
```python
from pathlib import Path
import shutil

# collect all .nrrd files from the extracted archive
pathlist = Path('./9616319').glob('**/*.nrrd')
for path in pathlist:
    path_in_str = str(path)
    # the target folder ./complete_nrrds/ must exist beforehand
    shutil.copyfile(path_in_str, './complete_nrrds/' + path_in_str[-10:-5] + '.nrrd')
    print(path_in_str)
```
- Denoise and Crop the Skulls
The denoising code comes from this repository.
The axial dimension of all skull images is cropped to 256; if the axial dimension is smaller than 256, zero padding can be used instead.
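The crop/pad rule above can be sketched as follows. This is a hypothetical NumPy helper (not part of the repository's denoising code), assuming the last axis of the volume is the axial dimension:

```python
import numpy as np

def crop_or_pad_axial(volume: np.ndarray, target: int = 256) -> np.ndarray:
    """Center-crop the axial (last) axis to `target`, or zero-pad if smaller."""
    z = volume.shape[-1]
    if z > target:
        start = (z - target) // 2
        return volume[..., start:start + target]
    if z < target:
        before = (target - z) // 2
        after = target - z - before
        pad = [(0, 0)] * (volume.ndim - 1) + [(before, after)]
        return np.pad(volume, pad, mode="constant")
    return volume
```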
- Create Facial and Cranial Defects on the Skulls

```shell
python facialDefects.py   # create defects around the face
python cranialDefects.py  # create defects around the cranium
```
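For intuition, a synthetic defect amounts to removing a region from a binary skull volume. The helper below is a hypothetical sketch; the shapes and locations used by `facialDefects.py` and `cranialDefects.py` may differ:

```python
import numpy as np

def inject_cuboid_defect(skull: np.ndarray, center, size) -> np.ndarray:
    """Return a copy of `skull` with a cuboid region zeroed out.

    `center` and `size` are per-axis voxel coordinates/extents.
    """
    defective = skull.copy()
    slices = tuple(
        slice(max(c - s // 2, 0), c + s // 2) for c, s in zip(center, size)
    )
    defective[slices] = 0
    return defective
```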
- Convert NRRDs to NIfTI files (for the MONAI Dataset loader)

```python
# Code adapted from this Stack Overflow answer:
# https://stackoverflow.com/questions/47761353/nrrd-to-nifti-file-conversion
from glob import glob
import vtk

def readnrrd(filename):
    """Read image in nrrd format."""
    reader = vtk.vtkNrrdReader()
    reader.SetFileName(filename)
    reader.Update()
    info = reader.GetInformation()
    return reader.GetOutput(), info

def writenifti(image, filename, info):
    """Write nifti file."""
    writer = vtk.vtkNIFTIImageWriter()
    writer.SetInputData(image)
    writer.SetFileName(filename)
    writer.SetInformation(info)
    writer.Write()

baseDir = './complete_nrrds/'
files = glob(baseDir + '/*.nrrd')
print(files)

for file in files:
    m, info = readnrrd(file)
    # the output folder ./complete_nrrds/nifty/ must exist beforehand
    fname = baseDir + 'nifty/' + file[-10:-5] + '.nii.gz'
    writenifti(m, fname, info)
```
- Split Training and Test Set
- Alternatively, you can directly download the training-ready datasets from here.
The MONAI code is adapted from the MONAI 3D Spleen segmentation example.
The code was tested with the following software and hardware:

software:
- monai: 0.8.1
- pytorch: 1.11.0

hardware:
- NVIDIA GeForce RTX 3090 (24 GB RAM)
- Recommended GPU RAM: >= 24 GB
- Training Your MONAI Model

```shell
python monaiSkull.py --phase train  # training
python monaiSkull.py --phase test   # test: generate predictions (complete skulls) for the test data
```
- Alternatively, you can try out the pre-trained model:
  - Clone this repository
  - Download the pre-processed dataset
  - Unzip and move the dataset folder into the current directory of the repository
  - Evaluate on the test set (or your own skull data pre-processed the same way as the dataset):

```shell
# change the test_images directory if you want to test on your own skull data
python monaiSkull.py --phase test
```
If you use the dataset and/or the pre-trained model in your research, please consider citing the following:
```bibtex
@article{li2021mug500+,
  title={MUG500+: Database of 500 high-resolution healthy human skulls and 29 craniotomy skulls and implants},
  author={Li, Jianning and Krall, Marcell and Trummer, Florian and others},
  journal={Data in Brief},
  volume={39},
  pages={107524},
  year={2021},
  publisher={Elsevier}
}
```

and,

```bibtex
@incollection{li2020baseline,
  title={A baseline approach for AutoImplant: the MICCAI 2020 cranial implant design challenge},
  author={Li, Jianning and Pepe, Antonio and Gsaxner, Christina and Campe, Gord von and Egger, Jan},
  booktitle={Multimodal Learning for Clinical Decision Support and Clinical Image-Based Procedures},
  pages={75--84},
  year={2020},
  publisher={Springer}
}
```