Thank you for visiting this repository.
This repo is the artefact of the research project "Experiments on Restoring Color and Visual Fidelity to Legacy Photographs", undertaken by me and supervised by Dr. Salman Khan. Please contact me or Dr. Salman Khan if you need the full paper.
Deep-learning-based automated image restoration has a multitude of applications and a large body of research behind it. For experimental purposes, restoring random old and degraded photos is an interesting problem domain: such photos are harder to generalize to because degradation type and strength can vary, and degradations can compound (noise, scratches/tears, occlusion, low resolution, etc.).
While published image restoration models are usually benchmarked on datasets that share a similar degradation type, we performed our experiments on a randomly scraped dataset of old, degraded black-and-white photos (which are then colorized and restored).
- Experiments to enhance the colorfulness of a known robust colorization approach (DeOldify) by fine-tuning its GAN training; this successfully achieves the highest colorfulness score.
- Experiments to induce multimodality in the produced colors by implementing the mode-seeking GAN regularization of Mao et al. (2019); this unfortunately still requires more fine-tuning (a sketch of the regularizer follows this list).
- We then implement some of the latest image restoration papers with state-of-the-art results, choose the best-performing one, and build a pipeline for end-to-end restoration.
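For reference, below is a minimal PyTorch sketch of the mode-seeking regularization term from Mao et al. (2019) used in the second experiment. The generator signature `generator(grayscale, z)` and the latent dimension are illustrative assumptions; the actual training code is in `MSGAN_training.ipynb`.

```python
import torch

def mode_seeking_loss(generator, grayscale, z_dim=64, eps=1e-5):
    """Mode-seeking regularization (Mao et al. 2019).

    Two different latent codes should yield visibly different colorizations;
    this term shrinks as the image distance grows relative to the latent
    distance, so minimizing it discourages mode collapse.
    """
    b = grayscale.size(0)
    z1 = torch.randn(b, z_dim, device=grayscale.device)
    z2 = torch.randn(b, z_dim, device=grayscale.device)
    d_img = torch.mean(torch.abs(generator(grayscale, z1) - generator(grayscale, z2)))
    d_z = torch.mean(torch.abs(z1 - z2))
    return 1.0 / (d_img / d_z + eps)  # added to the generator loss
```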
In summary, our fine-tuning result (1) produces more colorful results than the baseline model and the latest image colorization models, both qualitatively and quantitatively. Our multimodal colorization (2) produces interesting and diverse colors, but we have yet to fine-tune the model to produce realistic-looking results. Our adopted end-to-end pipeline (3) shows the capability to generalize to the varying degrees of restoration required by randomly scraped old images.
The experiments were run on Ubuntu 20.04 with an Intel Core i7-3770 CPU and an RTX 2080 Ti GPU.
Dependencies can be installed with:

```
pip install -r requirements.txt
```
- Clone this repository
- Download all files from here to your local computer
- Move both `.pth` files to the `./models` directory
- Move `FaceEnhancement-checkpoints.zip` to the `./Face_Enhancement` directory, then extract it
- Move `Global-checkpoints.zip` to the `./Global` directory, then extract it
- The Legacy-Large dataset can be downloaded from here, in the `Dataset` folder.
- Alternatively, both the training dataset and the test dataset (legacy images) can be prepared by:
  - Run `image_scraper.ipynb`
  - Download the ImageNet dataset from here for training
  - Run `preprocess.ipynb` to resize the test images and create BW images (a preprocessing sketch is shown after this list)
  - Run
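The exact resizing and BW-conversion logic lives in `preprocess.ipynb`; the sketch below only illustrates the general idea. The folder names and the 256×256 target size are assumptions, not values taken from the notebook.

```python
from pathlib import Path
from PIL import Image

SRC = Path("dataset/test")      # hypothetical input folder of scraped images
DST = Path("dataset/test_bw")   # hypothetical output folder for BW copies
DST.mkdir(parents=True, exist_ok=True)

for img_path in SRC.glob("*.jpg"):
    img = Image.open(img_path).convert("RGB")
    img = img.resize((256, 256), Image.BICUBIC)  # assumed target resolution
    img.convert("L").save(DST / img_path.name)   # save the grayscale ("BW") copy
```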
- Best results for training and fine-tuning can be viewed by running inference using `Colorizer_GANFineTune_bestmodel.pth`. Example inference is shown in `colorize_test.ipynb` (a minimal checkpoint-loading sketch follows this list).
- Results for Mode-Seeking GAN training can be viewed by running the `MSGAN_training.ipynb` notebook.
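As a quick sanity check that the checkpoint is in place, it can be loaded directly with PyTorch. This is only a sketch; the model definition and full inference code are in `colorize_test.ipynb`.

```python
import torch

# Assumes the checkpoint was moved to ./models as described in the setup steps.
state = torch.load("models/Colorizer_GANFineTune_bestmodel.pth", map_location="cpu")
print(type(state))  # typically a state_dict (an OrderedDict of tensors)
# The generator defined in colorize_test.ipynb is then restored with
# model.load_state_dict(state) before running inference on a BW image.
```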
The adopted final pipeline consists of preprocessing with Wan et al.'s (2020) method from here and feeding the result to the colorization model.
To test it, run `joint_restoration_test.ipynb`.
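The two stages can also be run by hand. The command below mirrors the documented interface of Wan et al.'s released code (`run.py` with `--input_folder`, `--output_folder`, and `--GPU`); the folder names are placeholders, so check the flags against their repository before running.

```python
import subprocess

# Stage 1: restore degradations with Wan et al.'s (2020) code.
subprocess.run(
    ["python", "run.py",
     "--input_folder", "legacy_photos",   # placeholder input path
     "--output_folder", "restored",       # placeholder output path
     "--GPU", "0"],
    check=True,
)
# Stage 2: feed the restored images to the colorization model,
# as done in joint_restoration_test.ipynb.
```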
To calculate colorfulness using Hasler and Süsstrunk's (2003) measurement, run `calculate_colorfulness.ipynb`.
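For reference, the Hasler and Süsstrunk metric is simple to compute on its own; a minimal NumPy version (independent of the notebook, with a hypothetical example path) looks like this:

```python
import numpy as np
from PIL import Image

def colorfulness(path):
    """Hasler & Suesstrunk (2003) colourfulness metric."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    rg = r - g                  # red-green opponent channel
    yb = 0.5 * (r + g) - b      # yellow-blue opponent channel
    sigma = np.hypot(rg.std(), yb.std())  # combined spread of the two channels
    mu = np.hypot(rg.mean(), yb.mean())   # combined mean chroma
    return sigma + 0.3 * mu

print(colorfulness("colorized_example.jpg"))  # higher = more colorful
```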
The final code implemented here is for research purposes only and is heavily based on the implementations from other repositories: