Yuichiro Iwamoto1†, Benjamin Salmon2†, Yusuke Yoshioka3, Ryosuke Kojima4, Alexander Krull2* and Sadao Ota1*
1Research Center for Advanced Science and Technology, The University of Tokyo, Komaba 4-6-1, Meguro, Tokyo 153-8904, Japan.
2School of Computer Science, University of Birmingham, B15 2TT, Birmingham, United Kingdom.
3Department of Molecular and Cellular Medicine, Institute of Medical Science, Tokyo Medical University, Nishishinjuku 6-7-1, Shinjuku, Tokyo 160-0023, Japan.
4Graduate School of Medicine, The University of Tokyo, Hongo 7-3-1, Bunkyo, Tokyo 113-0033, Japan.
The introduction of unsupervised denoising methods has shown that unpaired noisy data can be used to train denoising networks, which not only produce high-quality results but also allow multiple diverse denoising solutions to be sampled. However, these systems rely on a probabilistic description of the imaging noise, known as a noise model. Until now, imaging noise has been modelled as pixel-independent in this context. While such models often capture shot noise and readout noise very well, they cannot describe many of the complex patterns that occur in real-life applications. Here, we introduce a novel learning-based autoregressive noise model for imaging noise and show how it enables unsupervised denoising in settings with complex structured noise patterns. We explore different ways to train a model for real-life imaging noise and show that our deep autoregressive noise model has the potential to greatly improve denoising quality on structured noise datasets. We showcase the capability of our approach on various simulated datasets and on real photo-acoustic imaging data.
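As a rough illustration of the idea (this is not the code in this repository), the sketch below uses a PyTorch masked convolution so that each noise pixel is predicted only from pixels above and to its left; this causal conditioning is what lets an autoregressive model capture spatially structured noise, which a pixel-independent model cannot. All class names, layer sizes and the Gaussian output distribution are illustrative assumptions.

# Minimal sketch of an autoregressive noise model (illustrative only).
import torch
import torch.nn as nn

class MaskedConv2d(nn.Conv2d):
    """Convolution whose kernel is masked so a pixel never sees itself
    or any pixel below/right of it (raster-scan causality)."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        kH, kW = self.kernel_size
        mask = torch.ones(kH, kW)
        mask[kH // 2, kW // 2:] = 0   # block the centre pixel and everything to its right
        mask[kH // 2 + 1:, :] = 0     # block all rows below the centre
        self.register_buffer("mask", mask[None, None])

    def forward(self, x):
        return nn.functional.conv2d(
            x, self.weight * self.mask, self.bias,
            self.stride, self.padding, self.dilation, self.groups)

class ToyAutoregressiveNoiseModel(nn.Module):
    """Predicts a per-pixel Gaussian over the noise value, conditioned on
    previously seen noise pixels. Real models use deeper PixelCNN-style
    stacks and richer output distributions."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            MaskedConv2d(1, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv2d(hidden, 2, kernel_size=1),  # per-pixel mean and log-variance
        )

    def log_likelihood(self, noise):
        # Per-pixel Gaussian log-density (up to an additive constant).
        mean, log_var = self.net(noise).chunk(2, dim=1)
        return -0.5 * (log_var + (noise - mean) ** 2 / log_var.exp())

In practice a model like this would be trained by maximising the log-likelihood on noise-only data (for example, dark frames or signal-free background regions), and the learned noise density is then used by the unsupervised denoising framework.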
Code for the publication Deep Nanometry: An optofluidic high-throughput nanoparticle analyzer with enhanced sensitivity via unsupervised deep learning-based denoising.
We recommend installing the dependencies in a conda environment. If you haven't already, install Miniconda on your system by following the instructions on the official conda website.
Once conda is installed, create and activate an environment by entering these lines into a command line interface:
conda create --name dnm
conda activate dnm
Next, install PyTorch and torchvision for your system by following the instructions on the official PyTorch website.
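For example, the default PyPI wheels can usually be installed with:

pip install torch torchvision

but check the selector on the PyTorch website for the exact command matching your operating system, package manager and CUDA version.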
After that, you're ready to install the dependencies for this repository:
pip install lightning jupyterlab matplotlib tifffile scikit-image tensorboard pandas seaborn scikit-learn
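To confirm the installation, you can check that the main packages import cleanly, for example:

python -c "import torch, lightning, tifffile, skimage, pandas, seaborn, sklearn; print(torch.__version__)"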
Tested on:
Windows 11 (23H2), Python 3.11.5 (lightning 2.2.1, jupyterlab 3.6.4, matplotlib 3.7.2, tifffile 2023.4.12, scikit-image 0.22.0, tensorboard 2.16.2)
The installation process should only take a few minutes.
The 'examples' directory contains notebooks for denoising the sample data and reproducing the analyses in the paper. The outputs saved in the notebooks show the expected results, and running all of them takes roughly an hour in total. Running these notebooks requires a CUDA-enabled GPU with at least 2 GB of VRAM.
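If you are unsure whether PyTorch can see your GPU, a quick check before launching the notebooks is:

python -c "import torch; print(torch.cuda.is_available())"

which should print True on a correctly configured CUDA setup.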