# DnCNN-PyTorch
A PyTorch implementation of the TIP2017 paper [Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising](http://www4.comp.polyu.edu.hk/~cslzhang/paper/DnCNN.pdf).
This project aims to automatically denoise phase images using a CNN residual filter.
## Prerequisites
This project requires the following Python 3 packages (`os` and `sys` are part of the standard library and do not need to be installed):
```
pip3 install argparse pathlib Pillow numpy scipy
```
PyTorch == 1.7.1
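PyTorch itself is not installed by the line above; assuming a pip-based setup, version 1.7.1 can be installed with:
```
pip3 install torch==1.7.1
```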
## Parameters
Modifiable parameters are located in argument.py:
```
noise_src_dir = '/lium/raid01_c/tahon/holography/HOLODEEP/', # directory containing the noisy input files
clean_src_dir = '/lium/raid01_c/tahon/holography/NOISEFREEHOLODEEP/', # directory containing the clean reference target files
eval_dir = '/lium/raid01_c/tahon/holography/HOLODEEPmat/', # directory containing the images for final evaluation
phase = 'train', # train or test phase
isDebug = False, # if True, create only 10 patches
originalsize = (1024, 1024), # 1024 for the MATLAB database, 128 for the HOLODEEP database
phase_type = 'two', # (phi|two|cos|sin): keep the phase between -pi and pi (phi), convert it to cosine (cos) or sine (sin), or convert it randomly to cosine or sine for training (two)
# select images for training
train_patterns = [1, 2, 3, 5], # image patterns used for training the model
train_noise = [0, 1, 1.5], # noise levels for training images
# select images for evaluation (during training)
eval_patterns = [4], # image pattern used for evaluation
test_noise = [0, 1, 1.5], # noise level values for test images
noise_type = 'spkl', # type of noise: speckle or Gaussian (spkl|gauss); Gaussian noise can be used but is not the focus of this project
sigma = 25, # noise level for Gaussian denoising, not used in this project
# Training
nb_layers = 4, # number of intermediate convolutional layers of the DnCNN architecture (default is 16; 16*3 layers in reality)
batch_size = 64, # number of patches per batch
patch_per_image = 350, # Silvio used 384 for 1024x1024 images
patch_size = 50, # size of the training patches
epoch = 200, # number of epochs for training the model
lr = 0.0005, # learning rate
stride = 50, # spatial step for cropping images (value in the initial script: 10)
step = 0, # initial spatial step for cropping
scales = [1] # [1, 0.9, 0.8, 0.7] scales for data augmentation
```
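For example, to override some of these parameters at run time (a sketch: it assumes argument.py exposes them as command-line flags with the same names):
```
#launch a training with a deeper network and a lower learning rate
python3 main_holo.py --nb_layers 16 --lr 0.0001
```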
## Location of data and generation of patches
Data is either .tiff images or .mat MATLAB matrices. Three databases are available at the moment:
* HOLODEEPmat: MATLAB images for training and development purposes: 5 patterns and 5 noise levels, 1024x1024
* DATAEVAL: 3 MATLAB images for evaluation purposes: Data1, Data20 and VibMap.
* NATURAL: 400 images in B&W for image denoising, noisy images are obtained with additive Gaussian noise: 180x180
Clean reference data is located in argument.clean_src_dir.
Noisy input data is located in argument.noisy_src_dir.
Images from HOLODEEP are referenced by name according to their pattern number (argument.train_patterns, from 1 to 5) and their noise level (argument.train_noise: 0, 1, 1.5, 2 or 2.5). Images for training, development (or test) and final evaluation are identified by a "train", "test" or "eval" suffix.
All phase data is converted using sine or cosine functions and normalized between 0 and 1 to match the output range of the network. A proportional coefficient is applied to the predicted image to rescale the phase amplitude between -pi and pi.
Patch size can be given as a command-line argument, whether the .tiff images are all in the same directory or the .mat files are given in separate directories.
Patch generation (see the example below) produces two numpy matrices of size (nb_patch, patch_size, patch_size, 1): one for the noisy images and one for the clean references.
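As a sketch, the patch generation scripts named later in this README could be invoked as follows; the patch_size flag is an assumption based on the paragraph above:
```
#generate patches from .tiff images located in a single directory
python3 generate_patches_holo.py --patch_size 50
#generate patches from .mat files given in separate directories
python3 generate_patches_holo_fromMAT.py --patch_size 50
```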
## Manual
You have to manually download the holography database and put it in the directory.
The database can be found on skinner at /info/etu/m1/s171085/Projets/Portage-Keras-PyTorch/Portage-reseau-de-neurones-de-Keras-vers-PyTorch/dncnn-tensorflow-holography-master/Holography (a new link will be available later).
## Train
To train the CNN, use the command shown below, where:
* checkpoint_dir is the directory where checkpoints are saved (intermediate and final weights of the model)
* sample_dir is the directory where result files are located; a directory named after the timestamp of the run is created at each new run, and the denoised output images and the res file are stored in it
* params is an optional way to override the parameters of argument.py
* save_dir is the directory where the numpy matrices used for training the network are located (generated by the generate_patches_holo script)
* Data augmentation (x8) is done before batch creation. It consists in taking the cosine and sine versions of the phase image, together with their transposed and phase-shifted (pi/4) versions.
The application is used through the main_holo.py script, with the arguments defined in argument.py.
To start a training with default parameters, use the command:
```
#launch a training
python3 main_holo.py
```
You can specify the training and eval data with the arguments noisy_train, clean_train, noisy_eval and clean_eval.
The usable data are generated with the generate_patches_holo.py and generate_patches_holo_fromMAT.py scripts and saved in a directory named "data1".
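For example (a sketch: only the argument names come from this README; the .npy file names under "data1" are assumptions):
```
#launch a training with explicitly specified training and eval data
python3 main_holo.py --noisy_train data1/img_noisy_train.npy --clean_train data1/img_clean_train.npy --noisy_eval data1/img_noisy_eval.npy --clean_eval data1/img_clean_eval.npy
```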
All results and training information are summarized in res files, which are stored in the `sample_dir` directory together with the predicted images (.tiff). These res files give the results over all epochs on the HOLODEEP database (evaluation step).
You can also specify different hyperparameters for the training (see the example below):
* num_epoch is the number of epochs the model will train for
* D is the number of res blocks
* C is the kernel size of the convolutional layers (not tested)
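For example (a sketch: the flag names are assumed to match the hyperparameter names above, and the values are illustrative):
```
#launch a training with 200 epochs and 4 res blocks
python3 main_holo.py --num_epoch 200 --D 4
```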
To test the model located in `/lium/raid01_c/tahon/holography/checkpoints/run-test2020-04-12_12\:14\:29.082341/`, use the detailed Python command shown below; alternatively, the job can be run on the cluster with sbatch (test on DATAEVAL and HOLODEEP).
For data adaptation you have to give the size and the number of channels of the images you will be using, with the image_size and image_mode arguments (for training or testing).
```
#launch a training in which the images will be 50 by 50 in black and white
python3 main_holo.py --image_size 50 50 --image_mode 1
#launch a denoising operation on the 25 images of Holography/DATABASE and Holography/DATAEVAL/DATAEVAL with the model experiment_xxx at epoch 130 (the flag names below are assumptions)
python3 main_holo.py --test_mode --input_dir experiment_xxx --epoch 130
```