# DnCNN-tensorflow
[![AUR](https://img.shields.io/aur/license/yaourt.svg?style=plastic)](LICENSE)
[![Docker Automated build](https://img.shields.io/docker/automated/jrottenberg/ffmpeg.svg?style=plastic)](https://hub.docker.com/r/wenbodut/dncnn/)
[![Contributions welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=plastic)](CONTRIBUTING.md)
A TensorFlow implementation of the TIP 2017 paper [Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising](http://www4.comp.polyu.edu.hk/~cslzhang/paper/DnCNN.pdf).
This project aims to automatically denoise phase images using a residual CNN filter.
## Model Architecture
![graph](./img/model.png)
## Results
![compare](./img/compare.png)
- BSD68 Average Result

The average PSNR (dB) results of different methods on the BSD68 dataset:

| Noise Level | BM3D | WNNM | EPLL | MLP | CSF |TNRD | DnCNN-S | DnCNN-B | DnCNN-tensorflow |
|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|
| 25 | 28.57 | 28.83 | 28.68 | 28.96 | 28.74 | 28.92 | **29.23** | **29.16** | **29.17** |
- Set12 Average Result

| Noise Level | DnCNN-S | DnCNN-tensorflow |
|:-----------:|:-------:|:----------------:|
| 25 | 30.44 | **30.38** |
For the dataset and the denoised images, please download them [here](https://drive.google.com/open?id=16x8E7h0srYQliXbrO0pvX6zogfW1hN2P).
## Environment
### :whale: With docker (recommended):
- Install docker support
On Ubuntu, you can do it like this:
``` shell
$ sudo apt-get install -y curl
$ curl -sSL https://get.docker.com/ | sh
$ sudo usermod -aG docker ${USER}
```
- Install nvidia-docker support (to make your GPU available to Docker containers)
On Ubuntu, you can do it like this:
```shell
$ wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb
$ sudo dpkg -i /tmp/nvidia-docker*.deb && rm /tmp/nvidia-docker*.deb
```
- Pull the dncnn image and start a container
```shell
$ docker pull wenbodut/dncnn
$ ./rundocker.sh
```
Then you can train the model.
### Without docker:
You should make sure the following environment is available:
- tensorflow == 1.4.1
- numpy
## Prerequisites
This project needs tensorflow == 1.4.1 and the following Python 3 packages, which can be installed with `pip3 install argparse pathlib Pillow numpy scipy` (`os` and `sys` come with the standard library, and `PIL` is provided by the `Pillow` distribution).
## Parameters
Modifiable parameters are located in hparams.py:
```
noise_src_dir = '/lium/raid01_c/tahon/holography/HOLODEEP/', #directory containing the noisy input files
clean_src_dir = '/lium/raid01_c/tahon/holography/NOISEFREEHOLODEEP/', #directory containing the clean reference target files
eval_dir = '/lium/raid01_c/tahon/holography/HOLODEEPmat/', #directory containing the images used for the final evaluation
phase = 'train', #train or test phase
isDebug = False, #if True, create only 10 patches
originalsize = (1024,1024), #1024 for the matlab database, 128 for the holodeep database
phase_type = 'two', #(phi|two|cos|sin): keep the phase between -pi and pi (phi), convert it into cosine (cos) or sine (sin), or convert it randomly into cosine or sine for training (two)
#select images for training
train_patterns = [1, 2, 3, 5], #image patterns used for training the model
train_noise = [0, 1, 1.5], #noise level for training images
#select images for evaluation (during training)
eval_patterns = [4], #image pattern used for evaluation
eval_noise = [0, 1, 1.5, 2, 2.5], #noise level for evaluation images
#select images for testing
test_patterns = [5], #image pattern used for test
test_noise = [0, 1, 1.5], #noise level values for test images
noise_type = 'spkl', #type of noise: speckle or gaussian (spkl|gauss), gaussian noise can be used for this project
sigma = 25, #noise level for gaussian denoising (not used in this project)
#Training
batch_size = 64, #nb of patches per batch
patch_per_image = 350, #Silvio used 384 for 1024*1024 images
patch_size = 50, #spatial size of the training patches
epoch = 200, #nb of epochs for training the model
lr = 0.0005, #learning rate
stride = 50, #spatial step for cropping images (value in the initial script: 10)
step = 0, #initial spatial step for cropping
scales = [1] #[1, 0.9, 0.8, 0.7] #scales for data augmentation
```
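The training and patch-generation commands below override some of these values with a `--params 'name=value'` string. As a minimal illustrative sketch (not the project's actual code), such a string could be parsed with `tf.contrib.training.HParams`, which ships with TensorFlow 1.x; the parameter names mirror hparams.py, everything else here is an assumption:
```python
# Illustrative only: group a few parameters from hparams.py in an HParams object
# and override them from a "name=value" string such as the one passed to --params.
import tensorflow as tf

hparams = tf.contrib.training.HParams(
    phase='train',
    phase_type='two',
    patch_size=50,
    batch_size=64,
    lr=0.0005,
)

# e.g. the string given to --params could be forwarded to parse():
hparams.parse('patch_size=100,phase_type=phi')
print(hparams.patch_size, hparams.phase_type)  # -> 100 phi
```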
## Localization of data and generation of patches
Data is either .tiff images or .mat MATLAB matrices.
Clean reference data is located in hparams.clean_src_dir.
Noisy input data is located in hparams.noise_src_dir.
Images are referred to by name according to their pattern number (hparams.train_patterns, from 1 to 5) and their noise level (hparams.train_noise: 0, 1, 1.5, 2 or 2.5). Images used for training, development (or test) and final evaluation are identified by the "train", "test" or "eval" suffix.
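To make this selection concrete, here is an illustrative sketch of how the patterns and noise levels from hparams.py combine into image pairs (the project's actual file-naming and loading code is not reproduced here, so the loop body is only a placeholder):
```python
# Sketch only: enumerate the (pattern, noise level) pairs selected in hparams.py.
from itertools import product

train_patterns = [1, 2, 3, 5]        # hparams.train_patterns
train_noise = [0, 1, 1.5]            # hparams.train_noise
eval_patterns = [4]                  # hparams.eval_patterns
eval_noise = [0, 1, 1.5, 2, 2.5]     # hparams.eval_noise

train_pairs = list(product(train_patterns, train_noise))  # 12 clean/noisy image pairs
eval_pairs = list(product(eval_patterns, eval_noise))     # 5 pairs

for pattern, noise in train_pairs:
    # a real implementation would map each pair to one clean file in
    # hparams.clean_src_dir and one noisy file in hparams.noise_src_dir
    print('training image: pattern %d, noise level %s' % (pattern, noise))
```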
## One-Key-To-Denoise
```
$ ./oneKeyToDenoise.sh
(need docker support)
```
Then you can find the noisy Set12 images and the denoised images in the test folder. Have fun!
All phase data is normalized between 0 and 1 to be consistent with the output range of the network. A proportional coefficient is then applied to the predicted image to rescale the phase amplitude between -pi and pi.
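A minimal sketch of this normalization, assuming the input phase lies in [-pi, pi] and that the rescaling is the inverse linear mapping (one plausible reading of the description above, not necessarily the project's exact code):
```python
# Illustrative normalization of a phase image to [0, 1] and rescaling of the
# network prediction back to [-pi, pi]; not the project's actual implementation.
import numpy as np

def normalize_phase(phi):
    """Map phase values from [-pi, pi] to [0, 1]."""
    return (phi + np.pi) / (2.0 * np.pi)

def denormalize_phase(pred):
    """Map network outputs from [0, 1] back to [-pi, pi]."""
    return pred * (2.0 * np.pi) - np.pi

phi = np.random.uniform(-np.pi, np.pi, size=(128, 128))
restored = denormalize_phase(normalize_phase(phi))
assert np.allclose(restored, phi)
```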
## Train
The patch size can be given as a command-line argument, whether the .tiff images are all in the same directory or the .mat files are given in separate directories.
```
# original DnCNN pipeline
$ python generate_patches.py
$ python main.py
(note: you can add command-line arguments according to the source code, for example
$ python main.py --batch_size 64 )
# holography pipeline (the patch size is overridden here via --params)
python3 generate_patches_holo.py --params 'patch_size=100'
python3 generate_patches_holo_fromMat.py --params 'patch_size=100'
```
These commands generate two numpy matrices of shape (nb_patch, patch_size, patch_size, 1): one for the noisy images and one for the clean references.
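A quick way to sanity-check the generated matrices is sketched below; the file names are hypothetical, so use whatever the generate_patches_holo*.py scripts write into the save directory:
```python
# Illustrative check of the generated patch matrices; file names are hypothetical.
import numpy as np

noisy = np.load('./data1/img_noisy_train.npy')
clean = np.load('./data1/img_clean_train.npy')

# both matrices should have shape (nb_patch, patch_size, patch_size, 1)
assert noisy.shape == clean.shape
assert noisy.ndim == 4 and noisy.shape[-1] == 1
print(noisy.shape)
```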
For the provided model, training took about 4 hours on a GTX 1080 Ti.
Here is my training loss:
![loss](./img/loss.png)
**Note**: This loss figure isn't suitable for this trained model any more, but I don't want to update the figure :new_moon_with_face:
## Train (holography)
To train the CNN, use the following command, where:
* checkpoint_dir is the directory where checkpoints are saved (intermediate and final weights of the model)
* sample_dir is the directory containing the result files; at each new run a sub-directory named after the run's timestamp is created, and the denoised output images and the res file are written there
* params optionally overrides parameters from hparams.py
* save_dir is the directory containing the numpy matrices used for training the network
```
python3 main_holo.py --checkpoint_dir /lium/raid01_c/tahon/holography/checkpoints/ --sample_dir /lium/raid01_c/tahon/holography/eval_samples/ --params "phase_type=phi" --save_dir "./data1/"
```
## Test
```
$ python main.py --phase test        # original DnCNN model
$ python main_holo.py --phase test   # holography model
```
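The result tables above report PSNR values. As an illustrative reference (this is not the project's evaluation script), the PSNR between a denoised image and its clean reference, both normalized to [0, 1], can be computed as follows:
```python
# Illustrative PSNR computation between a clean reference and a denoised image.
import numpy as np

def psnr(clean, denoised, data_range=1.0):
    mse = np.mean((clean.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10((data_range ** 2) / mse)

clean = np.random.rand(128, 128)
denoised = np.clip(clean + 0.01 * np.random.randn(128, 128), 0.0, 1.0)
print('PSNR: %.2f dB' % psnr(clean, denoised))
```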
## TODO
- [x] Fix bug #13. (bug #13 fixed, thanks to @sdlpkxd)
- [x] Clean source code. For instance, merge similar functions (e.g., `load_images` and `load_image` in utils.py).
- [x] Add one-key denoising, with the help of docker.
- [x] Compare with original DnCNN.
- [x] Replace tf.nn with tf.layer.
- [ ] Replace PIL with OpenCV.
- [ ] Try tf.dataset API to speed up training process.
- [ ] Train a noise level blind model.
- [ ] Check the final evaluation phase.
## Thanks for their contributions
- @lizhiyuanUSTC
- @husqin
- @sdlpkxd
- and so on ...