# DnCNN-PyTorch
A PyTorch implementation of the TIP2017 paper [Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising](http://www4.comp.polyu.edu.hk/~cslzhang/paper/DnCNN.pdf)
This project aims at automatically denoising phase images using a CNN residual filter.
## Prerequisites
This project needs the following Python 3 packages:
```
pip3 install pillow numpy scipy
```
(argparse, pathlib, os and sys ship with Python 3 and do not need to be installed; `PIL` is provided by the `pillow` package.)
PyTorch == 1.7.1
## Parameters
Modifiable parameters are located in argument.py
```
noise_src_dir = '/lium/raid01_c/tahon/holography/HOLODEEP/', #directory where are located Noisy input files
clean_src_dir = '/lium/raid01_c/tahon/holography/NOISEFREEHOLODEEP/', #directory where are located Clean reference target files
eval_dir = '/lium/raid01_c/tahon/holography/HOLODEEPmat/', #directory where are located images final evaluation
phase = 'train', #train or test phase
isDebug = False, #if True,create only 10 patches
originalsize = (1024,1024), #1024 for matlab database, 128 for holodeep database
phase_type = 'two', #(phi|two|cos|sin) : keep phase between -pi and pi (phi), convert into cosine (cos) or sine (sin), or convert randomly into cosine or sine for training
#select images for training
train_patterns = [1, 2, 3, 5], #image patterns used for training the model
train_noise = [0, 1, 1.5], #noise level for training images
#select images for evaluation (during training)
eval_patterns = [4], #image pattern used for evaluation
eval_noise = [0, 1, 1.5, 2, 2.5], #noise level for evaluation images
#select images for testing
test_patterns = [5], #image pattern used for test
test_noise = [0, 1, 1.5], #noise level values for test images
noise_type = 'spkl', #type of noise: speckle or gaussian (spkl|gauss), gaussian noise can be used for this project
sigma = 25, #noise level for gaussian denoising not used in this project
#Training
nb_layers = 4, #nb of intermediate convolutional layers (default in the original DnCNN is 16)
batch_size = 64,#nb of patches per batch
patch_per_image = 350, # Silvio used 384 for 1024*1024 images
patch_size = 50, #size of training images.
epoch = 200,#nb of epochs for training the model
lr = 0.0005, # learning rate
stride = 50, # spatial step for cropping images (value in the initial script: 10)
step = 0, #initial spatial step for cropping
scales = [1] #[1, 0.9, 0.8, 0.7] # scale for data augmentation
```
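As an illustration of how `patch_size`, `stride` and `originalsize` interact, the sketch below counts the patches cropped from one image with a regular grid. This is a simplified model of the cropping described by these parameters, not the project's exact code:

```python
# Illustrative only: how patch_size and stride determine the number of
# patches cropped from a single image (names mirror the parameters above).
def patches_per_image(original_size, patch_size, stride, step=0):
    h, w = original_size
    rows = len(range(step, h - patch_size + 1, stride))
    cols = len(range(step, w - patch_size + 1, stride))
    return rows * cols

# With the values above (1024x1024 images, 50x50 patches, stride 50):
print(patches_per_image((1024, 1024), 50, 50))  # 400
```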
## Localization of data and generation of patches
Data is either .tiff images or .mat MATLAB matrices. Three databases are available for the moment:
* HOLODEEPmat: MATLAB images for training and development purposes: 5 patterns and 5 noise levels, 1024x1024
* DATAEVAL: 3 MATLAB images for evaluation purposes: Data1, Data20 and VibMap.
* NATURAL: 400 images in B&W for image denoising, noisy images are obtained with additive Gaussian noise: 180x180
Clean reference data is located in argument.clean_src_dir
Noisy input data is located in argument.noisy_src_dir
Images from HOLODEEP are referred to by their name, according to their pattern number (argument.train_patterns, from 1 to 5) and their noise level (argument.train_noise: 0, 1, 1.5, 2 or 2.5). Images for training and development (or test) and final evaluation are given the "train", "test" or "eval" suffix.
All phase data is converted using sine or cosine functions and normalized between 0 and 1 to match the output range of the network. A proportional coefficient is applied to the predicted image to rescale the phase amplitude between -pi and pi.
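The conversion just described can be sketched as follows. This is a minimal illustration of the mapping, not the project's exact code (function names are ours):

```python
import numpy as np

# Phase in [-pi, pi] is mapped through cos or sin, then rescaled to [0, 1]
# to match the network output; the inverse mapping rescales back to [-1, 1].
def to_network_range(phase, phase_type="cos"):
    x = np.cos(phase) if phase_type == "cos" else np.sin(phase)
    return (x + 1.0) / 2.0          # [-1, 1] -> [0, 1]

def from_network_range(y):
    return 2.0 * y - 1.0            # [0, 1] -> [-1, 1]

phase = np.linspace(-np.pi, np.pi, 5)
y = to_network_range(phase)
print(y.min() >= 0.0 and y.max() <= 1.0)  # True
```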
Patch size can be given as an argument on the command line, whether the .tiff images are all in the same directory or the .mat files are given in separate directories.
```
python3 generate_patches_holo.py --patch_size 100
python3 generate_patches_holo_fromMat.py --patch_size 100
```
Each command generates two numpy matrices of size (nb_patch, patch_size, patch_size, 1): one for noisy images, the other for clean references.
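The layout of these .npy files can be checked with plain numpy. The file name below is a stand-in, not one produced by the scripts:

```python
import numpy as np

# Build a stand-in array with the documented layout
# (nb_patch, patch_size, patch_size, 1), save it, and reload it.
nb_patch, patch_size = 8, 50
patches = np.zeros((nb_patch, patch_size, patch_size, 1), dtype=np.float32)
np.save("example_patches.npy", patches)

loaded = np.load("example_patches.npy")
print(loaded.shape)  # (8, 50, 50, 1)
```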
## Manual
You have to manually download the holography database and put it in the directory.
The database can be found on skinner /info/etu/m1/s171085/Projets/Portage-Keras-PyTorch/Portage-reseau-de-neurones-de-Keras-vers-PyTorch/dncnn-tensorflow-holography-master/Holography (a new link will be available later)
## Train
The application can be used with the main_holo.py script, with the different arguments defined in the argument.py script.
To start a training with the default parameters, you can use the command:
```
#launch a training
python3 main_holo.py
```
Data augmentation (x8) is done before batch creation. It consists in considering the cosine and sine versions of the phase image, as well as their transposed and phase-shifted (pi/4) versions.
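The eight variants can be sketched as below; the exact order and details may differ from the project's implementation:

```python
import numpy as np

# Hedged sketch of the x8 augmentation: for the original phase and a
# pi/4-shifted phase, take the cosine and sine versions, each with its
# transpose, giving 2 x 2 x 2 = 8 variants per image.
def augment(phase, shift=np.pi / 4):
    variants = []
    for p in (phase, phase + shift):      # original and shifted phase
        for f in (np.cos, np.sin):        # cosine and sine versions
            img = f(p)
            variants.append(img)
            variants.append(img.T)        # transposed version
    return variants

phase = np.random.default_rng(0).uniform(-np.pi, np.pi, (4, 4))
print(len(augment(phase)))  # 8
```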
You can specify the training and eval data with the arguments noisy_train, clean_train, noisy_eval and clean_eval.
The usable data are generated with the generate_patches_holo.py and generate_patches_holo_fromMat.py scripts and saved in a directory named "data1".
```
#launch a training with the following data
python3 main_holo.py --noisy_train data1/img_noisy_train_1-2-3-4-5_0-1-1.5-2-2.5_two_50_50_384.npy --clean_train data1/img_clean_train_1-2-3-4-5_0-1-1.5-2-2.5_two_50_50_384.npy --noisy_eval data1/img_noisy_train_1-2-3-4-5_0-1-1.5-2-2.5_two_50_50_384.npy --clean_eval data1/img_clean_train_1-2-3-4-5_0-1-1.5-2-2.5_two_50_50_384.npy
```
You can also specify the different hyperparameters for the training:
* num_epoch is the number of epochs the model will train for
* D is the number of residual blocks
* C is the kernel size for the convolutional layers (not tested)
```
#launch a training with the following params
python3 main_holo.py --num_epoch 500 --D 16 --C 32
```
For data adaptation, you have to give the size and the number of channels of the images you will be using, with the arguments image_size and image_mode (for training or testing).
```
#launch a training in which the images will be 50 by 50 in black and white
python3 main_holo.py --image_size 50 50 --image_mode 1
```
The arguments input_dir and epoch are used for re-training and de-noising operations.
In input_dir, give the path to the model you want to use; in epoch, give the epoch number from which you want to re-train or run a de-noising operation.
The models are saved in a directory named "PyTorchExperiments".
```
#relaunch a training starting from the model experiment_xxx at epoch 130
python3 main_holo.py --input_dir PyTorchExperiments/experiment_xxx --epoch 130
```
## Test
To run a de-noising operation you can use the test_mode argument.
You can use the arguments test_noisy_img, test_noisy_key, test_clean_img and test_clean_key to specify which image you want to de-noise.
```
#launch a denoising operation on the image DATA_1_Phase_Type1_2_0.25_1.5_4_50.mat with the model experiment_xxx at epoch 130
python3 main_holo.py --test_mode --test_noisy_img Holography/DATAEVAL/DATAEVAL/DATA_1_Phase_Type1_2_0.25_1.5_4_50.mat --test_noisy_key 'Phaseb' --test_clean_img Holography/DATAEVAL/DATAEVAL/DATA_1_Phase_Type1_2_0.25_1.5_4_50.mat --test_clean_key 'Phase' --input_dir PyTorchExperiments/experiment_xxx --epoch 130
```
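The test_noisy_key / test_clean_key arguments name the variables stored inside the .mat file; for the DATAEVAL file above these are 'Phaseb' (noisy) and 'Phase' (clean). The sketch below builds a small stand-in file to show the mechanism:

```python
import numpy as np
from scipy.io import loadmat, savemat

# Stand-in .mat file with the same variable names as the DATAEVAL data;
# this is an illustration, not an actual database file.
savemat("example.mat", {"Phaseb": np.zeros((4, 4)), "Phase": np.ones((4, 4))})

mat = loadmat("example.mat")
noisy = mat["Phaseb"]   # selected by --test_noisy_key 'Phaseb'
clean = mat["Phase"]    # selected by --test_clean_key 'Phase'
print(noisy.shape, clean.shape)  # (4, 4) (4, 4)
```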
If you do not give an image to de-noise, an evaluation of the entire training and testing database is started.
```
#launch a denoising operation on the 25 images of the Holography/DATABASE and Holography/DATAEVAL/DATAEVAL databases with the model experiment_xxx at epoch 130
python3 main_holo.py --test_mode --input_dir PyTorchExperiments/experiment_xxx --epoch 130
```
The results of those de-noising operations can be found in a TestImages directory.