Commit d6c0ad84 authored by Marie Tahon's avatar Marie Tahon
add benchmark dir

parent 46fdc215
Python == 3.9
PyTorch == 1.7.1
## Parameters
Modifiable parameters are located in the config file, in YAML format; an example is given in dataset.yaml.
Use the following command to see the different parameters you can use:
```
python3 main_holo.py --help
```
## Localization of data and generation of patches
Data is either .tiff images or .mat MATLAB matrices.
While .tiff images contain discrete pixel values, MATLAB matrices directly contain the phase image.
Four databases are available for the moment:
* HOLODEEPmat: MATLAB images for training and development purposes: 5 patterns and 5 noise levels, 1024x1024
* DATAEVAL: 3 MATLAB images for evaluation purposes: Data1, Data20 and VibMap.
* DB99: 128 MATLAB images, 1024x1024
* NATURAL: 400 black-and-white images for image denoising; noisy images are obtained with additive Gaussian noise, 180x180
The train/dev/test datasets are specified in a csv file containing the following header and the list of the corresponding image locations:
```
clean, noisy, Ns
NOISEFREE.tiff,NOISY.tiff,0
```
The Ns value corresponds to the speckle grain size used to generate the noisy image from the clean image.
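As an illustration, such a csv can be parsed with Python's standard csv module; the extra row and the parsing code below are hypothetical, not part of the toolkit:

```python
import csv
import io

# Example content following the header above: clean path, noisy path,
# and Ns, the speckle grain size used to generate the noisy image.
csv_text = """clean, noisy, Ns
NOISEFREE.tiff,NOISY.tiff,0
CLEAN2.tiff,NOISY2.tiff,2
"""

rows = []
with io.StringIO(csv_text) as f:
    # skipinitialspace handles the space after the comma in the header
    reader = csv.DictReader(f, skipinitialspace=True)
    for row in reader:
        rows.append((row["clean"], row["noisy"], float(row["Ns"])))

print(rows[0])  # ('NOISEFREE.tiff', 'NOISY.tiff', 0.0)
```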
Images from HOLODEEP are referred to by their pattern number (train_patterns, from 1 to 5) and their noise level (train_noises: 0, 1, 1.5, 2 or 2.5). Images for training, development (or test) and final evaluation are given the "train", "test" or "eval" suffix.
All phase data is converted using sine or cosine functions and normalized between 0 and 1 to agree with the output of the network. A proportional coefficient is applied to the predicted image to rescale the phase amplitude between -pi and pi.
Patch size is given as an argument to the DataLoader directly in the config file.
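The sine/cosine conversion and rescaling described above can be sketched as follows; this is a minimal illustration of the idea with scalar values, not the toolkit's actual code:

```python
import math

def phase_to_input(phi):
    """Map a phase value in [-pi, pi] to two channels in [0, 1]
    via cosine and sine, matching the network's output range."""
    return ((math.cos(phi) + 1.0) / 2.0, (math.sin(phi) + 1.0) / 2.0)

def output_to_phase(c, s):
    """Invert the mapping: rescale both channels back to [-1, 1]
    and recover the phase in [-pi, pi] with atan2."""
    return math.atan2(2.0 * s - 1.0, 2.0 * c - 1.0)

phi = 2.0
c, s = phase_to_input(phi)
assert 0.0 <= c <= 1.0 and 0.0 <= s <= 1.0
print(round(output_to_phase(c, s), 6))  # 2.0
```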
## Manual
In order to train models, you have to manually download a holography database and give the paths to the training database (args.train_dir), the evaluation database (args.eval_dir) and the testing database (args.test_dir).
The application is used with the main_holo.py script, with the different arguments from the config file.
To start a training with default parameters, you can use the following command:
```
#launch a training
python3 main_holo.py --config dataset.yaml
```
You can specify in the config file how the training data is pre-processed by the DataLoader:
```
# Training set
train:
  path: "../XP_holography/CORPUS/DATABASE/" # folder where the training images are located according to the csv file
  csv: "holodeep_mat.csv" # list of the training images (clean, noisy, Ns)
  extension: "mat" # format of the training images (matlab or tiff)
  matlab_key_noisy: "NoisyPhase" # key for noisy images in case of matlab files; set to null for tiff
  matlab_key_clean: "Phase" # key for clean images in case of matlab files; set to null for tiff
  patch: # parameters for patch extraction
    nb_patch_per_image: 2 # number of patches per image
    size: 50 # size of the square patches
    stride: 50 # stride between two consecutive patches
    step: 0 # step at the beginning and end of each image
    augmentation: "add45,add45transpose,cossin,flip,rot90"
    # data augmentation functions applied on all patches
```
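As a rough sketch of how size, stride and step could drive patch extraction (pure Python on a toy image; the actual DataLoader, for instance how nb_patch_per_image selects among the patches, may behave differently):

```python
def extract_patches(image, size, stride, step=0):
    """Slide a size x size window with the given stride over the image,
    skipping `step` pixels at each border."""
    h, w = len(image), len(image[0])
    patches = []
    for i in range(step, h - size - step + 1, stride):
        for j in range(step, w - size - step + 1, stride):
            patches.append([row[j:j + size] for row in image[i:i + size]])
    return patches

# A 4x4 toy image; patches of size 2 with stride 2 give 4 patches
image = [[r * 4 + c for c in range(4)] for r in range(4)]
patches = extract_patches(image, size=2, stride=2)
print(len(patches))  # 4
print(patches[0])    # [[0, 1], [4, 5]]
```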
The possible data augmentation functions are the following:
* *add45*: adds an angle of pi/4 to the current phase
* *transpose*: transposes the patch in the phase domain
* *cossin*: computes both the cosine and the sine transforms of the phase values; otherwise only one of the two is computed
* *flip*: flips the cosine or sine image upside down
* *rot90* (90, 180, 270): rotates the cosine or sine image by 90 degrees (or 180, or 270)
* *add45transpose*: combines the add45 and transpose augmentation functions
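A few of these augmentations can be sketched in pure Python (an illustration only; the function bodies are assumptions, not the toolkit's implementation — add45 shifts the phase before the cosine/sine conversion, transpose and rot90 act on the 2D patch):

```python
import math

def add45(phase_patch):
    """Add pi/4 to every phase value, wrapping back into [-pi, pi)."""
    return [[(p + math.pi / 4 + math.pi) % (2 * math.pi) - math.pi
             for p in row] for row in phase_patch]

def transpose(patch):
    """Swap rows and columns of a square patch."""
    return [list(row) for row in zip(*patch)]

def rot90(patch):
    """Rotate the patch by 90 degrees counter-clockwise."""
    return [list(row) for row in zip(*patch)][::-1]

patch = [[1, 2], [3, 4]]
print(transpose(patch))  # [[1, 3], [2, 4]]
print(rot90(patch))      # [[2, 4], [1, 3]]
```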
You can also specify in the config file the different hyperparameters for the training:
```
#launch a training in which the image will be 50 by 50 in black and white
python3 main_holo.py --train_image_size 50 50 --image_mode 1
# training parameters
model:
  batch_size: 64 # number of patches in each batch
  num_epochs: 2 # number of epochs the model will train
  D: 4 # number of res blocks
  C: 64 # kernel size for convolutional layers (not tested)
  lr: 0.001 # learning rate
  input_dir: "./PyTorchCheckpoint/" # folder where the models are saved during training
  output_dir: null # directory of saved checkpoints for denoising operation or retraining
  start_epoch: null # epoch from which to retrain; if > 0, a model must be saved in input_dir
  freq_save: 1 # number of epochs between model saves
  perform_validation: True # perform validation on the validation set every freq_save epochs
  graph: True # plot the loss function against epochs
```
The parameters input_dir and start_epoch are used for re-training and de-noising operations.
In input_dir, give the path to the model you want to use; in start_epoch, give the epoch number from which you want to re-train or run a de-noising operation.
The new model will be saved in the input_dir directory.
```
#re-launch a training starting from the model experiment_xxx at epoch 130
model:
  input_dir: "./PyTorchExperiments/experiment_xxx/"
  output_dir: null # directory of saved checkpoints for denoising operation or retraining
  start_epoch: 130 # epoch from which to retrain; if > 0, a model must be saved in input_dir
```
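As an illustration of the resume logic, the sketch below picks the checkpoint file matching start_epoch inside input_dir. The file-name pattern model_<epoch>.pth is hypothetical; the toolkit's actual naming may differ:

```python
import os
import tempfile

def resume_checkpoint(input_dir, start_epoch):
    """Return the checkpoint path for start_epoch, or None when
    start_epoch is null/0 (training starts from scratch)."""
    if not start_epoch:
        return None
    # hypothetical naming scheme, for illustration only
    path = os.path.join(input_dir, f"model_{start_epoch}.pth")
    if not os.path.isfile(path):
        raise FileNotFoundError(f"no saved model for epoch {start_epoch} in {input_dir}")
    return path

# Demonstrate with a temporary directory standing in for experiment_xxx
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "model_130.pth"), "w").close()
    print(os.path.basename(resume_checkpoint(d, 130)))  # model_130.pth
    print(resume_checkpoint(d, None))                   # None
```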
To do a de-noising operation you have to use the test_mode argument.
You can use the test section of the config file to specify which image you want to de-noise.
The clean image is only used to measure the level of remaining noise after the de-noising operation. If you don't have a clean reference, just give the noisy image again.
```
#launch a denoising operation with the model experiment_xxx at the epoch 130
python3 main_holo.py --config dataset.yaml --test_mode
```
The model and test sections of the config file for this operation look like:
```
#launch a denoising operation with 3 iterations, with the model experiment_xxx at the epoch 130
model:
  input_dir: "./PyTorchExperiments/experiment_xxx/"
  output_dir: null # directory of saved checkpoints for denoising operation or retraining
  start_epoch: 130 # epoch from which to retrain; if > 0, a model must be saved in input_dir
# Test set
test:
  path: "../XP_holography/"
  csv: "holodeep_test.csv"
  extension: "tiff"
  matlab_key_noisy: "NoisyPhase"
  matlab_key_clean: "Phase"
  nb_iteration: 3 # number of successive denoising iterations
  save_test_dir: "./TestImages/" # directory of saved test images after denoising operation
```
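If nb_iteration means the de-noising is applied repeatedly, each output becoming the next input, the loop can be sketched as follows, with a stand-in denoiser in place of the real network:

```python
def iterative_denoise(image, denoise, nb_iteration):
    """Apply the denoiser repeatedly, feeding each output back as input."""
    for _ in range(nb_iteration):
        image = denoise(image)
    return image

# Stand-in denoiser: shrink values toward 0 (a real model would
# predict the clean phase instead).
halve = lambda img: [x / 2 for x in img]

print(iterative_denoise([8.0, -4.0], halve, nb_iteration=3))  # [1.0, -0.5]
```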
If you do not give an image to de-noise, an evaluation of the entire training and testing databases will be performed at start.
```
#launch a denoising operation on the 25 images of the Holography/DATABASE and Holography/DATAEVAL/DATAEVAL
#databases with the model experiment_xxx at the epoch 130
python3 main_holo.py --config dataset.yaml --test_mode
```
The results of those de-noising operations can be found in the TestImages directory (save_test_dir).
## Results