### Before reading this notebook, please follow the instructions in [INSTALL.md](./INSTALL.md)
%% Cell type:markdown id: tags:
## I - How to launch a simulation?
- All you need is to specify some hyperparameters relative to the experiment. Please find below the list of hyperparameters you have to provide according to the setup (note that the function `initialize_hyperparameters` can simplify this task):
| Name of the hyperparameter | Description | Default value (ours) |
|---|---|---|
| seed | The seed used to make the training reproducible | 2021 |
| N_fold | The number of folds used for cross-validation | 3 |
| im_size | The size of the patches used for the training phases | 128 |
| max_epochs | The maximal number of epochs for the training phases | 30 |
| earlystop_patience | The number of epochs to wait before early stopping | 5 |
| lr | The initial learning rate for our training phases | 0.0001 |
| train_batch_size | The batch size used during the training phases | 128 |
| eval_batch_size | The batch size used during the evaluation phases | 512 |
| detector_name | The name of the forgery detector you use | 'Bayar' |
| source_path | The filename of your source domain | 'source-none.hdf5' |
| target_path | The filename of your target domain | 'target-qf(5).hdf5' |
| source_name | Name of the source domain (deduced from source_path) | 'source-none' |
| target_name | Name of the target domain (deduced from target_path) | 'target-qf(5)' |
| setup | The setup you consider for your experiment | 'SrcOnly' |
| domain_paths | The filenames of the domains used for the evaluation phases | ["target-qf(5).hdf5", "target-qf(10).hdf5", "target-qf(20).hdf5", "target-qf(50).hdf5", "target-qf(100).hdf5", "target-none.hdf5"] |
| domain_names | The names of the domains for the evaluation phases (deduced from domain_paths) | ["qf(5)", "qf(10)", "qf(20)", "qf(50)", "qf(100)", "none"] |
| nb_source_max | The maximal number of source patches to use during training | 10**(8) |
| nb_target_max | The maximal number of target patches to use during training | 10**(8) |
| save_at_each_epoch | If True, the weights of the detector are saved at each epoch, for the first fold only | True |
| precisions | Some precisions about the experiment (deduced from source_path and target_path) | s=none_t=qf(5) |
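As an illustration, the table above can be turned into a plain dictionary by hand; this is only a sketch of what `initialize_hyperparameters` might produce for the default setup (the key names are assumed from the table, not taken from the repository's code).

%% Cell type:code id: tags:

``` python
# Illustrative sketch: building the hyperparameter dictionary manually,
# using the default values from the table above.
hyperparameters = {
    "seed": 2021,
    "N_fold": 3,
    "im_size": 128,
    "max_epochs": 30,
    "earlystop_patience": 5,
    "lr": 0.0001,
    "train_batch_size": 128,
    "eval_batch_size": 512,
    "detector_name": "Bayar",
    "source_path": "source-none.hdf5",
    "target_path": "target-qf(5).hdf5",
    "setup": "SrcOnly",
    "nb_source_max": 10**8,
    "nb_target_max": 10**8,
    "save_at_each_epoch": True,
}

# Names deduced from the filenames, as described in the table
hyperparameters["source_name"] = hyperparameters["source_path"].removesuffix(".hdf5")
hyperparameters["target_name"] = hyperparameters["target_path"].removesuffix(".hdf5")

print(hyperparameters["source_name"], hyperparameters["target_name"])
```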
For what follows, note that the source and target filenames are stored in the lists `sources` and `targets`, implicitly imported above.
%% Cell type:code id: tags:
``` python
print(sources)
print(targets)
```
%% Cell type:markdown id: tags:
- Example 1: We want to test the Experiment `SrcOnly_s=none_t=qf(5)`
- Example 3: We want to test the Experiment `Update(sigma=8)_s=None_t=qf(5)`
*For that, you also need to specify the bandwidth parameter at the level of each final dense layer. This is possible with an extra hyperparameter 'sigmas' that you need to add.*
*You can also mention in 'details' that you chose a specific bandwidth for your experiment so that it appears in the name of the file containing the results.*
- Example 4: We want to test the Experiment `Update(sigmas=[2,3,4])_s=None_t=qf(5)_N_t=1000`
*For that, you also need to specify the bandwidth parameter at the level of each final dense layer and change the default value of nb_target_max.*
*You can also mention in 'details' that you chose specific bandwidths for your experiment so that they appear in the name of the file containing the results.*
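For Example 4, the extra keys could be set as follows; this is a hypothetical sketch, assuming the 'sigmas', 'nb_target_max', and 'details' key names described above (the exact keys expected by the code may differ).

%% Cell type:code id: tags:

``` python
# Hypothetical sketch: adapting the hyperparameters for the
# Update(sigmas=[2,3,4]) experiment with N_t=1000.
hyperparameters = {"setup": "Update", "nb_target_max": 10**8}

hyperparameters["sigmas"] = [2, 3, 4]          # one bandwidth per final dense layer
hyperparameters["nb_target_max"] = 1000        # use at most 1000 target patches
hyperparameters["details"] = "sigmas=[2,3,4]"  # appears in the results filename

print(hyperparameters)
```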
## II - Can I reproduce the nice gif from the README to see what is going on in each experiment?
%% Cell type:code id: tags:
``` python
import torch
import imageio
```
%% Cell type:markdown id: tags:
Of course! Setting the key `save_at_each_epoch` to True saves the weights of your detector at each epoch of the first training phase (first fold).
Once you have all the weights, you can use the function below.
It requires installing imageio with `pip install imageio`.
Moreover, you first need to obtain a batch and its associated labels from your domain.
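Obtaining such a batch might look like the sketch below. It assumes a PyTorch `DataLoader`; the random tensors are only stand-ins for a real target domain, and the actual loader construction depends on the repository's data utilities.

%% Cell type:code id: tags:

``` python
# Sketch: grabbing one batch of 512 patches with its labels.
# The tensors here are dummies standing in for a real target domain.
import torch
from torch.utils.data import DataLoader, TensorDataset

patches = torch.randn(512, 3, 128, 128)   # 512 patches of size 128x128
labels = torch.randint(0, 2, (512,))      # binary forgery labels

loader = DataLoader(TensorDataset(patches, labels), batch_size=512)
batch, batch_labels = next(iter(loader))  # the batch fed to the gif-making function

print(batch.shape, batch_labels.shape)
```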
To get a quick idea of the impact of adaptation on the training phase, we selected a batch of size 512 from the target and represented the evolution of the final embedding distributions from this batch during training, according to the setups **SrcOnly** and **Update($`\sigma=8`$)**
described in the paper. The training for the SrcOnly setup is on the left, while the one for **Update($`\sigma=8`$)** is on the right.
**Don't hesitate to click on the gif below to see it better!**