# Parallel hyperparameter optimization of spiking neural networks
> [Français](README_fr.md)
> [Back to main](https://gitlab.cristal.univ-lille.fr/tfirmin/mythesis)
This repository contains hyperlinks to redirect the reader to the source code of each chapter from the thesis [Parallel hyperparameter optimization of spiking neural networks](https://theses.fr/s327519).
The thesis will be accessible at the following link (_available once published_):
* [HAL]()
## Chapter 5 - Silent networks: a vicious trap for Hyperparameter Optimization
Chapter 5 tackles HyperParameter Optimization (HPO) of Spiking Neural Networks (SNNs). A particular type of SNN is described and named a _silent network_. Silent networks generalize the signal loss problem: weak or absent spiking activity can be explained by mistuned hyperparameters or an ill-suited network architecture. This chapter studies the impact of silent networks on the HPO algorithm. Additionally, the HPO algorithm leverages silent networks via early stopping and blackbox constraints.
## Content
* The file `search_spaces.py` contains all the search spaces for chapters 5 and 6.
* The file `load_dataset.py` contains dataloaders and encoders for different datasets.
* The folder `scbo` contains scripts of the experiments with the SCBO algorithm.
* The folder `turbo` contains scripts of the experiments with the TuRBO algorithm.
* The folder `sensitivity` contains scripts for the early stopping sensitivity analysis.
## Zellij
First, install the frozen thesis version of **Zellij**.
Zellij is the main Python package developed for this thesis; it includes both the FBD and HPO algorithms.
The version used in the thesis is frozen on a dedicated branch, so the documentation is not up to date.
> [Github to Zellij](https://github.com/ThomasFirmin/zellij/tree/thesis_freeze)
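A possible way to install the frozen branch directly with `pip` (a sketch, assuming the repository root contains a standard Python package definition):
```
$ python3 -m pip install git+https://github.com/ThomasFirmin/zellij.git@thesis_freeze
```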
## Simulators
Install the local packages used to instantiate the networks and to implement early stopping with [BindsNET](https://bindsnet-docs.readthedocs.io/), [LAVA-DL](https://lava-nc.org/), and [SpikingJelly](https://spikingjelly.readthedocs.io/zh-cn/latest/#).
> BindsNET
```
$ python3 -m pip install -e ./code/Lie
```
> LAVA-DL
```
$ python3 -m pip install -e ./code/Lave
```
> SpikingJelly
```
$ python3 -m pip install -e ./code/SpikingJelly
```
## Run
The experiments were designed for the [Jean Zay supercomputer](http://www.idris.fr/jean-zay/jean-zay-presentation.html) on the distributed [V100 GPU partition](http://www.idris.fr/jean-zay/gpu/jean-zay-gpu-exec_partition_slurm.html).
```
$ srun python -u <PATH_TO_SCRIPT> --dataset <DATASET NAME> --mpi flexible --record_time --gpu --gpu_per_node 4 --data <NUMBER OF DATA POINTS> --time <TIME BUDGET IN SECONDS>
```
> Example
```
$ srun python -u c_launch_scbo_mnist_lava.py --dataset MNIST --mpi flexible --record_time --gpu --gpu_per_node 4 --data 60000 --time 360000
```
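For reference, a minimal Slurm batch script wrapping the example above might look as follows. The job name, node count, and wall time are illustrative assumptions, not the exact settings used for the thesis experiments:
```
#!/bin/bash
#SBATCH --job-name=scbo_mnist_lava   # illustrative job name
#SBATCH --nodes=2                    # assumed node count, adjust per experiment
#SBATCH --ntasks-per-node=4          # one MPI task per GPU
#SBATCH --gres=gpu:4                 # 4 V100 GPUs per node
#SBATCH --time=100:00:00             # wall time covering the 360000 s budget

srun python -u c_launch_scbo_mnist_lava.py --dataset MNIST --mpi flexible \
    --record_time --gpu --gpu_per_node 4 --data 60000 --time 360000
```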
## Authors and acknowledgment
* Author: Thomas Firmin
* Supervisor: El-Ghazali Talbi
* Co-Supervisor: Pierre Boulet
Experiments presented in this work were carried out using the Grid'5000 testbed, supported by a scientific interest group hosted by Inria and including CNRS, RENATER and several Universities as well as other organizations (see https://www.grid5000.fr).
This work was granted access to the HPC resources of IDRIS under the allocation 2023-AD011014347 made by GENCI.
This work has been supported by the University of Lille, the ANR-20-THIA-0014 program AI_PhD@Lille, and the ANR PEPR AI and Numpex. It was also supported by IRCICA (CNRS and Univ. Lille, USR 3380).
## License
CeCILL-C
## Chapter 4 - Partition-based global optimization
Chapter 4 describes a generalization of a family of algorithms based on a hierarchical decomposition of the search space. We introduce **Zellij**, a framework unifying various algorithms from different research fields. The chapter ends with a discussion of a new algorithm based on a decomposition via Latin Hypercubes.