    Parallel hyperparameter optimization of spiking neural networks

    Back to main

This repository contains hyperlinks redirecting the reader to the source code of each chapter of the thesis *Parallel hyperparameter optimization of spiking neural networks*.

The thesis will be accessible here once it is published:

    Chapter 5 - Silent networks: a vicious trap for Hyperparameter Optimization

Chapter 5 tackles HyperParameter Optimization (HPO) of Spiking Neural Networks (SNNs). A particular class of SNNs is described and named *silent networks*. Silent networks generalize the signal loss problem, since a lack or absence of spiking activity can be caused by mistuned hyperparameters or by the network architecture. This chapter studies the impact of silent networks on the HPO algorithm. Additionally, the HPO algorithm leverages silent networks via early stopping and black-box constraints.
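The early-stopping and black-box constraint mechanism described above can be sketched as follows. This is a minimal illustration only: the function names and the exact constraint formulation are assumptions for the example, not the thesis implementation.

```python
import numpy as np

def is_silent(output_spikes, threshold=0):
    # A network is considered "silent" when its output layer emits no
    # (or almost no) spikes over the whole simulation window.
    return int(output_spikes.sum()) <= threshold

def constrained_objective(output_spikes, accuracy):
    # Returns (objective, constraint) in the convention of black-box
    # constrained optimizers such as SCBO: constraint <= 0 means feasible.
    # Silent configurations violate the constraint, so the optimizer can
    # discard them without wasting a full training run.
    total = int(output_spikes.sum())
    constraint = 1 - total            # > 0 when the network is silent
    objective = accuracy if total > 0 else 0.0
    return objective, constraint

# Toy example: 100 timesteps, 10 output neurons.
silent = np.zeros((100, 10), dtype=int)
active = np.zeros((100, 10), dtype=int)
active[::10, 0] = 1                   # one neuron spiking every 10 steps

print(is_silent(silent))                            # True
print(constrained_objective(active, accuracy=0.9))  # (0.9, -9)
```

In practice, such a check can be applied after a few epochs (early stopping) so that silent configurations are abandoned quickly instead of consuming their full training budget.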

    Content

    • The file search_spaces.py contains all the search spaces for chapters 5 and 6.
    • The file load_dataset.py contains dataloaders and encoders for different datasets.
    • The folder scbo contains scripts of the experiments with the SCBO algorithm.
    • The folder turbo contains scripts of the experiments with the TuRBO algorithm.
    • The folder sensitivity contains scripts for the early stopping sensitivity analysis.
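As an illustration of the kind of encoder found in `load_dataset.py`, the sketch below shows rate (Bernoulli) encoding, a common way to turn static inputs into spike trains. It is a generic example under the assumption that such an encoder is used, not the repository's actual implementation.

```python
import numpy as np

def rate_encode(image, timesteps=100, rng=None):
    # Rate (Bernoulli) encoding: a pixel intensity in [0, 1] becomes the
    # per-timestep probability of emitting a spike.
    if rng is None:
        rng = np.random.default_rng(0)
    probs = np.clip(np.asarray(image, dtype=float), 0.0, 1.0)
    return (rng.random((timesteps,) + probs.shape) < probs).astype(np.uint8)

# Toy "image": two pixels, one dark and one bright.
spikes = rate_encode(np.array([0.0, 0.9]), timesteps=1000)
print(spikes.shape)        # (1000, 2)
print(spikes[:, 0].sum())  # 0 spikes: probability 0
# The bright pixel spikes roughly 900 times out of 1000 on average.
```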

    Zellij

First, install the frozen thesis version of Zellij.

Zellij is the main Python package developed for this thesis; it includes both the FBD and HPO algorithms. The version used in the thesis was frozen, and its documentation is not up to date.

GitHub repository of Zellij

    Simulators

Install the local packages used to instantiate the networks and perform early stopping with BindsNET, LAVA-DL, and SpikingJelly.

    BindsNET

    $ python3 -m pip install -e ./code/Lie

    LAVA-DL

    $ python3 -m pip install -e ./code/Lave

    SpikingJelly

    $ python3 -m pip install -e ./code/SpikingJelly

    Run

    The experiments were designed for the Jean Zay supercomputer on the distributed V100 GPU partition.

    $ srun python -u <PATH_TO_SCRIPT> --dataset <DATASET name> --mpi flexible --record_time --gpu --gpu_per_node 4 --data <# DATA POINT> --time <time budget in seconds>

    Example

    $ srun python -u c_launch_scbo_mnist_lava.py --dataset MNIST --mpi flexible --record_time --gpu --gpu_per_node 4 --data 60000 --time 360000 

    Authors and acknowledgment

    • Author: Thomas Firmin
    • Supervisor: El-Ghazali Talbi
    • Co-Supervisor: Pierre Boulet

Experiments presented in this work were carried out using the Grid'5000 testbed, supported by a scientific interest group hosted by Inria and including CNRS, RENATER and several Universities as well as other organizations (see https://www.grid5000.fr).

    This work was granted access to the HPC resources of IDRIS under the allocation 2023-AD011014347 made by GENCI.

This work has been supported by the University of Lille, the ANR-20-THIA-0014 program AI_PhD@Lille, and the ANR PEPR AI and Numpex. It was also supported by IRCICA (CNRS and Univ. Lille, USR-3380).

    License

    CeCILL-C