Tackling Avalanches in Spiking Neural Network Accelerators
Spiking neural networks (SNNs) have become a promising approach to today's machine learning tasks, and many accelerators have been proposed by both industry and academia. The vast majority of these architectures use a multi-core system interconnected by a Network-on-Chip (NoC). A common challenge in these accelerators is the tendency of the neural network to generate a large number of spikes at once, called an avalanche; the same phenomenon is observed in biological neural networks. Surprisingly few works address this issue, even though avalanches place a heavy communication load on the NoC.
In this master thesis, a standard spiking neural network is first trained using TensorFlow, and the occurrence of avalanches is demonstrated in software. Next, the frequency and timing of avalanches are modeled mathematically. The software model is then attached to a SystemC model of the on-chip network, and the performance impact of avalanches is quantified. Finally, an architectural optimization is proposed to overcome the resulting performance limitations.
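One common mathematical model for the size and timing of neuronal avalanches (both in biology and in simulated SNNs) is a branching process: each spike triggers a random number of follow-up spikes with mean branching ratio σ, and an avalanche is the resulting cascade. The sketch below is an illustrative assumption for how the modeling step might start, not the thesis's prescribed method; the function name, the Poisson offspring choice, and the `max_size` cap are all assumptions.

```python
import numpy as np

def avalanche_size(sigma, rng, max_size=100_000):
    """Size of one avalanche in a Galton-Watson branching process.

    Starts from a single spike; every active spike triggers
    Poisson(sigma) follow-up spikes in the next step. The cascade
    is truncated at max_size to keep the simulation bounded.
    """
    active, total = 1, 1
    while active and total < max_size:
        children = rng.poisson(sigma, size=active).sum()
        total += children
        active = children
    return total

# Subcritical regime (sigma < 1): the expected avalanche size
# is finite, 1 / (1 - sigma) = 10 for sigma = 0.9.
rng = np.random.default_rng(0)
sizes = [avalanche_size(0.9, rng) for _ in range(1000)]
print(np.mean(sizes))
```

Near the critical point σ = 1 the size distribution becomes heavy-tailed, which is exactly the regime in which occasional very large spike bursts would stress the NoC.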
Tasks:
- Train a spiking neural network (Google's MobileNetV2) with TensorFlow
- Model timing and behavior of avalanches
- Attach network model to a SystemC model of an on-chip communication network
- Quantify the avalanches’ performance impact
- Propose an architectural optimization to tackle limitations
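As a starting point for the avalanche modeling and quantification tasks above, avalanches first have to be detected in a recorded spike trace. A minimal sketch, assuming a simple criterion (an avalanche is a contiguous run of timesteps whose network-wide spike count exceeds a threshold; the function name and threshold are illustrative assumptions):

```python
import numpy as np

def detect_avalanches(spikes_per_step, threshold):
    """Return (start, end) index pairs (end exclusive) of contiguous
    runs of timesteps whose spike count exceeds the threshold."""
    above = spikes_per_step > threshold
    avalanches = []
    start = None
    for t, flag in enumerate(above):
        if flag and start is None:
            start = t                     # avalanche begins
        elif not flag and start is not None:
            avalanches.append((start, t))  # avalanche ended at t-1
            start = None
    if start is not None:                  # trace ends mid-avalanche
        avalanches.append((start, len(above)))
    return avalanches

# Synthetic per-timestep spike counts with two bursts:
counts = np.array([2, 3, 40, 55, 60, 4, 1, 70, 80, 3])
print(detect_avalanches(counts, threshold=10))  # [(2, 5), (7, 9)]
```

The per-timestep counts would come from the trained network's spike raster (e.g. summing the binary spike tensor over the neuron axis), and the resulting intervals feed directly into the timing model and the SystemC traffic injection.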
Prerequisites:
- Interest in machine learning algorithms
- Good knowledge of C++ and Python
- Interest in modeling and simulation