Development of an auto-tuner for efficient neural-network mapping for neuromorphic systems


Ultra-low-power, fully programmable neuromorphic computing is one of the most important research topics in industry and academia. This work is conducted in cooperation with an international silicon company dedicated to neuromorphic computing for sensor analytics and machine learning, powered by brain-inspired technology. Their architecture is a massively parallel SoC (over 100 cores), optimized for machine-learning inference and data-flow applications.


Part of any processor technology is an SDK whose tasks include mapping a standard neural-network application, developed with TensorFlow, onto the SoC. To do so, the layers of the neural network must be assigned to cores in the NoC mesh of the massively parallel many-core system. The goal of this thesis is to develop an auto-tuner that improves GML's mapper, the tool responsible for this layer-to-core assignment. The auto-tuner receives a set of optimization parameters, which are taken into account when optimizing the mapping.
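To illustrate the kind of problem the mapper solves, the sketch below shows a greatly simplified, hypothetical layer-to-core placement on a 2D NoC mesh. The function name `mapLayersToMesh`, the serpentine placement strategy, and the row-major core numbering are all assumptions for illustration only, not GML's actual mapper; a real mapper must additionally weigh core capacity, link bandwidth, and the auto-tuner's optimization parameters.

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// Hypothetical sketch: place consecutive NN layers onto cores of a
// meshW x meshH NoC mesh so that successive layers always land on
// adjacent cores, keeping inter-layer traffic to single-hop links.
// Core IDs are row-major indices into the mesh.
std::vector<int> mapLayersToMesh(int numLayers, int meshW, int meshH) {
    std::vector<int> placement;
    // Walk the mesh in serpentine (boustrophedon) order: neighbouring
    // positions in this order are always exactly one hop apart.
    for (int y = 0; y < meshH && (int)placement.size() < numLayers; ++y) {
        if (y % 2 == 0) {
            for (int x = 0; x < meshW && (int)placement.size() < numLayers; ++x)
                placement.push_back(y * meshW + x);
        } else {
            for (int x = meshW - 1; x >= 0 && (int)placement.size() < numLayers; --x)
                placement.push_back(y * meshW + x);
        }
    }
    return placement;
}

// Manhattan hop distance between two cores of a meshW-wide mesh —
// the natural cost metric for XY-routed NoC traffic.
int hops(int a, int b, int meshW) {
    return std::abs(a % meshW - b % meshW) + std::abs(a / meshW - b / meshW);
}
```

An auto-tuner in the spirit of this thesis would then search over such placement strategies and their parameters, scoring each candidate mapping by a cost model (e.g. total hop count or congestion) and keeping the best.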


  • Rigorous exploration and benchmarking of selected mapping strategies for massively parallel neuromorphic computer systems
  • Implementation of the Mapper tool


  • Master's student with an electrical/computer engineering background
  • Available for a period of 9-12 months
  • Strong background in computer architecture or advanced digital system design
  • Strong programming background (preferably in C++)
  • Understanding of neural networks and the underlying principles
  • “Can-do” mentality, excellent problem-solving capabilities, and the motivation to dive deep into a novel neuromorphic compute architecture