Background
Machine learning, big data, and the internet of things are emerging computing applications that are extremely demanding in terms of storage, energy, and performance. While conventional von Neumann architectures face significant challenges in coping with such demands, computing-in-memory architectures in general, and neuro-inspired architectures in particular, are a promising way to overcome these limitations. This Master Thesis tackles the software-hardware interface of novel neuro-inspired accelerators by investigating the mapping and scheduling of state-of-the-art workloads onto memristor-based systolic arrays.
Description
In the context of this work, we will develop a novel approach to computation on neuromorphic devices. The underlying problems are closely related to existing solutions for Google's TPU-style data processing, and those approaches must be translated into the neuromorphic world.
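To make the correspondence concrete: a TPU's systolic array accelerates matrix-vector multiplication digitally, whereas a memristor crossbar computes the same operation in the analog domain, with input voltages applied to the rows and column currents summing according to Ohm's and Kirchhoff's laws. The following minimal Python/NumPy sketch models this behaviour; the conductance range, the number of levels, and the optional read noise are illustrative assumptions rather than device data.

import numpy as np

def program_crossbar(weights, g_min=1e-6, g_max=1e-4, levels=16):
    """Map a real-valued weight matrix onto discrete conductance levels.

    g_min, g_max and the number of levels are illustrative assumptions;
    negative weights use a differential pair of crossbars.
    """
    scale = (g_max - g_min) / np.abs(weights).max()
    g_pos = g_min + np.clip(weights, 0, None) * scale   # positive part
    g_neg = g_min + np.clip(-weights, 0, None) * scale  # negative part
    step = (g_max - g_min) / (levels - 1)
    quantize = lambda g: g_min + np.round((g - g_min) / step) * step
    return quantize(g_pos), quantize(g_neg), scale

def crossbar_mvm(g_pos, g_neg, x, scale, noise_std=0.0, rng=None):
    """Analog matrix-vector multiply: column currents i = G^T @ v."""
    rng = rng or np.random.default_rng()
    i_pos = g_pos.T @ x   # input vector applied as row voltages
    i_neg = g_neg.T @ x
    if noise_std > 0:     # optional read noise on the sensed currents
        i_pos = i_pos + rng.normal(0.0, noise_std, i_pos.shape)
        i_neg = i_neg + rng.normal(0.0, noise_std, i_neg.shape)
    # Differential read-out and rescaling back to the weight domain.
    return (i_pos - i_neg) / scale

# Usage: the analog result approximates the exact product W^T @ x.
W = np.random.randn(8, 4)
x = np.random.randn(8)
g_pos, g_neg, scale = program_crossbar(W)
print(crossbar_mvm(g_pos, g_neg, x, scale))
print(W.T @ x)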
Tasks
Among others, the following steps must be completed:
- Characterize neuromorphic processing devices
- Implement a neuromorphic extension for TensorFlow (see the first sketch after this list)
- Formulate optimization problems for mapping computations onto the hardware (see the second sketch after this list)
- Propose a neuromorphic mapping and scheduling tool chain
- Evaluate and verify the approach by means of simulation
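As a concrete, hedged example of what the TensorFlow extension could look like, the first sketch below defines a Keras layer that emulates crossbar behaviour by quantizing its weights to a fixed number of conductance levels and adding read noise during training. The class name, the number of levels, and the noise model are assumptions for illustration only; an actual extension would replace them with the device characteristics determined in the first task.

import tensorflow as tf

class CrossbarDense(tf.keras.layers.Layer):
    """Dense layer whose weights are quantized to emulate a memristor crossbar.

    The number of conductance levels and the read-noise level are
    illustrative assumptions; a real extension would plug in measured
    device models instead.
    """
    def __init__(self, units, levels=16, noise_std=0.01, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.levels = levels
        self.noise_std = noise_std

    def build(self, input_shape):
        self.w = self.add_weight(
            name="w", shape=(int(input_shape[-1]), self.units),
            initializer="glorot_uniform", trainable=True)

    def call(self, inputs, training=False):
        # Quantize weights to the discrete conductance levels of the device,
        # with a straight-through estimator so gradients still reach self.w.
        w_max = tf.reduce_max(tf.abs(self.w)) + 1e-12
        step = 2.0 * w_max / (self.levels - 1)
        w_q = tf.round(self.w / step) * step
        w_q = self.w + tf.stop_gradient(w_q - self.w)
        y = tf.matmul(inputs, w_q)
        if training and self.noise_std > 0:
            # Additive read noise on the analog accumulation.
            y = y + tf.random.normal(tf.shape(y), stddev=self.noise_std)
        return y

# Usage: drop the layer into an ordinary Keras model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    CrossbarDense(128), tf.keras.layers.ReLU(),
    CrossbarDense(10),
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))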
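One way to approach the mapping formulation from the task list is an integer linear program. The second sketch below is a hypothetical toy formulation, not the one to be developed in this thesis: every weight matrix is cut into tile-sized blocks, each block is assigned to exactly one crossbar tile, and the objective balances the per-tile load. The 128x128 tile size, the three-layer workload, the unit block cost, and the choice of PuLP as solver front end are all illustrative assumptions.

import math
import pulp

TILE = 128     # assumed crossbar tile dimension (rows = columns = 128)
N_TILES = 8    # assumed number of physical crossbar tiles

# Illustrative workload: (rows, cols) of each layer's weight matrix.
layers = {"fc1": (784, 128), "fc2": (128, 128), "fc3": (128, 10)}

# Cut every layer into TILE x TILE blocks; each block occupies one tile
# slot and contributes a unit cost when its tile is time-multiplexed.
blocks = []
for name, (rows, cols) in layers.items():
    for i in range(math.ceil(rows / TILE) * math.ceil(cols / TILE)):
        blocks.append(f"{name}_{i}")

prob = pulp.LpProblem("block_to_tile_mapping", pulp.LpMinimize)
x = pulp.LpVariable.dicts(
    "x", [(b, t) for b in blocks for t in range(N_TILES)], cat="Binary")
makespan = pulp.LpVariable("makespan", lowBound=0)

prob += makespan  # objective: minimize the load of the slowest tile
for b in blocks:  # every block is mapped to exactly one tile
    prob += pulp.lpSum(x[(b, t)] for t in range(N_TILES)) == 1
for t in range(N_TILES):  # the load of each tile bounds the makespan
    prob += pulp.lpSum(x[(b, t)] for b in blocks) <= makespan

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for b in blocks:
    tile = next(t for t in range(N_TILES) if x[(b, t)].value() > 0.5)
    print(b, "-> tile", tile)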