- IEEE Design & Test
- accepted for publication
Machine learning applications are characterised by the enormous amount of data movement required for fast training or classification, as in real-time image and speech recognition. At the same time, modern computing systems are predominantly based on the traditional von Neumann architecture, which separates the computing and memory units and connects them through a data bus. This so-called "von Neumann bottleneck" limits bandwidth and increases power consumption, standing in stark contrast to the demands of machine learning applications. Neuromorphic computing represents a promising solution to overcome these limitations by using emerging non-volatile memory technologies. This computing paradigm overcomes the von Neumann bottleneck by shifting computation into the memory unit, mimicking the biological (e.g., mammalian) brain in its capability to process massive amounts of information in parallel. Nevertheless, neuromorphic computing raises numerous open research questions and challenges. This work provides a comprehensive survey of neuromorphic computing focusing on three essential aspects. First, the latest neuromorphic system architectures are described, and the enhancements proposed over time are highlighted. Next, simulation platforms that are fundamental for investigating this new computing paradigm are reviewed at four abstraction levels: system, architecture, circuit, and device. Finally, hardware security threats are discussed in light of existing work in the CMOS domain, in order to draw lessons from it and identify corresponding threats in neuromorphic platforms.