Artificial Intelligence (AI) has become an important driver of innovation, as it is applicable to nearly all areas of computer science. Because AI workloads are computationally demanding, specialized hardware accelerators have gained traction, leading to products such as Google's Tensor Processing Unit (TPU). Neuromorphic computing is a novel approach that uses in-memory computing and post-CMOS technologies to improve AI accelerators. Its promises are manifold, e.g., reduced power consumption and improved latency. Hence, a plethora of research on devices, architectures, systems, and security has emerged. In this seminar, we cover techniques and architectures presented in the recent literature for building and improving AI accelerators using neuromorphic computing, for both cloud and edge applications. We also dwell briefly on device-level limitations and their mitigation techniques from the physical and the system-integration perspectives.
Related projects: NEUROTEC