The huge resource footprint of AI applications has become one of the field's most urgent issues, limiting innovation at both ends of the spectrum. Hyper-scaled AI models consume ever more CPUs and GPUs in datacenters, incurring immense power costs, while tiny models deployed to the edge are constrained in their capabilities by tight power budgets.
Neuromorphic accelerators integrated into SoCs promise unprecedented energy efficiency and low latency for machine learning applications, making their adoption in mass-market products highly desirable.
This seminar explores recent developments in neuromorphic computing and AI processing. Students will learn about novel devices, system integration, and software development for AI.