Institute for Communication Technologies and Embedded Systems

ExPAN(N)D: Exploring Posits for Efficient Artificial Neural Network Design in FPGA-based Systems

Authors:
Nambi, S., Ullah, S., Siva Satyendra, S., Lohana, A., Merchant, F., Kumar, A.
Journal:
IEEE Access
Volume:
9
Date:
2021
DOI:
10.1109/ACCESS.2021.3098730
hsb:
RWTH-2021-08063
Language:
English
Abstract:
The high computational complexity, memory footprints, and energy requirements of machine learning models, such as Artificial Neural Networks (ANNs), hinder their deployment on resource-constrained embedded systems. Most state-of-the-art works have addressed this problem by proposing various low bit-width data representation schemes and optimized implementations of arithmetic operators. To further elevate the implementation gains offered by these individual techniques, there is a need to cross-examine and combine their unique features. This paper presents ExPAN(N)D, a framework to analyze and combine the efficacy of the Posit number representation scheme and the efficiency of fixed-point arithmetic implementations for ANNs. The Posit scheme offers a better dynamic range and higher precision for various applications than the IEEE 754 single-precision floating-point format. However, due to the dynamic nature of the various fields of the Posit scheme, the corresponding arithmetic circuits have higher critical path delay and resource requirements than single-precision-based arithmetic units. To address this, we propose a novel Posit to fixed-point converter for enabling high-performance and energy-efficient hardware implementations for ANNs with minimal drop in the output accuracy. We also propose a modified Posit-based representation to store the trained parameters of a network. With the proposed Posit to fixed-point converter-based designs, we provide multiple design points with varying accuracy-performance trade-offs for an ANN. For instance, compared to the lowest power dissipating Posit-only accelerator design, one of our proposed designs results in 80% and 48% reductions in power dissipation and LUT utilization respectively, with a marginal increase in classification error for ImageNet dataset classification using VGG-16.
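The framework's central component is the Posit to fixed-point converter. As a rough software illustration of the arithmetic involved (not the paper's hardware design), the following Python sketch decodes a standard posit(n, es) bit pattern into its sign, regime, exponent, and fraction fields and quantizes the result to a Q-format fixed-point integer. The function name, default parameters, and truncation-based rounding are assumptions made only for this example.

```python
# Minimal software sketch of posit-to-fixed-point conversion, assuming a
# standard posit<n, es> encoding and a signed fixed-point output with
# frac_bits fractional bits. Illustrative only, not the paper's converter.

def posit_to_fixed(bits: int, n: int = 8, es: int = 0, frac_bits: int = 16) -> int:
    """Decode an n-bit posit pattern and quantize it to a signed fixed-point
    integer (value = returned_int / 2**frac_bits)."""
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:                        # posit zero
        return 0
    if bits == 1 << (n - 1):             # Not-a-Real (NaR) has no fixed-point image
        raise ValueError("NaR cannot be represented in fixed point")

    sign = (bits >> (n - 1)) & 1
    if sign:                             # negative posits are stored in two's complement
        bits = (-bits) & mask

    rest = bits & ((1 << (n - 1)) - 1)   # the n-1 bits after the sign bit
    pos = n - 2                          # index of the leading regime bit

    # Regime: run of identical bits; its length sets the coarse scale factor k.
    lead = (rest >> pos) & 1
    run = 0
    while pos >= 0 and ((rest >> pos) & 1) == lead:
        run += 1
        pos -= 1
    k = run - 1 if lead else -run
    pos -= 1                             # skip the regime-terminating bit

    # Exponent: up to es bits, zero-padded when truncated by a long regime.
    exp = 0
    for _ in range(es):
        exp <<= 1
        if pos >= 0:
            exp |= (rest >> pos) & 1
            pos -= 1

    # Fraction: whatever bits remain, with an implicit leading 1.
    frac_len = max(pos + 1, 0)
    frac = rest & ((1 << frac_len) - 1) if frac_len > 0 else 0
    significand = (1 << frac_len) | frac          # 1.f as an integer

    # value = significand * 2**(k*2**es + exp - frac_len); align to frac_bits.
    shift = frac_bits + k * (1 << es) + exp - frac_len
    fixed = significand << shift if shift >= 0 else significand >> -shift
    return -fixed if sign else fixed


# Example: the posit<8,0> pattern 0b01100000 encodes 2.0, i.e. 2 << 16 in Q16 format.
assert posit_to_fixed(0b01100000, n=8, es=0, frac_bits=16) == 2 << 16
```

Truncating the shifted significand mirrors the simplest quantization choice; a hardware converter could equally round to nearest, and the output word length would be chosen per layer to balance accuracy against resource usage.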