Institute for Communication Technologies and Embedded Systems

Accelerating BLAS and LAPACK via Efficient Floating Point Architecture Design

Authors:
Merchant, F.; Chattopadhyay, A.; Raha, S.; Nandy, S. K.; Narayan, R.
Journal:
Parallel Processing Letters
Volume:
27
Publisher:
World Scientific
Page(s):
1-17
Number:
03n04
Date:
Dec. 2017
DOI:
10.1142/S0129626417500062
HSB:
RWTH-2018-221464
Language:
English
Abstract:
Basic Linear Algebra Subprograms (BLAS) and Linear Algebra Package (LAPACK) form the basic building blocks of several High Performance Computing (HPC) applications and hence dictate the performance of those applications. Performance in such tuned packages is attained by tuning several algorithmic and architectural parameters, such as the number of parallel operations in the Directed Acyclic Graph of the BLAS/LAPACK routines, the sizes of the memories in the memory hierarchy of the underlying platform, the memory bandwidth, and the structure of the compute resources in the underlying platform. In this paper, we closely investigate the impact of the Floating Point Unit (FPU) micro-architecture on the performance tuning of BLAS and LAPACK. We present a theoretical analysis of the pipeline depth of different floating point operations, such as multiplication, addition, square root, and division, followed by a characterization of BLAS and LAPACK to determine the parameters required by the theoretical framework for deciding the optimum pipeline depth of these floating point operations. A simple design of a Processing Element (PE) is presented, and the PE is shown to outperform the most recent custom realizations of BLAS and LAPACK by 1.1x to 1.5x in GFlops/W and by 1.9x to 2.1x in GFlops/mm². Compared to multicore, General Purpose Graphics Processing Unit (GPGPU), Field Programmable Gate Array (FPGA), and ClearSpeed CSX700 platforms, a performance improvement of 1.8x to 80x is reported for the PE.
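For background on the pipeline-depth analysis mentioned in the abstract, a generic textbook-style throughput model illustrates the trade-off such an analysis balances. This is a sketch under stated assumptions, not the paper's framework; the symbols T, t_o, s, and p below are illustrative and not taken from the paper.

% Generic pipeline-depth trade-off model (illustrative only, NOT the paper's derivation)
% T   = total combinational delay of a floating point operator
% t_o = register/latch overhead added per pipeline stage
% s   = fraction of additional stall incurred per extra stage (hazards)
% p   = pipeline depth (number of stages)
\[
  \mathrm{TPI}(p) = \left(\frac{T}{p} + t_o\right)\bigl(1 + s\,(p-1)\bigr),
  \qquad
  \frac{d\,\mathrm{TPI}}{dp} = 0
  \;\Rightarrow\;
  p_{\mathrm{opt}} = \sqrt{\frac{T\,(1-s)}{t_o\,s}} .
\]

In such a model, deepening the pipeline shortens the cycle time (T/p + t_o) but inflates hazard stalls, so the optimum depth grows with operator delay T and shrinks with register overhead t_o and stall fraction s. This is why operators with different delays (multiplier, adder, square root, divider) can each have a different optimum pipeline depth.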