Merchant, Farhad and Chattopadhyay, Anupam and Raha, Soumyendu and Nandy, SK and Narayan, Ranjani (2017) Accelerating BLAS and LAPACK via Efficient Floating Point Architecture Design. In: Parallel Processing Letters, 27 (3-4). ISSN 0129-6264
PDF: par_pro_let_27_3-4_2017.pdf (Published Version)
Abstract
Basic Linear Algebra Subprograms (BLAS) and the Linear Algebra Package (LAPACK) form basic building blocks for several High Performance Computing (HPC) applications and hence dictate the performance of those applications. Performance in such tuned packages is attained by tuning several algorithmic and architectural parameters, such as the number of parallel operations in the Directed Acyclic Graph of the BLAS/LAPACK routines, the sizes of the memories in the memory hierarchy of the underlying platform, the memory bandwidth, and the structure of the compute resources in the underlying platform. In this paper, we closely investigate the impact of the Floating Point Unit (FPU) micro-architecture on performance tuning of BLAS and LAPACK. We present a theoretical analysis of the pipeline depth of different floating point operations, such as multiplication, addition, square root, and division, followed by a characterization of BLAS and LAPACK to determine the parameters the theoretical framework requires for deciding the optimum pipeline depth of each floating point operation. A simple design of a Processing Element (PE) is presented, and the PE is shown to outperform the most recent custom realizations of BLAS and LAPACK by 1.1x to 1.5x in GFlops/W and 1.9x to 2.1x in GFlops/mm2. Compared to multicore, General Purpose Graphics Processing Unit (GPGPU), Field Programmable Gate Array (FPGA), and ClearSpeed CSX700 platforms, a performance improvement of 1.8x to 80x is reported for the PE.
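To illustrate the kind of pipeline-depth trade-off the abstract describes, the sketch below implements a simple textbook-style model: deepening a pipeline shortens the cycle time (logic delay divided across stages, plus a fixed latch overhead) but makes dependent operations stall longer. The model, the latch overhead, and the stall fraction are illustrative assumptions for this sketch, not the paper's actual framework or measured parameters.

```python
# A minimal sketch of a textbook-style pipeline depth model (assumed, not the
# paper's framework). Cycle time shrinks with depth, but dependent operations
# must wait for the pipeline to drain, so there is an optimum depth.

def time_per_op(depth, logic_delay_ns, latch_delay_ns, stall_fraction):
    """Average time per floating point operation at a given pipeline depth.

    cycle time    = logic_delay / depth + latch overhead per stage
    cycles per op = 1 + stall_fraction * (depth - 1)
                    (the stalled fraction of ops waits on a full pipeline)
    """
    cycle_time = logic_delay_ns / depth + latch_delay_ns
    cycles_per_op = 1 + stall_fraction * (depth - 1)
    return cycle_time * cycles_per_op

def optimum_depth(logic_delay_ns, latch_delay_ns, stall_fraction, max_depth=32):
    """Exhaustively pick the depth minimizing average time per operation."""
    return min(range(1, max_depth + 1),
               key=lambda d: time_per_op(d, logic_delay_ns, latch_delay_ns,
                                         stall_fraction))

if __name__ == "__main__":
    # Hypothetical numbers: 10 ns of multiplier logic, 0.2 ns latch overhead.
    # A higher stall fraction (many dependent ops, as in a tight dot-product
    # recurrence) pushes the optimum toward a shallower pipeline.
    for stalls in (0.05, 0.2, 0.5):
        d = optimum_depth(10.0, 0.2, stalls)
        print(f"stall fraction {stalls:.2f}: optimum depth = {d}, "
              f"time/op = {time_per_op(d, 10.0, 0.2, stalls):.3f} ns")
```

Running the sketch shows the qualitative effect the paper exploits: routines with few dependent operations favor deep pipelines, while dependence-heavy kernels favor shallow ones, which is why characterizing BLAS/LAPACK workloads matters before fixing the FPU pipeline depth.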
| Item Type: | Journal Article |
|---|---|
| Publication: | Parallel Processing Letters |
| Publisher: | World Scientific Publishing Co. Pte Ltd |
| Additional Information: | The copyright for this article belongs to the Authors. |
| Keywords: | floating point unit; high performance computing; instruction level parallelism; parallel computing; power-performance trade-offs |
| Department/Centre: | Division of Interdisciplinary Sciences > Computational and Data Sciences |
| Date Deposited: | 14 Jun 2022 05:57 |
| Last Modified: | 14 Jun 2022 05:57 |
| URI: | https://eprints.iisc.ac.in/id/eprint/73459 |