ePrints@IISc

Optimizing the linear fascicle evaluation algorithm for many-core systems

Aggarwal, K and Bondhugula, U (2019) Optimizing the linear fascicle evaluation algorithm for many-core systems. In: 33rd ACM International Conference on Supercomputing, ICS 2019, 26 June 2019, Phoenix, pp. 425-437.

PDF: ACM_tra_7-4_2020.pdf - Published Version (2MB). Restricted to registered users only.
Official URL: https://doi.org/10.1145/3330345.3332469


Sparse matrix-vector multiplication (SpMV) operations are commonly used in various scientific and engineering applications. The performance of the SpMV operation often depends on exploiting regularity patterns in the matrix. Various representations and optimization techniques have been proposed to minimize the memory bandwidth bottleneck arising from the irregular memory access patterns involved. Among recent representation techniques, tensor decomposition is a popular one used for very large but sparse matrices. After sparse tensor decomposition, the new representation involves indirect accesses, making it more challenging to optimize for massively parallel architectures such as GPUs. Computational neuroscience algorithms often involve sparse datasets while still performing long-running computations on them. The Linear Fascicle Evaluation (LiFE) application is a popular neuroscience algorithm used for pruning brain connectivity graphs. The datasets employed herein involve the Sparse Tucker Decomposition (STD), a widely used tensor decomposition method. Using this decomposition leads to multiple irregular array references, making it very difficult to optimize for GPUs. Recent implementations of the LiFE algorithm show that its SpMV operations are the key bottleneck for performance and scaling. In this paper, we first propose data restructuring techniques to minimize the effects of irregular accesses. We then propose various optimizations to map threads effectively at the granularity of warps, thread blocks, and the grid, and methods to partition the computation among thread blocks to obtain fine-grained parallelism and data reuse. Our optimized GPU implementation achieves a speedup of 5.2× over a reference optimized GPU code version on NVIDIA's GeForce RTX 2080 Ti GPU, and a speedup of 9.7× over a highly optimized and parallelized CPU implementation running on a 16-core Intel Xeon Silver (Skylake-based) system.
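To illustrate the indirect-access pattern the abstract refers to, the following is a minimal sketch (not from the paper) of SpMV over a compressed sparse row (CSR) representation; the gather `x[cols[k]]` is the irregular memory access that sparse formats, including post-decomposition representations, make hard to optimize on GPUs. The matrix and names here are illustrative only.

```python
import numpy as np

# Tiny dense matrix for illustration; real LiFE datasets are vastly larger and sparser.
A = np.array([[0., 2., 0.],
              [1., 0., 3.],
              [0., 0., 4.]])

# Equivalent CSR arrays: nonzero values, their column indices, and row pointers.
vals = np.array([2., 1., 3., 4.])
cols = np.array([1, 0, 2, 2])
rowptr = np.array([0, 1, 3, 4])

x = np.array([1., 2., 3.])

def spmv_csr(vals, cols, rowptr, x):
    """y = A @ x computed from the CSR arrays of A."""
    y = np.zeros(len(rowptr) - 1)
    for i in range(len(y)):
        for k in range(rowptr[i], rowptr[i + 1]):
            # x[cols[k]] is the indirect access: the location read depends on
            # the sparsity pattern, so consecutive k may touch distant memory,
            # defeating coalescing and caching on GPUs.
            y[i] += vals[k] * x[cols[k]]
    return y

print(spmv_csr(vals, cols, rowptr, x))  # same result as A @ x
```

Data restructuring of the kind the paper proposes aims to reorder these arrays so that nearby iterations touch nearby memory.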

Item Type: Conference Paper
Publication: Proceedings of the International Conference on Supercomputing
Publisher: Association for Computing Machinery
Additional Information: The copyright for this article belongs to Association for Computing Machinery
Keywords: Graphics processing unit; Intelligent control; Neurology; Program processors; Tensors; Weaving; Connectome; dMRI; Indirect array access; SBBNNLS; SpMV; Tractography; Tucker decompositions; Matrix algebra
Department/Centre: Division of Electrical Sciences > Computer Science & Automation
Date Deposited: 23 Dec 2022 05:12
Last Modified: 23 Dec 2022 05:12
URI: https://eprints.iisc.ac.in/id/eprint/78516
