
Scratchpad Sharing in GPUs

Jatala, Vishwesh and Anantpur, Jayvant and Karkare, Amey (2017) Scratchpad Sharing in GPUs. In: ACM TRANSACTIONS ON ARCHITECTURE AND CODE OPTIMIZATION, 14 (2).

PDF: ACM_Tra_Arc_Cod_Opt_14-2_2017.pdf - Published Version (Restricted to Registered users only)
Official URL: http://dx.doi.org/10.1145/3075619

Abstract

General-Purpose Graphics Processing Unit (GPGPU) applications exploit the on-chip scratchpad memory available in Graphics Processing Units (GPUs) to improve performance. The amount of thread-level parallelism (TLP) present in the GPU is limited by the number of resident threads, which in turn depends on the availability of scratchpad memory in its streaming multiprocessor (SM). Since scratchpad memory is allocated at thread-block granularity, part of the memory may remain unutilized. In this article, we propose architectural and compiler optimizations to improve scratchpad memory utilization. Our approach, called Scratchpad Sharing, addresses scratchpad under-utilization by launching additional thread blocks in each SM. These thread blocks use unutilized scratchpad memory and also share scratchpad memory with other resident blocks. To improve the performance of scratchpad sharing, we propose Owner Warp First (OWF) scheduling, which schedules warps from the additional thread blocks effectively. The performance of this approach, however, is limited by the availability of the part of scratchpad memory that is shared among thread blocks. We propose compiler optimizations to improve the availability of shared scratchpad memory. We describe an allocation scheme that helps in allocating scratchpad variables such that shared scratchpad is accessed only for a short duration. We introduce a new hardware instruction, relssp, that, when executed, releases the shared scratchpad memory. Finally, we describe an analysis for optimal placement of relssp instructions, such that shared scratchpad memory is released as early as possible, but only after its last use, along every execution path. We implemented the hardware changes required for scratchpad sharing and the relssp instruction in the GPGPU-Sim simulator and implemented the compiler optimizations in the Ocelot framework. We evaluated the effectiveness of our approach on 19 kernels from 3 benchmark suites: CUDA-SDK, GPGPU-Sim, and Rodinia. The kernels that under-utilize scratchpad memory show an average improvement of 19% and a maximum improvement of 92.17% in the number of instructions executed per cycle when compared to the baseline approach, without affecting the performance of the kernels that are not limited by scratchpad memory.
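
For context (not part of the record), the under-utilization the abstract refers to can be illustrated with a minimal CUDA sketch: scratchpad ("shared" memory in CUDA terms) is reserved per thread block, so a large static per-block request caps the number of resident blocks per SM, and thus TLP, even when the buffer is live only briefly. The kernel below is hypothetical; the SHARED_PER_BLOCK size, the kernel name, and the comment marking where the article's relssp instruction would conceptually release shared scratchpad are illustrative assumptions, not taken from the article.

```cuda
// Illustrative sketch (not the article's code): a kernel whose per-block
// scratchpad request limits how many blocks can reside on an SM.
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical request: 12 KB per block. On an SM with 48 KB of shared
// memory, at most 4 such blocks are resident, even if registers and thread
// slots would allow more.
#define SHARED_PER_BLOCK (12 * 1024)

__global__ void stencil(const float *in, float *out, int n) {
    __shared__ float tile[SHARED_PER_BLOCK / sizeof(float)];

    int gid = blockIdx.x * blockDim.x + threadIdx.x;
    if (gid < n)
        tile[threadIdx.x] = in[gid];
    __syncthreads();

    // Last use of the scratchpad buffer: after this point the allocation is
    // dead, but it stays reserved until the whole block finishes. Per the
    // abstract, the compiler-placed relssp instruction (after the last use
    // on every path) is what would let shared scratchpad be reclaimed early.
    float v = (gid < n) ? tile[threadIdx.x] : 0.0f;

    // Long tail of computation that no longer touches scratchpad memory.
    for (int i = 0; i < 1000; ++i)
        v = v * 0.999f + 1e-3f;

    if (gid < n)
        out[gid] = v;
}

int main() {
    int blocks = 0;
    // Ask the runtime how many blocks of this kernel fit on one SM given the
    // static shared-memory request above (dynamic shared memory = 0).
    cudaOccupancyMaxActiveBlocksPerMultiprocessor(&blocks, stencil, 256, 0);
    printf("Resident blocks per SM (limited by scratchpad): %d\n", blocks);
    return 0;
}
```

The printed block count shows the baseline occupancy limit that scratchpad sharing targets: the unutilized remainder of the SM's scratchpad, plus space freed after the buffer's last use, is what the proposed additional thread blocks would consume.
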

Item Type: Journal Article
Publication: ACM TRANSACTIONS ON ARCHITECTURE AND CODE OPTIMIZATION
Additional Information: Copyright for this article belongs to ASSOC COMPUTING MACHINERY, 2 PENN PLAZA, STE 701, NEW YORK, NY 10121-0701, USA
Department/Centre: Division of Interdisciplinary Sciences > Supercomputer Education & Research Centre
Date Deposited: 23 Dec 2017 09:00
Last Modified: 23 Dec 2017 09:00
URI: http://eprints.iisc.ac.in/id/eprint/58485
