
IR2Vec: LLVM IR Based Scalable Program Embeddings

Venkatakeerthy, S and Aggarwal, R and Jain, S and Desarkar, MS and Upadrasta, R and Srikant, YN (2020) IR2Vec: LLVM IR Based Scalable Program Embeddings. In: ACM Transactions on Architecture and Code Optimization, 17 (4).

PDF (Published Version): ACM-Tra-Arc-Cod-Opt.pdf (3MB)
Official URL: https://dx.doi.org/10.1145/3418463

Abstract

We propose IR2VEC, a concise and scalable encoding infrastructure that represents programs as distributed embeddings in a continuous space. These embeddings are obtained by combining representation-learning methods with flow information, capturing both the syntax and the semantics of the input programs. Because our infrastructure is based on the Intermediate Representation (IR) of the source code, the resulting embeddings are both language and machine independent. The entities of the IR are modeled as relationships, and their representations are learned to form a seed embedding vocabulary. Using this infrastructure, we propose two incremental encodings: Symbolic and Flow-Aware. Symbolic encodings are obtained directly from the seed embedding vocabulary, while Flow-Aware encodings augment the Symbolic encodings with flow information. We demonstrate the effectiveness of our methodology on two optimization tasks: heterogeneous device mapping and thread coarsening. Representing programs in this way enables us to use non-sequential models, resulting in orders-of-magnitude faster training times. Both encodings generated by IR2VEC outperform existing methods on both tasks, even with simple machine learning models. In particular, our results improve on or match the state-of-the-art speedup in 11 of 14 benchmark suites in the device mapping task across two platforms, and in 53 of 68 benchmarks in the thread coarsening task across four platforms. Compared with other methods, our embeddings are more scalable, less data hungry, and have better Out-Of-Vocabulary (OOV) characteristics. © 2020 ACM.
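To make the Symbolic encoding concrete, the following Python sketch shows how a program embedding might be assembled from a seed embedding vocabulary: each IR instruction's vector combines the seed vectors of its opcode, type, and operand kinds, and the program vector is the sum over instructions. This is a minimal illustration, not the authors' released implementation; the vocabulary entries, dimensionality, and combination weights below are assumptions for demonstration.

```python
import numpy as np

DIM = 300  # embedding dimensionality (illustrative)

# Seed embedding vocabulary: IR entities -> vectors. In the paper these
# are learned with a representation-learning model; random vectors stand
# in for the learned seeds here.
seed_vocab = {ent: np.random.rand(DIM) for ent in
              ["add", "load", "store", "ret",        # opcodes
               "integerTy", "voidTy", "pointerTy",   # types
               "variable", "constant", "pointer"]}   # operand kinds

# Illustrative weights for combining the per-entity embeddings.
W_OPCODE, W_TYPE, W_ARG = 1.0, 0.5, 0.2

def instruction_embedding(opcode, ty, args):
    """Symbolic embedding of a single IR instruction."""
    vec = W_OPCODE * seed_vocab[opcode] + W_TYPE * seed_vocab[ty]
    for a in args:
        vec = vec + W_ARG * seed_vocab[a]
    return vec

def program_embedding(instructions):
    """Program embedding as the sum of its instruction embeddings.
    (The Flow-Aware variant would additionally propagate embeddings
    along use-def/flow edges before summing.)"""
    return sum(instruction_embedding(*ins) for ins in instructions)

# Example: a tiny function "load x; add x, 1; ret x".
prog = [("load", "integerTy", ["pointer"]),
        ("add", "integerTy", ["variable", "constant"]),
        ("ret", "voidTy", ["variable"])]
print(program_embedding(prog).shape)  # -> (300,)
```

A fixed-length vector like this can then be fed to a simple downstream model (e.g., a gradient-boosted classifier) for tasks such as device mapping, which is what allows the non-sequential, fast-training setup the abstract describes.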

Item Type: Journal Article
Publication: ACM Transactions on Architecture and Code Optimization
Publisher: Association for Computing Machinery
Additional Information: Copyright to this article belongs to Association for Computing Machinery
Keywords: Coarsening; Embeddings; Encoding (symbols); Mapping; Semantics; Continuous spaces; Flow information; Heterogeneous devices; Intermediate representations; Machine learning models; Optimization tasks; Orders of magnitude; Scalable encoding; Learning systems
Department/Centre: Division of Electrical Sciences > Computer Science & Automation
Date Deposited: 22 Jan 2021 09:53
Last Modified: 22 Jan 2021 09:53
URI: http://eprints.iisc.ac.in/id/eprint/67715
