ePrints@IISc

Speeding up reinforcement learning-based information extraction training using asynchronous methods

Sharma, A and Parekh, Z and Talukdar, P (2017) Speeding up reinforcement learning-based information extraction training using asynchronous methods. In: 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, 9 - 11 September 2017, Copenhagen, pp. 2658-2663.

PDF (Published Version): spe_Rei_lea_Inf_ext_2017.pdf (670kB)
Official URL: https://doi.org/10.18653/v1/d17-1281

Abstract

RLIE-DQN is a recently proposed Reinforcement Learning-based Information Extraction (IE) technique that can incorporate external evidence during the extraction process. RLIE-DQN trains a single agent sequentially, on one instance at a time, which results in a significant and undesirable training slowdown. We leverage recent advances in parallel RL training using asynchronous methods and propose RLIE-A3C. RLIE-A3C trains multiple agents in parallel and achieves up to 6x training speedup over RLIE-DQN, while suffering no loss in average accuracy.
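The core speedup idea in the abstract — replacing one sequential agent with several workers that update a shared model asynchronously — can be illustrated with a minimal, hypothetical sketch. The toy linear "agent", the synthetic instances, and all function names below are stand-ins for illustration only, not the paper's actual IE environment or A3C networks; the sketch shows only the asynchronous, lock-free update pattern the method relies on.

```python
import threading
import random

# Shared global parameters, updated asynchronously by all workers
# (stand-in for the shared policy/value network parameters).
global_w = [0.0, 0.0]
LR = 0.05

def make_instance(rng):
    # Hypothetical synthetic instance: features x with target
    # y = 2*x0 + 1*x1, standing in for a training instance.
    x = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    y = 2.0 * x[0] + 1.0 * x[1]
    return x, y

def worker(seed, steps):
    rng = random.Random(seed)
    for _ in range(steps):
        x, y = make_instance(rng)
        # Each worker reads the current shared parameters...
        pred = global_w[0] * x[0] + global_w[1] * x[1]
        err = pred - y
        # ...and applies its gradient update without waiting for the
        # other workers (lock-free, Hogwild-style asynchronous SGD).
        global_w[0] -= LR * err * x[0]
        global_w[1] -= LR * err * x[1]

def train_async(n_workers=4, steps=2000):
    # Multiple agents train in parallel on their own instance streams,
    # instead of a single agent training on one instance at a time.
    threads = [threading.Thread(target=worker, args=(i, steps))
               for i in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return global_w

if __name__ == "__main__":
    w = train_async()
    print([round(v, 2) for v in w])  # converges near [2.0, 1.0]
```

Occasional lost updates from racing workers do not prevent convergence here, which is the same robustness property that asynchronous methods such as A3C exploit to trade strict update ordering for wall-clock speed.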

Item Type: Conference Paper
Publication: EMNLP 2017 - Conference on Empirical Methods in Natural Language Processing, Proceedings
Publisher: Association for Computational Linguistics (ACL)
Additional Information: The copyright for this article belongs to Association for Computational Linguistics (ACL)
Keywords: Information retrieval; Machine learning; Multi agent systems; Natural language processing systems; Asynchronous methods; Extraction process; Information extraction techniques; Multiple agents; Single agent; Reinforcement learning
Department/Centre: Division of Interdisciplinary Sciences > Computational and Data Sciences
Date Deposited: 19 Jul 2022 11:53
Last Modified: 19 Jul 2022 11:53
URI: https://eprints.iisc.ac.in/id/eprint/74905
