John, I. and Bhatnagar, S. (2020) Deep Reinforcement Learning with Successive Over-Relaxation and its Application in Autoscaling Cloud Resources. In: Proceedings of the International Joint Conference on Neural Networks, 19-24 July 2020, Glasgow (virtual).
PDF: IJCNN_2020.pdf - Published Version (865 kB). Restricted to registered users; a copy may be requested.
Abstract
We present a new deep reinforcement learning algorithm using the technique of successive over-relaxation (SOR) in Deep Q-networks (DQNs). The new algorithm, named SOR-DQN, uses modified targets in the DQN framework with the aim of accelerating training. This work is motivated by the problem of auto-scaling resources for cloud applications, for which existing algorithms suffer from issues such as slow convergence, poor performance during the training phase and non-scalability. For the above problem, SOR-DQN achieves significant improvements over DQN on both synthetic and real datasets. We also study the generalization ability of the algorithm to multiple tasks by using it to train agents playing Atari video games. © 2020 IEEE.
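For orientation only, the sketch below shows how a successive-over-relaxation term might modify the standard DQN target. The form of the target and the relaxation factor `w` are assumptions based on the SOR Q-learning operator, not the paper's published implementation; names such as `sor_dqn_target` and `q_target_net` are hypothetical.

```python
# Hypothetical sketch (assumed form, not the paper's exact target):
#   standard DQN target: y = r + gamma * max_a' Q_target(s', a')
#   SOR-style target:    y = w * (r + gamma * max_a' Q_target(s', a'))
#                            + (1 - w) * max_a' Q_target(s, a')
# where w is a relaxation factor; w = 1 recovers the ordinary DQN target.
import torch

def sor_dqn_target(q_target_net, rewards, states, next_states, dones,
                   gamma=0.99, w=1.2):
    """Compute SOR-relaxed Bellman targets for a batch of transitions."""
    with torch.no_grad():
        # Greedy value of the next state under the target network.
        next_max = q_target_net(next_states).max(dim=1).values
        # Greedy value of the current state, used by the relaxation term.
        curr_max = q_target_net(states).max(dim=1).values
        standard = rewards + gamma * (1.0 - dones) * next_max
        return w * standard + (1.0 - w) * curr_max
```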
Item Type: | Conference Paper
---|---
Publication: | Proceedings of the International Joint Conference on Neural Networks
Publisher: | Institute of Electrical and Electronics Engineers Inc.
Additional Information: | The copyright for this article belongs to Institute of Electrical and Electronics Engineers Inc.
Keywords: | Learning algorithms; Neural networks; Reinforcement learning; Cloud applications; Generalization ability; ITS applications; Multiple tasks; Poor performance; Real data sets; Slow convergence; Successive over-relaxation; Deep learning
Department/Centre: | Division of Electrical Sciences > Computer Science & Automation
Date Deposited: | 03 Feb 2023 05:17
Last Modified: | 03 Feb 2023 05:17
URI: | https://eprints.iisc.ac.in/id/eprint/79833