ePrints@IISc

Gradient Temporal Difference with Momentum: Stability and Convergence

Deb, R and Bhatnagar, S (2022) Gradient Temporal Difference with Momentum: Stability and Convergence. In: 36th AAAI Conference on Artificial Intelligence, AAAI 2022, 22 February - 1 March 2022, Virtual, Online, pp. 6488-6496.

PDF: AAAI_2022.pdf - Published Version (793kB; restricted to registered users)
Official URL: https://doi.org/10.1609/aaai.v36i6.20601

Abstract

Gradient temporal difference (Gradient TD) algorithms are a popular class of stochastic approximation (SA) algorithms used for policy evaluation in reinforcement learning. Here, we consider Gradient TD algorithms with an additional heavy ball momentum term and provide a choice of step size and momentum parameter that ensures almost sure asymptotic convergence of these algorithms. In doing so, we decompose the heavy ball Gradient TD iterates into three separate iterates with different step sizes. We first analyze these iterates under a one-timescale SA setting using results from the current literature. However, the one-timescale case is restrictive, and a more general analysis can be provided by looking at a three-timescale decomposition of the iterates. In the process, we provide the first conditions for stability and convergence of general three-timescale SA. We then prove that the heavy ball Gradient TD algorithm is convergent using our three-timescale SA analysis. Finally, we evaluate these algorithms on standard RL problems and report improvement in performance over the vanilla algorithms. Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
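To make the idea concrete, the following is a minimal illustrative sketch of a Gradient TD method (TDC-style two-iterate form) with a heavy-ball momentum term added to the main iterate. It is not the paper's exact algorithm or step-size schedule: the toy random-walk MDP, the random features, and the constant step-size and momentum values (`alpha`, `beta`, `eta`) are all hypothetical choices for demonstration only.

```python
import numpy as np

# Hypothetical demonstration: TDC-style Gradient TD with a heavy-ball
# momentum term. The MDP, features, and all parameter values below are
# illustrative assumptions, not the paper's settings.

rng = np.random.default_rng(0)
n_states, d, gamma = 5, 3, 0.9
phi = rng.standard_normal((n_states, d))   # fixed random state features

theta = np.zeros(d)        # main weight vector (slow iterate)
theta_prev = theta.copy()  # previous iterate, used by the momentum term
w = np.zeros(d)            # auxiliary weights (faster timescale)

alpha, beta, eta = 0.01, 0.05, 0.5  # step sizes and momentum parameter

s = 0
for t in range(20000):
    # random-walk transition; reward +1 on reaching the last state
    s_next = (s + rng.choice([-1, 1])) % n_states
    r = 1.0 if s_next == n_states - 1 else 0.0

    f, f_next = phi[s], phi[s_next]
    delta = r + gamma * theta @ f_next - theta @ f   # TD error

    # TDC gradient estimate for the slow iterate
    g = delta * f - gamma * (w @ f) * f_next

    # heavy-ball update: gradient step plus momentum term
    theta_new = theta + alpha * g + eta * (theta - theta_prev)
    theta_prev, theta = theta, theta_new

    # auxiliary iterate updated on a faster timescale
    w = w + beta * (delta - w @ f) * f
    s = s_next
```

Viewing `theta + eta * (theta - theta_prev)` through the three iterates the abstract mentions (the main weights, the momentum difference, and the auxiliary weights `w`) is what motivates analyzing the scheme as a three-timescale SA recursion rather than a standard two-timescale one.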

Item Type: Conference Paper
Publication: Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022
Publisher: Association for the Advancement of Artificial Intelligence
Additional Information: The copyright for this article belongs to Association for the Advancement of Artificial Intelligence.
Keywords: Approximation algorithms; Approximation theory; Reinforcement learning; Stochastic systems, Policy evaluation; Reinforcement learnings; Stability and convergence; Step size; Stochastic approximation algorithms; Stochastic approximations; Temporal differences; Temporal-difference algorithm; Three time-scales; Time-scales, Momentum
Department/Centre: Division of Electrical Sciences > Computer Science & Automation
Date Deposited: 21 Feb 2023 04:53
Last Modified: 21 Feb 2023 04:53
URI: https://eprints.iisc.ac.in/id/eprint/80574
