
Stability of Stochastic Approximations With "Controlled Markov" Noise and Temporal Difference Learning

Ramaswamy, Arunselvan and Bhatnagar, Shalabh (2019) Stability of Stochastic Approximations With "Controlled Markov" Noise and Temporal Difference Learning. In: IEEE Transactions on Automatic Control, 64 (6), pp. 2614-2620.

PDF: Iee_Tra_Aut_Con_64_6_2614-2620_2019.pdf - Published Version (restricted to registered users)
Official URL: https://doi.org/10.1109/TAC.2018.2874687

Abstract

We are interested in understanding the stability (almost sure boundedness) of stochastic approximation algorithms (SAs) driven by a "controlled Markov" process. Analyzing this class of algorithms is important, since many reinforcement learning (RL) algorithms can be cast as SAs driven by a "controlled Markov" process. In this paper, we present easily verifiable sufficient conditions for the stability and convergence of SAs driven by a "controlled Markov" process. Many RL applications involve continuous state spaces. While our analysis readily ensures stability for such continuous-state applications, traditional analyses do not. Compared to the existing literature, our analysis presents a two-fold generalization: 1) the Markov process may evolve in a continuous state space, and 2) the process need not be ergodic under any given stationary policy. Temporal difference (TD) learning is an important policy evaluation method in RL. The theory developed herein is used to analyze generalized TD(0), an important variant of TD. Our theory is also used to analyze a TD formulation of supervised learning for forecasting problems.
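To make the class of recursions concrete: the paper itself supplies no code, but the sketch below implements plain TD(0) with linear function approximation, written as a stochastic approximation theta_{n+1} = theta_n + a_n * delta_n * phi(s_n) whose "noise" enters through the underlying Markov chain. Everything specific here (the random chain P, rewards r, features Phi, and the 1/n step sizes) is an illustrative assumption, not taken from the paper.

    import numpy as np

    # Minimal TD(0) sketch: an SA update theta += a_n * delta_n * phi(s_n),
    # where the noise enters through the Markov chain (s_n).
    # The chain, rewards, and features below are synthetic placeholders.
    rng = np.random.default_rng(0)
    n_states, n_features, gamma = 5, 3, 0.9
    P = rng.dirichlet(np.ones(n_states), size=n_states)  # transition matrix (rows sum to 1)
    r = rng.standard_normal(n_states)                    # reward r(s)
    Phi = rng.standard_normal((n_states, n_features))    # feature map: phi(s) = Phi[s]

    theta = np.zeros(n_features)
    s = 0
    for n in range(1, 100_000):
        s_next = rng.choice(n_states, p=P[s])
        # TD error: delta_n = r(s_n) + gamma*<theta, phi(s_{n+1})> - <theta, phi(s_n)>
        delta = r[s] + gamma * Phi[s_next] @ theta - Phi[s] @ theta
        a_n = 1.0 / n                     # step sizes: sum a_n = inf, sum a_n^2 < inf
        theta += a_n * delta * Phi[s]     # SA iterate driven by Markov noise
        s = s_next

    print("estimated value-function weights:", theta)

With linearly independent features and an ergodic finite chain, this iterate is the textbook case; the paper's contribution is sufficient conditions that also cover continuous state spaces and processes that need not be ergodic under a fixed stationary policy.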

Item Type: Journal Article
Publication: IEEE Transactions on Automatic Control
Publisher: IEEE (Institute of Electrical and Electronics Engineers)
Additional Information: Copyright for this article belongs to IEEE Transactions on Automatic Control.
Keywords: "Controlled Markov" noise; convergence; reinforcement learning (RL); stability; stochastic approximation algorithms; supervised learning; TD(0); temporal difference (TD) learning
Department/Centre: Division of Electrical Sciences > Computer Science & Automation
Division of Interdisciplinary Sciences > Robert Bosch Centre for Cyber Physical Systems
Date Deposited: 27 Jun 2019 14:40
Last Modified: 27 Jun 2019 14:42
URI: http://eprints.iisc.ac.in/id/eprint/63036
