Bhatnagar, Shalabh and Sutton, Richard S and Ghavamzadeh, Mohammad and Lee, Mark (2007) Incremental natural-gradient actor-critic algorithms. In: Proceedings of the 21st Annual Conference on Neural Information Processing Systems (NIPS 2007), Dec. 2007, Vancouver, Canada.
Abstract
We present four new reinforcement learning algorithms based on actor-critic and natural-gradient ideas, and provide their convergence proofs. Actor-critic reinforcement learning methods are online approximations to policy iteration in which the value-function parameters are estimated using temporal difference learning and the policy parameters are updated by stochastic gradient descent. Methods based on policy gradients in this way are of special interest because of their compatibility with function approximation methods, which are needed to handle large or infinite state spaces. The use of temporal difference learning in this way is of interest because in many applications it dramatically reduces the variance of the gradient estimates. The use of the natural gradient is of interest because it can produce better conditioned parameterizations and has been shown to further reduce variance in some cases. Our results extend prior two-timescale convergence results for actor-critic methods by Konda and Tsitsiklis by using temporal difference learning in the actor and by incorporating natural gradients, and they extend prior empirical studies of natural actor-critic methods by Peters, Vijayakumar and Schaal by providing the first convergence proofs and the first fully incremental algorithms.
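To make the actor-critic structure described in the abstract concrete, below is a minimal sketch of a generic incremental one-step actor-critic update: a TD(0) critic estimates state values, and a softmax-policy actor takes a stochastic policy-gradient step scaled by the TD error, with separate step sizes for the two timescales. The environment, the softmax parameterization, and all variable names are illustrative assumptions; this is not the paper's specific algorithms or their natural-gradient variants.

```python
import numpy as np

# Illustrative incremental actor-critic loop (not the paper's algorithms):
# critic = linear/tabular TD(0) value estimates, actor = softmax policy
# updated along the stochastic policy gradient, both driven by the TD error.

n_states, n_actions = 5, 3
rng = np.random.default_rng(0)

v = np.zeros(n_states)                   # critic: value-function parameters
theta = np.zeros((n_states, n_actions))  # actor: policy parameters
alpha_v, alpha_pi, gamma = 0.1, 0.01, 0.95  # two step sizes (timescales), discount

def policy(s):
    """Softmax (Gibbs) policy over actions in state s."""
    prefs = theta[s] - theta[s].max()
    p = np.exp(prefs)
    return p / p.sum()

def step(s, a):
    """Stand-in environment transition; replace with a real MDP."""
    s_next = rng.integers(n_states)
    reward = float(a == s % n_actions)
    return s_next, reward

s = 0
for t in range(10_000):
    p = policy(s)
    a = rng.choice(n_actions, p=p)
    s_next, r = step(s, a)

    # One TD error drives both the critic and the actor updates.
    delta = r + gamma * v[s_next] - v[s]
    v[s] += alpha_v * delta                  # critic: TD(0) update

    grad_log = -p                            # gradient of log softmax w.r.t. theta[s]
    grad_log[a] += 1.0
    theta[s] += alpha_pi * delta * grad_log  # actor: policy-gradient step

    s = s_next
```

A natural-gradient variant, as studied in the paper, would rescale the actor's update direction by an estimate of the inverse Fisher information of the policy rather than using the vanilla gradient shown here.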
| Item Type: | Conference Paper |
|---|---|
| Department/Centre: | Division of Electrical Sciences > Computer Science & Automation |
| Date Deposited: | 17 Oct 2011 06:48 |
| Last Modified: | 17 Oct 2011 06:48 |
| URI: | http://eprints.iisc.ac.in/id/eprint/41475 |