ePrints@IISc

Actor-Critic Algorithms with Online Feature Adaptation

Prabuchandran, KJ and Bhatnagar, Shalabh and Borkar, VS (2016) Actor-Critic Algorithms with Online Feature Adaptation. In: ACM Transactions on Modeling and Computer Simulation, 26 (4).

PDF: ACM_Tra_Mod_Com_Sim_26-4_24_2016.pdf - Published Version (471kB; restricted to registered users)
Official URL: http://dx.doi.org/10.1145/2868723


We develop two new online actor-critic control algorithms with adaptive feature tuning for Markov Decision Processes (MDPs). One algorithm is proposed for the long-run average cost objective, while the other works for discounted cost MDPs. Our actor-critic architecture parameterizes both the policy and the value function. A gradient search in the policy parameters is performed to improve the performance of the actor. Computing this gradient, however, requires an estimate of the value function of the policy corresponding to the current actor parameter. The value function, in turn, is approximated using linear function approximation and obtained from the critic. The error in approximating the value function, however, results in suboptimal policies. We therefore also update the features, performing gradient descent on the Grassmannian of features to minimize a mean square Bellman error objective and thereby find the best features. The aim is to obtain a good approximation of the value function and so ensure convergence of the actor to locally optimal policies. To estimate the gradient of the objective under the average cost criterion, we utilize the policy gradient theorem, while under the discounted cost objective, we utilize the simultaneous perturbation stochastic approximation (SPSA) scheme. We prove that our actor-critic algorithms converge to locally optimal policies. Experiments in two different settings demonstrate performance improvements resulting from our feature adaptation scheme.
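To give a concrete flavor of the critic-side idea described in the abstract, the following is a minimal Python sketch (not the paper's algorithm): a linear critic on a toy 4-state MDP whose feature matrix is itself adapted by stochastic gradient descent on the squared Bellman error. It uses a plain Euclidean residual-gradient update rather than the paper's descent on the Grassmannian, omits the actor entirely, and all names, step sizes, and the toy MDP are illustrative assumptions.

```python
import numpy as np

def feature_adaptive_critic(P, R, gamma, d, n_iters=5000, seed=0):
    """Schematic critic with adaptive linear features on a toy MDP with
    known transition matrix P (n x n) and per-state rewards R (n,).

    V(s) ~= phi(s)^T w. Both the weight vector w and the feature matrix
    Phi are updated by SGD on the squared Bellman (TD) error. This is a
    residual-gradient-style Euclidean sketch; the paper instead performs
    the feature update on the Grassmannian of feature subspaces.
    """
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    # Orthonormal initial features (a point on the Grassmannian).
    Phi, _ = np.linalg.qr(rng.standard_normal((n, d)))
    w = np.zeros(d)
    alpha_w, alpha_phi = 0.05, 0.005   # assumed step sizes
    s = 0
    for _ in range(n_iters):
        s_next = rng.choice(n, p=P[s])
        # Bellman (TD) error for the sampled transition.
        delta = R[s] + gamma * Phi[s_next] @ w - Phi[s] @ w
        # Residual-gradient descent on 0.5 * delta**2:
        w -= alpha_w * delta * (gamma * Phi[s_next] - Phi[s])
        Phi[s] += alpha_phi * delta * w                # d(delta)/d(phi(s))  = -w
        Phi[s_next] -= alpha_phi * gamma * delta * w   # d(delta)/d(phi(s')) = gamma*w
        s = s_next
    return Phi, w

def mean_sq_bellman_error(Phi, w, P, R, gamma):
    """Mean squared Bellman error of V = Phi @ w over all states."""
    V = Phi @ w
    return float(np.mean((R + gamma * P @ V - V) ** 2))

# Toy 4-state random-walk chain with rewards at the two end states.
P = np.array([[0.50, 0.50, 0.00, 0.00],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.00, 0.00, 0.50, 0.50]])
R = np.array([1.0, 0.0, 0.0, 1.0])
Phi, w = feature_adaptive_critic(P, R, gamma=0.9, d=2)
msbe = mean_sq_bellman_error(Phi, w, P, R, gamma=0.9)
```

With w = 0 the squared Bellman error per state is just R(s)^2 (mean 0.5 here), so the adapted features and weights should drive `msbe` well below that starting value.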

Item Type: Journal Article
Additional Information: Copyright for this article belongs to the ASSOC COMPUTING MACHINERY, 2 PENN PLAZA, STE 701, NEW YORK, NY 10121-0701 USA
Keywords: Markov decision processes; actor-critic algorithms; function approximation; feature adaptation; online learning; residual gradient scheme; temporal difference learning; stochastic approximation; Grassmann manifold; SPSA; policy gradients
Department/Centre: Division of Electrical Sciences > Computer Science & Automation
Date Deposited: 22 Jun 2016 07:11
Last Modified: 27 Feb 2019 10:19
URI: http://eprints.iisc.ac.in/id/eprint/54052
