
An online prediction algorithm for reinforcement learning with linear function approximation using cross entropy method

Joseph, Ajin George and Bhatnagar, Shalabh (2018) An online prediction algorithm for reinforcement learning with linear function approximation using cross entropy method. In: Machine Learning, 107 (8-10), pp. 1385-1429.

PDF: Mac_Ler_107-8_1385_2018.pdf - Published Version (restricted to registered users)
Official URL: https://dx.doi.org/10.1007/s10994-018-5727-z

Abstract

In this paper, we provide two new stable online algorithms for the problem of prediction in reinforcement learning, i.e., estimating the value function of a model-free Markov reward process using the linear function approximation architecture, with memory and computation costs scaling quadratically in the size of the feature set. The algorithms employ a multi-timescale stochastic approximation variant of the popular cross entropy optimization method, a model-based search method for finding the global optimum of a real-valued function. A proof of convergence of the algorithms using the ODE method is provided. We supplement our theoretical results with experimental comparisons. The algorithms achieve good performance fairly consistently on many RL benchmark problems with regard to computational efficiency, accuracy and stability.
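To illustrate the model-based search idea the abstract refers to, the following is a minimal sketch of the generic (batch) cross entropy method for maximizing a real-valued function. It is not the paper's online multi-timescale variant; the function name, sampling distribution (Gaussian), and hyperparameters are illustrative assumptions.

```python
import numpy as np

def cross_entropy_maximize(f, dim, n_iter=50, n_samples=100, elite_frac=0.2, seed=0):
    """Generic cross entropy method (maximization): repeatedly sample from a
    Gaussian model, keep the top-scoring 'elite' samples, and refit the model
    to them, so the distribution concentrates near the global optimum."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.full(dim, 5.0)   # initial search distribution
    n_elite = int(n_samples * elite_frac)
    for _ in range(n_iter):
        samples = rng.normal(mean, std, size=(n_samples, dim))
        scores = np.array([f(x) for x in samples])
        elite = samples[np.argsort(scores)[-n_elite:]]  # best n_elite samples
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean

# Example: a concave quadratic with its global maximum at x = [2, -3]
x_star = cross_entropy_maximize(
    lambda x: -np.sum((x - np.array([2.0, -3.0])) ** 2), dim=2)
```

In the paper's prediction setting, the objective being optimized is a measure of value-function approximation error over the linear parameter vector, and the sampling/refitting steps are carried out incrementally via multi-timescale stochastic approximation rather than in batches as above.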

Item Type: Journal Article
Additional Information: Copyright of this article belongs to Springer, Van Godewijckstraat 30, 3311 GZ Dordrecht, Netherlands
Department/Centre: Division of Electrical Sciences > Computer Science & Automation
Depositing User: Id for Latest eprints
Date Deposited: 20 Aug 2018 15:36
Last Modified: 20 Aug 2018 15:36
URI: http://eprints.iisc.ac.in/id/eprint/60463
