
An Incremental Fast Policy Search Using a Single Sample Path

Joseph, Ajin George and Bhatnagar, Shalabh (2017) An Incremental Fast Policy Search Using a Single Sample Path. In: Pattern Recognition and Machine Intelligence (PReMI 2017), Dec 05-08, 2017, Kolkata, India, pp. 3-10.

Full text not available from this repository.
Official URL: https://doi.org/10.1007/978-3-319-69900-4_1

Abstract

In this paper, we consider the control problem in a reinforcement learning setting with large state and action spaces. The control problem most commonly addressed in the contemporary literature is to find an optimal policy which optimizes the long-run γ-discounted transition costs, where γ ∈ [0, 1). Such approaches also assume access to a generative model/simulator of the underlying MDP, with the hidden premise that realizations of the system dynamics of the MDP for arbitrary policies, in the form of sample paths, can be obtained with ease from the model. In this paper, we consider a cost function which is the expectation of an approximate value function w.r.t. the steady-state distribution of the Markov chain induced by the policy, without having access to the generative model. We assume that a single sample path generated using an a priori chosen behaviour policy is made available. In this information-restricted setting, we solve the generalized control problem using the incremental cross entropy method. The proposed algorithm is shown to converge to a solution which is globally optimal relative to the behaviour policy.
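To make the optimization template concrete, the sketch below shows a generic batch cross-entropy method over parameterized policies. This is an illustrative sketch under stated assumptions, not the paper's algorithm: the paper's incremental variant works from a single behaviour-policy sample path rather than fresh batches of candidate evaluations, and estimate_J here is a hypothetical placeholder for the abstract's objective (the expectation of an approximate value function under the steady-state distribution induced by the policy).

    # Hypothetical sketch (not the paper's algorithm): generic cross-entropy
    # search over policy parameters theta, using a Gaussian sampling model.
    import numpy as np

    def cross_entropy_search(estimate_J, dim, iters=100, pop=50, elite_frac=0.2, seed=0):
        """Maximize estimate_J over R^dim; estimate_J is an assumed black-box
        evaluator of the objective described in the abstract."""
        rng = np.random.default_rng(seed)
        mu, sigma = np.zeros(dim), np.ones(dim)      # mean/std of the sampling Gaussian
        n_elite = max(1, int(elite_frac * pop))
        for _ in range(iters):
            thetas = rng.normal(mu, sigma, size=(pop, dim))       # candidate parameters
            scores = np.array([estimate_J(th) for th in thetas])  # estimated objective values
            elite = thetas[np.argsort(scores)[-n_elite:]]         # keep the top performers
            mu = elite.mean(axis=0)                               # refit the sampling distribution
            sigma = elite.std(axis=0) + 1e-6                      # small floor to avoid collapse
        return mu

Per the abstract, the paper's contribution is to carry out this distribution-updating step incrementally, driven by observations from one a priori chosen behaviour-policy trajectory, rather than from repeated batch rollouts against a simulator.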

Item Type: Conference Proceedings
Series: Lecture Notes in Computer Science
Publisher: SPRINGER INTERNATIONAL PUBLISHING AG
Additional Information: Copyright to this article belongs to Springer.
Department/Centre: Division of Electrical Sciences > Computer Science & Automation
Date Deposited: 12 Jan 2019 15:47
Last Modified: 12 Jan 2019 15:52
URI: http://eprints.iisc.ac.in/id/eprint/61269
