ePrints@IISc

On the Whittle Index for Restless Multiarmed Hidden Markov Bandits

Meshram, Rahul and Manjunath, D and Gopalan, Aditya (2018) On the Whittle Index for Restless Multiarmed Hidden Markov Bandits. In: IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 63 (9). pp. 3046-3053.

PDF: Ieee_Tra_Aut_Con_63-9_3046_2018.pdf - Published Version (411kB; restricted to registered users)
Official URL: http://dx.doi.org/10.1109/TAC.2018.2799521

Abstract

We consider a restless multiarmed bandit in which each arm can be in one of two states. When an arm is sampled, the state of the arm is not available to the sampler. Instead, a binary signal with known statistics that depend on the state of the arm is available. No signal is available if the arm is not sampled. An arm-dependent reward is accrued from each sampling. In each time step, each arm changes state according to known transition probabilities, which, in turn, depend on whether the arm is sampled or not. Since the state of the arm is never visible and has to be inferred from the current belief and a possible binary signal, we call this the hidden Markov bandit. Our interest is in a policy that selects the arm(s) in each time step to maximize the infinite-horizon discounted reward. Specifically, we seek to use the Whittle index in selecting the arms. We first analyze the single-armed bandit and show that, in general, it admits an approximate threshold-type optimal policy when there is a positive reward for the "no-sample" action. We also identify several special cases for which the threshold policy is indeed optimal. Next, we show that such a single-armed bandit also satisfies an approximate-indexability property. For the case when the single-armed bandit admits a threshold-type optimal policy, we compute the Whittle index for each arm. Numerical examples illustrate the analytical results.
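To make the belief-state formulation in the abstract concrete, the following is a minimal Python sketch of the Bayes belief update for a single two-state hidden Markov arm and of a threshold-type sampling rule. All names and numbers here (the belief pi, signal statistics rho, transition matrices P_sample and P_rest, and the example threshold) are illustrative assumptions, not notation or parameters from the paper.

```python
import numpy as np

# pi        : belief that the arm is in state 1
# rho[s]    : probability of observing signal 1 when the arm is sampled in state s
# P_sample  : 2x2 state-transition matrix used when the arm is sampled
# P_rest    : 2x2 state-transition matrix used when the arm is rested (no signal)

def belief_after_rest(pi, P_rest):
    """No signal is observed; the belief only propagates through the rest-mode chain."""
    return pi * P_rest[1, 1] + (1.0 - pi) * P_rest[0, 1]

def belief_after_sample(pi, signal, rho, P_sample):
    """Bayes-update the belief with the observed binary signal, then propagate one step."""
    like1 = rho[1] if signal == 1 else 1.0 - rho[1]   # P(signal | state 1)
    like0 = rho[0] if signal == 1 else 1.0 - rho[0]   # P(signal | state 0)
    posterior = pi * like1 / (pi * like1 + (1.0 - pi) * like0)
    return posterior * P_sample[1, 1] + (1.0 - posterior) * P_sample[0, 1]

def threshold_policy(pi, threshold):
    """Threshold-type policy: sample the arm when the belief exceeds the threshold."""
    return "sample" if pi > threshold else "no-sample"

if __name__ == "__main__":
    rho = np.array([0.2, 0.9])                      # assumed signal statistics
    P_sample = np.array([[0.7, 0.3], [0.4, 0.6]])   # assumed transition matrix if sampled
    P_rest = np.array([[0.9, 0.1], [0.2, 0.8]])     # assumed transition matrix if rested
    pi = 0.5
    print(threshold_policy(pi, 0.4))                         # -> sample
    pi = belief_after_sample(pi, signal=1, rho=rho, P_sample=P_sample)
    print(round(pi, 3))                                      # updated belief
```

The sketch only illustrates why the belief is a sufficient statistic for this model: each arm's history collapses to a single number in [0, 1], and the threshold rule above is the form of policy whose optimality and Whittle-index computation the paper analyzes.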

Item Type: Journal Article
Publication: IEEE TRANSACTIONS ON AUTOMATIC CONTROL
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Additional Information: Copyright for this article belongs to IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
Keywords: partially observed Markov decision process; restless multiarmed bandit (RMAB); hidden Markov model multiarm bandits; indexability; Whittle index; opportunistic communication
Department/Centre: Division of Electrical Sciences > Electrical Communication Engineering
Date Deposited: 27 Sep 2018 14:48
Last Modified: 27 Sep 2018 14:48
URI: http://eprints.iisc.ac.in/id/eprint/60746
