
Misspecified linear bandits

Ghosh, A and Chowdhury, SR and Gopalan, A (2017) Misspecified linear bandits. In: 31st AAAI Conference on Artificial Intelligence, AAAI 2017, 4-10 February 2017, San Francisco, pp. 3761-3767.

PDF: AAAI_2017.pdf - Published Version (restricted to registered users; 886kB)
Official URL: https://www.aaai.org/ocs/index.php/AAAI/AAAI17/pap...

Abstract

We consider the problem of online learning in misspecified linear stochastic multi-armed bandit problems. Regret guarantees for state-of-the-art linear bandit algorithms such as Optimism in the Face of Uncertainty Linear bandit (OFUL) hold under the assumption that the arms' expected rewards are perfectly linear in their features. It is, however, of interest to investigate the impact of potential misspecification in linear bandit models, where the expected rewards are perturbed away from the linear subspace determined by the arms' features. Although OFUL has recently been shown to be robust to relatively small deviations from linearity, we show that any linear bandit algorithm that enjoys optimal regret performance in the perfectly linear setting (e.g., OFUL) must suffer linear regret under a sparse additive perturbation of the linear model. In an attempt to overcome this negative result, we define a natural class of bandit models characterized by a non-sparse deviation from linearity. We argue that the OFUL algorithm can fail to achieve sublinear regret even under models that have non-sparse deviation. We finally develop a novel bandit algorithm, comprising a hypothesis test for linearity followed by a decision to use either the OFUL or Upper Confidence Bound (UCB) algorithm. For perfectly linear bandit models, the algorithm provably exhibits OFUL's favorable regret performance, while for misspecified models satisfying the non-sparse deviation property, the algorithm avoids the linear regret phenomenon and falls back on UCB's sublinear regret scaling. Numerical experiments on synthetic data, and on recommendation data from the public Yahoo! Learning to Rank Challenge dataset, empirically support our findings. © Copyright 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
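The two-stage scheme the abstract describes — first test whether observed rewards are consistent with a linear model, then commit to a linear-bandit or plain UCB strategy — can be sketched as follows. This is an illustrative residual-based check; the function name `choose_algorithm` and the fixed tolerance `tol` are assumptions for exposition, not the paper's actual hypothesis test.

```python
import numpy as np

def choose_algorithm(features, mean_rewards, tol=0.5):
    """Decide between a linear-bandit algorithm (OFUL) and plain UCB.

    Fits a least-squares linear model to the empirical mean rewards and
    falls back to UCB when the worst-case residual exceeds `tol` — an
    illustrative stand-in for the paper's linearity hypothesis test.
    """
    theta, *_ = np.linalg.lstsq(features, mean_rewards, rcond=None)
    residual = np.max(np.abs(mean_rewards - features @ theta))
    return "OFUL" if residual <= tol else "UCB"

# Five arms with 2-dimensional features (toy data, not from the paper).
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0], [1.0, 2.0]])
theta_star = np.array([1.0, 0.5])

# Perfectly linear rewards: the test keeps the linear-bandit algorithm.
linear_rewards = X @ theta_star

# Sparse additive perturbation on one arm: the test falls back to UCB.
misspecified_rewards = linear_rewards.copy()
misspecified_rewards[2] += 3.0
```

In the linear case the residual is zero, so the sketch returns "OFUL"; with the sparse perturbation the best linear fit leaves a residual of 2.5 on the perturbed arm, so it returns "UCB".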

Item Type: Conference Paper
Publication: 31st AAAI Conference on Artificial Intelligence, AAAI 2017
Publisher: AAAI press
Additional Information: The copyright for this article belongs to the Association for the Advancement of Artificial Intelligence (AAAI).
Keywords: Stochastic systems; Hypothesis tests; Learning to rank; Misspecification; Misspecified models; Multi-armed bandit problem; Numerical experiments; State of the art; Upper confidence bound; Artificial intelligence
Department/Centre: Division of Electrical Sciences > Electrical Communication Engineering
Date Deposited: 26 Aug 2022 06:05
Last Modified: 26 Aug 2022 06:05
URI: https://eprints.iisc.ac.in/id/eprint/74739
