ePrints@IISc

Mitigating Disparity while Maximizing Reward: Tight Anytime Guarantee for Improving Bandits

Patil, V and Nair, V and Ghalme, G and Khan, A (2023) Mitigating Disparity while Maximizing Reward: Tight Anytime Guarantee for Improving Bandits. In: International Joint Conference on Artificial Intelligence, IJCAI 2023, 19 - 25 August 2023, Macao, pp. 4100-4108.

PDF: IJCAI2023_2023_4100-4108_2023.pdf - Published Version (restricted to registered users)
Official URL: https://www.ijcai.org/proceedings/2023/46

Abstract

We study the Improving Multi-Armed Bandit (IMAB) problem, where the reward obtained from an arm increases with the number of pulls it receives. This model provides an elegant abstraction for many real-world problems in domains such as education and employment, where decisions about the distribution of opportunities can affect the future capabilities of communities and the disparity between them. A decision-maker in such settings must consider the impact of her decisions on future rewards in addition to the standard objective of maximizing her cumulative reward at any time. We study the tension between two seemingly conflicting objectives in the horizon-unaware setting: a) maximizing the cumulative reward at any time, and b) ensuring that arms with better long-term rewards get sufficient pulls even if they initially have low rewards. We show that, surprisingly, the two objectives are aligned with each other. Our main contribution is an anytime algorithm for the IMAB problem that achieves the best possible cumulative reward while ensuring that the arms reach their true potential given sufficient time. Our algorithm mitigates the initial disparity due to lack of opportunity and continues pulling an arm until it stops improving. We prove the optimality of our algorithm by showing that a) any algorithm for the IMAB problem, no matter how utilitarian, must suffer Ω(T) policy regret and Ω(k) competitive ratio with respect to the optimal offline policy, and b) the competitive ratio of our algorithm is O(k).
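As a rough illustration of the IMAB setting described above (not the paper's actual algorithm), the following minimal Python sketch models arms whose reward grows with the number of pulls and a simple "pull until it stops improving" rule. The concave reward curve, the `tol` threshold, and both example arms are hypothetical choices for demonstration only.

```python
import math

class ImprovingArm:
    """An arm whose reward increases with the number of pulls it receives,
    saturating toward a long-term cap (a hypothetical concave reward curve)."""
    def __init__(self, cap, rate):
        self.cap = cap    # long-term reward the arm converges to
        self.rate = rate  # how quickly the arm improves per pull
        self.pulls = 0

    def pull(self):
        self.pulls += 1
        # Reward approaches `cap` as pulls accumulate.
        return self.cap * (1 - math.exp(-self.rate * self.pulls))

def pull_until_plateau(arm, tol=1e-3, max_pulls=10_000):
    """Keep pulling an arm while its reward still improves by more than
    `tol` per pull; a toy version of not abandoning an arm before it
    reaches its potential."""
    prev, total = -float("inf"), 0.0
    while arm.pulls < max_pulls:
        r = arm.pull()
        total += r
        if r - prev <= tol:
            break
        prev = r
    return total, arm.pulls

# A slowly improving arm with high long-term reward vs. a fast arm
# with a low cap: the slow arm needs many more pulls to show its value.
slow_high = ImprovingArm(cap=1.0, rate=0.01)
fast_low = ImprovingArm(cap=0.4, rate=0.5)
```

Under these curves, the slow arm's per-pull improvement stays above the threshold far longer than the fast arm's, so a myopic strategy that compares early rewards would discard exactly the arm with the better long-term reward; this is the disparity the paper's anytime guarantee addresses.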

Item Type: Conference Paper
Publication: IJCAI International Joint Conference on Artificial Intelligence
Publisher: International Joint Conferences on Artificial Intelligence
Additional Information: The copyright for this article belongs to the International Joint Conferences on Artificial Intelligence.
Keywords: Artificial intelligence; Anytime algorithm; Competitive ratio; Conflicting objectives; Decision makers; Decision making; Multi-armed bandit problems (MABP); Offline; Optimality; Real-world problem; True potentials
Department/Centre: Division of Electrical Sciences > Computer Science & Automation
Date Deposited: 24 Nov 2023 09:28
Last Modified: 24 Nov 2023 09:28
URI: https://eprints.iisc.ac.in/id/eprint/83229
