ePrints@IISc

Learning dynamic prices in multiseller electronic retail markets with price sensitive customers, stochastic demands, and inventory replenishments

Chinthalapati, VLR and Yadati, Narahari and Karumanchi, R (2006) Learning dynamic prices in multiseller electronic retail markets with price sensitive customers, stochastic demands, and inventory replenishments. In: IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 36 (1). pp. 92-106.

PDF: 01603740.pdf - Published Version (restricted to registered users; 314kB)
Official URL: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumb...

Abstract

In this paper, we use reinforcement learning (RL) as a tool to study price dynamics in an electronic retail market consisting of two competing sellers, and price sensitive and lead time sensitive customers. Sellers, offering identical products, compete on price to satisfy stochastically arriving demands (customers), and follow standard inventory control and replenishment policies to manage their inventories. In such a generalized setting, RL techniques have not previously been applied. We consider two representative cases: 1) the no information case, where none of the sellers has any information about customer queue levels, inventory levels, or prices at the competitors; and 2) the partial information case, where every seller has information about the customer queue levels and inventory levels of the competitors. Sellers employ automated pricing agents, or pricebots, which use RL-based pricing algorithms to reset the prices at random intervals based on factors such as number of back orders, inventory levels, and replenishment lead times, with the objective of maximizing discounted cumulative profit. In the no information case, we show that a seller who uses Q-learning outperforms a seller who uses derivative following (DF). In the partial information case, we model the problem as a Markovian game and use actor-critic based RL to learn dynamic prices. We believe our approach to solving these problems is a new and promising way of setting dynamic prices in multiseller environments with stochastic demands, price sensitive customers, and inventory replenishments.
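To make the no information case more concrete, below is a minimal sketch of a tabular Q-learning pricebot of the kind the abstract describes. The state encoding (discretized inventory level and back-order count), the discrete price grid, and the learning parameters are illustrative assumptions for this sketch, not the authors' actual formulation or implementation.

    # Illustrative tabular Q-learning pricebot (assumptions noted above):
    # state  = (discretized inventory level, number of back orders)
    # action = one of a few discrete price levels
    # reward = profit observed over the decision epoch
    import random
    from collections import defaultdict

    PRICE_LEVELS = [8.0, 9.0, 10.0, 11.0]   # hypothetical discrete prices
    ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1  # learning rate, discount, exploration

    Q = defaultdict(float)  # Q[(state, price)] -> estimated discounted cumulative profit

    def choose_price(state):
        """Epsilon-greedy price selection over the discrete price levels."""
        if random.random() < EPSILON:
            return random.choice(PRICE_LEVELS)
        return max(PRICE_LEVELS, key=lambda p: Q[(state, p)])

    def update(state, price, reward, next_state):
        """One-step Q-learning update toward reward plus discounted best next value."""
        best_next = max(Q[(next_state, p)] for p in PRICE_LEVELS)
        Q[(state, price)] += ALPHA * (reward + GAMMA * best_next - Q[(state, price)])

In use, at each (randomly arriving) price-review epoch the seller would observe its local state, call choose_price(), let the simulated market generate demand and profit, and then call update() with the resulting reward and next state; a derivative-following (DF) competitor would instead adjust its price in the direction that last increased profit.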

Item Type: Journal Article
Publication: IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews
Publisher: Institute of Electrical and Electronics Engineers
Additional Information: Copyright 2006 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
Keywords: dynamic pricing; inventory replenishments; Markovian game; multi-agent learning; online retail markets; price sensitive customers; reinforcement learning (RL); stochastic demands.
Department/Centre: Division of Electrical Sciences > Computer Science & Automation
Date Deposited: 16 Sep 2010 05:05
Last Modified: 15 Jan 2013 05:44
URI: http://eprints.iisc.ac.in/id/eprint/31747
