
Surrogate Regret Bounds for Bipartite Ranking via Strongly Proper Losses

Agarwal, Shivani (2014) Surrogate Regret Bounds for Bipartite Ranking via Strongly Proper Losses. In: JOURNAL OF MACHINE LEARNING RESEARCH, 15. pp. 1653-1674.

PDF: jou_mac_lea_res_15-_1674_2014.pdf - Published Version (249kB; restricted to registered users)
Official URL: http://arxiv.org/pdf/1207.0268v1.pdf

Abstract

The problem of bipartite ranking, where instances are labeled positive or negative and the goal is to learn a scoring function that minimizes the probability of mis-ranking a pair of positive and negative instances (or equivalently, that maximizes the area under the ROC curve), has been widely studied in recent years. A dominant theoretical and algorithmic framework for the problem has been to reduce bipartite ranking to pairwise classification; in particular, it is well known that the bipartite ranking regret can be formulated as a pairwise classification regret, which in turn can be upper bounded using standard regret bounds for classification problems. Recently, Kotlowski et al. (2011) showed regret bounds for bipartite ranking in terms of the regret associated with balanced versions of the standard (non-pairwise) logistic and exponential losses. In this paper, we show that such (non-pairwise) surrogate regret bounds for bipartite ranking can be obtained in terms of a broad class of proper (composite) losses that we term strongly proper. Our proof technique is much simpler than that of Kotlowski et al. (2011) and relies on properties of proper (composite) losses as elucidated recently by Reid and Williamson (2010, 2011) and others. Our result yields explicit surrogate bounds (with no hidden balancing terms) in terms of a variety of strongly proper losses, including, for example, the logistic, exponential, squared and squared hinge losses as special cases. An important consequence is that standard algorithms minimizing a (non-pairwise) strongly proper loss, such as logistic regression and boosting algorithms (assuming a universal function class and appropriate regularization), are in fact consistent for bipartite ranking; moreover, our results allow us to quantify the bipartite ranking regret in terms of the corresponding surrogate regret. We also obtain tighter surrogate bounds under certain low-noise conditions via a recent result of Clemencon and Robbiano (2011).
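The consequence highlighted in the abstract, that standard algorithms minimizing a non-pairwise strongly proper loss (such as the logistic loss) are consistent for bipartite ranking, can be illustrated with a short sketch. The code below is not from the paper or this record; it is a minimal illustrative example, assuming scikit-learn and NumPy are available, in which an ordinary logistic regression model is fit on individually labeled examples and its class-probability scores are used directly as a ranking function, evaluated by AUC.

    # Illustrative sketch (not from the paper): the logistic loss is a strongly
    # proper (composite) loss, so scores learned by ordinary logistic regression
    # can be used directly as a bipartite ranking function.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score, log_loss
    from sklearn.model_selection import train_test_split

    # Synthetic binary-labeled data (positive and negative instances).
    X, y = make_classification(n_samples=5000, n_features=20,
                               weights=[0.7, 0.3], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)

    # Minimize the (non-pairwise) logistic loss on individual examples.
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    # Use the estimated class-1 probabilities as ranking scores.
    scores = clf.predict_proba(X_te)[:, 1]

    # Held-out surrogate (logistic) loss and ranking quality (AUC).
    print("test logistic loss:", log_loss(y_te, scores))
    print("test AUC          :", roc_auc_score(y_te, scores))

In the spirit of the paper's result, as the held-out surrogate (logistic) regret shrinks, the bipartite ranking (AUC) regret is bounded in terms of it, so no pairwise reduction is needed to obtain a consistent ranking procedure.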

Item Type: Journal Article
Publication: JOURNAL OF MACHINE LEARNING RESEARCH
Publisher: MICROTOME PUBL
Additional Information: Copyright for this article belongs to Microtome Publishing, 31 Gibbs St, Brookline, MA 02446, USA
Keywords: bipartite ranking; area under ROC curve (AUC); statistical consistency; regret bounds; proper losses; strongly proper losses
Department/Centre: Division of Electrical Sciences > Computer Science & Automation
Date Deposited: 20 Dec 2014 06:26
Last Modified: 20 Dec 2014 06:26
URI: http://eprints.iisc.ac.in/id/eprint/50493
