Chinta, Punya Murthy and Balamurugan, P and Shevade, Shirish and Murty, Narasimha M (2013) Optimizing F-Measure with Non-Convex Loss and Sparse Linear Classifiers. In: International Joint Conference on Neural Networks (IJCNN), 4-9 August 2013, Dallas, TX.
PDF: Int_Joi_Con_2013.pdf - Published Version (238kB). Restricted to registered users only.
Abstract
F-measure is a popular performance metric used in classification when the dataset is unbalanced. Optimizing this measure directly is often challenging since no closed-form solution exists. Current algorithms use approximations to the F-measure and design classifiers in the maximum margin or logistic regression framework. These algorithms are not scalable, and the resulting classifiers are not robust to outliers. In this work, we propose a general framework for approximate F-measure maximization. We also propose a non-convex loss function which is robust to outliers. Use of an elastic net regularizer in the problem formulation enables simultaneous classifier design and feature selection. We present an efficient algorithm to solve the proposed problem formulation. The proposed algorithm is simple and easy to implement. Numerical experiments on real-world benchmark datasets demonstrate that the proposed algorithm is fast and gives better generalization performance than some existing approaches. Thus, it is a powerful alternative for optimizing F-measure and designing a sparse classifier.
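For context, the F-measure discussed in the abstract is the harmonic mean of precision and recall. The following is the standard textbook definition (stated here for reference, not reproduced from the paper itself), which makes clear why it has no simple closed-form optimizer: it is a non-decomposable function of the confusion-matrix counts.

```latex
% Standard F1 definition; TP, FP, FN are true positives, false positives, false negatives.
\begin{align}
  P   &= \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN}, \\
  F_1 &= \frac{2\,P\,R}{P + R} \;=\; \frac{2\,TP}{2\,TP + FP + FN}.
\end{align}
```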
| Item Type: | Conference Proceedings |
|---|---|
| Series: | IEEE International Joint Conference on Neural Networks (IJCNN) |
| Publisher: | IEEE |
| Additional Information: | Copyright for this article belongs to IEEE, 345 E 47th St, New York, NY 10017, USA |
| Department/Centre: | Division of Electrical Sciences > Computer Science & Automation |
| Date Deposited: | 19 Aug 2016 09:45 |
| Last Modified: | 19 Aug 2016 09:45 |
| URI: | http://eprints.iisc.ac.in/id/eprint/54305 |