
Rawlsian Fair Adaptation of Deep Learning Classifiers

Shah, K and Gupta, P and Deshpande, A and Bhattacharyya, C (2021) Rawlsian Fair Adaptation of Deep Learning Classifiers. In: 4th AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, AIES 2021, 19-21 May 2021, pp. 936-945.

PDF: AIES_936-945_2021 (Published Version, 1MB)
Official URL: https://doi.org/10.1145/3461702.3462592

Abstract

Group-fairness in classification aims for equality of a predictive utility across different sensitive sub-populations, e.g., race or gender. Equality or near-equality constraints in group-fairness often worsen not only the aggregate utility but also the utility for the least advantaged sub-population. In this paper, we apply the principles of Pareto-efficiency and least-difference, taking accuracy as an illustrative utility, and arrive at the Rawls classifier that minimizes the error rate on the worst-off sensitive sub-population. Our mathematical characterization shows that the Rawls classifier uniformly applies a threshold to an ideal score of features, in the spirit of fair equality of opportunity. In practice, such a score or feature representation is often computed by a black-box model that has proven useful but unfair. Our second contribution is a practical Rawlsian fair adaptation of any given black-box deep learning model, without changing the score or feature representation it computes. Given any score function or feature representation, and only its second-order statistics on the sensitive sub-populations, we seek a threshold classifier on the given score, or a linear threshold classifier on the given feature representation, that achieves the Rawls error rate restricted to this hypothesis class. Our technical contribution is to formulate these problems using ambiguous chance constraints and to provide efficient algorithms for Rawlsian fair adaptation, along with provable upper bounds on the Rawls error rate. Our empirical results show significant improvement over state-of-the-art group-fair algorithms, even without retraining for fairness. © 2021 ACM.
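The abstract's core construction, a single threshold on a fixed score chosen to minimize the worst group's error rate, can be illustrated with a short sketch. This is a hypothetical brute-force illustration, not the paper's algorithm: the paper works from only second-order statistics of each sub-population via ambiguous chance constraints, whereas the sketch below assumes labeled samples are available; the function name rawls_threshold and all variable names are ours.

```python
import numpy as np

def rawls_threshold(scores, labels, groups):
    """Pick a single threshold on a fixed score that minimizes the
    maximum (worst-group) error rate -- a minimax, Rawlsian objective.

    Brute-force illustration only: the paper instead derives its
    classifier from second-order statistics of the sub-populations,
    via ambiguous chance constraints."""
    scores, labels, groups = map(np.asarray, (scores, labels, groups))
    best_t, best_worst_err = None, np.inf
    for t in np.unique(scores):  # candidate thresholds at observed scores
        preds = (scores >= t).astype(int)
        # Error rate of the worst-off sensitive sub-population.
        worst_err = max(
            np.mean(preds[groups == g] != labels[groups == g])
            for g in np.unique(groups)
        )
        if worst_err < best_worst_err:
            best_t, best_worst_err = t, worst_err
    return best_t, best_worst_err

# Toy usage: scores from some pretrained black-box model, two groups.
rng = np.random.default_rng(0)
scores = rng.normal(size=500)
labels = (scores + rng.normal(scale=0.7, size=500) > 0).astype(int)
groups = rng.integers(0, 2, size=500)
t, worst = rawls_threshold(scores, labels, groups)
print(f"threshold={t:.3f}, worst-group error={worst:.3f}")
```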

Item Type: Conference Paper
Publication: AIES 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society
Publisher: Association for Computing Machinery, Inc
Additional Information: The copyright for this article belongs to Authors
Keywords: Error statistics; Errors; Pareto principle; Philosophical aspects; Population statistics; Equality constraints; Feature representation; Learning classifiers; Linear-threshold classifiers; Mathematical characterization; Second-order statistics; Technical contribution; Threshold classifiers; Deep learning
Department/Centre: Division of Electrical Sciences > Computer Science & Automation
Date Deposited: 20 Nov 2021 11:31
Last Modified: 20 Nov 2021 11:31
URI: http://eprints.iisc.ac.in/id/eprint/69856
