Abstract
Machine learning algorithms are increasingly used in data-driven decision-making systems. Recent studies have raised concerns that this growing adoption has exacerbated issues of unfairness and discrimination toward individuals. Researchers in this field have proposed a wide variety of fairness-enhanced classifiers and fairness metrics to address these issues, but very few fairness techniques have been translated into the real-world practice of data-driven decision-making. This work focuses on individual fairness, where individuals who are similar with respect to a given task should be treated similarly. In this paper, we propose a novel model of individual fairness that transforms features into high-level representations that preserve both individual fairness and the accuracy of the learning algorithm. The proposed model identifies equally deserving pairs of individuals, distinguishing them from other pairs in the records through data-driven similarity measures computed between individuals in the transformed data. This design identifies and mitigates bias at the data preprocessing stage of the machine learning pipeline to ensure individual fairness. We evaluate our method on three real-world datasets to demonstrate its effectiveness: the credit card approval, adult census, and recidivism datasets.
Original language | English |
---|---|
Publication status | Accepted/In press - 5 Aug 2022 |
Event | Artificial Neural Networks in Pattern Recognition 2022 - Heriot-Watt Dubai Campus (In-person and Online), Dubai, UNITED ARAB EMIRATES. Duration: 24 Nov 2022 → 26 Nov 2022. https://annpr2022.com/ |
Workshop
Workshop | Artificial Neural Networks in Pattern Recognition 2022 |
---|---|
Abbreviated title | ANNPR2022 |
Country/Territory | UNITED ARAB EMIRATES |
City | Dubai |
Period | 24/11/22 → 26/11/22 |
Internet address | https://annpr2022.com/ |
Keywords
- Algorithmic bias
- Algorithmic fairness
- Fairness-aware machine learning
- Fairness in machine learning
- Individual fairness