Amir Asiaee, Kaveh Aryan
The paper addresses fairness in machine learning under group-conditional prior probability shift and introduces a method to maintain fairness when label prevalences change across demographic groups between training and deployment.
Machine learning models are often trained on historical data, but when they are deployed in the real world, conditions may change, and these changes can affect demographic groups unequally. This study focuses on how the prevalence of certain outcomes, such as disease or loan default, can shift between groups over time. The authors explore why fairness criteria based on error rates can remain stable despite these shifts, while criteria based on acceptance rates may not. They propose a new method, TAP-GPPS, which adjusts the model's predictions to maintain fairness without requiring new labeled data from the target environment.
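For intuition, the sketch below shows the standard Bayes-rule posterior correction for a prior probability shift, applied separately within each demographic group. It is only an illustration of the general group-conditional adjustment idea under assumed group-level priors (which in practice would be estimated from unlabeled deployment data); the function name `adjust_posteriors` and the example numbers are hypothetical, and this is not claimed to reproduce the paper's TAP-GPPS procedure.

```python
import numpy as np

def adjust_posteriors(p_train, group, train_priors, target_priors):
    """Rescale predicted positive-class probabilities for a prior shift,
    applied separately within each demographic group.

    p_train       : array of P(y=1 | x) from the model trained on source data
    group         : array of group labels, one per example
    train_priors  : dict mapping group -> P(y=1) in the training data
    target_priors : dict mapping group -> P(y=1) in deployment
                    (assumed known or estimated from unlabeled target data)
    """
    p_adj = np.empty_like(p_train, dtype=float)
    for g in np.unique(group):
        mask = group == g
        # Ratio of deployment to training priors for each class.
        w_pos = target_priors[g] / train_priors[g]
        w_neg = (1 - target_priors[g]) / (1 - train_priors[g])
        num = p_train[mask] * w_pos
        den = num + (1 - p_train[mask]) * w_neg
        # Standard Bayes-rule posterior correction for label shift.
        p_adj[mask] = num / den
    return p_adj

# Hypothetical example: two groups whose positive-label rates rose after training.
scores = np.array([0.30, 0.70, 0.55, 0.20])
groups = np.array(["A", "A", "B", "B"])
adjusted = adjust_posteriors(scores, groups,
                             train_priors={"A": 0.10, "B": 0.25},
                             target_priors={"A": 0.20, "B": 0.30})
print(adjusted)
```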