Fairness Under Group-Conditional Prior Probability Shift: Invariance, Drift, and Target-Aware Post-Processing

Source: arXiv

Amir Asiaee, Kaveh Aryan

cs.LG | Feb 5, 2026

One-line Summary

The paper studies fairness in machine learning under group-conditional prior probability shift, where label prevalences change across demographic groups between training and deployment, and introduces TAP-GPPS, a post-processing method that maintains fairness without requiring labels from the deployment environment.

Plain-language Overview

Machine learning models are typically trained on historical data, but conditions at deployment can differ, often in ways that affect demographic groups unequally. This study focuses on how the prevalence of certain outcomes, such as disease or loan default, can shift within each group between training and deployment. The authors examine why fairness criteria based on error rates can remain stable under these shifts, while criteria based on acceptance rates may not. They propose TAP-GPPS, a target-aware post-processing method that adjusts the model's predictions to maintain fairness without needing new labeled data from the target environment.
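To make the mechanism concrete, the sketch below applies the generic Bayes-rule correction for prior probability shift separately within each demographic group: if the class-conditional distribution of features is unchanged and only each group's label prevalence moves, a source-calibrated score can be reweighted by the ratio of target to source prevalence. This is an illustration of the general idea only, not the authors' TAP-GPPS procedure; the function name prior_shift_adjust and its inputs are hypothetical, and it assumes the group-level target prevalences are known or have been estimated without target labels.

```python
import numpy as np

def prior_shift_adjust(scores, groups, source_prev, target_prev):
    """Recalibrate P(Y=1 | x, a) from source to target label prevalences
    within each group, assuming P(X | Y, A) is unchanged (the
    group-conditional prior probability shift assumption).

    scores      : source-calibrated probabilities P(Y=1 | x, a)
    groups      : group label for each example
    source_prev : dict mapping group -> P(Y=1 | A=a) in the training data
    target_prev : dict mapping group -> P(Y=1 | A=a) at deployment
                  (known or estimated without target labels)
    """
    scores = np.asarray(scores, dtype=float)
    adjusted = np.empty_like(scores)
    for a in np.unique(groups):
        mask = groups == a
        # Reweight the positive and negative classes by the ratio of
        # target to source prevalence within group a, then renormalize.
        pos = scores[mask] * (target_prev[a] / source_prev[a])
        neg = (1 - scores[mask]) * ((1 - target_prev[a]) / (1 - source_prev[a]))
        adjusted[mask] = pos / (pos + neg)
    return adjusted

# Hypothetical usage: prevalence rises in group "B" at deployment.
scores = np.array([0.30, 0.70, 0.55, 0.20])
groups = np.array(["A", "A", "B", "B"])
adjusted = prior_shift_adjust(
    scores, groups,
    source_prev={"A": 0.25, "B": 0.25},
    target_prev={"A": 0.25, "B": 0.40},
)
print(adjusted)  # group-B scores shift upward; group-A scores are unchanged
```

Rescaling by the prevalence ratio raises scores in a group whose positive rate has grown and lowers them where it has shrunk, which is one way a post-processing step can track group-conditional prior shift without retraining the underlying model.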

Technical Details