Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency | 2021

Leave-one-out Unfairness


Abstract


We introduce leave-one-out unfairness, which characterizes how likely it is that a model's prediction for an individual will change due to the inclusion or removal of a single other person in the model's training data. Leave-one-out unfairness appeals to the idea that fair decisions are not arbitrary: they should not be based on the chance event of any one person's inclusion in the training data. Leave-one-out unfairness is closely related to algorithmic stability, but it focuses on the consistency of an individual point's prediction outcome over unit changes to the training data, rather than on the error of the model in aggregate. Beyond formalizing leave-one-out unfairness, we characterize the extent to which deep models behave leave-one-out unfairly on real data, including in cases where the generalization error is small. Further, we demonstrate that adversarial training and randomized smoothing techniques have opposite effects on leave-one-out fairness, which sheds light on the relationships between robustness, memorization, individual fairness, and leave-one-out fairness in deep models. Finally, we discuss salient practical applications that may be negatively affected by leave-one-out unfairness.
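As a rough illustration of the quantity being studied (not the paper's experimental setup, which concerns deep models), one can estimate an individual's leave-one-out instability by retraining with each training point held out and counting how often the prediction for that individual flips. The minimal sketch below assumes scikit-learn and uses logistic regression on synthetic data; the function name leave_one_out_flip_rate and all parameters are hypothetical, and this per-dataset flip rate is only a proxy for the paper's formal definition.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def leave_one_out_flip_rate(X_train, y_train, x_query, seed=0):
    """Fraction of single-point removals from the training set that
    change the model's prediction for x_query. A crude proxy for
    leave-one-out unfairness at that individual (hypothetical helper,
    not from the paper)."""
    base = LogisticRegression(random_state=seed).fit(X_train, y_train)
    base_pred = base.predict(x_query.reshape(1, -1))[0]

    n = len(X_train)
    flips = 0
    for i in range(n):
        mask = np.arange(n) != i  # drop training point i
        model = LogisticRegression(random_state=seed).fit(
            X_train[mask], y_train[mask]
        )
        if model.predict(x_query.reshape(1, -1))[0] != base_pred:
            flips += 1
    return flips / n

# Hypothetical usage on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)
print(leave_one_out_flip_rate(X, y, x_query=X[0]))
```

A nonzero flip rate here means the prediction for the query individual hinges on the chance inclusion of a single other person, which is exactly the arbitrariness the abstract argues fair decisions should avoid.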

DOI 10.1145/3442188.3445894
Language English
Journal Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency
