AI Ethics | 2021

A set of distinct facial traits learned by machines is not predictive of appearance bias in the wild


Abstract


We seek to determine whether state-of-the-art, black-box face processing technology can learn to make biased trait judgments from human first impression biases. Using features extracted with FaceNet, a widely used face recognition framework, we train a transfer learning model on human subjects' first impressions of personality traits in other faces, as measured by social psychologists. We measure the extent to which this appearance bias can be embedded in state-of-the-art face recognition models and benchmark learning performance for subjective perceptions of personality traits from faces. In particular, we find that features extracted with FaceNet can be used to predict human appearance biases for deliberately manipulated faces but not for randomly generated faces scored by humans. Additionally, in contrast to prior work in social psychology, the model does not find a significant signal correlating politicians' vote shares with perceived competence bias. With Local Interpretable Model-Agnostic Explanations (LIME), we provide several explanations for this discrepancy. Our results suggest that some signals of appearance bias documented in social psychology are not embedded by the machine learning techniques we investigate.
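To make the pipeline described above concrete, the following is a minimal sketch, not the authors' actual implementation: it stubs the FaceNet embeddings with random arrays, fits a lightweight ridge regressor in place of the paper's transfer learning model, and attributes a single prediction with LIME's tabular explainer. All data shapes, model choices, and hyperparameters here are illustrative assumptions.

```python
# Minimal sketch of the abstract's pipeline (illustrative, not the paper's code):
# frozen FaceNet embeddings -> small regressor on human trait ratings -> LIME.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 512))   # stand-in for 512-d FaceNet embeddings
y = rng.normal(size=500)          # stand-in for mean human trait ratings (e.g., competence)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Transfer learning" here means a lightweight regressor on frozen embeddings.
model = Ridge(alpha=1.0).fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))

# LIME attributes one prediction to individual embedding dimensions.
explainer = LimeTabularExplainer(X_train, mode="regression")
explanation = explainer.explain_instance(X_test[0], model.predict, num_features=10)
print(explanation.as_list())
```

With real data, the random arrays would be replaced by embeddings from a FaceNet forward pass over aligned face crops and by the averaged human ratings; the frozen-embedding design keeps the face recognition model itself untouched while probing what its features encode.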

Volume 1
Pages 249-260
DOI 10.1007/s43681-020-00035-y
Language English
Journal AI Ethics

Full Text