Advances in Natural Computation, Fuzzy Systems and Knowledge Discovery | 2021

Discover Discriminatory Bias in High Accuracy Models Embedded in Machine Learning Algorithms



For all the excitement about machine learning algorithms, there are serious impediments to their widespread adoption. Accuracy is not the only criterion for measuring model performance: in real life, current model assessment techniques, such as cross-validation or receiver operating characteristic (ROC) and lift curves, simply do not reveal the nasty things that can happen inside a model, such as opaqueness or social discrimination. That is why model debugging, the art and science of understanding and fixing problems in machine learning models, is so critical to the future of machine learning. Without being able to troubleshoot models when they underperform or misbehave, organizations simply will not be able to adopt and deploy these algorithms responsibly and at scale. Inspired by this, we challenge models that have very high accuracy, conducting model debugging and discrimination testing to discover hidden inaccuracy. The results raise the concern that a seemingly reliable model can exhibit bias that would be damaging if it were actually deployed.
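Discrimination testing of the kind the abstract describes is often done by comparing a model's favorable-outcome rates across demographic groups. The sketch below illustrates one common check, the four-fifths (80%) rule for adverse impact; the function name, group labels, and toy predictions are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of an adverse-impact (four-fifths rule) check.
# All names and data below are hypothetical, for illustration only.

def adverse_impact_ratio(preds, groups, favorable=1):
    """Ratio of favorable-outcome rates: most disadvantaged group
    divided by most advantaged group. Values below 0.8 are commonly
    treated as evidence of adverse impact."""
    rates = {}
    for g in set(groups):
        outcomes = [p for p, gr in zip(preds, groups) if gr == g]
        rates[g] = sum(1 for p in outcomes if p == favorable) / len(outcomes)
    return min(rates.values()) / max(rates.values())

# Toy example: group "a" receives the favorable outcome 75% of the
# time, group "b" only 50% of the time.
preds = [1, 1, 1, 0, 1, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(round(adverse_impact_ratio(preds, groups), 3))  # 0.667, below 0.8
```

A high-accuracy model can still fail this check badly, which is exactly the gap between accuracy metrics and discrimination testing that the paper highlights.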

DOI 10.1007/978-3-030-70665-4_166
Language English
