Proceedings of the ACM Workshop on Crossmodal Learning and Application | 2019
Some Shades of Grey!: Interpretability and Explainability of Deep Neural Networks
Abstract
Driven by the growing availability of data and the corresponding computing capacity, more and more cognitive tasks can be delegated to computers, which learn autonomously to improve our understanding, extend our problem-solving capacity, or simply help us remember connections. Deep neural networks in particular clearly outperform traditional AI methods and are therefore finding ever more application areas in which they support decision-making or even make decisions autonomously. In many domains, such as autonomous driving or credit allocation, the use of such networks is highly critical and risky because of their black-box character: it is difficult to interpret how or why the models arrive at particular results. This paper discusses and presents various approaches that attempt to understand and explain decision-making in deep neural networks.