Reasoning-Based Learning of Interpretable ML Models


Abstract


Artificial Intelligence (AI) is widely used in decision-making procedures in a myriad of real-world applications across important practical areas such as finance, healthcare, education, and safety-critical systems. Due to this ubiquitous use in safety- and privacy-critical domains, it is often vital to understand the reasoning behind AI decisions, which motivates the need for explainable AI (XAI). One of the major approaches to XAI is the computation of so-called interpretable machine learning (ML) models, such as decision trees (DTs), decision lists (DLs) and decision sets (DSs). These models build on if-then rules and are thus deemed easily understandable by humans. A number of approaches have been proposed in recent years for devising such interpretable ML models, the most prominent of which encode the learning problem into a logic formalism that is then tackled by a reasoning or discrete optimization procedure. This paper overviews recent advances in reasoning- and constraint-based approaches to learning interpretable ML models and discusses their advantages and limitations.
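
To illustrate the encode-and-solve pipeline the abstract refers to, below is a minimal sketch (not the paper's own encoding) of learning a single conjunctive if-then rule over binary features as a partial MaxSAT problem. The toy dataset, the sel helper, and the specific constraints are assumptions made for this example; the solver is RC2 from the PySAT toolkit (pip install python-sat).

# A minimal sketch, assuming binary features and a single conjunctive rule:
# find the shortest set of features whose conjunction covers every positive
# example and no negative example, via partial MaxSAT.

from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

# Toy binary dataset: (feature vector, label); illustrative only.
data = [((1, 1, 0), 1), ((1, 1, 1), 1), ((0, 1, 0), 0), ((1, 0, 1), 0)]
n_feats = 3

def sel(j):
    """SAT variable (1-based) meaning 'feature j appears in the rule'."""
    return j + 1

wcnf = WCNF()
for x, label in data:
    zeros = [j for j in range(n_feats) if x[j] == 0]
    if label == 1:
        # The rule must fire on every positive example, so it cannot
        # require a feature that this example lacks.
        for j in zeros:
            wcnf.append([-sel(j)])
    else:
        # The rule must not fire on a negative example, so it must
        # require at least one feature that this example lacks.
        wcnf.append([sel(j) for j in zeros])

# Soft clauses: prefer leaving each feature out, i.e. the shortest rule.
for j in range(n_feats):
    wcnf.append([-sel(j)], weight=1)

with RC2(wcnf) as solver:
    model = solver.compute()  # None if no consistent rule exists
    rule = [j for j in range(n_feats) if model and sel(j) in model]
    print("IF", " AND ".join(f"x{j}=1" for j in rule), "THEN class=1")
# On this toy data the solver returns: IF x0=1 AND x1=1 THEN class=1

The same recipe generalizes: richer model classes (decision trees, lists, full decision sets) need more elaborate variable sets and constraints, but the division of labor is identical: a declarative encoding plus an off-the-shelf reasoning or discrete optimization engine.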

Pages 4458-4465
DOI 10.24963/ijcai.2021/608
Language English
