Featured Researches

Artificial Intelligence

A Few Good Counterfactuals: Generating Interpretable, Plausible and Diverse Counterfactual Explanations

Counterfactual explanations offer a potentially significant solution to the Explainable AI (XAI) problem, but good, native counterfactuals have been shown to occur rarely in most datasets. Hence, the most popular methods generate synthetic counterfactuals using blind perturbation. However, such methods have several shortcomings: the resulting counterfactuals (i) may not be valid data points (they often use feature values that do not naturally occur), (ii) may lack the sparsity of good counterfactuals (if they modify too many features), and (iii) may lack diversity (if the generated counterfactuals are minimal variants of one another). We describe a method designed to overcome these problems, one that adapts native counterfactuals in the original dataset to generate sparse, diverse synthetic counterfactuals from naturally occurring features. A series of experiments is reported that systematically explores parametric variations of this novel method on common datasets to establish the conditions for optimal performance.
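As a rough illustration of the adaptation idea, the sketch below finds "native" counterfactual pairs in a dataset and transfers a pair's difference features onto a query. The function names, the Hamming-style distance, and the sparsity threshold are assumptions made for the sketch, not the paper's implementation.

```python
import numpy as np

def native_counterfactual_pairs(X, y):
    """Find 'native' counterfactual pairs: instances whose nearest
    differently-labelled neighbour differs in only a few features."""
    pairs = []
    for i, x in enumerate(X):
        others = [j for j in range(len(X)) if y[j] != y[i]]
        j = min(others, key=lambda j: np.sum(X[j] != x))
        if np.sum(X[j] != x) <= 2:   # sparsity threshold: an assumption
            pairs.append((i, j))
    return pairs

def adapt_counterfactual(query, X, pairs):
    """Adapt the pair closest to the query: copy only the features that
    differ within the pair, keeping the rest of the query intact, so the
    counterfactual is sparse and built from naturally occurring values."""
    i, j = min(pairs, key=lambda p: np.sum(X[p[0]] != query))
    diff = X[i] != X[j]
    cf = query.copy()
    cf[diff] = X[j][diff]
    return cf
```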

Read more
Artificial Intelligence

A General Counterexample to Any Decision Theory and Some Responses

In this paper, I present an argument and a general schema that can be used to construct a problem case for any decision theory, in a way that could be taken to show that one cannot formulate a decision theory that is never outperformed by any other decision theory. I also present and discuss a number of possible responses to this argument. One of these responses raises the question of what it means for two decision problems to be "equivalent" in the relevant sense, and gives an answer to this question that would invalidate the first argument. However, this position would have further consequences for how we compare different decision theories in decision problems already discussed in the literature (including, e.g., Newcomb's problem).

Read more
Artificial Intelligence

A General Framework for Fairness in Multistakeholder Recommendations

Contemporary recommender systems act as intermediaries on multi-sided platforms, serving high-utility recommendations from sellers to buyers. Such systems attempt to balance the objectives of multiple stakeholders, including sellers, buyers, and the platform itself. The difficulty of providing recommendations that maximize the utility for a buyer while simultaneously representing all the sellers on the platform has led to many interesting research problems. Traditionally, these have been formulated as integer linear programs which compute recommendations for all the buyers together in an offline fashion, incorporating coverage constraints so that the individual sellers are proportionally represented across all the recommended items. Such approaches can lead to unforeseen biases wherein certain buyers consistently receive low-utility recommendations in order to meet the global seller coverage constraints. To remedy this situation, we propose a general formulation that incorporates seller coverage objectives alongside individual buyer objectives in a real-time personalized recommender system. In addition, we leverage highly scalable submodular optimization algorithms to provide recommendations to each buyer with provable theoretical quality bounds. Furthermore, we empirically evaluate the efficacy of our approach using data from an online real-estate marketplace.
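The abstract does not spell out the objective, but the trade-off it describes can be illustrated with the standard greedy routine for monotone submodular maximization. The square-root coverage term, the weight lam, and all names below are assumptions made for this sketch, not the paper's formulation.

```python
import math

def recommend(items, utility, seller, k, lam=1.0):
    """Greedily maximize f(S) = sum of buyer utilities
    + lam * sum over sellers of sqrt(# chosen items from that seller).
    The concave sqrt term rewards covering many sellers (diminishing
    returns), which keeps f monotone submodular, so greedy selection is
    within a (1 - 1/e) factor of optimal."""
    chosen, counts = [], {}
    for _ in range(k):
        def gain(i):
            c = counts.get(seller[i], 0)
            return utility[i] + lam * (math.sqrt(c + 1) - math.sqrt(c))
        best = max((i for i in items if i not in chosen), key=gain)
        chosen.append(best)
        counts[seller[best]] = counts.get(seller[best], 0) + 1
    return chosen
```

Raising lam shifts the balance from individual buyer utility toward proportional seller coverage; lam = 0 recovers plain top-k recommendation.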

Read more
Artificial Intelligence

A Hybrid Model for Learning Embeddings and Logical Rules Simultaneously from Knowledge Graphs

The problem of knowledge graph (KG) reasoning has been widely explored by traditional rule-based systems and, more recently, by knowledge graph embedding methods. While logical rules can capture deterministic behavior in a KG, they are brittle, and mining rules that infer facts beyond the known KG is challenging. Probabilistic embedding methods are effective in capturing global soft statistical tendencies, and reasoning with them is computationally efficient. While embedding representations learned from rich training data are expressive, incompleteness and sparsity in real-world KGs can impact their effectiveness. We aim to leverage the complementary properties of both methods to develop a hybrid model that learns both high-quality rules and embeddings simultaneously. Our method uses a cross-feedback paradigm wherein an embedding model is used to guide the search of a rule mining system, which mines rules and infers new facts. These new facts are sampled and further used to refine the embedding model. Experiments on multiple benchmark datasets show the effectiveness of our method over other competitive standalone and hybrid baselines. We also show its efficacy in a sparse KG setting and finally explore the connection with negative sampling.
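A minimal sketch of the cross-feedback loop, with the embedding model and rule miner abstracted as caller-supplied functions; the specific interfaces below are assumptions, not the paper's API.

```python
import random

def cross_feedback_training(kg, train_embeddings, mine_rules, infer_facts,
                            n_rounds=5, sample_rate=0.2):
    """Alternate between embedding training and rule mining.
    train_embeddings, mine_rules, and infer_facts are placeholders
    standing in for the paper's embedding model and rule mining system."""
    facts = set(kg)
    model = train_embeddings(facts)
    rules = []
    for _ in range(n_rounds):
        # 1. Embeddings guide rule search: the miner can rank candidate
        #    rules by how plausible the model finds their groundings.
        rules = mine_rules(facts, guide=model)
        # 2. Rules infer new facts beyond the known KG.
        inferred = list(infer_facts(rules, facts) - facts)
        # 3. A sample of the inferred facts refines the embedding model.
        sampled = random.sample(inferred, int(sample_rate * len(inferred)))
        facts |= set(sampled)
        model = train_embeddings(facts)
    return model, rules
```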

Read more
Artificial Intelligence

A Hybrid Neuro-Symbolic Approach for Complex Event Processing

Training a model to detect patterns of interrelated events that form situations of interest can be a complex problem: such situations tend to be uncommon, and only sparse data is available. We propose a hybrid neuro-symbolic architecture based on Event Calculus that can perform Complex Event Processing (CEP). It leverages both a neural network to interpret inputs and logical rules that express the pattern of the complex event. Our approach is capable of training with far less labelled data than a pure neural network approach, and of learning to classify individual events even when trained in an end-to-end manner. We demonstrate this by comparing our approach against a pure neural network approach on a dataset based on Urban Sounds 8K.
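The division of labour can be sketched as follows: a neural model labels each low-level input, and a symbolic rule over the label sequence decides whether the complex event holds. The paper uses Event Calculus rules; the window-based pattern check below is a simplified stand-in, and all names are assumptions.

```python
def detect_complex_event(frames, classify, pattern, window=5):
    """Hybrid pipeline sketch: `classify` is the neural component mapping
    a raw frame to an event label; the loop below is the symbolic
    component, firing when every label in `pattern` occurs within one
    window of the label sequence."""
    labels = [classify(f) for f in frames]        # neural: frame -> event label
    for t in range(len(labels) - window + 1):
        seen = labels[t:t + window]
        if all(p in seen for p in pattern):       # symbolic: pattern holds?
            return True, t                        # complex event starts at t
    return False, None
```

Because the symbolic part is fixed by the rule, only `classify` needs training, which is one intuition for why far fewer labelled complex-event examples are required than in a purely neural approach.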

Read more
Artificial Intelligence

A Knowledge Compilation Map for Conditional Preference Statements-based Languages

Conditional preference statements have been used to compactly represent preferences over combinatorial domains. They are at the core of CP-nets and their generalizations, as well as lexicographic preference trees. Several works have addressed the complexity of some queries (in particular, optimization and dominance). In this paper, we extend some of these results and study other queries which have not been addressed so far, such as equivalence, thereby contributing to a knowledge compilation map for languages based on conditional preference statements. We also introduce a new parameterised family of languages, which makes it possible to balance expressiveness against the complexity of some queries.
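To make the formalism concrete, here is a toy lexicographic preference model in which the preferred value of a variable is conditioned on values chosen earlier; the domain and the rules are invented purely for illustration.

```python
def lp_tree_prefers(a, b, order, preferred):
    """Compare two outcomes under a tiny lexicographic preference model:
    variables are examined in a fixed order, and the preferred value of
    each variable may depend on the values fixed so far (the context)."""
    context = {}
    for var in order:
        best = preferred(var, context)   # conditional preference statement
        if a[var] != b[var]:
            return a[var] == best        # first differing variable decides
        context[var] = a[var]
    return False                         # identical outcomes: no strict preference

# Toy statements: "main: fish > meat" and
# "if main = fish then wine: white > red, else wine: red > white"
order = ["main", "wine"]
def preferred(var, ctx):
    if var == "main":
        return "fish"
    return "white" if ctx.get("main") == "fish" else "red"

print(lp_tree_prefers({"main": "fish", "wine": "white"},
                      {"main": "fish", "wine": "red"}, order, preferred))  # True
```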

Read more
Artificial Intelligence

A Knowledge-based Approach for the Automatic Construction of Skill Graphs for Online Monitoring

Automated vehicles need to be aware of the capabilities they currently possess. Skill graphs are directed acyclic graphs in which a vehicle's capabilities and the dependencies between these capabilities are modeled. The skills a vehicle requires depend on the behaviors the vehicle has to perform and on the vehicle's operational design domain (ODD). Skill graphs were originally proposed for online monitoring of the current capabilities of an automated vehicle. They have also been shown to be useful during other parts of the development process, e.g., system design and system verification. Skill graph construction is an iterative, expert-based, manual process with little to no guidelines. This process is thus prone to errors and inconsistencies, especially regarding the propagation of changes in the vehicle's intended ODD into the skill graphs. In order to circumvent this problem, we propose to formalize expert knowledge regarding skill graph construction into a knowledge base and to automate the construction process. Thus, all changes in the vehicle's ODD are reflected in the skill graphs automatically, leading to a reduction in inconsistencies and errors in the constructed skill graphs.
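A minimal sketch of how a formalized knowledge base could drive automatic skill graph construction; the two mappings and all names are hypothetical stand-ins for the paper's knowledge base.

```python
def build_skill_graph(behaviors, knowledge_base):
    """Expand a behavior's required skills into a full dependency graph.
    knowledge_base = (requires, depends_on): requires maps a behavior to
    its top-level skills, depends_on maps a skill to its sub-skills."""
    requires, depends_on = knowledge_base
    nodes, edges = set(), set()
    stack = [s for b in behaviors for s in requires[b]]
    while stack:                            # expand dependencies transitively
        skill = stack.pop()
        if skill in nodes:
            continue
        nodes.add(skill)
        for dep in depends_on.get(skill, []):
            edges.add((skill, dep))         # directed edge: skill -> sub-skill
            stack.append(dep)
    return nodes, edges                     # a DAG if the knowledge base is acyclic

# Toy example: one behavior and its (invented) skill dependencies
kb = ({"lane_keeping": ["lateral_control"]},
      {"lateral_control": ["lane_detection", "steering_actuation"]})
print(build_skill_graph(["lane_keeping"], kb))
```

Because the graph is derived from the knowledge base rather than drawn by hand, a change to the ODD only has to be made once, in the knowledge base, and every affected skill graph is regenerated consistently.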

Read more
Artificial Intelligence

A Layer-Wise Information Reinforcement Approach to Improve Learning in Deep Belief Networks

With the advent of deep learning, the number of works proposing new methods or improving existing ones has grown exponentially in recent years. In this scenario, "very deep" models have emerged, since they are expected to extract more intrinsic and abstract features while supporting better performance. However, such models suffer from the vanishing gradient problem, i.e., backpropagated values become too close to zero in their shallower layers, ultimately causing learning to stagnate. This issue was overcome in the context of convolutional neural networks by creating "shortcut connections" between layers, in a so-called deep residual learning framework. Nonetheless, a very popular deep learning technique called the Deep Belief Network still suffers from vanishing gradients when dealing with discriminative tasks. Therefore, this paper proposes the Residual Deep Belief Network, which reinforces information layer by layer to improve feature extraction and knowledge retention, supporting better discriminative performance. Experiments conducted over three public datasets demonstrate its robustness on the task of binary image classification.
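The abstract does not give the exact reinforcement rule, but its flavour can be sketched as a residual-style shortcut between stacked layers; the combination rule below is an assumption and the paper's rule may differ.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward_residual_dbn(v, weights, biases):
    """Forward pass through stacked DBN layers with layer-wise
    reinforcement: each layer's pre-activation is combined with the
    previous layer's activation (when shapes allow), echoing residual
    shortcut connections and keeping earlier information alive."""
    h = v
    for W, b in zip(weights, biases):
        pre = h @ W + b                 # standard RBM-style propagation
        h_new = sigmoid(pre)
        if h_new.shape == h.shape:      # reinforce with the layer below
            h_new = sigmoid(pre + h)
        h = h_new
    return h
```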

Read more
Artificial Intelligence

A Literature Review of Recent Graph Embedding Techniques for Biomedical Data

With the rapid development of biomedical software and hardware, a large amount of relational data interlinking genes, proteins, chemical components, drugs, diseases, and symptoms has been collected for modern biomedical research. Many graph-based learning methods have been proposed to analyze this type of data, giving deeper insight into the topology and knowledge behind the biomedical data, which greatly benefits both academic research and industrial applications for human healthcare. However, the main difficulty is how to handle the high dimensionality and sparsity of biomedical graphs. Recently, graph embedding methods have provided an effective and efficient way to address these issues: they convert graph-based data into a low-dimensional vector space in which the graph's structural properties and knowledge information are well preserved. In this survey, we conduct a literature review of recent developments and trends in applying graph embedding methods to biomedical data. We also introduce important applications and tasks in the biomedical domain, as well as associated public biomedical datasets.
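As a concrete (if classical) instance of graph embedding, the sketch below maps nodes to low-dimensional vectors via the bottom eigenvectors of the graph Laplacian; recent biomedical work typically uses richer random-walk or neural methods, but the goal, graph to low-dimensional vectors, is the same.

```python
import numpy as np

def spectral_embedding(adj, dim=2):
    """Embed each node of an undirected graph as a dim-dimensional vector
    using the smallest non-trivial eigenvectors of the graph Laplacian,
    so that strongly connected nodes receive nearby vectors."""
    deg = np.diag(adj.sum(axis=1))
    laplacian = deg - adj
    eigvals, eigvecs = np.linalg.eigh(laplacian)
    # skip the trivial constant eigenvector, keep the next `dim` ones
    return eigvecs[:, 1:dim + 1]

# Toy graph: two triangles joined by a single edge
adj = np.array([[0, 1, 1, 0, 0, 0],
                [1, 0, 1, 0, 0, 0],
                [1, 1, 0, 1, 0, 0],
                [0, 0, 1, 0, 1, 1],
                [0, 0, 0, 1, 0, 1],
                [0, 0, 0, 1, 1, 0]], dtype=float)
print(spectral_embedding(adj))  # the two triangles separate along axis 0
```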

Read more
Artificial Intelligence

A Lower Bound on DNNF Encodings of Pseudo-Boolean Constraints

Two major considerations when encoding pseudo-Boolean (PB) constraints into SAT are the size of the encoding and its propagation strength, that is, the guarantee that it behaves well under unit propagation. Several encodings with propagation strength guarantees rely upon prior compilation of the constraints into DNNF (decomposable negation normal form), BDD (binary decision diagram), or some other sub-variant. However, it has been shown that there exist PB constraints whose ordered BDD (OBDD) representations, and thus the inferred CNF encodings, all have exponential size. Since DNNFs are more succinct than OBDDs, preferring encodings via DNNF to avoid this size explosion seems a legitimate choice. Yet in this paper, we prove the existence of PB constraints whose DNNFs all require exponential size.
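Propagation strength can be seen on a tiny example. Encoding the PB constraint x1 + x2 + x3 >= 2 as the CNF "every pair of variables contains a true one", unit propagation alone recovers the forced assignments once x1 is set to false. The propagation routine below is a generic illustration, not the paper's encoding.

```python
def unit_propagate(clauses, assignment):
    """Plain unit propagation: whenever all but one literal of a clause
    are false, assign the remaining literal to satisfy the clause.
    Literals are signed ints: +v means variable v true, -v means false."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned = [l for l in clause if abs(l) not in assignment]
            satisfied = any(assignment.get(abs(l)) == (l > 0) for l in clause)
            if not satisfied and len(unassigned) == 1:
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
    return assignment

# x1 + x2 + x3 >= 2 ("at least two true") as CNF: every pair has a true one.
clauses = [[1, 2], [1, 3], [2, 3]]
# With x1 = False, a propagation-strong encoding must derive x2 = x3 = True:
print(unit_propagate(clauses, {1: False}))   # {1: False, 2: True, 3: True}
```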

Read more
