Featured Research

Artificial Intelligence

Measuring the Complexity of Domains Used to Evaluate AI Systems

There is currently a rapid increase in the number of challenge problems, benchmark datasets, and algorithmic optimization tests for evaluating AI systems. However, there is no objective measure of the relative complexity of these newly created domains. This lack of cross-domain examination is an obstacle to effective research on more general AI systems. We propose a theory for measuring the relative complexity of varied domains. The theory is then evaluated using approximations computed by a population of neural-network-based AI systems. These approximations are compared against other well-known standards and shown to match intuitions about complexity. We then demonstrate an application of the measure to show its effectiveness as a tool in varied situations. The experimental results show that the measure has promise as an effective aid in evaluating AI systems. We propose that such a complexity metric could eventually be used in computing an AI system's intelligence.
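As a concrete illustration of the population-based approximation, here is a minimal sketch in which a domain's complexity is proxied by the residual loss of a population of simple learners trained on samples from it; the logistic-regression population and the scalar score are illustrative assumptions, not the paper's exact measure.

```python
# Hedged sketch: proxy a domain's complexity by how hard it is for a
# population of randomly initialized linear models to fit samples from it.
# The population-based estimate below is an illustrative assumption.
import numpy as np

def population_complexity(X, y, population=20, steps=200, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    losses = []
    for _ in range(population):
        w = rng.normal(size=X.shape[1])
        for _ in range(steps):
            p = 1.0 / (1.0 + np.exp(-X @ w))       # sigmoid predictions
            w -= lr * X.T @ (p - y) / len(y)       # logistic-loss gradient
        p = np.clip(1.0 / (1.0 + np.exp(-X @ w)), 1e-9, 1 - 1e-9)
        losses.append(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))
    # Higher residual loss across the population -> harder (more complex) domain.
    return float(np.mean(losses))

# Toy comparison: a linearly separable domain vs. a noisy XOR-like domain.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
easy = (X[:, 0] > 0).astype(float)
hard = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(float)
print(population_complexity(X, easy), population_complexity(X, hard))
```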

Read more
Artificial Intelligence

Merging with unknown reliability

Merging beliefs depends on the relative reliability of their sources. When reliability is unknown, assuming all sources are equally reliable is unwarranted. The solution proposed in this article is to consider every reliability profile possible and to accept only what holds according to all of them. Alternatively, one source may be completely reliable, but which one is unknown. These two cases motivate two existing forms of merging: maxcons-based merging and arbitration.
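To make maxcons-based merging concrete, here is a toy sketch over literal-level beliefs; representing beliefs as sets of literals and checking consistency via complementary pairs is a simplifying assumption, not the article's full logical framework.

```python
# Hedged sketch of maxcons-based merging: only conclusions that hold in
# every maximal consistent subset of the pooled beliefs are accepted.
# Beliefs are literals like "p" or "!p"; exponential enumeration is fine
# for a toy pool but not for real belief bases.
from itertools import combinations

def neg(l):
    return l[1:] if l.startswith('!') else '!' + l

def consistent(lits):
    return not any(neg(l) in lits for l in lits)

def maxcons(pool):
    # All consistent subsets, then keep only the inclusion-maximal ones.
    cons = [set(c) for k in range(len(pool) + 1)
            for c in combinations(sorted(pool), k) if consistent(set(c))]
    return [s for s in cons if not any(s < t for t in cons)]

sources = [{'p', 'q'}, {'!p'}]
pool = set().union(*sources)
mcs = maxcons(pool)
merged = set.intersection(*mcs)   # accepted: holds in every maxcons
print(mcs, merged)                # [{'p','q'}, {'q','!p'}] and {'q'}
```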

Read more
Artificial Intelligence

Mitigating Negative Side Effects via Environment Shaping

Agents operating in unstructured environments often produce negative side effects (NSE), which are difficult to identify at design time. While an agent can learn to mitigate side effects from human feedback, such feedback is often expensive, and the rate of learning is sensitive to the agent's state representation. We examine how humans can assist an agent beyond providing feedback, exploiting their broader scope of knowledge to mitigate the impacts of NSE. We formulate this problem as a human-agent team with decoupled objectives. The agent optimizes its assigned task, during which its actions may produce NSE. The human shapes the environment through minor reconfiguration actions so as to mitigate the impacts of the agent's side effects, without affecting the agent's ability to complete its assigned task. We present an algorithm to solve this problem and analyze its theoretical properties. Through experiments with human subjects, we assess the willingness of users to perform minor environment modifications to mitigate the impacts of NSE. Empirical evaluation shows that the proposed framework can successfully mitigate NSE without affecting the agent's ability to complete its assigned task.
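The decoupled-objective loop can be sketched as follows; the `Modification` type, the cost-ordered greedy search, and the rug example are illustrative assumptions rather than the paper's actual algorithm.

```python
# Hedged sketch: the human picks the cheapest reconfiguration that reduces
# expected NSE without degrading the agent's optimal task value.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Modification:
    cost: float
    apply: Callable[[dict], dict]

def shape_environment(env, mods, task_value, expected_nse):
    base = task_value(env)                       # agent's task value before shaping
    best, best_nse = env, expected_nse(env)
    for m in sorted(mods, key=lambda m: m.cost): # try cheap modifications first
        cand = m.apply(env)
        # Constraint: shaping must not hurt the agent's assigned task.
        if task_value(cand) >= base and expected_nse(cand) < best_nse:
            best, best_nse = cand, expected_nse(cand)
    return best

# Toy: covering a rug prevents the NSE (dirtying it) without affecting the task.
env = {'rug_exposed': True}
mods = [Modification(1.0, lambda e: {**e, 'rug_exposed': False})]
shaped = shape_environment(env, mods,
                           task_value=lambda e: 10.0,
                           expected_nse=lambda e: 5.0 if e['rug_exposed'] else 0.0)
print(shaped)   # {'rug_exposed': False}
```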

Read more
Artificial Intelligence

Mitigating belief projection in explainable artificial intelligence via Bayesian Teaching

State-of-the-art deep-learning systems use decision rules that are challenging for humans to model. Explainable AI (XAI) attempts to improve human understanding but rarely accounts for how people typically reason about unfamiliar agents. We propose explicitly modeling the human explainee via Bayesian Teaching, which evaluates explanations by how much they shift explainees' inferences toward a desired goal. We assess Bayesian Teaching in a binary image classification task across a variety of contexts. Absent intervention, participants predict that the AI's classifications will match their own, but explanations generated by Bayesian Teaching improve their ability to predict the AI's judgements by moving them away from this prior belief. Bayesian Teaching further allows each case to be broken down into sub-examples (here saliency maps). These sub-examples complement whole examples by improving error detection for familiar categories, whereas whole examples help predict correct AI judgements of unfamiliar cases.
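A minimal sketch of the Bayesian Teaching selection criterion follows, assuming a toy learner model given as a lookup table of posteriors; the table and the binary hypothesis space are invented for illustration.

```python
# Hedged sketch: score a candidate explanation by the probability that a
# modeled learner, after seeing it, draws the intended target inference
# (here, that the AI's judgement differs from the learner's own).

def teaching_score(learner_posterior, explanation, target):
    # Probability the modeled learner assigns to the target inference
    # after seeing this explanation.
    return learner_posterior(explanation)[target]

def best_explanation(candidates, learner_posterior, target):
    return max(candidates,
               key=lambda e: teaching_score(learner_posterior, e, target))

# Toy learner model: posterior over {0: "AI matches my label", 1: "AI differs"}.
table = {'whole_example': [0.4, 0.6],
         'saliency_map':  [0.2, 0.8]}
print(best_explanation(table, lambda e: table[e], target=1))  # 'saliency_map'
```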

Read more
Artificial Intelligence

Modeling Global Semantics for Question Answering over Knowledge Bases

Semantic parsing, an important approach to question answering over knowledge bases (KBQA), transforms a question into a complete query graph from which the correct logical query is then generated. Existing semantic parsing approaches mainly focus on relation matching, paying less attention to the underlying internal structure of questions (e.g., the dependencies and relations between all entities in a question) when selecting the query graph. In this paper, we present gRGCN, a relational graph convolutional network (RGCN)-based model for semantic parsing in KBQA. gRGCN extracts the global semantics of questions and their corresponding query graphs, including structural semantics via the RGCN and relational semantics (label representations of relations between entities) via a hierarchical relation attention mechanism. Experiments on benchmarks show that our model outperforms off-the-shelf models.
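For intuition, here is a minimal relational GCN layer of the kind gRGCN builds on; the shapes, the omitted per-relation normalization, and the mean-pooled graph embedding are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch of an RGCN layer: each relation type r has its own weight
# matrix W_r, and a node's update sums messages from neighbors per relation
# plus a self-loop. Per-relation degree normalization is omitted for brevity.
import numpy as np

def rgcn_layer(H, edges, rel_weights, W_self):
    # H: (n, d) node features; edges: list of (src, rel, dst) triples.
    out = H @ W_self                              # self-loop transform
    for src, rel, dst in edges:
        out[dst] += H[src] @ rel_weights[rel]     # message through relation rel
    return np.maximum(out, 0.0)                   # ReLU

rng = np.random.default_rng(0)
n, d, n_rels = 4, 8, 2
H = rng.normal(size=(n, d))
rel_weights = [rng.normal(size=(d, d)) * 0.1 for _ in range(n_rels)]
edges = [(0, 0, 1), (1, 1, 2), (2, 0, 3)]
H1 = rgcn_layer(H, edges, rel_weights, rng.normal(size=(d, d)) * 0.1)
graph_embedding = H1.mean(axis=0)   # pooled vector used to rank query graphs
print(graph_embedding.shape)        # (8,)
```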

Read more
Artificial Intelligence

Modeling human visual search: A combined Bayesian searcher and saliency map approach for eye movement guidance in natural scenes

Finding objects is essential for almost any daily-life visual task. Saliency models have been useful for predicting fixation locations in natural images, but they are static, i.e., they provide no information about the time-sequence of fixations. Nowadays, one of the biggest challenges in the field is to go beyond saliency maps to predict a sequence of fixations related to a visual task, such as searching for a given target. Bayesian observer models have been proposed for this task, as they represent visual search as an active sampling process. Nevertheless, they were mostly evaluated on artificial images, and how they adapt to natural images remains largely unexplored. Here, we propose a unified Bayesian model for visual search guided by saliency maps as prior information. We validated our model with a visual search experiment in natural scenes, recording eye movements. We show that, although state-of-the-art saliency models perform well in predicting the first two fixations in a visual search task, their performance degrades to chance afterward. This suggests that saliency maps alone are good at modeling bottom-up first impressions but are not enough to explain scanpaths when top-down task information is critical. Thus, we propose to use them as priors for Bayesian searchers. This approach leads to behavior very similar to humans' for the whole scanpath, both in the percentage of targets found as a function of fixation rank and in scanpath similarity, reproducing the entire sequence of eye movements.
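The core update can be sketched on a grid, with the saliency map serving as the prior and each unsuccessful fixation downweighting locations where the target would likely have been seen; the Gaussian visibility model below is an illustrative choice, not necessarily the paper's.

```python
# Hedged sketch of a saliency-prior Bayesian searcher: fixate the posterior
# maximum, and if the target is not found, multiply the posterior by the
# probability of *not* seeing the target from that fixation.
import numpy as np

def search(saliency, target, steps=10, sigma=1.5):
    post = saliency / saliency.sum()          # prior = normalized saliency map
    ys, xs = np.indices(saliency.shape)
    scanpath = []
    for _ in range(steps):
        fix = tuple(np.unravel_index(np.argmax(post), post.shape))
        scanpath.append(fix)
        if fix == target:
            break
        # Visibility falls off with distance from the fixation point.
        vis = np.exp(-((ys - fix[0])**2 + (xs - fix[1])**2) / (2 * sigma**2))
        post *= (1.0 - vis)                   # P(target not seen | target here)
        post /= post.sum()
    return scanpath

sal = np.random.default_rng(2).random((20, 20))
print(search(sal, target=(5, 7)))
```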

Read more
Artificial Intelligence

Modular Design Patterns for Hybrid Learning and Reasoning Systems: a taxonomy, patterns and use cases

The unification of statistical (data-driven) and symbolic (knowledge-driven) methods is widely recognised as one of the key challenges of modern AI. Recent years have seen a large number of publications on hybrid neuro-symbolic AI systems. That rapidly growing literature is highly diverse and mostly empirical, and it lacks a unifying view of the large variety of these hybrid systems. In this paper we analyse a large body of recent literature and propose a set of modular design patterns for such hybrid, neuro-symbolic systems. We are able to describe the architecture of a very large number of hybrid systems by composing only a small set of elementary patterns as building blocks. The main contributions of this paper are: 1) a taxonomically organised vocabulary to describe both processes and data structures used in hybrid systems; 2) a set of 15+ design patterns for hybrid AI systems, organised into a set of elementary patterns and a set of compositional patterns; 3) an application of these design patterns in two realistic use cases for hybrid AI systems. Our patterns reveal similarities between systems that were not recognised until now. Finally, our design patterns extend and refine Kautz's earlier attempt at categorising neuro-symbolic architectures.
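The "patterns as building blocks" idea can be illustrated with a toy composition; the tags and the example pipeline below are invented for illustration and do not reproduce the paper's vocabulary.

```python
# Hedged sketch: elementary processing steps are tagged as neural or
# symbolic, and a hybrid system is described as a composition of them.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Module:
    name: str
    kind: str                      # 'neural' or 'symbolic'
    fn: Callable

def compose(modules: List[Module]) -> Callable:
    def pipeline(x):
        for m in modules:          # data flows through the pattern chain
            x = m.fn(x)
        return x
    return pipeline

# A common compositional pattern: neural perception feeding symbolic reasoning.
system = compose([
    Module('classifier', 'neural',   lambda img: 'cat'),           # stub model
    Module('kb_lookup',  'symbolic', lambda sym: {'cat': 'mammal'}[sym]),
])
print(system('pixels'))   # 'mammal'
```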

Read more
Artificial Intelligence

Modular Object-Oriented Games: A Task Framework for Reinforcement Learning, Psychology, and Neuroscience

In recent years, trends towards studying simulated games have gained momentum in the fields of artificial intelligence, cognitive science, psychology, and neuroscience. The intersections of these fields have also grown recently, as researchers increasingly study such games using both artificial agents and human or animal subjects. However, implementing games can be a time-consuming endeavor and may require a researcher to grapple with complex codebases that are not easily customized. Furthermore, interdisciplinary researchers studying some combination of artificial intelligence, human psychology, and animal neurophysiology face additional challenges, because existing platforms are designed for only one of these domains. Here we introduce Modular Object-Oriented Games, a Python task framework that is lightweight, flexible, customizable, and designed for use by machine learning, psychology, and neurophysiology researchers.
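In the spirit of such a framework, here is a hypothetical mini-example of a game assembled from swappable rule functions; this is purely illustrative and is not MOOG's actual API.

```python
# Hedged sketch of the modular idea: a game is state (sprites) plus small,
# interchangeable rule functions for physics, task logic, and reward.
# Hypothetical interface for illustration only -- NOT MOOG's actual API.
class Game:
    def __init__(self, sprites, rules, reward_fn):
        self.sprites, self.rules, self.reward_fn = sprites, rules, reward_fn

    def step(self, action):
        for rule in self.rules:          # each rule is independently swappable
            rule(self.sprites, action)
        return self.sprites, self.reward_fn(self.sprites)

def move_agent(sprites, action):
    sprites['agent'] += action           # toy 1-D motion rule

def reward(sprites):
    return 1.0 if sprites['agent'] == sprites['goal'] else 0.0

game = Game({'agent': 0, 'goal': 3}, [move_agent], reward)
for a in (1, 1, 1):
    state, r = game.step(a)
print(state, r)   # {'agent': 3, 'goal': 3} 1.0
```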

Read more
Artificial Intelligence

Modular approach to data preprocessing in ALOHA and application to a smart industry use case

Applications in the smart industry domain, such as interaction with collaborative robots using vocal commands or machine vision systems, often require the deployment of deep learning (DL) algorithms on heterogeneous low-power computing platforms. The availability of software tools and frameworks that automate different design steps can support the effective implementation of DL algorithms on embedded systems, reducing the related effort and costs. One very important aspect for the acceptance of such a framework is its extensibility, i.e., the capability to accommodate different datasets and to define customized preprocessing without requiring advanced skills. This paper presents a modular approach, integrated into the ALOHA tool flow, to support the data preprocessing and transformation pipeline. It is realized through customizable plugins and allows the tool flow to be easily extended to encompass new use cases. To demonstrate the effectiveness of the approach, we present experimental results for a keyword spotting use case and outline possible extensions to different use cases.
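A plugin-based preprocessing pipeline of the kind described might look like the following sketch; the registry decorator and the step-configuration format are hypothetical illustrations, not ALOHA's actual plugin interface.

```python
# Hedged sketch: named, chainable preprocessing plugins selected by a
# declarative list of steps (e.g., loaded from a config file).
PLUGINS = {}

def plugin(name):
    def register(fn):
        PLUGINS[name] = fn       # registry makes new transforms drop-in
        return fn
    return register

@plugin('normalize')
def normalize(x, lo=0.0, hi=1.0):
    mn, mx = min(x), max(x)
    return [lo + (v - mn) * (hi - lo) / (mx - mn) for v in x]

@plugin('frame')
def frame(x, size=4):
    # Split a 1-D signal into fixed-size frames (e.g., for keyword spotting).
    return [x[i:i + size] for i in range(0, len(x) - size + 1, size)]

def run_pipeline(x, steps):
    for name, kwargs in steps:   # steps: list of (plugin_name, kwargs)
        x = PLUGINS[name](x, **kwargs)
    return x

audio = [3, 1, 4, 1, 5, 9, 2, 6]
print(run_pipeline(audio, [('normalize', {}), ('frame', {'size': 4})]))
```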

Read more
Artificial Intelligence

Monotonicity in practice of adaptive testing

In our previous work we showed how Bayesian networks can be used for adaptive testing of student skills. Later, we took advantage of monotonicity restrictions to learn models that fit the data better. This article provides a synergy between these two phases: it evaluates Bayesian network models used for computerized adaptive testing and learned with a recently proposed monotonicity gradient algorithm. This learning method is compared with another monotone method, the isotonic regression EM algorithm. The quality of the methods is empirically evaluated on a large data set from the Czech National Mathematics Exam. Besides the advantages of the adaptive testing approach, we also observed advantageous behavior of the monotone methods, especially for small training set sizes. Another novelty of this work is the use of the reliability interval of the score distribution, which is used to predict a student's final score and grade. In the experiments we clearly show that we can shorten the test while keeping its reliability. We also show that monotonicity increases prediction quality when training data are limited. The monotone model learned by the gradient method has lower question prediction quality than unrestricted models, but it is better at the main target of this application: predicting the student's score. It is an important observation that merely optimizing model likelihood or prediction accuracy does not necessarily lead to the model that best describes the student.
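The adaptive-testing loop with a monotone answer model can be sketched as follows; the skill grid, the hand-written monotone probability table, and the entropy-based question selection are illustrative assumptions, not the paper's learned Bayesian network.

```python
# Hedged sketch: P(correct | skill) is non-decreasing in the latent skill,
# the posterior over skill is updated after each answer, and the next
# question is the one with the lowest expected posterior entropy.
import numpy as np

skills = np.arange(5)                        # discrete latent skill levels
prior = np.full(5, 0.2)
# Rows: questions; columns: P(correct | skill), monotone in skill.
Q = np.array([[0.2, 0.4, 0.6, 0.8, 0.90],
              [0.1, 0.2, 0.5, 0.8, 0.95],
              [0.5, 0.6, 0.7, 0.8, 0.85]])

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def expected_entropy(post, q):
    p_c = (post * q).sum()                   # marginal P(correct)
    post_c = post * q / p_c
    post_w = post * (1 - q) / (1 - p_c)
    return p_c * entropy(post_c) + (1 - p_c) * entropy(post_w)

def next_question(post, asked):
    cands = [i for i in range(len(Q)) if i not in asked]
    return min(cands, key=lambda i: expected_entropy(post, Q[i]))

post, asked = prior.copy(), set()
q = next_question(post, asked)               # most informative first question
post = post * Q[q] / (post * Q[q]).sum()     # Bayes update after a correct answer
print(q, (post * skills).sum())              # chosen question, predicted skill
```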

Read more
