Jhonatan de S. Oliveira
University of Regina
Publication
Featured research published by Jhonatan de S. Oliveira.
probabilistic graphical models | 2014
Cory J. Butz; Jhonatan de S. Oliveira; Anders L. Madsen
Variable Elimination (VE) answers a query posed to a Bayesian network (BN) by manipulating the conditional probability tables of the BN. Each successive query is answered in the same manner. In this paper, we present an inference algorithm that is aimed at maximizing the reuse of past computation but does not involve precomputation. Compared to VE and a variant of VE incorporating precomputation, our approach fares favourably in preliminary experimental results.
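The core VE operations the abstract refers to, multiplying factors and summing variables out, can be sketched on a toy two-variable network. The network, factor representation, and numbers below are illustrative only; they are not taken from the paper's experiments.

```python
from itertools import product

def multiply(f, g):
    """Pointwise product of two factors, each given as (variables, table)."""
    fv, ft = f
    gv, gt = g
    vars_ = fv + [v for v in gv if v not in fv]
    table = {}
    for assign in product([0, 1], repeat=len(vars_)):
        a = dict(zip(vars_, assign))
        fa = tuple(a[v] for v in fv)
        ga = tuple(a[v] for v in gv)
        table[assign] = ft[fa] * gt[ga]
    return vars_, table

def sum_out(f, var):
    """Marginalize a variable out of a factor."""
    fv, ft = f
    i = fv.index(var)
    vars_ = fv[:i] + fv[i + 1:]
    table = {}
    for assign, p in ft.items():
        key = assign[:i] + assign[i + 1:]
        table[key] = table.get(key, 0.0) + p
    return vars_, table

# Tiny BN A -> B with binary variables: CPTs P(A) and P(B|A).
pA = (["A"], {(0,): 0.6, (1,): 0.4})
pBgA = (["A", "B"], {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8})

# Answer the query P(B) by eliminating A: multiply, then sum out.
pB = sum_out(multiply(pA, pBgA), "A")
print(pB)  # P(B=0) = 0.6*0.9 + 0.4*0.2 = 0.62
```

Reuse in the sense of the abstract would mean caching intermediate factors such as `multiply(pA, pBgA)` so a later query need not rebuild them.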
computational intelligence | 2017
Cory J. Butz; Jhonatan de S. Oliveira; André E. dos Santos
We suggest Darwinian Networks (DNs) as a simplification of working with Bayesian networks (BNs). DNs adapt a handful of well‐known concepts in biology into a single framework that is surprisingly simple yet remarkably robust. With respect to modeling, on one hand, DNs not only represent BNs but also faithfully represent the testing of independencies in a more straightforward fashion. On the other hand, with respect to three exact inference algorithms in BNs, DNs simplify each of them while unifying all of them. DNs can determine good elimination orderings using the same platform as used for modeling and inference. Finally, we demonstrate how DNs can represent two additional frameworks. Practical benefits of DNs include faster algorithms for inference and modeling.
International Journal of Approximate Reasoning | 2016
Cory J. Butz; Jhonatan de S. Oliveira; Anders L. Madsen
Variable elimination (VE) and join tree propagation (JTP) are two alternatives for inference in Bayesian networks (BNs). VE, which can be viewed as one-way propagation in a join tree, answers each query directly against the BN, meaning that computation can be repeated across queries. On the other hand, answering a single query with JTP involves two-way propagation, of which some computation may remain unused. In this paper, we propose marginal tree inference (MTI) as a new approach to exact inference in discrete BNs. MTI seeks to avoid recomputation, while at the same time ensuring that no constructed probability information remains unused. Thereby, MTI stakes out middle ground between VE and JTP. The usefulness of MTI is demonstrated in multiple probabilistic reasoning sessions.
international conference on industrial, engineering and other applications of applied intelligent systems | 2018
Cory J. Butz; André E. dos Santos; Jhonatan de S. Oliveira; John Stavrinides
This paper describes a novel approach to study bacterial relationships in soil datasets using probabilistic graphical models. We demonstrate how to access and reformat publicly available datasets in order to apply machine learning techniques. We first learn a Bayesian network in order to read independencies in linear time between bacterial community characteristics. These independencies are useful in understanding the semantic relationships between bacteria within communities. Next, we learn a Sum-Product network in order to perform inference in linear time. Here, inference can be conducted to answer traditional queries, involving posterior probabilities, or MPE queries, requesting the most likely values of the non-evidence variables given evidence. Our results extend the literature by showing that known relationships between soil bacteria holding in one or a few datasets in fact hold across at least 3500 diverse datasets. This study paves the way for future large-scale studies of agricultural, health, and environmental applications, for which data are publicly available.
computational intelligence | 2018
Cory J. Butz; André E. dos Santos; Jhonatan de S. Oliveira; Christophe Gonzales
Testing independencies is a fundamental task in reasoning with Bayesian networks (BNs). In practice, d‐separation is often used for this task, since it has linear‐time complexity. However, many have had difficulties understanding d‐separation in BNs. An equivalent method that is easier to understand, called m‐separation, transforms the problem from directed separation in BNs into classical separation in undirected graphs. Two main steps of this transformation are pruning the BN and adding undirected edges.
International Journal of Approximate Reasoning | 2018
Cory J. Butz; Jhonatan de S. Oliveira; André E. dos Santos; Anders L. Madsen
We propose Simple Propagation (SP) as a new join tree propagation algorithm for exact inference in discrete Bayesian networks. We establish the correctness of SP. The striking feature of SP is that its message construction exploits the factorization of potentials at a sending node, but without the overhead of building and examining graphs as done in Lazy Propagation (LP). Experimental results on optimal (or close to optimal) join trees built from numerous benchmark Bayesian networks show that SP is often faster than LP.
International Journal of Approximate Reasoning | 2018
Cory J. Butz; André E. dos Santos; Jhonatan de S. Oliveira; Christophe Gonzales
Directed separation (d-separation) played a fundamental role in the founding of Bayesian networks (BNs) and continues to be useful today in a wide range of applications. Given an independence to be tested, current implementations of d-separation explore the active part of a BN. On the other hand, an overlooked property of d-separation implies that d-separation need only consider the relevant part of a BN. We propose a new method for testing independencies in BNs, called relevant path separation (rp-separation), which explores the intersection between the active and relevant parts of a BN. Favourable experimental results are reported.
canadian conference on artificial intelligence | 2017
Jhonatan de S. Oliveira; Cory J. Butz; André E. dos Santos
Sum-product networks (SPNs) are a deep learning model that have shown impressive results in several artificial intelligence applications. Tractable inference in practice requires an SPN to be complete and either consistent or decomposable. These properties can be verified using the definition of scope. In fact, the notion of scope can be used to define SPNs when they are interpreted as hierarchically structured latent variables in mixture models.
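The scope-based verification mentioned above can be sketched for a toy SPN: the scope of a node is the set of variables under it, a sum node is complete when its children share one scope, and a product node is decomposable when its children's scopes are pairwise disjoint. The node classes and the example network below are illustrative assumptions, not the paper's formalism.

```python
class Leaf:
    def __init__(self, var):
        self.var = var

class Sum:
    def __init__(self, children, weights):
        self.children, self.weights = children, weights

class Product:
    def __init__(self, children):
        self.children = children

def scope(node):
    """Set of variables appearing in leaves below this node."""
    if isinstance(node, Leaf):
        return {node.var}
    return set().union(*(scope(c) for c in node.children))

def complete(node):
    """Every sum node's children must have identical scopes."""
    if isinstance(node, Leaf):
        return True
    ok = all(complete(c) for c in node.children)
    if isinstance(node, Sum):
        scopes = [scope(c) for c in node.children]
        ok = ok and all(s == scopes[0] for s in scopes)
    return ok

def decomposable(node):
    """Every product node's children must have pairwise disjoint scopes."""
    if isinstance(node, Leaf):
        return True
    ok = all(decomposable(c) for c in node.children)
    if isinstance(node, Product):
        scopes = [scope(c) for c in node.children]
        ok = ok and sum(len(s) for s in scopes) == len(set().union(*scopes))
    return ok

# A mixture over {A, B}: each component is a product of independent leaves.
spn = Sum(
    [Product([Leaf("A"), Leaf("B")]), Product([Leaf("A"), Leaf("B")])],
    [0.7, 0.3],
)
print(scope(spn), complete(spn), decomposable(spn))
```

Completeness plus decomposability is what licenses linear-time inference: each sum mixes distributions over the same variables, and each product factorizes over disjoint ones.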
canadian conference on artificial intelligence | 2017
André E. dos Santos; Cory J. Butz; Jhonatan de S. Oliveira
Sum-Product Networks (SPNs) are a probabilistic graphical model with deep learning applications. A key feature in an SPN is that inference is linear with respect to the size of the network under certain structural constraints. Initial studies of SPNs have investigated transforming SPNs into Bayesian Networks (BNs). Two such methods modify the SPN before conversion. One method modifies the SPN into a normal form. The resulting BN does not contain edges between latent variables. The other method considered here augments the SPN with twin nodes. Here, the constructed BN does contain edges between latent variables, thereby encoding a richer set of dependencies among them.
canadian conference on artificial intelligence | 2016
Cory J. Butz; André E. dos Santos; Jhonatan de S. Oliveira; Christophe Gonzales
Testing independencies is a fundamental task in reasoning with Bayesian networks (BNs). In practice, d-separation is often utilized for this task, since it has linear-time complexity. However, many have had difficulties understanding d-separation in BNs. An equivalent method that is easier to understand, called m-separation, transforms the problem from directed separation in BNs into classical separation in undirected graphs. Two main steps of this transformation are pruning the BN and adding undirected edges. In this paper, we propose u-separation as an even simpler method for testing independencies in a BN. Our approach also converts the problem into classical separation in an undirected graph. However, our method is based upon the novel concepts of inaugural variables and rationalization. The primary advantage of u-separation over m-separation is that m-separation can prune unnecessarily and add superfluous edges, whereas u-separation avoids both. Hence, u-separation is a simpler method in this respect.