Publication


Featured research published by Changhe Yuan.


Mathematical and Computer Modelling | 2006

Importance sampling algorithms for Bayesian networks: Principles and performance

Changhe Yuan; Marek J. Druzdzel

Precision achieved by stochastic sampling algorithms for Bayesian networks typically deteriorates in the face of extremely unlikely evidence. In addressing this problem, importance sampling algorithms seem to be most successful. We discuss the principles underlying importance sampling algorithms in Bayesian networks. After that, we describe Evidence Pre-propagation Importance Sampling (EPIS-BN), an importance sampling algorithm that computes an importance function using two techniques: loopy belief propagation [K. Murphy, Y. Weiss, M. Jordan, Loopy belief propagation for approximate inference: An empirical study, in: Proceedings of the Fifteenth Annual Conference on Uncertainty in Artificial Intelligence, UAI-99, San Francisco, CA, Morgan Kaufmann Publishers, 1999, pp. 467-475; Y. Weiss, Correctness of local probability propagation in graphical models with loops, Neural Computation 12 (1) (2000) 1-41] and the ε-cutoff heuristic [J. Cheng, M.J. Druzdzel, AIS-BN: An adaptive importance sampling algorithm for evidential reasoning in large Bayesian networks, Journal of Artificial Intelligence Research 13 (2000) 155-188]. We tested the performance of EPIS-BN on three large real Bayesian networks and observed that on all three networks it outperforms AIS-BN [J. Cheng, M.J. Druzdzel, AIS-BN: An adaptive importance sampling algorithm for evidential reasoning in large Bayesian networks, Journal of Artificial Intelligence Research 13 (2000) 155-188], the current state-of-the-art algorithm, while avoiding its costly learning stage. We also compared EPIS-BN to Gibbs sampling and discuss the role of the ε-cutoff heuristic in importance sampling for Bayesian networks.
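The core mechanism the abstract describes — sampling from an importance function and correcting each sample by the weight p/q — can be sketched on a toy two-node network. The network and its numbers are made up for illustration; this is not the EPIS-BN importance function itself:

```python
import random

# Toy two-node network A -> B with made-up CPTs:
# P(A=1) = 0.3; P(B=1 | A=1) = 0.9; P(B=1 | A=0) = 0.01.
# Estimate P(B=1) by sampling A from a proposal Q(A=1) = q and
# reweighting each sample by P(a) / Q(a).
P_A1 = 0.3
P_B1_GIVEN_A = {1: 0.9, 0: 0.01}

def importance_estimate(q, n=100_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        a = 1 if rng.random() < q else 0
        p_a = P_A1 if a == 1 else 1.0 - P_A1
        q_a = q if a == 1 else 1.0 - q
        total += (p_a / q_a) * P_B1_GIVEN_A[a]  # importance weight times f(a)
    return total / n

# Exact answer: 0.3 * 0.9 + 0.7 * 0.01 = 0.277
print(importance_estimate(0.5))
```

With a proposal that covers both states of A well, the weighted average converges to the exact marginal; the papers above are about choosing that proposal wisely when the evidence makes most samples nearly worthless.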


International Joint Conference on Artificial Intelligence | 2011

Learning optimal Bayesian networks using A* search

Changhe Yuan; Brandon M. Malone; Xiaojian Wu

This paper formulates learning an optimal Bayesian network as a shortest-path problem and introduces an A* search algorithm to solve it. With the guidance of a consistent heuristic, the algorithm learns an optimal Bayesian network by searching only the most promising parts of the solution space. Empirical results show that the A* search algorithm significantly improves the time and space efficiency of existing methods on a set of benchmark datasets.
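The shortest-path framing can be illustrated with a generic A* routine over a toy "order graph" of three variables, where each search node is the set of variables already placed in an ordering and each edge pays the best local score of the added variable. The local-score numbers and the zero heuristic are placeholders, not the consistent heuristic the paper develops:

```python
import heapq
import itertools

def a_star(start, goal, neighbors, h):
    """Generic A*: neighbors(n) yields (edge_cost, successor) pairs,
    h is a consistent (hence admissible) heuristic."""
    tie = itertools.count()  # heap tie-breaker so nodes are never compared
    frontier = [(h(start), next(tie), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        _, _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if g > best_g.get(node, float("inf")):
            continue  # stale queue entry
        for cost, nxt in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), next(tie), g2, nxt, path + [nxt]))
    return None

# Toy edge cost: a made-up "best local score" of variable x given that
# parents may be chosen from the already-placed variables.
def edge_cost(x, placed):
    base = {0: 5.0, 1: 4.0, 2: 6.0}
    return base[x] - (1.0 if (x + 1) % 3 in placed else 0.0)

def neighbors(placed):
    for x in {0, 1, 2} - placed:
        yield edge_cost(x, placed), placed | {x}

goal = frozenset({0, 1, 2})
cost, path = a_star(frozenset(), goal, neighbors, lambda s: 0.0)
print(cost, [sorted(s) for s in path])
```

A path from the empty set to the full set corresponds to a variable ordering, and the cheapest such path corresponds to an optimal network under the toy scores.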


BMC Bioinformatics | 2012

Empirical evaluation of scoring functions for Bayesian network model selection

Zhifa Liu; Brandon M. Malone; Changhe Yuan

In this work, we empirically evaluate the capability of various scoring functions of Bayesian networks for recovering true underlying structures. Similar investigations have been carried out before, but they typically relied on approximate learning algorithms to learn the network structures. The suboptimal structures found by the approximation methods have unknown quality and may affect the reliability of their conclusions. Our study uses an optimal algorithm to learn Bayesian network structures from datasets generated from a set of gold-standard Bayesian networks. Because all optimal algorithms learn equivalent networks, this ensures that only the choice of scoring function affects the learned networks. Another shortcoming of the previous studies stems from their use of random synthetic networks as test cases; there is no guarantee that these networks reflect real-world data. We use real-world data to generate our gold-standard structures, so our experimental design more closely approximates real-world situations. A major finding of our study suggests that, in contrast to results reported by several prior works, the Minimum Description Length (MDL) score (or equivalently, the Bayesian information criterion, BIC) consistently outperforms other scoring functions such as Akaike's information criterion (AIC), the Bayesian Dirichlet equivalence score (BDeu), and the factorized normalized maximum likelihood (fNML) in recovering the underlying Bayesian network structures. We believe this finding results from two aspects of our design: datasets generated from real-world applications rather than from the random processes used in previous studies, and learning algorithms that select high-scoring structures rather than random models. Other findings of our study support existing work, e.g., large sample sizes result in learning structures closer to the true underlying structure; the BDeu score is sensitive to its parameter settings; and fNML performs well on small datasets.
We also tested a greedy hill climbing algorithm and observed similar results as the optimal algorithm.
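The MDL/BIC score the study favors has a standard closed form: the maximum-likelihood log-likelihood of a variable given its parents, minus a penalty of (log N)/2 per free parameter. A minimal sketch on a made-up discrete dataset (the data and column indices are illustrative only):

```python
import math
from collections import Counter

def bic_score(data, child, parents):
    """BIC/MDL local score of `child` given `parents` (column indices
    into rows of a discrete dataset): maximum-likelihood log-likelihood
    minus 0.5 * log(N) per free parameter."""
    n = len(data)
    joint = Counter((tuple(row[p] for p in parents), row[child]) for row in data)
    pa_counts = Counter(tuple(row[p] for p in parents) for row in data)
    loglik = sum(c * math.log(c / pa_counts[pa]) for (pa, _), c in joint.items())
    r = len({row[child] for row in data})  # arity of the child
    q = len(pa_counts)                     # observed parent configurations
    return loglik - 0.5 * math.log(n) * q * (r - 1)

# A child that truly depends on its parent scores higher with the edge:
data = [(0, 0)] * 40 + [(0, 1)] * 10 + [(1, 1)] * 40 + [(1, 0)] * 10
print(bic_score(data, child=1, parents=[0]))  # with the true edge 0 -> 1
print(bic_score(data, child=1, parents=[]))   # without it
```

Structure learning sums such local scores over all variables and searches for the parent sets that maximize the total.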


Journal of Artificial Intelligence Research | 2011

Most relevant explanation in Bayesian networks

Changhe Yuan; Heejin Lim; Tsai-Ching Lu

A major inference task in Bayesian networks is explaining why some variables are observed in their particular states using a set of target variables. Existing methods for solving this problem often generate explanations that are either too simple (underspecified) or too complex (overspecified). In this paper, we introduce a method called Most Relevant Explanation (MRE) which finds a partial instantiation of the target variables that maximizes the generalized Bayes factor (GBF) as the best explanation for the given evidence. Our study shows that GBF has several theoretical properties that enable MRE to automatically identify the most relevant target variables in forming its explanation. In particular, conditional Bayes factor (CBF), defined as the GBF of a new explanation conditioned on an existing explanation, provides a soft measure on the degree of relevance of the variables in the new explanation in explaining the evidence given the existing explanation. As a result, MRE is able to automatically prune less relevant variables from its explanation. We also show that CBF is able to capture well the explaining-away phenomenon that is often represented in Bayesian networks. Moreover, we define two dominance relations between the candidate solutions and use the relations to generalize MRE to find a set of top explanations that is both diverse and representative. Case studies on several benchmark diagnostic Bayesian networks show that MRE is often able to find explanatory hypotheses that are not only precise but also concise.
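The generalized Bayes factor, GBF(x; e) = P(e | x) / P(e | ¬x), can be computed by brute force on a small hypothetical two-cause network X → E ← Y. The CPT numbers below are invented purely to show the pruning behavior the abstract describes: adding an irrelevant variable to an explanation lowers its GBF.

```python
from itertools import product

# Hypothetical CPTs: P(X=1) = 0.2, P(Y=1) = 0.5, and P(E=1 | x, y) below.
def joint(x, y, e):
    px = 0.2 if x else 0.8
    py = 0.5
    pe1 = 0.9 if x else (0.3 if y else 0.05)
    return px * py * (pe1 if e else 1.0 - pe1)

def gbf(assign):
    """GBF(assign; E=1) = P(E=1 | assign) / P(E=1 | complement of assign)."""
    def p_e_given(pred):
        num = den = 0.0
        for x, y in product((0, 1), repeat=2):
            if pred(x, y):
                den += joint(x, y, 0) + joint(x, y, 1)
                num += joint(x, y, 1)
        return num / den
    inside = lambda x, y: all({"x": x, "y": y}[k] == v for k, v in assign.items())
    return p_e_given(inside) / p_e_given(lambda x, y: not inside(x, y))

print(gbf({"x": 1}))            # the concise explanation {X=1}
print(gbf({"x": 1, "y": 1}))    # adding the irrelevant Y lowers the GBF
```

Under these numbers X alone explains E = 1, so the partial instantiation {X=1} scores higher than the fuller {X=1, Y=1}, which is exactly the relevance-based pruning MRE exploits.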


International Journal of Approximate Reasoning | 2007

Theoretical analysis and practical insights on importance sampling in Bayesian networks

Changhe Yuan; Marek J. Druzdzel

The AIS-BN algorithm [J. Cheng, M.J. Druzdzel, AIS-BN: An adaptive importance sampling algorithm for evidential reasoning in large Bayesian networks, Journal of Artificial Intelligence Research 13 (2000) 155-188] is a successful importance sampling-based algorithm for Bayesian networks that relies on two heuristic methods to obtain an initial importance function: ε-cutoff, which replaces small probabilities in the conditional probability tables with a larger ε, and setting the probability distributions of the parents of evidence nodes to uniform. However, it was not well understood why these simple heuristics are so effective. In this paper, we point out that their effectiveness is due to a practical requirement on the importance function: a good importance function should possess thicker tails than the actual posterior probability distribution. By studying the basic assumptions behind importance sampling and the properties of importance sampling in Bayesian networks, we develop several theoretical insights into the desirability of thick tails for importance functions. These insights not only shed light on the success of the two heuristics of AIS-BN, but also provide a common theoretical basis for several other successful heuristic methods.
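A minimal sketch of the ε-cutoff idea, assuming the simplest clip-and-renormalize variant (the published heuristic has more detail): raising near-zero probabilities in the importance function thickens its tails, which caps the worst-case importance weight p/q wherever the target distribution has support.

```python
def epsilon_cutoff(dist, eps=0.05):
    """Raise every probability below `eps` to `eps`, then renormalize."""
    clipped = [max(p, eps) for p in dist]
    z = sum(clipped)
    return [p / z for p in clipped]

posterior = [0.3, 0.7]      # (hypothetical) target distribution
proposal = [0.001, 0.999]   # a poor, thin-tailed importance function

# Worst-case importance weight before and after thickening the tails:
print(max(p / q for p, q in zip(posterior, proposal)))                  # ~300
print(max(p / q for p, q in zip(posterior, epsilon_cutoff(proposal))))  # ~6.3
```

The huge weight on the rare state is what inflates the variance of the estimator; after the cutoff the weights stay modest, which is the thick-tail requirement in miniature.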


Canadian Conference on Artificial Intelligence | 2007

Improving Importance Sampling by Adaptive Split-Rejection Control in Bayesian Networks

Changhe Yuan; Marek J. Druzdzel

Importance sampling-based algorithms are a popular alternative when Bayesian network models are too large or too complex for exact algorithms. However, importance sampling is sensitive to the quality of the importance function. A bad importance function often leads to much oscillation in the sample weights, and, hence, poor estimation of the posterior probability distribution. To address this problem, we propose the adaptive split-rejection control technique to adjust the samples with extremely large or extremely small weights, which contribute most to the variance of an importance sampling estimator. Our results show that when we adopt this technique in the EPIS-BN algorithm [14], adaptive split-rejection control helps to achieve significantly better results.
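One plausible reading of a split/rejection step can be sketched as follows; the thresholds and the exact splitting rule are assumptions for illustration, not the paper's precise procedure. Heavy samples are split into equal-weight copies and light samples face Russian-roulette rejection, both in a way that preserves the expected total weight, so the estimator stays unbiased:

```python
import random

def split_reject(samples, low=0.1, high=10.0, rng=random.Random(0)):
    """Sketch: split a sample with weight > `high` into k equal lighter
    copies; keep a sample with weight < `low` only with probability
    w / low, raising its weight to `low` if it survives. Both moves
    preserve expected total weight."""
    out = []
    for x, w in samples:
        if w > high:
            k = int(w // high) + 1      # number of copies after the split
            out.extend([(x, w / k)] * k)
        elif w < low:
            if rng.random() < w / low:  # Russian-roulette rejection
                out.append((x, low))
        else:
            out.append((x, w))
    return out

kept = split_reject([("s", 100.0)])
print(len(kept), sum(w for _, w in kept))  # 11 lighter copies, total weight ~100
```

Taming the extreme weights this way reduces the variance of the weighted average without biasing it, which is the effect the abstract attributes to the technique.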


Uncertainty in Artificial Intelligence | 2002

An importance sampling algorithm based on evidence pre-propagation

Changhe Yuan; Marek J. Druzdzel


Uncertainty in Artificial Intelligence | 2004

Annealed MAP

Changhe Yuan; Tsai-Ching Lu; Marek J. Druzdzel


Uncertainty in Artificial Intelligence | 2011

Improving the scalability of optimal Bayesian network learning with external-memory frontier breadth-first branch and bound search

Brandon M. Malone; Changhe Yuan; Eric A. Hansen; Susan M. Bridges


International Joint Conference on Artificial Intelligence | 2009

Efficient computation of jointree bounds for systematic MAP search

Changhe Yuan; Eric A. Hansen

Collaboration


Dive into Changhe Yuan's collaborations.

Top Co-Authors

Marek J. Druzdzel

Bialystok University of Technology

Brandon M. Malone

Mississippi State University

Eric A. Hansen

Mississippi State University

Xiaojian Wu

University of Massachusetts Amherst

Xiaoxun Sun

University of Southern California

Susan M. Bridges

Mississippi State University

Xiaolu Liu

Mississippi State University
