
Publication


Featured research published by Max Henrion.


Uncertainty in Artificial Intelligence | 1988

Propagating Uncertainty in Bayesian Networks by Probabilistic Logic Sampling

Max Henrion

Bayesian belief networks and influence diagrams are attractive approaches for representing uncertain expert knowledge in coherent probabilistic form. But current algorithms for propagating updates are either restricted to singly connected networks (Chow trees), as the scheme of Pearl and Kim, or they are liable to exponential complexity when dealing with multiply connected networks. Probabilistic logic sampling is a new scheme employing stochastic simulation which can make probabilistic inferences in large, multiply connected networks, with an arbitrary degree of precision controlled by the sample size. A prototype implementation, named Pulse, is illustrated, which provides efficient methods to estimate conditional probabilities, perform systematic sensitivity analysis, and compute evidence weights to explain inferences.
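The core of probabilistic logic sampling is simple enough to sketch directly: sample every variable forward in topological order, discard samples that contradict the observed evidence, and estimate posteriors from the surviving samples, with precision governed by the sample size. The tiny network, variable names, and probabilities below are illustrative assumptions, not taken from the paper or from Pulse.

```python
import random

# A minimal sketch of probabilistic logic sampling on a tiny belief network.
# Structure: Rain -> WetGrass <- Sprinkler (illustrative only).
P_RAIN = 0.2
P_SPRINKLER = 0.1
# P(WetGrass = True | Rain, Sprinkler)
P_WET = {(True, True): 0.99, (True, False): 0.9,
         (False, True): 0.8, (False, False): 0.05}

def forward_sample():
    """Sample every variable in topological order (parents before children)."""
    rain = random.random() < P_RAIN
    sprinkler = random.random() < P_SPRINKLER
    wet = random.random() < P_WET[(rain, sprinkler)]
    return rain, sprinkler, wet

def logic_sampling(evidence_wet=True, n=100_000):
    """Estimate P(Rain = True | WetGrass = evidence_wet) by rejecting
    samples that disagree with the evidence."""
    kept = rain_true = 0
    for _ in range(n):
        rain, _, wet = forward_sample()
        if wet != evidence_wet:
            continue            # reject samples inconsistent with the evidence
        kept += 1
        rain_true += rain
    return rain_true / kept if kept else float("nan")

if __name__ == "__main__":
    print(f"P(Rain | WetGrass) ~ {logic_sampling():.3f}")
```

As in the paper, accuracy improves with the number of samples; the scheme slows down when the evidence itself is very improbable, since most samples are then rejected.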


International Journal of Approximate Reasoning | 1988

Decision theory in expert systems and artificial intelligence

Eric Horvitz; John S. Breese; Max Henrion

Despite their different perspectives, artificial intelligence (AI) and the disciplines of decision science have common roots and strive for similar goals. This paper surveys the potential for addressing problems in representation, inference, knowledge engineering, and explanation within the decision-theoretic framework. Recent analyses of the restrictions of several traditional AI reasoning techniques, coupled with the development of more tractable and expressive decision-theoretic representation and inference strategies, have stimulated renewed interest in decision theory and decision analysis. We describe early experience with simple probabilistic schemes for automated reasoning, review the dominant expert-system paradigm, and survey some recent research at the crossroads of AI and decision science. In particular, we present the belief network and influence diagram representations. Finally, we discuss issues that have not been studied in detail within the expert-systems setting, yet are crucial for developing theoretical methods and computational architectures for automated reasoners.


Computer-Aided Design | 1979

Geometric modelling: a survey

A. Baer; Charles M. Eastman; Max Henrion

Computer programs are being developed to aid the design of physical systems ranging from individual mechanical parts to entire buildings or ships. These efforts highlight the importance of computer models of three dimensional objects. Issues and alternatives in geometric modelling are discussed and illustrated with comparisons of 11 existing modelling systems, in particular coherently-structured models of polyhedral solids where the faces may be either planar or curved. Four categories of representation are distinguished: data representations that store full, explicit shape information; definition languages with which the user can enter descriptions of shapes into the system, and which can constitute procedural representations; special subsets of the information produced by application programs; and conceptual models that define the logical structure of the data representation and/or definition language.
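To make the first of these categories concrete, the sketch below shows a minimal explicit data representation of a polyhedral solid as vertices plus faces given as vertex-index loops, with edges derived from the loops. The class name, fields, and unit-cube example are illustrative assumptions, not drawn from any of the 11 surveyed systems.

```python
from dataclasses import dataclass, field

# A minimal sketch of an explicit, coherently structured representation of a
# polyhedral solid: a vertex table plus faces stored as vertex-index loops.
@dataclass
class Polyhedron:
    vertices: list[tuple[float, float, float]] = field(default_factory=list)
    faces: list[list[int]] = field(default_factory=list)   # vertex-index loops

    def edges(self):
        """Derive the edge set from the face loops (each edge is shared by two faces)."""
        seen = set()
        for loop in self.faces:
            for a, b in zip(loop, loop[1:] + loop[:1]):
                seen.add((min(a, b), max(a, b)))
        return sorted(seen)

    def euler_check(self):
        """V - E + F should equal 2 for a simple closed polyhedron."""
        return len(self.vertices) - len(self.edges()) + len(self.faces) == 2

# Unit cube: vertex index = 4x + 2y + z for x, y, z in {0, 1}.
unit_cube = Polyhedron(
    vertices=[(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)],
    faces=[[0, 1, 3, 2], [4, 6, 7, 5], [0, 4, 5, 1],
           [2, 3, 7, 6], [0, 2, 6, 4], [1, 5, 7, 3]],
)
print(unit_cube.euler_check())   # True: 8 - 12 + 6 == 2
```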


AI Magazine | 1991

Decision analysis and expert systems

Max Henrion; John S. Breese; Eric Horvitz

Decision analysis and knowledge-based expert systems share some common goals. Both technologies are designed to improve human decision making; they attempt to do this by formalizing human expert knowledge so that it is amenable to mechanized reasoning. However, the technologies are based on rather different principles. Decision analysis is the application of the principles of decision theory supplemented with insights from the psychology of judgment. Expert systems, at least as we use this term here, involve the application of various logical and computational techniques of AI to the representation of human knowledge for automated inference. AI and decision theory both emerged from research on systematic methods for problem solving and decision making that first blossomed in the 1940s. They even share a common progenitor, John von Neumann, who was a coauthor with Oskar Morgenstern of the best-known formulation of decision theory, as well as a key player in the development of the digital computer.


Uncertainty in Artificial Intelligence | 1994

Knowledge engineering for large belief networks

Malcolm Pradhan; Gregory M. Provan; Blackford Middleton; Max Henrion

We present several techniques for knowledge engineering of large belief networks (BNs) based on our experience with a network derived from a large medical knowledge base. The noisy-MAX, a generalization of the noisy-OR gate, is used to model causal independence in a BN with multivalued variables. We describe the use of leak probabilities to enforce the closed-world assumption in our model. We present Netview, a visualization tool based on causal independence and the use of leak probabilities. The Netview software allows knowledge engineers to dynamically view subnetworks for knowledge engineering, and it provides version control for editing a BN. Netview generates subnetworks in which leak probabilities are dynamically updated to reflect the missing portions of the network.
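The binary special case of the noisy-MAX, a noisy-OR gate with a leak term, is easy to sketch: the effect is absent only if the leak and every present cause independently fail to produce it. The cause names and parameter values below are illustrative assumptions, not figures from the medical network described in the paper.

```python
# A minimal sketch of a binary noisy-OR gate with a leak probability,
# the two-valued special case of the noisy-MAX. Names and values are
# illustrative only.

def noisy_or(link_probs, present, leak=0.01):
    """P(effect present | the given causes are present).

    link_probs : dict mapping cause name -> probability that the cause alone
                 produces the effect
    present    : set of cause names present in this case
    leak       : probability the effect appears with no modelled cause,
                 standing in for everything outside the model
    """
    q = 1.0 - leak                      # effect absent from the leak alone
    for cause in present:
        q *= 1.0 - link_probs[cause]    # each present cause fails independently
    return 1.0 - q

links = {"disease_a": 0.8, "disease_b": 0.3}
print(noisy_or(links, set()))                       # leak only: 0.01
print(noisy_or(links, {"disease_a"}))               # 1 - 0.99*0.2 = 0.802
print(noisy_or(links, {"disease_a", "disease_b"}))  # 1 - 0.99*0.2*0.7 = 0.8614
```

The leak term is what lets a subnetwork behave sensibly when the rest of the network is hidden: causes outside the visible fragment are folded into an adjusted leak rather than dropped.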


Artificial Intelligence | 1996

The sensitivity of belief networks to imprecise probabilities: an experimental investigation

Malcolm Pradhan; Max Henrion; Gregory M. Provan; Brendan Del Favero; Kurt Huang

Bayesian belief networks are being increasingly used as a knowledge representation for reasoning under uncertainty. Some researchers have questioned the practicality of obtaining the numerical probabilities with sufficient precision to create belief networks for large-scale applications. In this work, we investigate how precise the probabilities need to be by measuring how imprecision in the probabilities affects diagnostic performance. We conducted a series of experiments on a set of real-world belief networks for medical diagnosis in liver and bile disease. We examined the effects on diagnostic performance of (1) varying the mappings from qualitative frequency weights into numerical probabilities, (2) adding random noise to the numerical probabilities, (3) simplifying from quaternary domains for diseases and findings—absent, mild, moderate, and severe—to binary domains—absent and present, and (4) using test cases that contain diseases outside the network. We found that even extreme differences in the probability mappings and large amounts of noise lead to only modest reductions in diagnostic performance. We found no significant effect of the simplification from quaternary to binary representation. We also found that outside diseases degraded performance modestly. Overall, these findings indicate that even highly imprecise input probabilities may not impair diagnostic performance significantly, and that simple binary representations may often be adequate. These findings of robustness suggest that belief networks are a practical representation without requiring undue precision.
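A minimal sketch of the kind of perturbation used in experiment (2) is shown below: each conditional probability is jittered with random noise on the log-odds scale so that it stays inside (0, 1), and diagnosis can then be rerun on the noisy network. The Gaussian noise model, the sigma value, and the toy CPT are illustrative assumptions rather than the paper's exact procedure.

```python
import math
import random

# A minimal sketch of perturbing a belief network's probabilities with random
# noise: each probability (strictly between 0 and 1) is jittered in log-odds
# space, which keeps the perturbed value in (0, 1).

def add_noise(p, sigma=0.5):
    """Return p perturbed by Gaussian noise of standard deviation sigma on the log-odds scale."""
    logit = math.log(p / (1.0 - p))
    return 1.0 / (1.0 + math.exp(-(logit + random.gauss(0.0, sigma))))

def noisy_cpt(cpt, sigma=0.5):
    """Perturb every P(finding | disease state) entry of a binary CPT."""
    return {state: add_noise(p, sigma) for state, p in cpt.items()}

original = {"disease_present": 0.9, "disease_absent": 0.05}
print(noisy_cpt(original))   # e.g. {'disease_present': 0.93, 'disease_absent': 0.04}
```

Robustness can then be quantified by comparing diagnostic rankings from the original and perturbed networks over a set of test cases.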


Uncertainty in Artificial Intelligence | 1990

A comparison of decision analysis and expert rules for sequential diagnosis

Jayant R. Kalagnanam; Max Henrion

There has long been debate about the relative merits of decision theoretic methods and heuristic rule-based approaches for reasoning under uncertainty. We report an experimental comparison of the performance of the two approaches to troubleshooting, specifically to test selection for fault diagnosis. We use as experimental testbed the problem of diagnosing motorcycle engines. The first approach employs heuristic test selection rules obtained from expert mechanics. We compare it with the optimal decision analytic algorithm for test selection which employs estimated component failure probabilities and test costs. The decision analytic algorithm was found to reduce the expected cost (i.e. time) to arrive at a diagnosis by an average of 14% relative to the expert rules. Sensitivity analysis shows the results are quite robust to inaccuracy in the probability and cost estimates. This difference suggests some interesting implications for knowledge acquisition.
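Under simplifying assumptions (exactly one faulty component, each test perfectly checks one component, independent failures), the decision-analytic prescription reduces to testing components in decreasing order of failure probability per unit cost; the sketch below computes the expected diagnosis cost for a given test order. The component names, probabilities, and costs are illustrative, not the motorcycle-engine data from the study.

```python
# A minimal sketch of decision-analytic test selection: with a single fault
# and perfect single-component tests, ordering tests by
# failure probability / test cost minimizes the expected cost of diagnosis.
# Names and numbers below are illustrative only.

def expected_cost(order, prob, cost):
    """Expected total test cost when components are tested in the given order."""
    total, spent = 0.0, 0.0
    for name in order:
        spent += cost[name]          # pay for this test
        total += prob[name] * spent  # fault is found here with probability prob[name]
    return total

prob = {"spark_plug": 0.5, "battery": 0.3, "fuel_pump": 0.2}   # sums to 1
cost = {"spark_plug": 2.0, "battery": 1.0, "fuel_pump": 5.0}   # minutes per test

best = sorted(prob, key=lambda c: prob[c] / cost[c], reverse=True)
print(best, expected_cost(best, prob, cost))                     # 3.4 minutes
print(expected_cost(["fuel_pump", "spark_plug", "battery"], prob, cost))  # 6.9 minutes
```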


Uncertainty in Artificial Intelligence | 1990

An Introduction to Algorithms for Inference in Belief Nets

Max Henrion

As belief nets are applied to represent larger and more complex knowledge bases, the development of more efficient inference algorithms is becoming increasingly urgent. A brief survey of different approaches is presented to provide a framework for understanding the following papers in this section.


Expert Judgment and Expert Systems | 1987

Uncertainty in artificial intelligence: Is probability epistemologically and heuristically accurate?

Max Henrion

Historically, probability has been by far the most widely used formalism for representing uncertainty. However, the majority of AI researchers have not, hitherto, found standard probabilistic techniques very appealing for use in rule-based expert systems. Among the many alternative numerical schemes for quantifying uncertainty that have been developed are the Certainty Factors used in Mycin (Shortliffe & Buchanan, 1975) and its descendants, Fuzzy Set Theory (Zadeh, 1984), the quasi-probabilistic scheme of Prospector (Duda et al., 1976), and the Belief functions of Dempster-Shafer theory (Shafer, 1976). There have also been attempts to develop non-numerical schemes, including Paul Cohen's theory of endorsements (Cohen, 1985), Doyle's theory of reasoned assumptions (Doyle, 1983), and various linguistic representations of uncertainty (Fox, 1986). We shall refer to both probabilistic and alternative methods, generically, as uncertain inference schemes, or UISs.
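As one concrete example of the alternative schemes listed above, the sketch below implements the standard MYCIN-style rule for combining two certainty factors that bear on the same hypothesis; the evidence values used in the example are illustrative.

```python
# A minimal sketch of MYCIN-style certainty-factor combination: certainty
# factors range over [-1, 1], and two factors bearing on the same hypothesis
# are merged with the standard combination rule. Example values are
# illustrative only.

def combine_cf(cf1, cf2):
    """Combine two certainty factors for the same hypothesis."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)                 # confirming evidence reinforces
    if cf1 <= 0 and cf2 <= 0:
        return cf1 + cf2 * (1 + cf1)                 # disconfirming evidence reinforces
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))  # conflicting evidence partially cancels

print(combine_cf(0.6, 0.4))    # 0.76
print(combine_cf(0.6, -0.4))   # 0.333...
```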


Uncertainty in Artificial Intelligence | 1994

An experimental comparison of numerical and qualitative probabilistic reasoning

Max Henrion; Gregory M. Provan; Brendan Del Favero; Gillian D Sanders

Qualitative and infinitesimal probability schemes are consistent with the axioms of probability theory, but avoid the need for precise numerical probabilities. Using qualitative probabilities could substantially reduce the effort for knowledge engineering and improve the robustness of results. We examine experimentally how well infinitesimal probabilities (the kappa-calculus of Goldszmidt and Pearl) perform a diagnostic task -- troubleshooting a car that will not start -- by comparison with a conventional numerical belief network. We found the infinitesimal scheme to be as good as the numerical scheme in identifying the true fault. The performance of the infinitesimal scheme worsens significantly for prior fault probabilities greater than 0.03. These results suggest that infinitesimal probability methods may be of substantial practical value for machine diagnosis with small prior fault probabilities.
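The abstraction behind the kappa-calculus can be sketched briefly: each probability is mapped to an integer rank, roughly the order of magnitude of its improbability in some small base epsilon, and ranks add where independent probabilities would multiply and take a minimum where probabilities would add. The choice of epsilon = 0.1 and the example probabilities below are illustrative assumptions.

```python
import math

# A minimal sketch of the kappa abstraction: a probability is mapped to an
# integer rank (its order of magnitude of surprise in base EPSILON), and
# ranks combine by addition and minimization where probabilities would
# multiply and add. EPSILON and the example values are illustrative only.

EPSILON = 0.1

def kappa(p):
    """Rank of p: the integer n such that p is of order EPSILON**n."""
    if p == 0.0:
        return math.inf
    return round(math.log(p) / math.log(EPSILON))

def kappa_and(k1, k2):      # independent conjunction: probabilities multiply
    return k1 + k2

def kappa_or(k1, k2):       # disjunction: the more plausible branch dominates
    return min(k1, k2)

print(kappa(0.4))                            # 0 -> ordinary event
print(kappa(0.01))                           # 2 -> quite surprising
print(kappa_and(kappa(0.01), kappa(0.1)))    # 3, matching kappa(0.001)
```

This is why the scheme works best when faults are rare: with prior fault probabilities near or above the base epsilon, the integer ranks blur distinctions that the numerical network still captures.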

Collaboration


Max Henrion's top co-authors and their affiliations.

Top Co-Authors

M. Granger Morgan, Carnegie Mellon University
Mitchell J. Small, Carnegie Mellon University
Charles M. Eastman, Georgia Institute of Technology
John F. Lemmer, Air Force Research Laboratory
Marek J. Druzdzel, Bialystok University of Technology
Deborah Amaral, Carnegie Mellon University