Peter Haddawy
Mahidol University
Publications
Featured research published by Peter Haddawy.
Theoretical Computer Science | 1997
Liem Ngo; Peter Haddawy
We define a language for representing context-sensitive probabilistic knowledge. A knowledge base consists of a set of universally quantified probability sentences that include context constraints, which allow inference to be focused on only the relevant portions of the probabilistic knowledge. We provide a declarative semantics for our language. We present a query answering procedure that takes a query Q and a set of evidence E and constructs a Bayesian network to compute P(Q|E). The posterior probability is then computed using any of a number of Bayesian network inference algorithms. We use the declarative semantics to prove the query procedure sound and complete. We use concepts from logic programming to justify our approach.
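The focusing effect of context constraints can be pictured with a small propositional sketch (this is not the paper's language or query procedure, and the rule base below is invented): starting from a query Q and evidence E, only the rules for Q, E, and their ancestors are pulled in before P(Q|E) is computed over that subnetwork.

```python
from itertools import product

# Hypothetical rule base: each variable has parents and a table giving
# P(var = True | parent assignment). Root variables use the empty tuple as key.
RULES = {
    "burglary": {"parents": (), "cpt": {(): 0.001}},
    "quake":    {"parents": (), "cpt": {(): 0.002}},
    "alarm":    {"parents": ("burglary", "quake"),
                 "cpt": {(True, True): 0.95, (True, False): 0.94,
                         (False, True): 0.29, (False, False): 0.001}},
    "call":     {"parents": ("alarm",),
                 "cpt": {(True,): 0.90, (False,): 0.05}},
}

def relevant(query, evidence):
    """Collect the query, the evidence variables, and all their ancestors --
    the only nodes needed to answer P(query | evidence)."""
    needed, stack = set(), [query, *evidence]
    while stack:
        v = stack.pop()
        if v not in needed:
            needed.add(v)
            stack.extend(RULES[v]["parents"])
    return needed

def joint(assignment):
    """Probability of one full assignment over an ancestrally closed node set."""
    p = 1.0
    for v, val in assignment.items():
        p_true = RULES[v]["cpt"][tuple(assignment[u] for u in RULES[v]["parents"])]
        p *= p_true if val else 1.0 - p_true
    return p

def query_prob(q, evidence):
    """P(q = True | evidence) by enumeration over the relevant subnetwork."""
    nodes = sorted(relevant(q, evidence))
    num = den = 0.0
    for values in product([True, False], repeat=len(nodes)):
        a = dict(zip(nodes, values))
        if any(a[e] != val for e, val in evidence.items()):
            continue
        p = joint(a)
        den += p
        if a[q]:
            num += p
    return num / den

print(query_prob("burglary", {"call": True}))  # ~0.016 with these numbers
```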
Computers in Biology and Medicine | 1997
Charles E. Kahn; Linda M. Roberts; Katherine A. Shaffer; Peter Haddawy
Bayesian networks use the techniques of probability theory to reason under uncertainty, and have become an important formalism for medical decision support systems. We describe the development and validation of a Bayesian network (MammoNet) to assist in mammographic diagnosis of breast cancer. MammoNet integrates five patient-history features, two physical findings, and 15 mammographic features extracted by experienced radiologists to determine the probability of malignancy. We outline the methods and issues in the system's design, implementation, and evaluation. Bayesian networks provide a potentially useful tool for mammographic decision support.
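As a rough illustration of how findings can be combined into a probability of malignancy, the sketch below uses a naive-Bayes odds update over a few invented features; MammoNet's actual network structure, features, and probabilities are not reproduced here.

```python
# Naive-Bayes style odds update: assumed prior and illustrative likelihoods only.
PRIOR = 0.01  # assumed prior probability of malignancy

# (P(finding present | malignant), P(finding present | benign)) -- invented numbers
LIKELIHOODS = {
    "spiculated_mass":         (0.60, 0.02),
    "microcalcifications":     (0.40, 0.05),
    "positive_family_history": (0.20, 0.10),
}

def posterior_malignancy(findings):
    """P(malignant | observed findings), assuming the findings are
    conditionally independent given the disease state."""
    odds = PRIOR / (1.0 - PRIOR)
    for f in findings:
        p_mal, p_ben = LIKELIHOODS[f]
        odds *= p_mal / p_ben
    return odds / (1.0 + odds)

print(round(posterior_malignancy(["spiculated_mass", "microcalcifications"]), 3))
```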
Frontiers in Education Conference | 2007
Nguyen Thai Nghe; Paul Janecek; Peter Haddawy
This paper compares the accuracy of decision tree and Bayesian network algorithms for predicting the academic performance of undergraduate and postgraduate students at two very different academic institutes: Can Tho University (CTU), a large national university in Viet Nam; and the Asian Institute of Technology (AIT), a small international postgraduate institute in Thailand that draws students from 86 different countries. Although these two student populations are very different, the data-mining tools were able to achieve similar levels of accuracy for predicting student performance: 73/71% for {fail, fair, good, very good} and 94/93% for {fail, pass} at CTU/AIT, respectively. These predictions are most useful for identifying and assisting failing students at CTU (64% accurate), and for selecting very good students for scholarships at AIT (82% accurate). In this analysis, the decision tree was consistently 3-12% more accurate than the Bayesian network. The results of these case studies give insight into techniques for accurately predicting student performance, compare the accuracy of data mining algorithms, and demonstrate the maturity of open source tools.
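A minimal sketch of this kind of comparison is shown below, using scikit-learn on synthetic data; the features are invented, and a naive Bayes classifier stands in for the full Bayesian network used in the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Synthetic stand-in for student records: four numeric features
# (e.g. entry scores, prior GPA) and a pass/fail label.
rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("decision tree", DecisionTreeClassifier(max_depth=5)),
                    ("naive Bayes", GaussianNB())]:
    model.fit(X_tr, y_tr)
    print(name, round(accuracy_score(y_te, model.predict(X_te)), 3))
```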
Artificial Intelligence | 1994
Alan M. Frisch; Peter Haddawy
This paper proposes and investigates an approach to deduction in probabilistic logic, using as its medium a language that generalizes the propositional version of Nilsson's probabilistic logic by incorporating conditional probabilities. Unlike many other approaches to deduction in probabilistic logic, this approach is based on inference rules and therefore can produce proofs to explain how conclusions are drawn. We show how these rules can be incorporated into an anytime deduction procedure that proceeds by computing increasingly narrow probability intervals that contain the tightest entailed probability interval. Since the procedure can be stopped at any time to yield partial information concerning the probability range of any entailed sentence, one can make a tradeoff between precision and computation time. The deduction method presented here contrasts with other methods whose ability to perform logical reasoning is either limited or requires finding all truth assignments consistent with the given sentences.
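One illustrative interval rule (not the paper's rule set) is sketched below: given bounds on P(A), P(B|A), and P(B|¬A), bounds on P(B) follow from the law of total probability, since the expression is linear in each quantity and so attains its extremes at the interval corners.

```python
from itertools import product

def bound_total_probability(p_a, p_b_given_a, p_b_given_not_a):
    """Each argument is an interval (lo, hi). Returns an interval for P(B),
    where P(B) = P(B|A)P(A) + P(B|not A)(1 - P(A)), evaluated at all corners."""
    corners = [pba * pa + pbna * (1.0 - pa)
               for pa, pba, pbna in product(p_a, p_b_given_a, p_b_given_not_a)]
    return min(corners), max(corners)

# Hypothetical input intervals:
print(bound_total_probability((0.2, 0.4), (0.7, 0.9), (0.1, 0.2)))  # ≈ (0.22, 0.48)
```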
Computational Intelligence | 1998
Peter Haddawy; Steve Hanks
AI planning agents are goal‐directed: success is measured in terms of whether an input goal is satisfied. The goal gives structure to the planning problem, and planning representations and algorithms have been designed to exploit that structure. Strict goal satisfaction may be an unacceptably restrictive measure of good behavior, however.
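The contrast can be made concrete with a toy example (the plans, outcome probabilities, and utilities below are invented): a strict goal test ranks plans only by whether they achieve the goal reliably enough, whereas a decision-theoretic agent ranks them by expected utility, and the two rankings can disagree.

```python
# Each plan is a list of (probability, utility, goal_achieved) outcomes.
PLANS = {
    "safe_plan":  [(0.95, 55, True), (0.05, 40, False)],
    "risky_plan": [(0.85, 100, True), (0.15, 20, False)],
}

def satisfies_goal(outcomes, threshold=0.9):
    """Strict (thresholded) goal satisfaction."""
    return sum(p for p, _, achieved in outcomes if achieved) >= threshold

def expected_utility(outcomes):
    return sum(p * u for p, u, _ in outcomes)

# The goal test prefers safe_plan; expected utility prefers risky_plan.
for name, outcomes in PLANS.items():
    print(name, satisfies_goal(outcomes), round(expected_utility(outcomes), 2))
```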
Uncertainty in Artificial Intelligence | 1994
Peter Haddawy
We present a method for dynamically generating Bayesian networks from knowledge bases consisting of first-order probability logic sentences. We present a subset of probability logic sufficient for representing the class of Bayesian networks with discrete-valued nodes. We impose constraints on the form of the sentences that guarantee that the knowledge base contains all the probabilistic information necessary to generate a network. We define the concept of d-separation for knowledge bases and prove that a knowledge base with independence conditions defined by d-separation is a complete specification of a probability distribution. We present a network generation algorithm that, given an inference problem in the form of a query Q and a set of evidence E, generates a network to compute P(Q|E). We prove the algorithm to be correct.
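Since the completeness claim turns on d-separation, the sketch below illustrates a standard d-separation test (the moralized ancestral graph method) on a small hypothetical graph; it is propositional and is not the paper's knowledge-base-level definition.

```python
def d_separated(parents, xs, ys, zs):
    """True if every node in xs is d-separated from every node in ys given zs,
    using the moralized ancestral graph criterion. `parents` maps a node to
    its set of parents."""
    # 1. Restrict attention to the ancestral set of xs | ys | zs.
    anc, stack = set(), list(xs | ys | zs)
    while stack:
        v = stack.pop()
        if v not in anc:
            anc.add(v)
            stack.extend(parents.get(v, ()))
    # 2. Moralize: undirected parent-child edges plus edges between co-parents.
    adj = {v: set() for v in anc}
    for v in anc:
        ps = [p for p in parents.get(v, ()) if p in anc]
        for p in ps:
            adj[v].add(p); adj[p].add(v)
        for i in range(len(ps)):
            for j in range(i + 1, len(ps)):
                adj[ps[i]].add(ps[j]); adj[ps[j]].add(ps[i])
    # 3. Delete the conditioning nodes and test reachability from xs to ys.
    seen, stack = set(), [x for x in xs if x not in zs]
    while stack:
        v = stack.pop()
        if v in seen or v in zs:
            continue
        seen.add(v)
        stack.extend(adj[v] - zs)
    return not (seen & ys)

# burglary -> alarm <- quake, alarm -> call (hypothetical example)
PARENTS = {"alarm": {"burglary", "quake"}, "call": {"alarm"}}
print(d_separated(PARENTS, {"burglary"}, {"quake"}, set()))     # True
print(d_separated(PARENTS, {"burglary"}, {"quake"}, {"call"}))  # False: explaining away
```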
IEEE Transactions on Power Systems | 1999
David C. Yu; Thanh C. Nguyen; Peter Haddawy
This paper presents an application of Bayesian networks (BN) to the problem of reliability assessment of power systems. Bayesian networks provide a flexible means of representing and reasoning with probabilistic information. Uncertainty and dependencies are easily incorporated in the analysis. Efficient probabilistic inference algorithms in Bayesian networks permit not only computation of the loss of load probability but also answering various probabilistic queries about the system. The advantages of BN models for power system reliability evaluation are demonstrated through examples. Results of a reliability case study of a multi-area test system are also reported.
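A rough sketch of one such query, the loss-of-load probability, is given below by direct enumeration of independent generator outage states; the unit capacities, forced outage rates, and load level are invented, and a real study would capture dependencies with the network model itself.

```python
from itertools import product

# (capacity in MW, forced outage rate) for each generating unit -- invented values
UNITS = [(50, 0.02), (50, 0.02), (30, 0.05)]
LOAD = 80  # MW

def loss_of_load_probability(units, load):
    """P(available capacity < load), assuming independent unit outages."""
    total = 0.0
    for states in product([True, False], repeat=len(units)):  # True = unit in service
        prob, capacity = 1.0, 0
        for (cap, outage_rate), up in zip(units, states):
            prob *= (1.0 - outage_rate) if up else outage_rate
            capacity += cap if up else 0
        if capacity < load:
            total += prob
    return total

print(round(loss_of_load_probability(UNITS, LOAD), 5))  # 0.00236 with these numbers
```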
AI Magazine | 1999
Peter Haddawy
The last few years have seen a surge in interest in the use of techniques from Bayesian decision theory to address problems in AI. Decision theory provides a normative framework for representing and reasoning about decision problems under uncertainty. Within the context of this framework, researchers in uncertainty in the AI community have been developing computational techniques for building rational agents and representations suited to engineering their knowledge bases. This special issue reviews recent research in Bayesian problem-solving techniques. The articles cover the topics of inference in Bayesian networks, decision-theoretic planning, and qualitative decision theory. Here, I provide a brief introduction to Bayesian networks and then cover applications of Bayesian problem-solving techniques, knowledge-based model construction and structured representations, and the learning of graphical probability models.
Intelligent User Interfaces | 2004
Siriwan Suebnukarn; Peter Haddawy
This paper describes COMET, a collaborative intelligent tutoring system for medical problem-based learning. The system uses Bayesian networks to model individual student knowledge and activity, as well as that of the group. It incorporates a multi-modal interface that integrates text and graphics so as to provide a rich communication channel between the students and the system, as well as among students in the group. Students can sketch directly on medical images, search for medical concepts, and sketch hypotheses on a shared workspace. The prototype system incorporates substantial domain knowledge in the area of head injury diagnosis. A major challenge in building COMET has been to develop algorithms for generating tutoring hints. Tutoring in PBL is particularly challenging since the tutor should provide as little guidance as possible while at the same time not allowing the students to get lost. From studies of PBL sessions at a local medical school, we have identified and implemented eight commonly used hinting strategies. We compared the tutoring hints generated by COMET with those of experienced human tutors. Our results show that COMET's hints agree with the hints of the majority of the human tutors with a high degree of statistical agreement (McNemar test, p = 0.652, Kappa = 0.773).
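As a pointer to how such agreement figures are computed, the sketch below implements Cohen's kappa for two raters labeling the same situations (e.g. system hints vs. a tutor's hints); the hint labels are invented and this does not reproduce the paper's evaluation data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two equal-length label sequences."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1.0 - expected)

system = ["open_question", "summarize", "open_question", "hint_image", "summarize"]
tutor  = ["open_question", "summarize", "hint_image",   "hint_image", "summarize"]
print(round(cohens_kappa(system, tutor), 3))
```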
ACSC '95 Proceedings of the 1995 Asian Computing Science Conference on Algorithms, Concurrency and Knowledge | 1995
Liem Ngo; Peter Haddawy
We present a probabilistic logic programming framework that allows the representation of conditional probabilities. While conditional probabilities are the most commonly used method for representing uncertainty in probabilistic expert systems, they have been largely neglected by work in quantitative logic programming. We define a fixpoint theory, declarative semantics, and proof procedure for the new class of probabilistic logic programs. Compared to other approaches to quantitative logic programming, we provide a true probabilistic framework with potential applications in probabilistic expert systems and decision support systems. We also discuss the relationship between such programs and Bayesian networks, thus moving toward a unification of two major approaches to automated reasoning.
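The clause-to-network relationship mentioned at the end can be sketched as follows: probability-annotated clauses sharing a head are collected into one conditional probability table over the head's parents. The clauses and numbers are hypothetical, and the sketch is propositional, ignoring the first-order and grounding issues the paper's semantics addresses.

```python
# Each clause: (head, context of parent values it covers, P(head = True | context))
CLAUSES = [
    ("wet_grass", {"rain": True,  "sprinkler": True},  0.99),
    ("wet_grass", {"rain": True,  "sprinkler": False}, 0.90),
    ("wet_grass", {"rain": False, "sprinkler": True},  0.90),
    ("wet_grass", {"rain": False, "sprinkler": False}, 0.01),
]

def to_cpt(clauses, head):
    """Collect the clauses about `head` into a CPT keyed by parent assignments."""
    parents = sorted({p for h, ctx, _ in clauses if h == head for p in ctx})
    cpt = {tuple(ctx[p] for p in parents): prob
           for h, ctx, prob in clauses if h == head}
    return parents, cpt

parents, cpt = to_cpt(CLAUSES, "wet_grass")
print(parents)  # ['rain', 'sprinkler']
print(cpt)
```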