Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Gregory F. Cooper is active.

Publication


Featured research published by Gregory F. Cooper.


Machine Learning | 1992

A Bayesian Method for the Induction of Probabilistic Networks from Data

Gregory F. Cooper; Edward H. Herskovits

This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabilistic expert systems. We extend the basic method to handle missing data and hidden (latent) variables. We show how to perform probabilistic inference by averaging over the inferences of multiple belief networks. Results are presented of a preliminary evaluation of an algorithm for constructing a belief network from a database of cases. Finally, we relate the methods in this paper to previous work, and we discuss open problems.
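The closed-form score at the heart of this method (often called the Cooper-Herskovits metric) can be stated compactly. For a structure B_S over n discrete variables, where variable x_i takes r_i values and has q_i distinct parent configurations, N_ijk is the number of cases in database D with x_i = k under parent configuration j, and N_ij = sum_k N_ijk; assuming complete data and uniform parameter priors, as in the paper:

$$P(B_S, D) = P(B_S)\prod_{i=1}^{n}\prod_{j=1}^{q_i}\frac{(r_i - 1)!}{(N_{ij} + r_i - 1)!}\prod_{k=1}^{r_i} N_{ijk}!$$

Maximizing this joint probability over structures is a combinatorial search problem, which is what motivates the greedy construction algorithm evaluated in the paper.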


Artificial Intelligence | 1990

The computational complexity of probabilistic inference using Bayesian belief networks (research note)

Gregory F. Cooper

Bayesian belief networks provide a natural, efficient method for representing probabilistic dependencies among a set of variables. For these reasons, numerous researchers are exploring the use of belief networks as a knowledge representation in artificial intelligence. Algorithms have been developed previously for efficient probabilistic inference using special classes of belief networks. More general classes of belief networks, however, have eluded efforts to develop efficient inference algorithms. We show that probabilistic inference using belief networks is NP-hard. Therefore, it seems unlikely that an exact algorithm can be developed to perform probabilistic inference efficiently over all classes of belief networks. This result suggests that research should be directed away from the search for a general, efficient probabilistic inference algorithm, and toward the design of efficient special-case, average-case, and approximation algorithms.
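To see why general exact inference is expensive, consider the naive baseline: a belief network factorizes the joint distribution into local conditionals, so any query can be answered by summing the joint over all variable assignments. The sketch below (a hypothetical illustration, not an algorithm from the paper) does exactly that for binary variables and therefore runs in time exponential in the number of variables; the NP-hardness result implies that no exact polynomial-time algorithm exists for all networks unless P = NP.

```python
from itertools import product

def joint_prob(assignment, cpts, parents):
    """Probability of one full assignment under a belief network.

    cpts[v] maps (value_of_v, parent_values_tuple) -> conditional probability.
    parents[v] is the tuple of v's parent variables.
    """
    p = 1.0
    for v, cpt in cpts.items():
        pa_vals = tuple(assignment[u] for u in parents[v])
        p *= cpt[(assignment[v], pa_vals)]
    return p

def query(target, evidence, cpts, parents):
    """P(target=1 | evidence) by brute-force enumeration: O(2^n) terms."""
    variables = list(cpts)
    num = den = 0.0
    for values in product([0, 1], repeat=len(variables)):
        a = dict(zip(variables, values))
        if any(a[v] != val for v, val in evidence.items()):
            continue
        p = joint_prob(a, cpts, parents)
        den += p
        if a[target] == 1:
            num += p
    return num / den

# Tiny two-node chain A -> B as a usage example.
parents = {"A": (), "B": ("A",)}
cpts = {
    "A": {(1, ()): 0.3, (0, ()): 0.7},
    "B": {(1, (1,)): 0.9, (0, (1,)): 0.1, (1, (0,)): 0.2, (0, (0,)): 0.8},
}
print(query("A", {"B": 1}, cpts, parents))  # P(A=1 | B=1) = 0.27 / 0.41
```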


Journal of Biomedical Informatics | 2001

A Simple Algorithm for Identifying Negated Findings and Diseases in Discharge Summaries

Wendy W. Chapman; Will Bridewell; Paul Hanbury; Gregory F. Cooper; Bruce G. Buchanan

Narrative reports in medical records contain a wealth of information that may augment structured data for managing patient information and predicting trends in diseases. Pertinent negatives are evident in text but are not usually indexed in structured databases. The objective of the study reported here was to test a simple algorithm for determining whether a finding or disease mentioned within narrative medical reports is present or absent. We developed a simple regular expression algorithm called NegEx that implements several phrases indicating negation, filters out sentences containing phrases that falsely appear to be negation phrases, and limits the scope of the negation phrases. We compared NegEx against a baseline algorithm that has a limited set of negation phrases and a simpler notion of scope. In a test of 1235 findings and diseases in 1000 sentences taken from discharge summaries indexed by physicians, NegEx had a specificity of 94.5% (versus 85.3% for the baseline), a positive predictive value of 84.5% (versus 68.4% for the baseline) while maintaining a reasonable sensitivity of 77.8% (versus 88.3% for the baseline). We conclude that with little implementation effort a simple regular expression algorithm for determining whether a finding or disease is absent can identify a large portion of the pertinent negatives from discharge summaries.
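A minimal sketch of the three steps the abstract describes, assuming a toy phrase list and a fixed token window for scope (the published NegEx uses much larger curated phrase sets and sentence-level scope rules that this fragment omits):

```python
import re

# Toy phrase lists; illustrative only, not NegEx's actual lexicon.
NEGATION_PHRASES = ["no", "denies", "without", "absence of", "ruled out"]
PSEUDO_NEGATIONS = ["no increase", "not ruled out"]  # false triggers to filter
SCOPE = 5  # a negation phrase covers at most this many following tokens

def is_negated(sentence, term):
    """Return True if `term` falls within the scope of a negation phrase."""
    s = sentence.lower()
    for pseudo in PSEUDO_NEGATIONS:      # step 2: filter phrases that only
        s = s.replace(pseudo, " ")       # look like negations
    tokens = re.findall(r"[a-z]+", s)
    term_tokens = term.lower().split()
    for i in range(len(tokens)):
        for phrase in NEGATION_PHRASES:  # step 1: match negation triggers
            p = phrase.split()
            if tokens[i:i + len(p)] == p:
                # step 3: limit the negation's scope to a short window
                window = tokens[i + len(p):i + len(p) + SCOPE]
                for j in range(len(window) - len(term_tokens) + 1):
                    if window[j:j + len(term_tokens)] == term_tokens:
                        return True
    return False

print(is_negated("The patient denies chest pain on exertion.", "chest pain"))  # True
print(is_negated("The patient reports chest pain on exertion.", "chest pain")) # False
```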


Artificial Intelligence in Medicine in Europe | 1989

The ALARM Monitoring System: A Case Study with Two Probabilistic Inference Techniques for Belief Networks

Ingo A. Beinlich; Henri J. Suermondt; R. Martin Chavez; Gregory F. Cooper

ALARM (A Logical Alarm Reduction Mechanism) is a diagnostic application used to explore probabilistic reasoning techniques in belief networks. ALARM implements an alarm message system for patient monitoring; it calculates probabilities for a differential diagnosis based on available evidence. The medical knowledge is encoded in a graphical structure connecting 8 diagnoses, 16 findings and 13 intermediate variables. Two algorithms were applied to this belief network: (1) a message-passing algorithm by Pearl for probability updating in multiply connected networks using the method of conditioning; and (2) the Lauritzen-Spiegelhalter algorithm for local probability computations on graphical structures. The characteristics of both algorithms are analyzed and their specific applications and time complexities are shown.


Archive | 2006

A Bayesian Approach to Causal Discovery

David Heckerman; Christopher Meek; Gregory F. Cooper

We examine the Bayesian approach to the discovery of causal DAG models and compare it to the constraint-based approach. Both approaches rely on the Causal Markov condition, but the two differ significantly in theory and practice. An important difference between the approaches is that the constraint-based approach uses categorical information about conditional-independence constraints in the domain, whereas the Bayesian approach weighs the degree to which such constraints hold. As a result, the Bayesian approach has three distinct advantages over its constraint-based counterpart. One, conclusions derived from the Bayesian approach are not susceptible to incorrect categorical decisions about independence facts that can occur with data sets of finite size. Two, using the Bayesian approach, finer distinctions among model structures—both quantitative and qualitative—can be made. Three, information from several models can be combined to make better inferences and to better account for modeling uncertainty. In addition to describing the general Bayesian approach to causal discovery, we review approximation methods for missing data and hidden variables, and illustrate differences between the Bayesian and constraint-based methods using artificial and real examples.
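The third advantage listed above (combining information from several models) is Bayesian model averaging over structures. For a causal hypothesis of interest h, such as "X causes Y," and data D, the Bayesian approach computes, at least in principle,

$$P(h \mid D) = \sum_{G} P(h \mid G)\, P(G \mid D), \qquad P(G \mid D) \propto P(D \mid G)\, P(G)$$

where the sum ranges over causal DAG models G. A constraint-based method instead commits to the categorical independence decisions implied by a single selected structure; in practice the Bayesian sum is approximated, for example by averaging over a set of high-posterior structures found by search.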


Uncertainty in Artificial Intelligence | 1991

A Bayesian method for constructing Bayesian belief networks from databases

Gregory F. Cooper; Edward H. Herskovits

This paper presents a Bayesian method for constructing Bayesian belief networks from a database of cases. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabilistic expert systems. Results are presented of a preliminary evaluation of an algorithm for constructing a belief network from a database of cases. We relate the methods in this paper to previous work, and we discuss open problems.
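A compact sketch of the greedy search this line of work proposes (commonly known as K2): given a total ordering of the variables, repeatedly add to each node the predecessor that most increases its Bayesian score, stopping when no addition helps. The scoring function is the closed-form metric shown earlier, computed in log space; the data layout and parameter names below are illustrative assumptions, not the paper's code.

```python
import math
from collections import Counter

def log_score(i, parents, data, arity):
    """log P(D | structure) contribution of node x_i, Cooper-Herskovits metric."""
    r = arity[i]
    counts = Counter()   # (parent_config, value_of_x_i) -> N_ijk
    totals = Counter()   # parent_config -> N_ij
    for row in data:
        cfg = tuple(row[p] for p in parents)
        counts[(cfg, row[i])] += 1
        totals[cfg] += 1
    s = 0.0
    for cfg, n_ij in totals.items():
        s += math.lgamma(r) - math.lgamma(n_ij + r)   # (r-1)! / (N_ij + r - 1)!
        for k in range(r):
            s += math.lgamma(counts[(cfg, k)] + 1)    # N_ijk!
    return s

def k2(order, data, arity, max_parents=2):
    """Greedy K2 structure search over a fixed variable ordering."""
    parents = {i: [] for i in order}
    for pos, i in enumerate(order):
        best = log_score(i, parents[i], data, arity)
        improved = True
        while improved and len(parents[i]) < max_parents:
            improved = False
            candidates = [p for p in order[:pos] if p not in parents[i]]
            scored = [(log_score(i, parents[i] + [p], data, arity), p)
                      for p in candidates]
            if scored:
                s, p = max(scored)
                if s > best:
                    best, improved = s, True
                    parents[i].append(p)
    return parents

# Usage: 3 binary variables; rows are tuples (x0, x1, x2), x1 tracks x0.
data = [(0, 0, 0), (1, 1, 1), (1, 1, 0), (0, 0, 1), (1, 1, 1), (0, 0, 0)]
print(k2(order=[0, 1, 2], data=data, arity={0: 2, 1: 2, 2: 2}))
```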


Data Mining and Knowledge Discovery | 1997

A Simple Constraint-Based Algorithm for Efficiently Mining Observational Databases for Causal Relationships

Gregory F. Cooper

This paper presents a simple, efficient computer-based method for discovering causal relationships from databases that contain observational data. Observational data is passively observed, as contrasted with experimental data. Most of the databases available for data mining are observational. There is great potential for mining such databases to discover causal relationships. We illustrate how observational data can constrain the causal relationships among measured variables, sometimes to the point that we can conclude that one variable is causing another variable. The presentation here is based on a constraint-based approach to causal discovery. A primary purpose of this paper is to present the constraint-based causal discovery method in the simplest possible fashion in order to (1) readily convey the basic ideas that underlie more complex constraint-based causal discovery techniques, and (2) permit interested readers to rapidly program and apply the method to their own databases, as a start toward using more elaborate causal discovery algorithms.
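To give a flavor of the machinery, constraint-based discovery of this kind rests on two statistical primitives: a marginal independence test and a conditional independence test. The sketch below implements both with a chi-square test (stratifying on the conditioning variable) and exercises them on a synthetic W -> X -> Y chain, the three-variable pattern in which W and Y are dependent but become independent given X. This is an assumption-laden illustration of the general approach, not the paper's full discovery procedure.

```python
import numpy as np
from scipy.stats import chi2_contingency

def contingency(x, y):
    """Cross-tabulate two discrete arrays."""
    table = np.zeros((x.max() + 1, y.max() + 1))
    for a, b in zip(x, y):
        table[a, b] += 1
    return table

def independent(x, y, alpha=0.05):
    """Chi-square test of marginal independence."""
    _, p, _, _ = chi2_contingency(contingency(x, y))
    return p > alpha

def cond_independent(x, y, z, alpha=0.05):
    """X independent of Y given Z: test within each stratum of Z."""
    for val in np.unique(z):
        m = z == val
        _, p, _, _ = chi2_contingency(contingency(x[m], y[m]))
        if p <= alpha:
            return False
    return True

# Synthetic observational data from a W -> X -> Y chain (illustrative only).
rng = np.random.default_rng(0)
w = rng.integers(0, 2, 2000)
x = w ^ (rng.random(2000) < 0.2)   # X is a noisy copy of W
y = x ^ (rng.random(2000) < 0.2)   # Y is a noisy copy of X
print(independent(w, y))           # expected: False (marginally dependent)
print(cond_independent(w, y, x))   # expected: True (independent given X)
```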


Journal of the American Medical Informatics Association | 2004

Accelerating U.S. EHR Adoption: How to Get There From Here. Recommendations Based on the 2004 ACMI Retreat

Blackford Middleton; W. Ed Hammond; Patricia Flatley Brennan; Gregory F. Cooper

Despite growing support for the adoption of electronic health records (EHRs) to improve U.S. healthcare delivery, EHR adoption in the United States has been slow to date due to a fundamental failure of the healthcare information technology marketplace. Reasons for the slow adoption of healthcare information technology include a misalignment of incentives, limited purchasing power among providers, variability in the viability of EHR products and companies, and limited demonstrated value of EHRs in practice. At the 2004 American College of Medical Informatics (ACMI) Retreat, attendees discussed the current state of EHR adoption in this country and identified steps that could be taken to stimulate adoption. In this paper, based upon the ACMI retreat and building upon the experiences of the authors developing EHRs in academic and commercial settings, we identify a set of recommendations to stimulate adoption of EHRs, including financial incentives, promotion of EHR standards, enabling policy, and educational, marketing, and supporting activities for both the provider community and healthcare consumers.


Artificial Intelligence in Medicine | 1997

An Evaluation of Machine-Learning Methods for Predicting Pneumonia Mortality

Gregory F. Cooper; Constantin F. Aliferis; Richard Ambrosino; John M. Aronis; Bruce G. Buchanan; Rich Caruana; Michael J. Fine; Clark Glymour; Geoffrey J. Gordon; Barbara H. Hanusa; Janine E. Janosky; Christopher Meek; Tom M. Mitchell; Thomas S. Richardson; Peter Spirtes

This paper describes the application of eight statistical and machine-learning methods to derive computer models for predicting mortality of hospital patients with pneumonia from their findings at initial presentation. The eight models were each constructed based on 9847 patient cases and they were each evaluated on 4352 additional cases. The primary evaluation metric was the error in predicted survival as a function of the fraction of patients predicted to survive. This metric is useful in assessing a model's potential to assist a clinician in deciding whether to treat a given patient in the hospital or at home. We examined the error rates of the models when predicting that a given fraction of patients will survive. We examined survival fractions between 0.1 and 0.6. Over this range, each model's predictive error rate was within 1% of the error rate of every other model. When predicting that approximately 30% of the patients will survive, all the models have an error rate of less than 1.5%. The models are distinguished more by the number of variables and parameters that they contain than by their error rates; these differences suggest which models may be the most amenable to future implementation as paper-based guidelines.
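The evaluation metric is easy to state operationally: rank patients by predicted probability of survival, treat the top fraction as "predicted to survive," and measure the error (actual deaths) within that fraction. A minimal sketch, assuming predictions and outcomes as parallel arrays (the names and the synthetic data are illustrative):

```python
import numpy as np

def error_at_fraction(p_survive, survived, fraction):
    """Error rate among the `fraction` of patients most confidently
    predicted to survive (varied from 0.1 to 0.6 in the paper)."""
    order = np.argsort(-p_survive)          # most survivable first
    n = int(round(fraction * len(p_survive)))
    top = order[:n]
    return 1.0 - survived[top].mean()       # proportion who actually died

# Usage with synthetic data (illustrative only).
rng = np.random.default_rng(1)
p = rng.random(1000)
outcome = (rng.random(1000) < p).astype(float)  # survival correlates with p
for f in (0.1, 0.3, 0.6):
    print(f, error_at_fraction(p, outcome, f))
```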


Machine Learning | 2010

A multivariate Bayesian scan statistic for early event detection and characterization

Daniel B. Neill; Gregory F. Cooper

We present the multivariate Bayesian scan statistic (MBSS), a general framework for event detection and characterization in multivariate spatial time series data. MBSS integrates prior information and observations from multiple data streams in a principled Bayesian framework, computing the posterior probability of each type of event in each space-time region. MBSS learns a multivariate Gamma-Poisson model from historical data, and models the effects of each event type on each stream using expert knowledge or labeled training examples. We evaluate MBSS on various disease surveillance tasks, detecting and characterizing outbreaks injected into three streams of Pennsylvania medication sales data. We demonstrate that MBSS can be used both as a “general” event detector, with high detection power across a variety of event types, and a “specific” detector that incorporates prior knowledge of an event’s effects to achieve much higher detection power. MBSS has many other advantages over previous event detection approaches, including faster computation and easy interpretation and visualization of results, and allows faster and more accurate event detection by integrating information from the multiple streams. Most importantly, MBSS can model and differentiate between multiple event types, thus distinguishing between events requiring urgent responses and other, less relevant patterns in the data.
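The core computation the abstract describes is Bayes' theorem applied over the set of event hypotheses: for each event type E_k and space-time region S,

$$P\big(H_1(S, E_k) \mid D\big) = \frac{P\big(D \mid H_1(S, E_k)\big)\, P\big(H_1(S, E_k)\big)}{P(D)}$$

where H_1(S, E_k) is the hypothesis that an event of type E_k is occurring in region S, D is the observed multistream data, and P(D) normalizes over all hypotheses including the null hypothesis of no event. The Gamma-Poisson model is conjugate, which keeps the marginal likelihoods P(D | H) tractable; the notation here paraphrases the abstract and is not necessarily the paper's.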

Collaboration


Dive into Gregory F. Cooper's collaboration.

Top Co-Authors

Fu-Chiang Tsui
University of Pittsburgh

Xia Jiang
University of Pittsburgh