Publications

Featured research published by Gregory M. Provan.


Uncertainty in Artificial Intelligence | 1994

Knowledge engineering for large belief networks

Malcolm Pradhan; Gregory M. Provan; Blackford Middleton; Max Henrion

We present several techniques for knowledge engineering of large belief networks (BNs) based on our experience with a network derived from a large medical knowledge base. The noisy-MAX, a generalization of the noisy-OR gate, is used to model causal independence in a BN with multivalued variables. We describe the use of leak probabilities to enforce the closed-world assumption in our model. We also present Netview, a visualization tool based on causal independence and the use of leak probabilities. The Netview software allows knowledge engineers to dynamically view subnetworks for knowledge engineering, and it provides version control for editing a BN. Netview generates sub-networks in which leak probabilities are dynamically updated to reflect the missing portions of the network.
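The leaky noisy-OR gate (which the noisy-MAX generalizes to multivalued variables) is the core modelling device here, so a minimal sketch may help. The link and leak probabilities below are illustrative assumptions, not values from the paper.

```python
# Minimal illustration of a noisy-OR gate with a leak probability.
# The parameter values are illustrative, not taken from the paper.
from itertools import product

def noisy_or(active_parents, link_probs, leak=0.01):
    """P(effect = present) given the set of parents that are present.

    link_probs[i] is the probability that parent i alone causes the effect;
    `leak` captures causes outside the model (the closed-world leak).
    """
    p_fail = 1.0 - leak
    for i in active_parents:
        p_fail *= 1.0 - link_probs[i]
    return 1.0 - p_fail

link_probs = [0.8, 0.6, 0.3]          # causal strength of each parent
for states in product([0, 1], repeat=3):
    active = [i for i, s in enumerate(states) if s]
    print(states, round(noisy_or(active, link_probs), 4))
```

The leak term plays the role described in the abstract: it accounts for causes omitted from the model, which is also what allows a displayed sub-network to stand in for the missing portion of the full network.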


Artificial Intelligence | 1996

The sensitivity of belief networks to imprecise probabilities: an experimental investigation

Malcolm Pradhan; Max Henrion; Gregory M. Provan; Brendan Del Favero; Kurt Huang

Bayesian belief networks are being increasingly used as a knowledge representation for reasoning under uncertainty. Some researchers have questioned the practicality of obtaining the numerical probabilities with sufficient precision to create belief networks for large-scale applications. In this work, we investigate how precise the probabilities need to be by measuring how imprecision in the probabilities affects diagnostic performance. We conducted a series of experiments on a set of real-world belief networks for medical diagnosis in liver and bile disease. We examined the effects on diagnostic performance of (1) varying the mappings from qualitative frequency weights into numerical probabilities, (2) adding random noise to the numerical probabilities, (3) simplifying from quaternary domains for diseases and findings—absent, mild, moderate, and severe—to binary domains—absent and present, and (4) using test cases that contain diseases outside the network. We found that even extreme differences in the probability mappings and large amounts of noise lead to only modest reductions in diagnostic performance. We found no significant effect of the simplification from quaternary to binary representation. We also found that outside diseases degraded performance modestly. Overall, these findings indicate that even highly imprecise input probabilities may not impair diagnostic performance significantly, and that simple binary representations may often be adequate. These findings of robustness suggest that belief networks are a practical representation without requiring undue precision.
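One of the manipulations listed above, adding random noise to the numerical probabilities, is easy to picture with a toy example. The sketch below perturbs probabilities on the log-odds scale and checks whether the ranking of two competing diagnoses changes; the two-disease model and noise level are illustrative assumptions, not the liver-disease networks used in the study.

```python
import math, random

def perturb(p, sigma=0.5):
    """Add Gaussian noise to a probability on the log-odds scale."""
    p = min(max(p, 1e-6), 1 - 1e-6)
    logit = math.log(p / (1 - p)) + random.gauss(0.0, sigma)
    return 1.0 / (1.0 + math.exp(-logit))

# Toy model: two mutually exclusive diseases and one positive finding.
priors = {"d1": 0.02, "d2": 0.01}
likelihood = {"d1": 0.9, "d2": 0.4}   # P(finding | disease)

def posterior(pri, lik):
    joint = {d: pri[d] * lik[d] for d in pri}
    z = sum(joint.values())
    return {d: joint[d] / z for d in joint}

random.seed(0)
baseline = posterior(priors, likelihood)
noisy = posterior({d: perturb(p) for d, p in priors.items()},
                  {d: perturb(p) for d, p in likelihood.items()})
print("baseline ranking:", sorted(baseline, key=baseline.get, reverse=True))
print("noisy ranking:   ", sorted(noisy, key=noisy.get, reverse=True))
```

Because the diagnostic ranking depends on relative rather than absolute probabilities, fairly large log-odds noise often leaves the ranking unchanged, which is the intuition behind the robustness result reported above.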


American Control Conference | 1997

Modeling and diagnosis of timed discrete event systems: a factory automation example

Yi-Liang Chen; Gregory M. Provan

Detection and identification of failures is a critical task in the automatic control of large and complex systems. In the realm of discrete event systems, Sampath et al. (1995, 1996) proposed a new approach to failure diagnosis that models the logical behavior of the considered system in terms of state machines and produces an extended observer called a diagnoser for computing diagnoses. We extend this approach to the diagnosis of timed discrete event systems whose temporal and logical behavior are modeled by a framework proposed by Brandin and Wonham (1994). We use a simple real-world factory conveyor example to demonstrate our modeling and diagnosis approach.
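The diagnoser idea can be sketched in a few lines: propagate the set of states (and failure labels) consistent with the observed events, closing under unobservable failure transitions. The toy conveyor model below is an illustrative assumption and deliberately untimed, so it sketches the Sampath-style construction rather than the timed extension developed in the paper.

```python
# Minimal sketch of diagnoser-style reasoning for an untimed discrete
# event system: track the set of (state, failure-seen) pairs consistent
# with the observed event sequence. The toy conveyor model is hypothetical.

# transitions: state -> event -> next state; events starting with "f_"
# are unobservable failure events, everything else is observable.
transitions = {
    "idle":   {"load": "moving"},
    "moving": {"arrive": "idle", "f_jam": "stuck"},
    "stuck":  {"arrive": "idle"},   # a jammed belt still reports 'arrive', only later
}

def unobservable_closure(beliefs):
    """Extend each belief by any reachable unobservable (failure) events."""
    frontier, closed = list(beliefs), set(beliefs)
    while frontier:
        state, _failed = frontier.pop()
        for event, nxt in transitions.get(state, {}).items():
            if event.startswith("f_") and (nxt, True) not in closed:
                closed.add((nxt, True))
                frontier.append((nxt, True))
    return closed

def diagnose(observations):
    beliefs = unobservable_closure({("idle", False)})
    for obs in observations:
        step = {(transitions[s][obs], f) for s, f in beliefs
                if obs in transitions.get(s, {})}
        beliefs = unobservable_closure(step)
    verdicts = {f for _, f in beliefs}
    return "failure certain" if verdicts == {True} else \
           "no failure" if verdicts == {False} else "ambiguous"

print(diagnose(["load", "arrive"]))   # ambiguous: jam may or may not have occurred
```

In this untimed toy the jam remains ambiguous after 'arrive'; adding the timing information of the Brandin-Wonham framework (for example, clock ticks between 'load' and 'arrive') is exactly what would let a diagnoser separate the nominal and faulty behaviours.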


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1993

Dynamic network construction and updating techniques for the diagnosis of acute abdominal pain

Gregory M. Provan; John R. Clarke

Computing diagnoses in domains with continuously changing data is a difficult but essential aspect of solving many problems. To address this task, we describe a dynamic influence diagram (ID) construction and updating system (DYNASTY) and its application to constructing a decision-theoretic model for diagnosing acute abdominal pain, a domain in which the findings evolve during the diagnostic process. For a system that evolves over time, DYNASTY constructs a parsimonious ID and then dynamically updates it, rather than constructing a new network from scratch for every time interval. In addition, DYNASTY contains algorithms that test the sensitivity of the constructed networks to system parameters. The main contributions are: (1) an efficient temporal influence diagram technique based on parsimonious model construction; and (2) a formalization of the principles underlying a diagnostic tool for acute abdominal pain that explicitly models time-varying findings.
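The "update rather than rebuild" strategy is the architectural point here. The sketch below shows the idea at its simplest: a model object keeps persistent hypothesis nodes and is only extended with the finding nodes of each new time interval. The node names are hypothetical and no influence-diagram inference is shown; this is a shape-of-the-design sketch, not DYNASTY itself.

```python
# Toy illustration of incremental model extension across time intervals.
class TemporalModel:
    def __init__(self, diseases):
        self.diseases = set(diseases)   # persistent hypothesis nodes
        self.findings = {}              # time interval -> finding nodes

    def update(self, interval, new_findings):
        """Add only the nodes for the new interval; earlier slices are kept."""
        self.findings[interval] = set(new_findings)

    def node_count(self):
        return len(self.diseases) + sum(len(f) for f in self.findings.values())

model = TemporalModel(["appendicitis", "cholecystitis"])   # hypothetical nodes
model.update(0, ["rebound_tenderness"])
model.update(1, ["fever", "elevated_wbc"])                 # incremental extension
print(model.node_count())                                  # 5 nodes, no rebuild
```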


Journal of Theoretical Biology | 2010

Integrated stoichiometric, thermodynamic and kinetic modelling of steady state metabolism

Ronan M. T. Fleming; Ines Thiele; Gregory M. Provan; Heinz-Peter Nasheuer

The quantitative analysis of biochemical reactions and metabolites is at the frontier of the biological sciences. The recent availability of high-throughput data sets in biology has paved the way for new modelling approaches at various levels of complexity, including the metabolome of a cell or an organism. Understanding the metabolism of single-cell and multi-cell organisms will provide the knowledge needed for the rational design of growth conditions to produce commercially valuable reagents in biotechnology. Here, we demonstrate how equations representing steady state mass conservation, energy conservation, the second law of thermodynamics, and reversible enzyme kinetics can be formulated as a single system of linear equalities and inequalities, in addition to linear equalities on exponential variables. Even though the feasible set is non-convex, the reformulation is exact and amenable to large-scale numerical analysis, a prerequisite for computationally feasible genome-scale modelling. Integrating flux, concentration and kinetic variables in a unified constraint-based formulation is aimed at increasing the quantitative predictive capacity of flux balance analysis. Incorporation of experimental and theoretical bounds on thermodynamic and kinetic variables ensures that the predicted steady state fluxes are both thermodynamically and biochemically feasible. The resulting in silico predictions are tested against fluxomic data for central metabolism in Escherichia coli and compare favourably with in silico predictions by flux balance analysis.
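For reference, the flux balance analysis baseline that this integrated formulation extends is just a linear program: maximize a target flux subject to steady-state mass conservation S v = 0 and bounds on v. The sketch below solves a three-reaction toy network with SciPy; the stoichiometry, bounds and objective are illustrative assumptions, not the E. coli model used in the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Columns: uptake (A_ext -> A), conversion (A -> B), "biomass" drain (B ->)
S = np.array([
    [ 1, -1,  0],   # metabolite A balance
    [ 0,  1, -1],   # metabolite B balance
])
bounds = [(0, 10), (0, None), (0, None)]   # uptake capped at 10 units

# linprog minimizes, so negate the objective to maximize the biomass flux v3.
c = np.array([0.0, 0.0, -1.0])
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print(res.x)   # optimal flux vector: [10. 10. 10.]
```

The paper's contribution is to add thermodynamic and kinetic constraints on top of this feasible set, which makes the problem non-convex but, after the reformulation described above, still amenable to large-scale numerical analysis.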


Conference on Decision and Control | 1999

Model-based diagnosis and control reconfiguration for discrete event systems: an integrated approach

Gregory M. Provan; Yi-Liang Chen

We describe an approach to automating the dynamic computation of optimal control/reconfiguration actions that can achieve pre-specified control objectives. This approach, based on model-based diagnostic representations and algorithms, integrates diagnostics and control reconfiguration for discrete event systems using a single modeling mechanism and suite of algorithms. When the system functionality degrades (i.e., failures occur in the system), the diagnostic algorithm isolates the most likely failures, and the control mechanism then generates the least-cost actions that attempt to recover from the failure and maintain the control objectives. Results on the quality of the control actions generated and the complexity of computing these actions are also presented. We illustrate our approach using a simple wireless sensor network.
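The two-stage loop described above (isolate the most likely failure, then pick the cheapest action that recovers from it) can be sketched very simply. The component names, fault priors, symptom model and action costs below are hypothetical placeholders, not the models or algorithms from the paper.

```python
# Toy integration of diagnosis and least-cost reconfiguration.
fault_priors = {"sensor_3": 0.05, "relay_7": 0.02, "link_a": 0.01}

# Which faults can explain each observed symptom (a toy consistency model).
explains = {
    "no_reading":   {"sensor_3", "link_a"},
    "stale_packet": {"link_a", "relay_7"},
}

recovery = {   # action -> (cost, set of faults it recovers from)
    "reset_sensor": (1.0, {"sensor_3"}),
    "reroute":      (3.0, {"link_a", "relay_7"}),
    "replace_node": (9.0, {"sensor_3", "relay_7", "link_a"}),
}

def diagnose(symptoms):
    candidates = set.intersection(*(explains[s] for s in symptoms))
    return max(candidates, key=fault_priors.get)      # most likely single fault

def reconfigure(fault):
    options = [(cost, a) for a, (cost, covers) in recovery.items() if fault in covers]
    return min(options)[1]                            # least-cost recovery action

fault = diagnose(["no_reading", "stale_packet"])
print(fault, "->", reconfigure(fault))                # link_a -> reroute
```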


International Conference on Artificial Intelligence and Statistics | 1996

Learning Bayesian Networks Using Feature Selection

Gregory M. Provan; Moninder Singh

This paper introduces a novel enhancement for learning Bayesian networks with a bias for small, high-predictive-accuracy networks. The new approach selects a subset of features that maximizes predictive accuracy prior to the network learning phase. We examine explicitly the effects of two aspects of the algorithm, feature selection and node ordering. Our approach generates networks that are computationally simpler to evaluate and display predictive accuracy comparable to that of Bayesian networks which model all attributes.
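A minimal sketch of the two-stage idea, assuming a mutual-information ranking as the selection criterion (the paper's actual selection method and learner may differ): keep only the most informative attributes, then hand the reduced data set to the structure learner.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information between two discrete variables (in nats)."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Hypothetical rows: (attribute values..., class label)
data = [
    (1, 0, 1, "sick"), (1, 1, 0, "sick"), (1, 0, 0, "sick"),
    (0, 1, 1, "well"), (0, 0, 1, "well"), (0, 1, 0, "well"),
]
labels = [row[-1] for row in data]
k = 2
scores = [(mutual_information([row[i] for row in data], labels), i)
          for i in range(len(data[0]) - 1)]
selected = [i for _, i in sorted(scores, reverse=True)[:k]]
print("attributes kept for structure learning:", selected)
```

Pruning attributes before structure learning is what yields the smaller, cheaper-to-evaluate networks described above, at little cost in predictive accuracy.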


PLOS ONE | 2009

Codon Size Reduction as the Origin of the Triplet Genetic Code

Pavel V. Baranov; Maxime Venin; Gregory M. Provan

The genetic code appears to be optimized in its robustness to missense errors and frameshift errors. In addition, the genetic code is near-optimal in terms of its ability to carry information in addition to the sequences of encoded proteins. As evolution has no foresight, optimality of the modern genetic code suggests that it evolved from less optimal code variants. The length of codons in the genetic code is also optimal, as three is the minimal nucleotide combination that can encode the twenty standard amino acids. The apparent impossibility of transitions between codon sizes in a discontinuous manner during evolution has resulted in an unbending view that the genetic code was always triplet. Yet, recent experimental evidence on quadruplet decoding, as well as the discovery of organisms with ambiguous and dual decoding, suggest that the possibility of the evolution of triplet decoding from living systems with non-triplet decoding merits reconsideration and further exploration. To explore this possibility we designed a mathematical model of the evolution of primitive digital coding systems which can decode nucleotide sequences into protein sequences. These coding systems can evolve their nucleotide sequences via genetic events of Darwinian evolution, such as point-mutations. The replication rates of such coding systems depend on the accuracy of the generated protein sequences. Computer simulations based on our model show that decoding systems with codons of length greater than three spontaneously evolve into predominantly triplet decoding systems. Our findings suggest a plausible scenario for the evolution of the triplet genetic code in a continuous manner. This scenario suggests an explanation of how protein synthesis could be accomplished by means of long RNA-RNA interactions prior to the emergence of the complex decoding machinery, such as the ribosome, that is required for stabilization and discrimination of otherwise weak triplet codon-anticodon interactions.
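The full simulation is not reproducible in a few lines, but the selective pressure it rests on can be caricatured. The toy below is not the paper's model: the only heritable trait is codon length, fitness is negligible below length three (4^2 = 16 < 20 codons, so not all amino acids are encodable) and mildly penalizes longer codons, and mutation changes length by one. Under those assumed pressures a non-triplet population drifts toward triplet decoding.

```python
import random

def fitness(codon_len):
    if 4 ** codon_len < 20:        # cannot encode 20 amino acids
        return 0.01
    return 1.0 / codon_len         # assumed cost of longer, weaker codon pairing

def evolve(generations=200, pop_size=500, mut_rate=0.01, seed=1):
    rng = random.Random(seed)
    pop = [rng.choice([4, 5, 6]) for _ in range(pop_size)]   # start non-triplet
    for _ in range(generations):
        weights = [fitness(c) for c in pop]
        pop = rng.choices(pop, weights=weights, k=pop_size)  # selection
        pop = [max(1, c + rng.choice([-1, 1])) if rng.random() < mut_rate else c
               for c in pop]                                  # +/-1 length mutations
    return pop

pop = evolve()
print("fraction of triplet decoders:", sum(c == 3 for c in pop) / len(pop))
```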


Uncertainty in Artificial Intelligence | 1994

An experimental comparison of numerical and qualitative probabilistic reasoning

Max Henrion; Gregory M. Provan; Brendan Del Favero; Gillian D Sanders

Qualitative and infinitesimal probability schemes are consistent with the axioms of probability theory, but avoid the need for precise numerical probabilities. Using qualitative probabilities could substantially reduce the effort for knowledge engineering and improve the robustness of results. We examine experimentally how well infinitesimal probabilities (the kappa-calculus of Goldszmidt and Pearl) perform a diagnostic task -- troubleshooting a car that will not start -- by comparison with a conventional numerical belief network. We found the infinitesimal scheme to be as good as the numerical scheme in identifying the true fault. The performance of the infinitesimal scheme worsens significantly for prior fault probabilities greater than 0.03. These results suggest that infinitesimal probability methods may be of substantial practical value for machine diagnosis with small prior fault probabilities.
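A sketch of what the infinitesimal (kappa-calculus) side of the comparison looks like, under the common convention that a probability p is assigned the integer rank kappa = round(log_eps p). The car-fault priors and eps = 0.1 are illustrative assumptions, not the network from the study.

```python
import math

EPS = 0.1

def kappa(p):
    """Integer surprise rank: p is treated as being of order EPS**kappa."""
    return round(math.log(p) / math.log(EPS))

fault_priors = {"dead_battery": 0.05, "bad_starter": 0.01, "out_of_fuel": 0.002}

# Numerical ranking (by probability) vs. qualitative ranking (by kappa rank).
by_prob  = sorted(fault_priors, key=fault_priors.get, reverse=True)
by_kappa = sorted(fault_priors, key=lambda f: kappa(fault_priors[f]))
print("numerical :", by_prob)
print("kappa     :", by_kappa, [kappa(p) for p in fault_priors.values()])
```

The rounding also hints at where the scheme struggles: as prior fault probabilities rise toward the same order of magnitude, distinct faults collapse onto the same rank, which is one way to read the degradation reported above for priors greater than 0.03.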


International Journal of Approximate Reasoning | 1992

The validity of Dempster-Shafer belief functions

Gregory M. Provan

This reply to papers by Pearl and Shafer focuses on two issues underlying the debate on the validity of using Dempster-Shafer theory, namely the requirement of a process-independent semantics and the a priori need for multiple uncertainty calculi. Pearl shows deficiencies of Dempster-Shafer theory in dealing with several instances of commonsense reasoning in a process-independent manner. Although this argument is correct under the assumptions stated, it is weakened somewhat by introducing questions of whether a process-independent semantics is always necessary or desirable. Another issue underlying both papers, whether multiple uncertainty representations are necessary, is also discussed. Shafer claims that multiple uncertainty representations are necessary. He presents a goal of developing all uncertainty representations in parallel and defining domains in which each representation is best suited. In contrast, Pearl implicitly claims that probability theory alone is necessary, unless the use of another representation (such as Dempster-Shafer theory) is shown to be clearly advantageous. These two perspectives lead to different approaches to defining the form of uncertainty best modeled by Dempster-Shafer theory or any other uncertainty calculus.

Collaboration


Dive into Gregory M. Provan's collaborations.

Top Co-Authors

Alexander Feldman, Delft University of Technology
David Murphy, University College Cork
Ji Ma, Tyndall National Institute
Michael Hayes, Tyndall National Institute
Jun Wang, University College Cork
Arjan J. C. van Gemund, Delft University of Technology