Tim Bedford
University of Strathclyde
Publications
Featured research published by Tim Bedford.
Annals of Mathematics and Artificial Intelligence | 2001
Tim Bedford; Roger M. Cooke
A vine is a new graphical model for dependent random variables. Vines generalize the Markov trees often used in modeling multivariate distributions. They differ from Markov trees and Bayesian belief nets in that the concept of conditional independence is weakened to allow for various forms of conditional dependence. A general formula for the density of a vine dependent distribution is derived. This generalizes the well-known density formula for belief nets based on the decomposition of belief nets into cliques. Furthermore, the formula allows a simple proof of the Information Decomposition Theorem for a regular vine. The problem of (conditional) sampling is discussed, and Gibbs sampling is proposed to carry out sampling from conditional vine dependent distributions. The so-called ‘canonical vines’ built on highest degree trees offer the most efficient structure for Gibbs sampling.
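The abstract notes that canonical vines, built on highest-degree trees, are the most convenient structure for sampling. As a minimal sketch of that idea (not the paper's own algorithm or notation), the following samples a 3-dimensional canonical vine, assuming Gaussian pair-copulas with illustrative correlation parameters, by inverting the conditional distribution ("h") functions rather than by Gibbs sampling:

```python
import numpy as np
from scipy.stats import norm

def h(u, v, rho):
    # Conditional CDF ("h-function") of a bivariate Gaussian copula: P(U <= u | V = v).
    return norm.cdf((norm.ppf(u) - rho * norm.ppf(v)) / np.sqrt(1.0 - rho**2))

def h_inv(w, v, rho):
    # Inverse of h in its first argument, used to sample U given V = v.
    return norm.cdf(norm.ppf(w) * np.sqrt(1.0 - rho**2) + rho * norm.ppf(v))

def sample_canonical_vine(n, rho12, rho13, rho23_1, rng):
    # Canonical (C-)vine on 3 variables with root node 1: unconditional
    # pair-copulas on (1,2) and (1,3), and a conditional pair on (2,3 | 1).
    # Clip the uniforms away from {0, 1} for numerical safety.
    w1, w2, w3 = rng.uniform(1e-12, 1 - 1e-12, size=(3, n))
    u1 = w1
    u2 = h_inv(w2, u1, rho12)                 # invert C(u2 | u1)
    t = h_inv(w3, h(u2, u1, rho12), rho23_1)  # invert C(u3 | u1, u2), inner step
    u3 = h_inv(t, u1, rho13)                  # condition back on u1
    return u1, u2, u3
```

The sampled margins are uniform on (0, 1); transforming them with `norm.ppf` recovers the pairwise correlations specified for each copula.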
Journal of Risk Research | 2005
Tim Bedford; Elizabeth Atherton
Current guidance in the UK and elsewhere indicates upper and target risk limits for the operation of nuclear plant in terms of individual risk per annum. ‘As low as reasonably practicable’ (ALARP) arguments are used to justify the acceptance or rejection of policies that lead to risk changes between these limits. The suitability of cost‐benefit analysis (CBA) and multiattribute utility theory (MAUT) is assessed for performing ALARP assessments, in particular within the nuclear industry. Four problems stand out in current CBA applications to ALARP, concerning the determination of prices of safety gains or detriments, the valuation of group and individual risk, calculations using ‘disproportionality’, and the use of discounting to trade off risks through time. This last point has received less attention in the past but is important because of the growing interest in risk‐informed regulation in which policies extend over several timeframes and distribute the risk unevenly over these, or in policies that lead to a nonuniform risk within a single timeframe (such as maintenance policies). The problems associated with giving quantitative support to such decisions are discussed. It is argued that multiattribute utility methods provide an alternative methodology to CBA which enables the four problems described above to be addressed in a more satisfactory way. Through sensitivity analysis MAUT can address the perceptions of all stakeholder groups, facilitating constructive discussion and elucidating the key points of disagreement. It is also argued that by being explicitly subjective it provides an open, auditable and clear analysis in contrast to the illusory objectivity of CBA. CBA seeks to justify a decision by using a common basis for weights (prices), while MAUT recognizes that different parties may want to give different valuations.
It then allows the analyst to explore the ways in which different parties might (or might not) come to the same conclusion even when weighting items differently.
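The weight-sensitivity argument above can be made concrete with a toy additive MAUT model. All numbers below are invented for illustration: two hypothetical policies are scored on a safety attribute and a cost attribute, and sweeping the safety weight shows the point at which differently weighting stakeholders would reverse the ranking:

```python
# Hypothetical illustration of MAUT weight sensitivity (all numbers invented).
# Additive two-attribute utility: U = w * u_safety + (1 - w) * u_cost.

policies = {
    "upgrade_now":   {"u_safety": 0.9, "u_cost": 0.2},  # large risk reduction, expensive
    "defer_upgrade": {"u_safety": 0.4, "u_cost": 0.8},  # small risk reduction, cheap
}

def preferred(w_safety):
    # Return the policy with the highest additive utility for this weight.
    scores = {name: w_safety * a["u_safety"] + (1 - w_safety) * a["u_cost"]
              for name, a in policies.items()}
    return max(scores, key=scores.get)

# Sweeping the safety weight locates the disagreement point between
# stakeholders who value safety differently.
for w in (0.3, 0.5, 0.7):
    print(w, preferred(w))
```

With these invented scores the ranking flips as the safety weight rises, which is exactly the kind of disagreement a sensitivity analysis is meant to expose rather than hide behind a single price.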
Statistical Science | 2006
Tim Bedford; John Quigley; Lesley Walls
This paper reviews the role of expert judgement to support reliability assessments within the systems engineering design process. Generic design processes are described to give the context and a discussion is given about the nature of the reliability assessments required in the different systems engineering phases. It is argued that, as far as meeting reliability requirements is concerned, the whole design process is more akin to a statistical control process than to a straightforward statistical problem of assessing an unknown distribution. This leads to features of the expert judgement problem in the design context which are substantially different from those seen, for example, in risk assessment. In particular, the role of experts in problem structuring and in developing failure mitigation options is much more prominent, and there is a need to take into account the reliability potential for future mitigation measures downstream in the system life cycle. An overview is given of the stakeholders typically involved in large scale systems engineering design projects, and this is used to argue the need for methods that expose potential judgemental biases in order to generate analyses that can be said to provide rational consensus about uncertainties. Finally, a number of key points are developed with the aim of moving toward a framework that provides a holistic method for tracking reliability assessment through the design process.
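Approaches aiming at rational consensus about uncertainties often combine expert distributions by weighted pooling, with weights derived from performance on calibration questions. As a minimal sketch of the pooling step only (the weights and all distributions below are invented, not taken from the paper), a weighted linear opinion pool over a discretized failure-rate distribution looks like this:

```python
import numpy as np

# Minimal sketch of a weighted linear opinion pool: each expert supplies a
# discretized probability distribution over a grid of candidate failure
# rates, and a decision maker combines them with assumed (e.g.
# calibration-based) weights. Every number here is invented for illustration.

grid = np.linspace(0.001, 0.01, 10)             # candidate failure rates (per hour)

def normalize(p):
    p = np.asarray(p, dtype=float)
    return p / p.sum()

expert_pmfs = [
    normalize([1, 2, 4, 6, 4, 2, 1, 1, 1, 1]),  # expert A: mass on lower rates
    normalize([1, 1, 1, 2, 3, 5, 5, 3, 2, 1]),  # expert B: mass on higher rates
]
weights = [0.7, 0.3]                            # assumed performance-based weights

# The pool is the weighted mixture of the expert distributions.
pooled = normalize(sum(w * p for w, p in zip(weights, expert_pmfs)))
pooled_mean = float(np.dot(grid, pooled))
```

Because the pool is a mixture, its mean is the weight-averaged expert mean, so the combined assessment sits between the experts, closer to the more heavily weighted one.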
IEEE Transactions on Reliability | 2002
Roger M. Cooke; Tim Bedford
Concepts and methods used in designing modern RDBs (reliability data banks) are reviewed. Taxonomies for failures and maintenance actions are not fully standardized and can cause confusion. Section II summarizes current usages. The structure of the raw data is identified as component-socket histories with competing risks. Successive sections discuss data structure and operations on data, data analysis with and without competing risks, and the analysis of uncertainty. In the context of reliability data, the assumption that competing risks are s-independent is frequently unwarranted. Models for dependent competing risks are discussed, and illustrated with examples from pressure-relief-valve data at two Swedish nuclear-power stations.
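The warning that s-independence of competing risks is often unwarranted can be illustrated by simulation. The model below is invented (it is not the paper's valve data): preventive maintenance dependently preempts half of the failures partway through the component's life, and a naive exponential rate estimate that treats maintenance as independent censoring is then biased low:

```python
import numpy as np

# Invented illustration of dependent competing risks. True failure times are
# exponential with rate 1.0, but preventive maintenance dependently preempts
# half of the failures at 0.5 * (latent failure time).
rng = np.random.default_rng(1)
n = 20000
x = rng.exponential(1.0, n)              # latent failure times, true rate = 1.0
preempt = rng.uniform(size=n) < 0.5      # maintenance races the failure...
z = np.where(preempt, 0.5 * x, np.inf)   # ...and dependently wins half the time

t = np.minimum(x, z)                     # observed sojourn time in the socket
failed = x <= z                          # observed cause: failure vs maintenance

# Naive MLE that treats maintenance as independent censoring:
# rate_hat = (number of failures) / (total observed time).
rate_hat = failed.sum() / t.sum()
# Dependence makes this underestimate the true rate of 1.0 (here about 0.67),
# because only the observed (t, cause) data are available to the analyst.
```

The observable data cannot distinguish this dependent mechanism from an independent one, which is why dependent competing-risk models, as discussed in the paper, have to be brought in as modeling assumptions.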
Management Science | 2005
Bernd Kraan; Tim Bedford
Expert judgment is frequently used to assess parameter values of quantitative management science models, particularly in decision-making contexts. Experts can, however, only be expected to assess observable quantities, not abstract model parameters. This means that we need a method for translating expert-assessed uncertainties on model outputs into uncertainties on model parameter values. This process is called probabilistic inversion. The probability distribution on model parameters obtained in this way can be used in a variety of ways, but in particular in an uncertainty analysis or as a Bayes prior. This paper discusses computational algorithms that have proven successful in various projects and gives examples from environmental modelling and banking. Those algorithms are given a theoretical basis by adopting a minimum information approach to modelling partial information. The role of minimum information is twofold: it enables us to resolve the problem of nonuniqueness of distributions given the information we have, and it provides numerical stability to the algorithm by guaranteeing convergence properties.
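A common way to realize this idea, sketched below with an invented toy model and invented expert quantiles (this is a hedged illustration, not the paper's algorithm verbatim), is sample reweighting: draw parameters from a starting distribution, push them through the model, and then reweight the samples so the model output honours the expert-assessed quantiles. For interval constraints on a single output, the minimum-relative-information solution redistributes each interval's target probability uniformly over the samples falling in it:

```python
import numpy as np

# Probabilistic inversion by sample reweighting (invented toy model and
# invented expert quantiles). Parameters (a, b) are drawn from a starting
# distribution and pushed through the model y = a + b; the samples are then
# minimally reweighted so that y matches expert 5%/50%/95% assessments.
rng = np.random.default_rng(2)
n = 10000
a, b = rng.uniform(0, 1, (2, n))
y = a + b                                    # model output

quantile_edges = [0.5, 1.2, 1.7]             # expert 5%, 50%, 95% values (invented)
target = np.array([0.05, 0.45, 0.45, 0.05])  # inter-quantile probabilities

w = np.empty(n)
bins = np.digitize(y, quantile_edges)        # interval index 0..3 for each sample
for k in range(len(target)):
    idx = bins == k
    # Minimum-relative-information reweighting for interval constraints:
    # spread each interval's target mass uniformly over its samples.
    w[idx] = target[k] / idx.sum()

# w now defines a distribution over the (a, b) samples whose pushforward
# through the model honours the expert quantiles; it can serve as a Bayes
# prior or as input to an uncertainty analysis.
```

By construction the weighted output probabilities hit the expert intervals exactly; the induced joint distribution on (a, b) is then the inverted, minimally informative answer relative to the starting distribution.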
Ergodic Theory and Dynamical Systems | 1997
Tim Bedford; Albert M. Fisher
Given a $\mathcal{C}^{1+\gamma}$ hyperbolic Cantor set $C$, we study the sequence …
Reliability Engineering & System Safety | 2007
John Quigley; Tim Bedford; Lesley Walls
Archive | 1991
Tim Bedford
Advances in Applied Probability | 2004
Tim Bedford; Bo Henry Lindqvist
Reliability Engineering & System Safety | 2013
Athena Zitrou; Tim Bedford; Alireza Daneshkhah