Publications

Featured research published by Alireza Daneshkhah.


Reliability Engineering & System Safety | 2013

Robustness of maintenance decisions: Uncertainty modelling and value of information

Athena Zitrou; Tim Bedford; Alireza Daneshkhah

In this paper we show how sensitivity analysis for a maintenance optimisation problem can be undertaken by using the concept of expected value of perfect information (EVPI). This concept is important in a decision-theoretic context such as the maintenance problem, as it allows us to explore the effect of parameter uncertainty on the cost and the resulting recommendations. To reduce the computational effort required for the calculation of EVPIs, we have used Gaussian process (GP) emulators to approximate the cost rate model. Results from the analysis allow us to identify the most important parameters in terms of the benefit of 'learning', by focussing on the partial expected value of perfect information for a parameter. The analysis determines the optimal solution and the expected associated cost when the parameters are unknown or only partially known. This type of analysis can be used to ensure that both maintenance calculations and resulting recommendations are sufficiently robust.
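
The partial EVPI calculation described above is easy to sketch with Monte Carlo. The following is a minimal illustration, not the paper's code: the Weibull cost-rate toy model, the priors, and all numerical values are hypothetical, and the GP emulator that makes the real calculation affordable is replaced here by a cheap analytic stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the maintenance cost-rate model C(d, theta):
# long-run expected cost per unit time of replacing at age d, with
# Weibull(shape, scale) failure times and toy cost figures.
def cost_rate(d, shape, scale, cp=1.0, cf=5.0):
    t = np.linspace(1e-3, d, 200)
    R = np.exp(-(t / scale) ** shape)           # survival function
    return (cp * R[-1] + cf * (1.0 - R[-1])) / np.trapz(R, t)

decisions = np.linspace(1.0, 10.0, 30)           # candidate replacement ages
shapes = rng.uniform(1.5, 3.0, 200)              # prior draws for theta_1
scales = rng.uniform(4.0, 8.0, 200)              # prior draws for theta_2

def expected_costs(shape_draws, scale_draws):
    return np.array([np.mean([cost_rate(d, a, b)
                              for a, b in zip(shape_draws, scale_draws)])
                     for d in decisions])

# Expected cost of the best decision under current uncertainty.
baseline = expected_costs(shapes, scales).min()

# Partial EVPI for the shape parameter: average, over its prior, of the
# best achievable expected cost once shape is known exactly.
inner = np.array([expected_costs(np.full_like(scales, a), scales).min()
                  for a in shapes[:50]])         # outer loop thinned for speed
evpi_shape = baseline - inner.mean()
print(f"partial EVPI (shape): {evpi_shape:.4f}")
```

The GP emulator enters where cost_rate is called: when each evaluation of the true cost model is expensive, an emulator trained on a modest design of model runs is substituted for it inside the double loop.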


Risk Analysis | 2016

Approximate Uncertainty Modeling in Risk Analysis with Vine Copulas

Tim Bedford; Alireza Daneshkhah; Kevin J. Wilson

Many applications of risk analysis require us to jointly model multiple uncertain quantities. Bayesian networks and copulas are two common approaches to modeling joint uncertainties with probability distributions. This article focuses on new methodologies for copulas by developing work of Cooke, Bedford, Kurowicka, and others on vines as a way of constructing higher dimensional distributions that do not suffer from some of the restrictions of alternatives such as the multivariate Gaussian copula. The article provides a fundamental approximation result, demonstrating that we can approximate any density as closely as we like using vines. It further operationalizes this result by showing how minimum information copulas can be used to provide parametric classes of copulas that achieve such good levels of approximation. We extend previous approaches using vines by considering nonconstant conditional dependencies, which are particularly relevant in financial risk modeling. We discuss how such models may be quantified, in terms of expert judgment or by fitting data, and illustrate the approach by modeling two financial data sets.
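
A minimal worked example of the vine idea, in standard notation (this is the well-known three-variable D-vine decomposition, stated as background rather than quoted from the article): any three-dimensional density can be written as

\[
f(x_1,x_2,x_3) = \prod_{i=1}^{3} f_i(x_i)\; c_{12}\big(F_1(x_1),F_2(x_2)\big)\; c_{23}\big(F_2(x_2),F_3(x_3)\big)\; c_{13|2}\big(F(x_1|x_2),F(x_3|x_2)\big),
\]

so the joint distribution is assembled from univariate marginals and bivariate (conditional) copulas. The approximation result says that each bivariate factor c can itself be approximated arbitrarily well, for instance by a minimum information copula.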


Communications in Statistics - Simulation and Computation | 2016

Approximation Multivariate Distribution with Pair Copula Using the Orthonormal Polynomial and Legendre Multiwavelets Basis Functions

Alireza Daneshkhah; Golamali Parham; Omid Chatrabgoun; M. Jokar

We concentrate on constructing higher dimensional distributions using a fast-growing graphical model called the vine/pair-copula model, which has been introduced and developed by Joe, Cooke, Bedford, Kurowicka, Daneshkhah, and others. One first constructs an n-dimensional copula density by stacking together n(n − 1)/2 bivariate copula densities, and then approximates these bivariate copulas, and hence the corresponding multivariate distribution, arbitrarily well using a semi-parametric method. One constructive approach involves the use of minimum information copulas, which can be specified to any required degree of precision based on the available data (or possibly on experts' judgements). This method allows a fixed finite-dimensional family of copulas to be employed in a vine construction, with the promise of a uniform level of approximation. The basic idea is to approximate the log-density of any bivariate copula by a two-dimensional ordinary polynomial series, truncated at an appropriate point. We make this approximation more accurate and computationally faster by using orthonormal polynomials and the Legendre multiwavelets (LMW) series as the basis functions. We show that the derived approximations are more precise and computationally faster, with better properties, than the method previously proposed in the literature. We then apply our method to a dataset of Norwegian financial data that was previously analysed in a series of articles, and compare our results with theirs. Finally, we present a method to simulate from the approximated models, and validate the approximation by showing that the simulations recover the dependency structure of the original data.
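
The truncation idea can be stated compactly. As a hedged sketch in the spirit of the abstract (the exact basis and indexing used in the paper may differ), the log-density of a bivariate copula is expanded in a tensor product of basis functions \(\psi_0,\dots,\psi_N\) (orthonormal polynomials, or Legendre multiwavelets) and truncated:

\[
\log c(u,v) \approx \sum_{i=0}^{N}\sum_{j=0}^{N} a_{ij}\, \psi_i(u)\, \psi_j(v),
\]

with the coefficients \(a_{ij}\) fitted from data or expert judgements, and a normalisation chosen so the result integrates to one with uniform marginals. Orthonormality of the basis is what buys the accuracy and speed gains claimed over plain polynomial series.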


International Journal of Approximate Reasoning | 2010

On the robustness of Bayesian networks to learning from non-conjugate sampling

Jim Q. Smith; Alireza Daneshkhah

Recent results concerning the instability of Bayes factor search over Bayesian networks (BNs) lead us to ask whether learning the parameters of a selected BN might also depend heavily on the often rather arbitrary choice of prior density. Robustness of inferences to misspecification of the prior density would at least ensure that a selected candidate model would give similar predictions of future data points given somewhat different priors and a given large training data set. In this paper we derive new explicit total variation bounds on the calculated posterior density as a function of the closeness of the genuine prior to the approximating one used and of certain summary statistics of the calculated posterior density. We show that the approximating posterior density often converges to the genuine one as the number of sample points increases, and our bounds allow us to identify when it might not. To prove our general results we needed to develop a new family of distance measures called local DeRobertis distances. These provide coarse non-parametric neighbourhoods and allow us to derive elegant explicit posterior bounds in total variation. The bounds can be routinely calculated for BNs even when the sample has systematically missing observations and no conjugate analyses are possible.
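
The DeRobertis-style distances referred to here have a simple closed form. One common version, stated as background rather than quoted from the paper, compares two densities f and g over a set A by

\[
d_A(f,g) = \sup_{\theta,\phi \in A} \log \frac{f(\theta)\, g(\phi)}{f(\phi)\, g(\theta)},
\]

which measures how much the log-ratio \(\log f - \log g\) can vary across A. The "local" qualifier refers to taking A to be a small neighbourhood of the region where the posterior concentrates, which is why the resulting bounds stay informative as the sample size grows.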


Reliability Engineering & System Safety | 2014

Assessing parameter uncertainty on coupled models using minimum information methods

Tim Bedford; Kevin J. Wilson; Alireza Daneshkhah

Probabilistic inversion is used to take expert uncertainty assessments about observable model outputs and build from them a distribution on the model parameters that captures the uncertainty expressed by the experts. In this paper we look at ways to use minimum information methods to do this, focussing in particular on the problem of ensuring consistency between expert assessments about differing variables, either as outputs from a single model or potentially as outputs along a chain of models. The paper shows how such a problem can be structured and then illustrates the method with two examples: one involving failure rates of equipment in series systems, and the other atmospheric dispersion and deposition.
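
The minimum information machinery underlying this has a standard form, given here as background rather than as a formula quoted from the paper: among all densities f satisfying expectation constraints \(\int h_k(x)\, f(x)\, dx = c_k\), the one minimising the Kullback-Leibler divergence \(\int f \log(f/g)\, dx\) to a background density g is the exponential tilt

\[
f(x) \propto g(x)\, \exp\Big( \sum_k \lambda_k h_k(x) \Big),
\]

where the multipliers \(\lambda_k\) are chosen to satisfy the constraints. Probabilistic inversion then seeks a parameter distribution of this form whose induced distribution on the model outputs matches the expert-assessed quantities.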


Reliability Engineering & System Safety | 2017

Probabilistic sensitivity analysis of optimised preventive maintenance strategies for deteriorating infrastructure assets

Alireza Daneshkhah; Nigel G. Stocks; Paul Jeffrey

Efficient life-cycle management of civil infrastructure systems under continuous deterioration can be improved by studying the sensitivity of optimised preventive maintenance decisions with respect to changes in model parameters. Sensitivity analysis in maintenance optimisation problems is important because, if the calculation of the cost of preventive maintenance strategies is not sufficiently robust, the maintenance model can generate optimised maintenance strategies that are not cost-effective. Probabilistic sensitivity analysis methods (particularly variance-based ones) only partially address this issue, as their use is limited to evaluating the extent to which uncertainty in each input contributes to the overall output variance; they do not take account of the decision-making problem in a straightforward manner. To address this, we use the concept of the Expected Value of Perfect Information (EVPI) to perform decision-informed sensitivity analysis: to identify the key parameters of the problem and to quantify the value of learning about certain aspects of the life-cycle management of civil infrastructure systems. This approach allows us to quantify the benefits of the maintenance strategies in terms of expected costs and in the light of accumulated information about the model parameters and aspects of the system, such as the ageing process. We use a Gamma process model to represent the uncertainty associated with asset deterioration, illustrating the use of EVPI to perform sensitivity analysis on the optimisation problem for age-based and condition-based preventive maintenance strategies. The evaluation of EVPI indices is computationally demanding, and Markov chain Monte Carlo techniques would not be helpful; to overcome this computational difficulty, we approximate the EVPI indices using Gaussian process emulators. The implications of the worked numerical examples are discussed in the context of analytical efficiency and organisational learning.
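
To make the deterioration model concrete, here is a minimal simulation of a stationary Gamma process and its first passage over a failure threshold, the quantity that drives condition-based maintenance decisions. This is an illustrative sketch, not the paper's code; alpha, beta, the threshold, and the inspection step are all hypothetical values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stationary Gamma deterioration process: independent increments
# X(t+dt) - X(t) ~ Gamma(shape=alpha*dt, scale=beta).
alpha, beta = 0.8, 0.5                   # illustrative process parameters
dt, horizon, threshold = 0.25, 40.0, 10.0

def first_passage_time(n_paths=10_000):
    steps = int(horizon / dt)
    incr = rng.gamma(alpha * dt, beta, size=(n_paths, steps))
    paths = np.cumsum(incr, axis=1)      # monotone degradation paths
    crossed = paths >= threshold
    # time of the first inspection at which degradation exceeds the threshold
    t_hit = (np.argmax(crossed, axis=1).astype(float) + 1.0) * dt
    t_hit[~crossed.any(axis=1)] = np.inf # survived the whole horizon
    return t_hit

t_fail = first_passage_time()
print(f"median first-passage time: {np.median(t_fail):.2f}")
```

Expected cost rates for candidate inspection intervals can be estimated from such simulated first-passage times; the EVPI analysis wraps around exactly this kind of repeated, expensive computation, which is why emulation matters.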


Journal of Computational Science | 2017

Approximating non-Gaussian Bayesian networks using minimum information vine model with applications in financial modelling

Omid Chatrabgoun; Amin Hosseinian-Far; Victor Chang; Nigel G. Stocks; Alireza Daneshkhah

Many financial modeling applications require jointly modeling multiple uncertain quantities in order to produce more accurate probabilistic predictions of the near future. Informed decision making would certainly benefit from such predictions. Bayesian networks (BNs) and copulas are widely used for modeling numerous uncertain scenarios. Copulas, in particular, have attracted interest due to their ability to approximate the probability distribution of heavy-tailed data, which is frequently observed in financial applications. Standard multivariate copulas suffer from serious limitations that make them unsuitable for modeling financial data. An alternative copula model, the pair-copula construction (PCC) model, is more flexible and efficient for modeling the complex dependence of financial data. The only restriction of the PCC model is the challenge of selecting the best model structure. This issue can be tackled by capturing conditional independence using the Bayesian network PCC (BN-PCC): the flexible structure of this model can be derived from conditional independence statements learned from data. Additionally, the difficulty of computing conditional distributions in graphical models for non-Gaussian distributions can be eased using pair-copulas. In this paper, we extend this approach further using the minimum information vine model, which results in a more flexible and efficient approach to understanding the complex dependence between multiple variables with the heavy-tail dependence and asymmetric features that appear widely in financial applications.
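
The coupling of BNs and pair-copulas rests on a standard identity, given here as background notation rather than quoted from the paper: a BN factorises the joint density as

\[
f(x_1,\dots,x_n) = \prod_{i=1}^{n} f\big(x_i \mid \mathrm{pa}(x_i)\big),
\]

and each conditional factor can be peeled into bivariate copulas via the recursion

\[
f(x \mid v) = c_{x v_j \mid v_{-j}}\big(F(x \mid v_{-j}),\, F(v_j \mid v_{-j})\big)\; f(x \mid v_{-j}),
\]

where \(v_j\) is one conditioning variable and \(v_{-j}\) the rest. Only bivariate (conditional) copula densities ever need to be specified, and each of these can be a minimum information copula fitted to data.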


Archive | 2017

Sustainable Maintenance Strategy Under Uncertainty in the Lifetime Distribution of Deteriorating Assets

Alireza Daneshkhah; Amin Hosseinian-Far; Omid Chatrabgoun

In the life-cycle management of systems under continuous deterioration, studying the sensitivity of optimised preventive maintenance decisions with respect to changes in the model parameters is of great importance. If the calculations of the mean cost rates considered in the preventive maintenance policies are not sufficiently robust, the corresponding maintenance model can generate outcomes that are not robust, subsequently requiring costly interventions. This chapter presents a computationally efficient decision-theoretic sensitivity analysis for a maintenance optimisation problem for systems/structures/assets subject to measurable deterioration, using the Partial Expected Value of Perfect Information (PEVPI) concept. Furthermore, this sensitivity analysis approach provides a framework to quantify the benefits of the proposed maintenance/replacement strategies or inspection schedules in terms of their expected costs and in light of accumulated information about the model parameters and aspects of the system, such as the ageing process. We consider the random-variable model and the stochastic Gamma process model as two well-known probabilistic models for representing the uncertainty associated with asset deterioration. We illustrate the use of PEVPI to perform sensitivity analysis on a maintenance optimisation problem under two standard preventive maintenance policies, namely age-based and condition-based maintenance. The decision variable of the former policy is the time of replacement or repair, and the decision variables of the latter are the inspection time and the preventive maintenance ratio. These optimal strategies are determined by minimising the corresponding expected cost rates for the given deterioration-model parameters, total cost, and replacement or repair cost. Assessing how robust the optimised strategies are to changes in the models' parameters requires evaluating PEVPIs, which involves the computation of multi-dimensional integrals and is often computationally demanding; conventional numerical integration or Monte Carlo simulation techniques would not be helpful. To overcome this computational difficulty, we approximate the PEVPI using Gaussian process emulators.
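
For the age-based policy, the quantity being minimised has the classical renewal-reward form, stated here as a standard textbook formula (the chapter's exact cost model may differ): with preventive replacement cost \(c_p\), failure cost \(c_f > c_p\), and lifetime survival function \(R(t)\), the long-run expected cost per unit time of replacing at age \(\tau\) is

\[
C(\tau) = \frac{c_p R(\tau) + c_f \big(1 - R(\tau)\big)}{\int_0^{\tau} R(t)\, dt},
\]

and the optimal replacement age \(\tau^*\) minimises \(C(\tau)\). PEVPI then asks how much the achievable minimum expected cost would improve if individual parameters entering \(R(t)\) were known exactly.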


Hydrological Sciences Journal-journal Des Sciences Hydrologiques | 2016

Scale impacts on spatial variability in reference evapotranspiration

Tim Hess; A. Daccache; Alireza Daneshkhah; Jerry W. Knox

Evapotranspiration (ET) is one of the most important components in the hydrological cycle, and a key variable in hydrological modelling and water resources management. However, understanding the impacts of spatial variability in ET, and the appropriate scale at which ET data should be incorporated into hydrological models, particularly at the regional scale, is often overlooked. This is in contrast to dealing with the spatial variability in rainfall data, for which existing guidance is widely available. This paper assesses the impacts of scale on the estimation of reference ET (ETo) by comparing data from individual weather stations against values derived from three national datasets at varying resolutions: the UK Climate Impacts Programme 50 km climatology (UKCP50), the UK Met Office 5 km climatology (UKMO5), and the regional values published in the Agricultural Climate of England and Wales (ACEW). The national datasets were compared against the individual weather station data, and the UKMO5 was shown to provide the best estimate of ETo at a given site. The potential impacts on catchment modelling were then considered by mapping variance in ETo to show how geographical location and catchment size can have a major impact, with small lowland catchments having much higher variance than those with much larger areas or in the uplands. Some important implications for catchment hydrological modelling are highlighted.
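
For context, reference evapotranspiration is conventionally computed from weather variables with the FAO-56 Penman-Monteith equation, given here as standard background (the paper's exact formulation is not reproduced above):

\[
ET_o = \frac{0.408\, \Delta (R_n - G) + \gamma \frac{900}{T + 273}\, u_2 (e_s - e_a)}{\Delta + \gamma (1 + 0.34\, u_2)},
\]

with \(ET_o\) in mm per day, net radiation \(R_n\) and soil heat flux \(G\) in MJ per square metre per day, mean air temperature \(T\) in degrees Celsius, wind speed \(u_2\) at 2 m height, vapour pressures \(e_s, e_a\) in kPa, and \(\Delta, \gamma\) the slope of the saturation vapour-pressure curve and the psychrometric constant. Spatial variability in these weather inputs is what drives the variance in ETo mapped in the paper.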


Archive | 2004

Hypercausality, Randomisation, and Local and Global Independence

Alireza Daneshkhah; Jim Q. Smith

In this paper we define the multicausal essential graph. Such a graphical model demands further properties of an equivalence class of Bayesian networks (BNs). In particular, each BN in an equivalence class is assumed to be causal in a sense stronger than the manipulation causality of Spirtes et al. (1993). In practice, the probabilities of any causal Bayesian network (CBN) will usually need to be estimated, and to estimate the conditional probabilities in BNs it is common to assume local and global independence. The first result we prove in this paper is that if the BN is believed to be causal in a sufficiently strong sense, i.e., is hypercausal, then it is necessary to assume local and global prior independence. Our second theorem develops this result. We give a characterisation of prior distributions sympathetic to causal hypotheses on all BNs in the equivalence class defined by an essential graph. We show that such strongly causally compatible priors satisfy a generalisation of the Geiger and Heckerman (1997) condition. In the special case when the essential graph is undirected, this family of prior distributions reduces to the Hyper-Dirichlet family, originally introduced by Dawid and Lauritzen (1993) as a prior family for decomposable graphical models.
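
The two independence assumptions at issue have standard definitions, stated here in the usual notation as background rather than quoted from the paper: writing \(\theta_i\) for the parameters of the conditional distribution of variable \(X_i\) given its parents, and \(\theta_{ij}\) for those attached to the j-th parent configuration, global parameter independence requires

\[
p(\theta_1,\dots,\theta_n) = \prod_{i=1}^{n} p(\theta_i),
\]

and local parameter independence additionally requires, for each i,

\[
p(\theta_i) = \prod_{j} p(\theta_{ij}).
\]

The paper's first theorem shows these are not merely convenient modelling choices: a sufficiently strong causal reading of the network forces them.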

Collaboration


Dive into Alireza Daneshkhah's collaborations.

Top co-authors:

Tim Bedford (University of Strathclyde)
John Quigley (University of Strathclyde)
Kevin J. Wilson (University of Strathclyde)
Lesley Walls (University of Strathclyde)
Gavin Hardman (University of Strathclyde)