Publications

Featured research published by Philipp Limbourg.


International Conference on Evolutionary Multi-Criterion Optimization | 2005

Multi-objective optimization of problems with epistemic uncertainty

Philipp Limbourg

Multi-objective evolutionary algorithms (MOEAs) have proven to be a powerful tool for global optimization of deterministic problem functions. Yet in many real-world problems, uncertainty about the correctness of the system model and environmental factors does not allow clear objective values to be determined. Stochastic sampling as applied in noisy EAs neglects that this so-called epistemic uncertainty is not an inherent property of the system and cannot be reduced by sampling methods. Therefore, some extensions enabling MOEAs to handle epistemic uncertainty in objective functions are proposed. The extensions are generic and applicable to most common MOEAs. A density measure for uncertain objectives is proposed to maintain diversity in the non-dominated set. The approach is demonstrated on the reliability optimization problem, where uncertain component failure rates are common and exhaustive tests are often impossible for time and budget reasons.
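One common way to compare solutions whose objective values are only known as epistemic intervals is certain (interval) dominance: a solution dominates another only if it is at least as good across the whole interval range. The sketch below is an illustrative minimal implementation of that idea, not the paper's specific extension; the designs `x` and `y` and their intervals are hypothetical.

```python
def interval_dominates(a, b):
    """True if interval objective vector `a` certainly dominates `b`
    under minimization: every upper bound of `a` is <= the matching
    lower bound of `b`, strictly lower in at least one objective.
    Each objective is given as an (lo, hi) interval."""
    at_least_one_strict = False
    for (a_lo, a_hi), (b_lo, b_hi) in zip(a, b):
        if a_hi > b_lo:          # a might be worse in this objective
            return False
        if a_hi < b_lo:          # a is certainly better here
            at_least_one_strict = True
    return at_least_one_strict

# Hypothetical designs with epistemic intervals on (cost, failure rate):
x = [(2.0, 3.0), (0.01, 0.02)]
y = [(3.5, 4.0), (0.03, 0.05)]
print(interval_dominates(x, y))  # True: x is certainly better in both
```

With overlapping intervals neither solution dominates, which is exactly why a dedicated density measure is needed to keep the non-dominated set diverse.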


Reliability Engineering & System Safety | 2010

Uncertainty analysis using evidence theory – confronting level-1 and level-2 approaches with data availability and computational constraints

Philipp Limbourg; Etienne de Rocquigny

The Dempster–Shafer Theory of Evidence (DST), as an alternative or complementary approach to the representation of uncertainty, is gradually being explored in complex practical applications beyond purely algebraic examples. This paper reviews literature documenting such complex applications and studies DST's applicability from the point of view of the nature and amount of data typically available in industrial risk analysis: medium-size frequential observations for aleatory components, small noisy datasets for model parameters, and expert judgment for other components. On the basis of a simple flood model encoding typical risk analysis features, different approaches to quantifying uncertainty in DST are reviewed and benchmarked from that perspective: (i) combining all sources of uncertainty in a single-level DST model; (ii) separating aleatory and epistemic uncertainties, modeled respectively with a first probabilistic layer and a second layer under DST. Methods for handling data in probabilistic studies, such as Kolmogorov–Smirnov tests and quantile–quantile plots, are transferred to the domain of DST. We illustrate how data availability guides the choice of settings and how results and sensitivity analyses can be interpreted in the domain of DST, concluding with recommendations for industrial practice.
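The core DST quantities can be illustrated with a few lines of code. Given a body of evidence on the real line (a list of focal intervals with masses), belief of an event sums the masses of focal elements contained in it, while plausibility sums the masses of those merely intersecting it. The sketch below is a minimal standalone illustration; the flood-level intervals and masses are invented, not taken from the paper.

```python
def bel_pl(focal_elements, event):
    """Belief and plausibility of the interval event [e_lo, e_hi]
    given a body of evidence: a list of ((lo, hi), mass) pairs.
    Bel sums masses of focal elements fully inside the event;
    Pl sums masses of focal elements that intersect it."""
    e_lo, e_hi = event
    bel = sum(m for (lo, hi), m in focal_elements
              if lo >= e_lo and hi <= e_hi)
    pl = sum(m for (lo, hi), m in focal_elements
             if hi >= e_lo and lo <= e_hi)
    return bel, pl

# Invented expert-elicited intervals for a flood level (m), with masses:
evidence = [((0.0, 2.0), 0.5), ((1.0, 3.0), 0.3), ((2.5, 4.0), 0.2)]
print(bel_pl(evidence, (0.0, 2.0)))  # (0.5, 0.8)
```

The gap between Bel and Pl (here 0.5 vs 0.8) is the epistemic imprecision that a single probability number would hide.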


Reliability Engineering & System Safety | 2008

Multi-objective optimization of generalized reliability design problems using feature models—A concept for early design stages

Philipp Limbourg; Hans-Dieter Kochs

Reliability optimization problems such as the redundancy allocation problem (RAP) have been of considerable interest in the past. However, due to the restrictions of the design space formulation, they may not be applicable to all practical design problems. A method with high modelling freedom for rapid design screening is desirable, especially in early design stages. This work presents a novel approach to reliability optimization. Feature modelling, a specification method originating from software engineering, is applied for the fast specification and enumeration of complex design spaces. It is shown how feature models can describe not only arbitrary RAPs but also much more complex design problems. The design screening is accomplished by a multi-objective evolutionary algorithm for probabilistic objectives. Comparing averages or medians may hide the true characteristics of these distributions. Therefore, the algorithm uses solely the probability of one system dominating another to obtain the Pareto-optimal set. We illustrate the approach by specifying a RAP and a more complex design space and screening them with the evolutionary algorithm.
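The dominance probability mentioned above can be estimated by sampling: draw one objective vector per design from its distribution and count how often one Pareto-dominates the other. The sketch below shows that idea in its simplest form; the two Gaussian designs are hypothetical stand-ins, not the paper's RAP instances.

```python
import random

def p_dominates(sample_a, sample_b, trials=10000):
    """Monte Carlo estimate of P(A dominates B) under minimization,
    where sample_a/sample_b each draw one noisy objective vector."""
    wins = 0
    for _ in range(trials):
        a, b = sample_a(), sample_b()
        better_eq = all(ai <= bi for ai, bi in zip(a, b))
        strictly = any(ai < bi for ai, bi in zip(a, b))
        wins += better_eq and strictly
    return wins / trials

# Hypothetical designs with noisy (cost, unavailability) evaluations:
design_a = lambda: (random.gauss(2.0, 0.1), random.gauss(0.020, 0.005))
design_b = lambda: (random.gauss(2.5, 0.1), random.gauss(0.030, 0.005))
print(p_dominates(design_a, design_b))  # high, around 0.9
```

Unlike comparing means, this estimate reflects the overlap of the full objective distributions.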


Archive | 2008

Summary, Conclusions and Outlook

Philipp Limbourg

To meet the demand for integrated dependability assessment raised by mechatronic projects, dependability predictions, especially for early design stages, must be developed and refined. These methods need to consider the various sources that introduce a large amount of uncertainty into the prediction. As a consequence, predictions must include uncertainty models. Besides probability theory, several other frameworks for representing and propagating uncertainty exist. As it is an open discussion whether or not the probabilistic representation is preferable for modeling epistemic uncertainty, other representations need to be evaluated.


Reliability Engineering & System Safety | 2010

Accelerated uncertainty propagation in two-level probabilistic studies under monotony

Philipp Limbourg; Etienne de Rocquigny; Guennadi Andrianov

Double-level probabilistic uncertainty models that separate aleatory and epistemic components enjoy significant interest in risk assessment. But the high computational costs associated with calculating rare failure probabilities remain a large obstacle in practice. Accurately computing a risk lower than 10⁻³ with 95% epistemic confidence usually requires 10⁷–10⁸ runs in a brute-force double Monte Carlo. For single-level probabilistic studies, FORM (First-Order Reliability Method) is a classical recipe allowing fast approximation of failure probabilities, while MRM (Monotonous Reliability Method) recently proved an attractive, robust alternative under monotony. This paper extends these methods to double-level probabilistic models through two novel algorithms designed to compute a set of failure probabilities or an aleatory risk level with an epistemic confidence quantile. The first, L2-FORM (level-2 FORM), allows a rapid approximation of the failure probabilities through a combination of FORM with new ideas exploiting similarity between computations. L2-MRM (level-2 MRM), a quadrature approach, provides 100%-guaranteed error bounds on the results. Experiments on three flood prediction problems showed that both algorithms approximate a set of 500 failure probabilities of 10⁻³–10⁻² or derived 95% epistemic quantiles with a total of only 500–1000 function evaluations, outperforming importance sampling, iterative FORM and regression-spline metamodels.
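The brute-force baseline these algorithms accelerate is easy to state in code: an outer loop samples the epistemic parameters, an inner loop estimates the aleatory failure probability for each, and an epistemic quantile is read off the resulting set. The toy flood model below (exponential flow, uniform epistemic coefficient, fixed dike height) is invented for illustration only.

```python
import random

def two_level_mc(n_epistemic=200, n_aleatory=2000, seed=0):
    """Brute-force double Monte Carlo on a toy flood model:
    failure when water level a * flow exceeds the dike height.
    Outer loop: epistemic uncertainty on coefficient a;
    inner loop: aleatory variability of the flow."""
    rng = random.Random(seed)
    dike = 5.0
    p_fail = []
    for _ in range(n_epistemic):
        a = rng.uniform(0.8, 1.2)                      # epistemic draw
        fails = sum(a * rng.expovariate(1 / 2.5) > dike  # aleatory flow
                    for _ in range(n_aleatory))
        p_fail.append(fails / n_aleatory)
    p_fail.sort()
    return p_fail[int(0.95 * n_epistemic)]             # 95% epistemic quantile

print(two_level_mc())  # roughly 0.18 for this toy model
```

Even this toy run costs 400,000 model evaluations, which makes the 500–1000 evaluations reported for L2-FORM and L2-MRM tangible.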


Quantitative Evaluation of Systems | 2009

COBAREA: The COpula-BAsed REliability and Availability Modeling Environment

Max Walter; Sebastian Esch; Philipp Limbourg

As an alternative to fault trees that is able to deal with inter-component dependencies, we present a tool based on copulas. Copulas are a way of specifying joint distributions when only the marginal probabilities are known. In terms of system reliability, this can be interpreted as inferring the system state vector probability from the component state probabilities. What makes copulas a valuable modeling method for large reliability models is the separation of the component distributions (the marginals) from the dependencies. Therefore, copulas can be used with arbitrary fault tree evaluation algorithms.
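The effect of coupling marginals through a copula can be shown in a few lines. Using a standard Clayton copula (one of many families; the abstract does not say which COBAREA supports), the joint probability that two dependent components both fail can be far larger than the independence assumption suggests. The marginal failure probabilities below are invented.

```python
def clayton_copula(u, v, theta=2.0):
    """Clayton copula C(u, v): a joint CDF value built from the two
    marginal probabilities u and v; theta > 0 controls the strength
    of the (lower-tail) dependence."""
    return (u**-theta + v**-theta - 1.0)**(-1.0 / theta)

# Invented marginal failure probabilities of two components:
u, v = 0.01, 0.02
independent = u * v               # joint failure if independent: 2e-4
dependent = clayton_copula(u, v)  # joint failure under dependence
print(independent, dependent)     # dependence inflates the joint risk
```

This is exactly the failure mode of naive fault-tree multiplication: assuming independence can underestimate common-cause failure risk by more than an order of magnitude.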


Archive | 2008

Representation and Propagation of Uncertainty Using the Dempster-Shafer Theory of Evidence

Philipp Limbourg

This chapter collects fragments from current research to form a coherent framework for uncertainty representation and propagation in the Dempster-Shafer theory of evidence (DST). While some aspects of DST have been brought into practice, many other methods important in probabilistic modeling, such as uncertainty measures and sensitivity analyses, have only been proposed but not transferred to real-world problems. Especially in the field of dependability prediction, the application of DST is still in its fledgling stages. Hence, the purpose of this chapter is not only to describe the state of the art but also to present and introduce new tools necessary for a DST-based uncertainty analysis in a dependability scenario. All methods described are implemented in an open source toolbox (Imprecise Probability Toolbox for MATLAB, [103]).
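The basic propagation step in a DST analysis maps each focal element through the model. For a monotonically increasing scalar model this reduces to mapping interval endpoints, which the sketch below illustrates (in Python rather than the MATLAB toolbox the chapter refers to); the failure-rate intervals and the exponential unreliability model are invented examples.

```python
import math

def propagate_focal(focal_elements, f):
    """Propagate a DST body of evidence through a monotonically
    increasing model f by mapping each focal interval's endpoints.
    Returns the focal elements of the output quantity."""
    return [((f(lo), f(hi)), m) for (lo, hi), m in focal_elements]

# Toy model: constant failure rate (per hour) -> 1-year unreliability
unrel = lambda lam: 1 - math.exp(-lam * 8760)

# Invented focal intervals on the failure rate, with masses:
evidence = [((1e-6, 5e-6), 0.6), ((2e-6, 1e-5), 0.4)]
for (lo, hi), m in propagate_focal(evidence, unrel):
    print(f"[{lo:.4f}, {hi:.4f}]  mass {m}")
```

For non-monotone models the same idea needs interval optimization over each focal element instead of plain endpoint evaluation, which is where most of the computational cost of DST propagation comes from.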


Archive | 2008

Predicting Dependability Characteristics by Similarity Estimates – A Regression Approach

Philipp Limbourg

The presumption behind similarity estimation is that companies do not solely create independent, completely innovative new products. Product families with a number of variants are much more common. If products are re-used, dependability knowledge should be re-used as well. If already acquired data is efficiently included in dependability predictions of new products, this may lead to a drastically improved accuracy/cost ratio for dependability prediction. These thoughts spurred the development of the similarity estimation process described here.


Archive | 2008

Design Space Specification of Dependability Optimization Problems Using Feature Models

Philipp Limbourg

Predicting EDS dependability is not an end in itself. One of the main reasons for quantifying the dependability of a system is the need for comparable values to find an appropriate system design. Dependability-driven system optimization has long been a topic of considerable interest. Various objectives such as the dependability constituents reliability and safety, cost, weight and prediction uncertainty have been chosen to rate systems. Many search heuristics such as evolutionary algorithms [30], tabu search [96] and physical programming [161] have been applied to screen the design space for interesting systems with regard to objectives such as dependability and cost (a more detailed overview is given in section 5.1).


Archive | 2008

Evolutionary Multi-objective Optimization of Imprecise Probabilistic Models

Philipp Limbourg

Multi-objective evolutionary algorithms (MOEAs) have become increasingly popular in recent years for discovering optimal or near-optimal solutions to design problems with high complexity and several objectives [26]. Hence, MOEAs seem to be an interesting option for dependability optimization in an EDS. However, most MOEAs are designed for deterministic problems, requiring that an evaluation of the objectives for a solution results in a deterministic objective vector without uncertainty.

Collaboration

Top co-authors of Philipp Limbourg:

Hans-Dieter Kochs (University of Duisburg-Essen)
Klaus Echtle (University of Duisburg-Essen)
Ekaterina Auer (University of Duisburg-Essen)
Felix Salfner (Humboldt University of Berlin)
Gabor Rebner (University of Duisburg-Essen)
Wolfram Luther (University of Duisburg-Essen)