
Publication


Featured research published by Florian Siegmund.


Winter Simulation Conference | 2012

Reference point-based evolutionary multi-objective optimization for industrial systems simulation

Florian Siegmund; Jacob Bernedixen; Leif Pehrsson; Amos H. C. Ng; Kalyanmoy Deb

In Multi-objective Optimization the goal is to present a set of Pareto-optimal solutions to the decision maker (DM), who then chooses one of these solutions according to his or her preferences. Given that the DM has some general idea of what type of solution is preferred, a more efficient optimization can be run. This can be accomplished by letting the optimization algorithm make use of this preference information and guide the search towards solutions that correspond to the preferences. One example of such an algorithm is the Reference point-based NSGA-II algorithm (R-NSGA-II), in which user-specified reference points guide the search in the objective space and the diversity of the focused Pareto-set can be controlled. In this paper, the applicability of the R-NSGA-II algorithm to industrial-scale simulation-based optimization problems is illustrated through a case study on the improvement of a production line.
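The guidance mechanism summarized in this abstract can be pictured with a minimal sketch (not the paper's implementation; the candidate set and reference point below are hypothetical): within a non-dominated front, R-NSGA-II prefers solutions closer to the decision maker's reference point in objective space.

```python
import math

def ref_point_distance(objectives, reference_point):
    """Euclidean distance between a solution's objective vector
    and the decision maker's reference point."""
    return math.sqrt(sum((o - r) ** 2 for o, r in zip(objectives, reference_point)))

# Hypothetical two-objective candidate set and a DM reference point.
candidates = [(0.2, 0.9), (0.5, 0.5), (0.9, 0.1)]
reference = (0.4, 0.4)

# Rank the solutions of one front by their reference-point distance;
# the nearest solution is the most preferred one.
ranked = sorted(candidates, key=lambda s: ref_point_distance(s, reference))
print(ranked[0])  # (0.5, 0.5) is nearest to the reference point
```

In the full algorithm this distance acts only as a secondary criterion, after non-dominated sorting.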


Congress on Evolutionary Computation | 2012

Finding a preferred diverse set of Pareto-optimal solutions for a limited number of function calls

Florian Siegmund; Amos H. C. Ng; Kalyanmoy Deb

Evolutionary Multi-objective Optimization aims at finding a diverse set of Pareto-optimal solutions, from which the decision maker can choose the solution that best fits his or her preferences. When the time (number of function evaluations) available for optimization is limited, this preference information can be used to speed up the search by making the algorithm focus directly on interesting areas of the objective space. The R-NSGA-II algorithm [1] guides the search towards reference points specified according to the preferences of the user. In this paper, we propose an extension to R-NSGA-II that limits the Pareto-fitness to speed up the search for a limited number of function calls. It avoids automatically selecting all solutions of the first front of the candidate set into the next population. In this way, non-preferred Pareto-optimal solutions are not considered, thereby accelerating the search process. With focusing comes the necessity to maintain diversity. In R-NSGA-II this is achieved with the help of a clustering algorithm which keeps the found solutions above a minimum distance ε. In this paper, we propose a self-adaptive ε approach that autonomously provides the decision maker with a more diverse solution set when the found Pareto-set is situated further away from a reference point. Similarly, the approach also varies the diversity within the Pareto-set. This helps the decision maker to get a better overview of the available solutions and supports decisions about how to adapt the reference points.
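The ε-clustering idea mentioned in this abstract can be sketched as a greedy filter (a simplified illustration, not the paper's self-adaptive variant; the points and ε value are invented): a solution is kept only if it lies at least ε away from every solution kept so far.

```python
import math

def epsilon_cluster(solutions, eps):
    """Greedy epsilon-clustering: keep a solution only if it is at
    least eps away (in objective space) from every kept solution,
    enforcing a minimum diversity in the selected set."""
    kept = []
    for s in solutions:
        if all(math.dist(s, k) >= eps for k in kept):
            kept.append(s)
    return kept

# A hypothetical front containing one near-duplicate point.
points = [(0.0, 1.0), (0.02, 0.99), (0.5, 0.5), (1.0, 0.0)]
print(epsilon_cluster(points, eps=0.1))  # the near-duplicate is filtered out
```

The paper's self-adaptive version would additionally scale ε with the distance between the found set and the reference point, a detail omitted here.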


International Conference on Evolutionary Multi-Criterion Optimization | 2015

Hybrid Dynamic Resampling for Guided Evolutionary Multi-Objective Optimization

Florian Siegmund; Amos H. C. Ng; Kalyanmoy Deb

In Guided Evolutionary Multi-objective Optimization the goal is to find a diverse, but locally focused non-dominated front in a decision maker’s area of interest, as close as possible to the true Pareto-front. The optimization can focus its efforts on the preferred area and achieve a better result [7, 9, 13, 17]. The modeled and simulated systems are often stochastic, and a common method to handle the objective noise is resampling. The given preference information allows better resampling strategies to be defined, which further improve the optimization result. In this paper, resampling strategies are proposed that base the sampling allocation on multiple factors, thereby combining multiple resampling strategies proposed by the authors in [15]. These factors are, for example, the Pareto-rank of a solution and its distance to the decision maker’s area of interest. The proposed hybrid Dynamic Resampling strategy, DR2, is evaluated on the Reference point-guided NSGA-II optimization algorithm (R-NSGA-II) [9].


Congress on Evolutionary Computation | 2013

A comparative study of dynamic resampling strategies for guided Evolutionary Multi-objective Optimization

Florian Siegmund; Amos H. C. Ng; Kalyanmoy Deb

In Evolutionary Multi-objective Optimization many solutions have to be evaluated to provide the decision maker with a diverse choice of solutions along the Pareto-front, in particular for high-dimensional optimization problems. In Simulation-based Optimization the modeled systems are complex and require long simulation times. In addition, the evaluated systems are often stochastic, and reliable quality assessment of system configurations by resampling requires many simulation runs. As a countermeasure to the high number of simulation runs caused by multiple optimization objectives, the optimization can be focused on interesting parts of the Pareto-front, as is done by the Reference point-guided NSGA-II algorithm (R-NSGA-II) [9]. The number of evaluations needed for the resampling of solutions can be reduced by intelligent resampling algorithms that allocate only as much sampling budget as is needed in different situations during the optimization run. In this paper we propose and compare resampling algorithms that support the R-NSGA-II algorithm on optimization problems with stochastic evaluation functions.
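As a rough illustration of a dynamic sampling allocation (only one of the strategy families such comparisons cover; the budget bounds below are invented for the example), a time-based rule spends few samples per solution early in the run and more towards the end, when accurate final comparisons matter most.

```python
def time_based_samples(elapsed_fraction, b_min=1, b_max=15):
    """Time-based Dynamic Resampling sketch: the per-solution sampling
    budget grows linearly from b_min to b_max as the optimization run
    progresses (elapsed_fraction in [0, 1])."""
    b = b_min + elapsed_fraction * (b_max - b_min)
    return min(b_max, max(b_min, round(b)))

# Budget at the start, middle, and end of a hypothetical run.
for t in (0.0, 0.5, 1.0):
    print(t, time_based_samples(t))
```

Other allocation criteria compared in this line of work (Pareto-rank, reference-point distance, observed variance) would replace or modulate `elapsed_fraction` in the same scheme.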


Swarm, Evolutionary and Memetic Computing | 2014

R-HV: A Metric for Computing Hyper-volume for Reference Point Based EMOs

Kalyanmoy Deb; Florian Siegmund; Amos H. C. Ng

For evaluating the performance of a multi-objective optimization algorithm in finding the entire efficient front, a number of metrics exist, such as hypervolume and inverse generational distance. However, for evaluating an EMO algorithm in finding a subset of the efficient frontier, the existing metrics are inadequate; few performance metrics exist for evaluating a partial preferred efficient set. In this paper, we suggest a metric which can be used for such purposes with both attainable and unattainable reference points. Results on a number of two-objective problems reveal its working principle and its importance in assessing different algorithms. The results are promising and encouraging for its further use.
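R-HV builds on the standard hypervolume. For context, a minimal two-objective hypervolume computation for a minimization problem is shown below; this is the plain metric, not the R-HV modification proposed in the paper.

```python
def hypervolume_2d(front, ref):
    """Standard 2-D hypervolume for minimization: the area dominated
    by the non-dominated front and bounded by the reference point ref
    (here a nadir-style bounding point, not a DM preference point)."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):  # ascending f1 => descending f2 on a front
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

# Three mutually non-dominated points; dominated rectangles of area 3 + 2 + 1.
print(hypervolume_2d([(1, 3), (2, 2), (3, 1)], ref=(4, 4)))  # 6.0
```

For more than two objectives the computation is substantially harder; the rectangle sweep above only works in 2-D.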


European Conference on Applications of Evolutionary Computation | 2016

Hybrid Dynamic Resampling Algorithms for Evolutionary Multi-objective Optimization of Invariant-Noise Problems

Florian Siegmund; Amos H. C. Ng; Kalyanmoy Deb

In Simulation-based Evolutionary Multi-objective Optimization (EMO) the available time for optimization is usually limited. Since many real-world optimization problems are stochastic models, the optimization algorithm has to employ a noise compensation technique for the objective values. This article analyzes Dynamic Resampling algorithms for handling the objective noise. Dynamic Resampling improves objective value accuracy by evaluating solutions multiple times, which tightens the optimization time limit even more. This circumstance can be exploited to design Dynamic Resampling algorithms with a better sampling allocation strategy that takes the time limit into account. In our previous work, we investigated Time-based Hybrid Resampling algorithms for Preference-based EMO. In this article, we extend our studies to general EMO, which aims to find a converged and diverse set of alternative solutions along the whole Pareto-front of the problem. We focus on problems with an invariant noise level, i.e., a flat noise landscape.


Congress on Evolutionary Computation | 2016

A ranking and selection strategy for preference-based evolutionary multi-objective optimization of variable-noise problems

Florian Siegmund; Amos H. C. Ng; Kalyanmoy Deb

In simulation-based Evolutionary Multi-objective Optimization the number of simulation runs is very limited, since the complex simulation models require long execution times. With the help of preference information, the optimization result can be improved by guiding the optimization towards relevant areas in the objective space, for example with the R-NSGA-II algorithm [9], which uses a reference point specified by the decision maker. When stochastic systems are simulated, the uncertainty of the objective values might degrade the optimization performance. By sampling the solutions multiple times this uncertainty can be reduced. However, resampling methods reduce the overall number of evaluated solutions, which potentially worsens the optimization result. In this article, a Dynamic Resampling strategy is proposed that identifies the solutions closest to the reference point that guides the population of the Evolutionary Algorithm. We apply a single-objective Ranking and Selection resampling algorithm in the selection step of R-NSGA-II, which considers the stochastic reference point distance and its variance to identify the best solutions. We propose and evaluate different ways to integrate the sampling allocation method into the Evolutionary Algorithm. On the one hand, the Dynamic Resampling algorithm is made adaptive to support the EA selection step, and it is customized for the time-constrained optimization scenario. Furthermore, it is controlled by other resampling criteria, in the same way as other hybrid DR algorithms. On the other hand, R-NSGA-II is modified to rely more on the scalar reference point distance as fitness function. The results are evaluated on a benchmark problem with a variable noise landscape.
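The variance-aware resampling described in this abstract can be caricatured by a sequential-sampling loop, a generic Ranking-and-Selection ingredient rather than the paper's actual algorithm; the thresholds, budgets, and noisy objective below are hypothetical.

```python
import random
import statistics

def resample_until_precise(evaluate, se_threshold=0.05, b_min=2, b_max=20):
    """Sequential sampling sketch: draw samples of a noisy objective
    until the standard error of the mean falls below a threshold,
    or the per-solution budget b_max is exhausted."""
    samples = [evaluate() for _ in range(b_min)]
    while len(samples) < b_max:
        se = statistics.stdev(samples) / len(samples) ** 0.5
        if se < se_threshold:
            break
        samples.append(evaluate())
    return statistics.mean(samples), len(samples)

random.seed(1)
noisy = lambda: 3.0 + random.gauss(0, 0.2)  # hypothetical noisy objective
mean, n = resample_until_precise(noisy)
print(round(mean, 2), n)
```

In a variable-noise landscape the noise level, and hence the number of samples spent, differs from solution to solution, which is exactly what a dynamic allocation exploits.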


International Conference on Evolutionary Multi-Criterion Optimization | 2017

A Comparative Study of Fast Adaptive Preference-Guided Evolutionary Multi-objective Optimization

Florian Siegmund; Amos H. C. Ng; Kalyanmoy Deb

In Simulation-based Evolutionary Multi-objective Optimization, the number of simulation runs is very limited, since the complex simulation models require long execution times. With the help of preference information, the optimization result can be improved by guiding the optimization towards relevant areas in the objective space with, for example, the Reference Point-based NSGA-II algorithm R-NSGA-II [4]. Since the Pareto-relation is the primary fitness function in R-NSGA-II, the algorithm focuses on exploring the objective space with high diversity. Only after the population has converged close to the Pareto-front does the influence of the reference point distance as secondary fitness criterion increase, and the algorithm converges towards the preferred area on the Pareto-front. In this paper, we propose a set of extensions of R-NSGA-II which adaptively control the algorithm's behavior in order to converge faster towards the reference point. The adaptation can be based on criteria such as the elapsed optimization time or the reference point distance, or a combination thereof. To evaluate the performance of the adaptive extensions of R-NSGA-II, a performance metric for reference point-based EMO algorithms based on the Hypervolume measure, the Focused Hypervolume metric [12], is used. It measures the convergence and diversity of the population in the preferred area around the reference point. The results are evaluated on two benchmark problems of different complexity and a simplistic production line model.
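One way to picture the adaptive control described above is a toy interpolation between the two fitness criteria (the weighting scheme is invented for illustration and is not one of the paper's extensions): as the elapsed-time fraction grows, the fitness shifts from the Pareto rank towards the reference-point distance.

```python
def adaptive_fitness(pareto_rank, ref_distance, elapsed_fraction):
    """Adaptive secondary-criterion sketch: interpolate between the
    Pareto rank (broad exploration, early) and the reference-point
    distance (focused convergence, late) as the run progresses."""
    w = min(1.0, max(0.0, elapsed_fraction))
    return (1.0 - w) * pareto_rank + w * ref_distance

# Early on (w = 0) the ranking follows the Pareto rank only;
# late in the run (w = 1) it follows the reference-point distance only.
print(adaptive_fitness(1, 0.3, 0.0))
print(adaptive_fitness(1, 0.3, 1.0))
```

The paper's actual extensions adapt the algorithm behavior in more targeted ways, but the time-or-distance-driven blending of criteria is the common thread.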


Multiple Criteria Decision Making | 2015

Dynamic Resampling for Preference-based Evolutionary Multi-Objective Optimization of Stochastic Systems

Florian Siegmund; Amos H. C. Ng; Kalyanmoy Deb


Multiple Criteria Decision Making | 2013

Adaptive Guided Evolutionary Multi-Objective Optimization

Florian Siegmund; Kalyanmoy Deb; Amos H. C. Ng

Collaboration


Dive into Florian Siegmund's collaborations.

Top Co-Authors


Kalyanmoy Deb

Michigan State University
