Publication


Featured research published by Dick den Hertog.


IEEE Transactions on Evolutionary Computation | 2009

Order of Nonlinearity as a Complexity Measure for Models Generated by Symbolic Regression via Pareto Genetic Programming

Ekaterina Vladislavleva; Guido Smits; Dick den Hertog

This paper presents a novel approach to generate data-driven regression models that not only give reliable prediction of the observed data but also have smoother response surfaces and extra generalization capabilities with respect to extrapolation. These models are obtained as solutions of a genetic programming (GP) process, where selection is guided by a tradeoff between two competing objectives - numerical accuracy and the order of nonlinearity. The latter is a novel complexity measure that adopts the notion of the minimal degree of the best-fit polynomial, approximating an analytical function with a certain precision. Using nine regression problems, this paper presents and illustrates two different strategies for the use of the order of nonlinearity in symbolic regression via GP. The combination of optimization of the order of nonlinearity together with the numerical accuracy strongly outperforms "conventional" optimization of a size-related expressional complexity and the accuracy with respect to extrapolative capabilities of solutions on all nine test problems. In addition to exploiting the new complexity measure, this paper also introduces a novel heuristic of alternating several optimization objectives in a 2-D optimization framework. Alternating the objectives at each generation in such a way allows us to exploit the effectiveness of 2-D optimization when more than two objectives are of interest (in this paper, these are accuracy, expressional complexity, and the order of nonlinearity). Results of the experiments on all test problems suggest that alternating the order of nonlinearity of GP individuals with their structural complexity produces solutions that are both compact and have smoother response surfaces, and, hence, contributes to better interpretability and understanding.
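
The order-of-nonlinearity measure can be illustrated with a small sketch: find the minimal degree of a polynomial that approximates a given one-dimensional function to a chosen precision. This is only a brute-force, univariate stand-in for the paper's measure (which is computed for the expression trees produced by GP); the function names, tolerance and test functions below are our own.

import numpy as np
from numpy.polynomial import chebyshev as C

def order_of_nonlinearity(f, lo, hi, tol=1e-3, max_deg=30, n_samples=200):
    # minimal degree of a polynomial approximating f on [lo, hi] within
    # relative tolerance tol -- an illustrative stand-in for the paper's measure
    x = np.linspace(lo, hi, n_samples)
    y = f(x)
    scale = max(np.max(np.abs(y)), 1.0)
    for deg in range(max_deg + 1):
        coeffs = C.chebfit(x, y, deg)
        if np.max(np.abs(C.chebval(x, coeffs) - y)) / scale <= tol:
            return deg
    return max_deg + 1  # effectively "highly nonlinear"

print(order_of_nonlinearity(lambda x: 1 + 2 * x + 0.5 * x ** 2, -1, 1))  # low order
print(order_of_nonlinearity(lambda x: np.sin(8 * x), -1, 1))             # much higher order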


Management Science | 2013

Robust Solutions of Optimization Problems Affected by Uncertain Probabilities

Aharon Ben-Tal; Dick den Hertog; Anja De Waegenaere; Bertrand Melenberg; Gijs Rennen

In this paper we focus on robust linear optimization problems with uncertainty regions defined by φ-divergences (for example, chi-squared, Hellinger, Kullback-Leibler). We show how uncertainty regions based on φ-divergences arise in a natural way as confidence sets if the uncertain parameters contain elements of a probability vector. Such problems frequently occur in, for example, optimization problems in inventory control or finance that involve terms containing moments of random variables, expected utility, etc. We show that the robust counterpart of a linear optimization problem with φ-divergence uncertainty is tractable for most of the choices of φ typically considered in the literature. We extend the results to problems that are nonlinear in the optimization variables. Several applications, including an asset pricing example and a numerical multi-item newsvendor example, illustrate the relevance of the proposed approach. This paper was accepted by Gerard P. Cachon, optimization.
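
For readers unfamiliar with φ-divergence uncertainty sets, the basic construction is (in our notation, not quoted from the paper):

I_φ(p, \hat p) = \sum_{i=1}^{m} \hat p_i \, φ(p_i / \hat p_i), \qquad U = \{ p \ge 0 : \sum_i p_i = 1, \; I_φ(p, \hat p) \le ρ \},

where \hat p is the estimated probability vector and the radius ρ can be calibrated from the asymptotic chi-squared distribution of the divergence statistic, so that U is a confidence set. Taking φ(t) = (t - 1)^2 gives the chi-squared distance, φ(t) = t \log t the Kullback-Leibler divergence, and φ(t) = (\sqrt{t} - 1)^2 the Hellinger distance.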


Journal of the Operational Research Society | 2006

The Correct Kriging Variance Estimated by Bootstrapping

Dick den Hertog; Jack P. C. Kleijnen; Alex Y. D. Siem

The classic Kriging variance formula is widely used in geostatistics and in the design and analysis of computer experiments. This paper proves that this formula is wrong. Furthermore, it shows that the formula underestimates the Kriging variance in expectation. The paper develops parametric bootstrapping to estimate the Kriging variance. The new method is tested on several artificial examples and a real-life case study. These results demonstrate that the classic formula underestimates the true Kriging variance.
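
A minimal numpy sketch of a parametric bootstrap in the spirit of this paper is given below, assuming a one-dimensional ordinary Kriging model with a Gaussian correlation function whose parameter theta is simply fixed; the paper re-estimates the Kriging parameters in each bootstrap replicate, which is exactly what the classic formula ignores. All names and numbers are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def corr(a, b, theta=10.0):
    # Gaussian correlation function R(a, b) = exp(-theta * (a - b)^2)
    return np.exp(-theta * (a[:, None] - b[None, :]) ** 2)

def ok_predict(x_d, y_d, x0, theta=10.0):
    # ordinary Kriging predictor with a plugged-in correlation parameter
    R, r = corr(x_d, x_d, theta), corr(x_d, np.array([x0]), theta).ravel()
    Rinv, one = np.linalg.inv(R), np.ones_like(y_d)
    mu = (one @ Rinv @ y_d) / (one @ Rinv @ one)
    return mu + r @ Rinv @ (y_d - mu * one)

x_d = np.linspace(0.0, 1.0, 8)   # design points
y_d = np.sin(6.0 * x_d)          # deterministic simulation outputs
x0 = 0.37                        # prediction point

# estimated process mean and variance, and the joint covariance over design + new point
R = corr(x_d, x_d)
Rinv, one = np.linalg.inv(R), np.ones_like(y_d)
mu_hat = (one @ Rinv @ y_d) / (one @ Rinv @ one)
sig2_hat = (y_d - mu_hat * one) @ Rinv @ (y_d - mu_hat * one) / len(y_d)
x_all = np.append(x_d, x0)
Sigma = sig2_hat * corr(x_all, x_all) + 1e-10 * np.eye(len(x_all))  # jitter for stability

# parametric bootstrap: sample joint outputs from the fitted model, re-predict, average squared errors
sq_err = []
for _ in range(500):
    y_star = rng.multivariate_normal(mu_hat * np.ones(len(x_all)), Sigma)
    sq_err.append((ok_predict(x_d, y_star[:-1], x0) - y_star[-1]) ** 2)

print("bootstrapped Kriging variance at x0:", np.mean(sq_err))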


Mathematical Programming | 2015

Deriving robust counterparts of nonlinear uncertain inequalities

Aharon Ben-Tal; Dick den Hertog; Jean-Philippe Vial

In this paper we provide a systematic way to construct the robust counterpart of a nonlinear uncertain inequality that is concave in the uncertain parameters. We use convex analysis (support functions, conjugate functions, Fenchel duality) and conic duality in order to convert the robust counterpart into an explicit and computationally tractable set of constraints. It turns out that to do so one has to calculate the support function of the uncertainty set and the concave conjugate of the nonlinear constraint function. Conveniently, these two computations are completely independent. This approach has several advantages. First, it provides an easy structured way to construct the robust counterpart both for linear and nonlinear inequalities. Second, it shows that for new classes of uncertainty regions and for new classes of nonlinear optimization problems tractable counterparts can be derived. We also study some cases where the inequality is nonconcave in the uncertain parameters.
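
In our notation (a sketch of the construction, not the paper's precise statement), an uncertain constraint f(a, x) \le 0 for all a \in U, with f(\cdot, x) concave in the uncertain parameter a, is equivalent to a single deterministic constraint in an auxiliary variable v:

\exists v: \quad \delta^*(v \mid U) - f_*(v, x) \le 0, \qquad \delta^*(v \mid U) = \sup_{a \in U} v^\top a, \qquad f_*(v, x) = \inf_a \{ v^\top a - f(a, x) \}.

The support function \delta^*(\cdot \mid U) depends only on the uncertainty set, and the partial concave conjugate f_* depends only on the constraint function; this is the independence mentioned above, and tractability reduces to evaluating these two objects in closed form or via a conic representation.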


Omega-international Journal of Management Science | 2015

A practical guide to robust optimization

Bram L. Gorissen; Ihsan Yanikoglu; Dick den Hertog

Robust optimization is a young and active research field that has been mainly developed in the last 15 years. Robust optimization is very useful for practice, since it is tailored to the information at hand, and it leads to computationally tractable formulations. It is therefore remarkable that real-life applications of robust optimization are still lagging behind; there is much more potential for real-life applications than has been exploited hitherto. The aim of this paper is to help practitioners to understand robust optimization and to successfully apply it in practice. We provide a brief introduction to robust optimization, and also describe important do's and don'ts for using it in practice. We use many small examples to illustrate our discussions.
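
As a standard illustration of the kind of reformulation the guide covers (this particular example is ours, not quoted from the paper): for a single uncertain linear constraint a^\top x \le b with a \in \{\bar a + Pζ : \|ζ\| \le 1\}, maximizing the left-hand side over the uncertainty set gives the robust counterpart

\bar a^\top x + \|P^\top x\|_* \le b,

where \|\cdot\|_* is the dual norm: an ellipsoidal region (\|ζ\|_2 \le 1) yields \bar a^\top x + \|P^\top x\|_2 \le b, and a box region (\|ζ\|_\infty \le 1) yields \bar a^\top x + \|P^\top x\|_1 \le b, both of which remain computationally tractable.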


Physics in Medicine and Biology | 2006

Derivative-free generation and interpolation of convex Pareto optimal IMRT plans

Aswin L. Hoffmann; Alex Y. D. Siem; Dick den Hertog; Johannes H.A.M. Kaanders; Henk Huizenga

In inverse treatment planning for intensity-modulated radiation therapy (IMRT), beamlet intensity levels in fluence maps of high-energy photon beams are optimized. Treatment plan evaluation criteria are used as objective functions to steer the optimization process. Fluence map optimization can be considered a multi-objective optimization problem, for which a set of Pareto optimal solutions exists: the Pareto efficient frontier (PEF). In this paper, a constrained optimization method is pursued to iteratively estimate the PEF up to some predefined error. We use the property that the PEF is convex for a convex optimization problem to construct piecewise-linear upper and lower bounds to approximate the PEF from a small initial set of Pareto optimal plans. A derivative-free Sandwich algorithm is presented in which these bounds are used with three strategies to determine the location of the next Pareto optimal solution such that the uncertainty in the estimated PEF is maximally reduced. We show that an intelligent initial solution for a new Pareto optimal plan can be obtained by interpolation of fluence maps from neighbouring Pareto optimal plans. The method has been applied to a simplified clinical test case using two convex objective functions to map the trade-off between tumour dose heterogeneity and critical organ sparing. All three strategies produce representative estimates of the PEF. The new algorithm is particularly suitable for dynamic generation of Pareto optimal plans in interactive treatment planning.
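
The sandwich idea can be sketched on a toy convex bi-objective problem. The fragment below is not the paper's derivative-free, constrained procedure: it uses weighted-sum solves merely to obtain Pareto points together with supporting lines (the lower bound), takes chords through neighbouring Pareto points as the upper bound, and locates the largest gap between the two; all names and numbers are illustrative.

import numpy as np
from scipy.optimize import minimize_scalar

# toy convex bi-objective problem: f1(x) = x^2, f2(x) = (1 - x)^2, x in [0, 1]
f1 = lambda x: x ** 2
f2 = lambda x: (1.0 - x) ** 2

def pareto_point(w):
    # weighted-sum subproblem min w*f1 + (1-w)*f2; its optimum is Pareto optimal
    res = minimize_scalar(lambda x: w * f1(x) + (1 - w) * f2(x), bounds=(0.0, 1.0), method="bounded")
    return f1(res.x), f2(res.x), w

pts = sorted(pareto_point(w) for w in (0.15, 0.5, 0.85))   # small initial set of Pareto plans

def upper(z):
    # chords through neighbouring Pareto points lie above a convex frontier
    a, b = np.array([p[:2] for p in pts]).T
    return np.interp(z, a, b)

def lower(z):
    # each weighted-sum optimum gives a supporting line lying below the frontier
    return max((w * p1 + (1 - w) * p2 - w * z) / (1 - w) for p1, p2, w in pts)

grid = np.linspace(pts[0][0], pts[-1][0], 50)
gap = [upper(z) - lower(z) for z in grid]
z_next = grid[int(np.argmax(gap))]
print("largest sandwich gap:", max(gap), "-> generate the next Pareto plan near f1 =", z_next)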


Technometrics | 2003

Constrained Maximin Designs for Computer Experiments

Erwin Stinstra; Dick den Hertog; Peter Stehouwer; Arjen Vestjens

Finding a design scheme for expensive computer experiments in arbitrary design regions is often a difficult task. A well-known criterion that is often used for computer experiments is the minimal (Euclidean) distance between any pair of design sites, which should be maximal. In this article we describe methods to create such a constrained maximin simulation scheme. We give some numerical results for both theoretical and practical examples.
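
A crude way to see what a constrained maximin design looks like is a greedy construction over random candidates in an arbitrary (non-box) design region; this is only an illustrative heuristic, not the authors' method, and the region and sizes below are made up.

import numpy as np

rng = np.random.default_rng(1)

def feasible(p):
    # example of an arbitrary design region: the unit square cut by x + y <= 1.2
    return p[0] + p[1] <= 1.2

def greedy_maximin(n_points, n_candidates=5000):
    # repeatedly add the feasible candidate that maximizes the minimal
    # Euclidean distance to the sites already chosen
    cand = rng.random((n_candidates, 2))
    cand = cand[np.array([feasible(p) for p in cand])]
    design = [cand[0]]
    for _ in range(n_points - 1):
        dists = np.linalg.norm(cand[:, None, :] - np.array(design)[None, :, :], axis=2)
        design.append(cand[np.argmax(dists.min(axis=1))])
    return np.array(design)

D = greedy_maximin(10)
print("minimal pairwise distance:",
      min(np.linalg.norm(a - b) for i, a in enumerate(D) for b in D[i + 1:]))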


Physics in Medicine and Biology | 2008

Convex reformulation of biologically-based multi-criteria intensity-modulated radiation therapy optimization including fractionation effects

Aswin L. Hoffmann; Dick den Hertog; Alex Y. D. Siem; Johannes H.A.M. Kaanders; Henk Huizenga

Finding fluence maps for intensity-modulated radiation therapy (IMRT) can be formulated as a multi-criteria optimization problem for which Pareto optimal treatment plans exist. To account for the dose-per-fraction effect of fractionated IMRT, it is desirable to exploit radiobiological treatment plan evaluation criteria based on the linear-quadratic (LQ) cell survival model as a means to balance the radiation benefits and risks in terms of biologic response. Unfortunately, the LQ-model-based radiobiological criteria are nonconvex functions, which make the optimization problem hard to solve. We apply the framework proposed by Romeijn et al (2004 Phys. Med. Biol. 49 1991-2013) to find transformations of LQ-model-based radiobiological functions and establish conditions under which transformed functions result in equivalent convex criteria that do not change the set of Pareto optimal treatment plans. The functions analysed are: the LQ-Poisson-based model for tumour control probability (TCP) with and without inter-patient heterogeneity in radiation sensitivity, the LQ-Poisson-based relative seriality s-model for normal tissue complication probability (NTCP), the equivalent uniform dose (EUD) under the LQ-Poisson model and the fractionation-corrected Probit-based model for NTCP according to Lyman, Kutcher and Burman. These functions differ from those analysed before in that they cannot be decomposed into elementary EUD or generalized-EUD functions. In addition, we show that applying increasing and concave transformations to the convexified functions is beneficial for the piecewise approximation of the Pareto efficient frontier.
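
For orientation, the LQ cell-survival model and the LQ-Poisson TCP built on it take the following common textbook form (our notation; the paper analyses these together with several related criteria):

SF(n, d) = \exp(-n(\alpha d + \beta d^2)), \qquad \mathrm{TCP} = \exp(-N_0 \, SF(n, d)),

where n is the number of fractions, d the dose per fraction, \alpha and \beta the tissue-specific LQ parameters, and N_0 the initial number of clonogenic cells. The quadratic dose term inside the exponentials is what makes such criteria nonconvex in the dose variables, which is the difficulty the convex reformulation addresses.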


European Journal of Operational Research | 2008

Robust Optimization Using Computer Experiments

Erwin Stinstra; Dick den Hertog

During metamodel-based optimization three types of implicit errors are typically made. The first error is the simulation-model error, which is defined by the difference between reality and the computer model. The second error is the metamodel error, which is defined by the difference between the computer model and the metamodel. The third is the implementation error. This paper presents new ideas on how to cope with these errors during optimization, in such a way that the final solution is robust with respect to these errors. We apply the robust counterpart theory of Ben-Tal and Nemirovski to the most frequently used metamodels: linear regression and Kriging models. The methods proposed are applied to the design of two parts of the TV tube. The simulation-model errors receive little attention in the literature, while in practice these errors may have a significant impact due to propagation of such errors.
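
One representative reformulation of this kind (our sketch under simplifying assumptions, not the paper's derivation): if the metamodel error is captured by an ellipsoidal uncertainty set for the regression coefficients, say \beta \in \{\hat\beta + V^{1/2} u : \|u\|_2 \le \rho\} with V the estimated covariance of \hat\beta, then the uncertain constraint \beta^\top x \le c has the tractable robust counterpart

\hat\beta^\top x + \rho \, \|V^{1/2} x\|_2 \le c,

a second-order cone constraint; implementation errors in x can be handled analogously by taking the worst case over a small perturbation set around x.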


European Journal of Operational Research | 2005

Constrained optimization involving expensive function evaluations: A sequential approach

R.C.M. Brekelmans; Lonneke Driessen; Herbert Hamers; Dick den Hertog

This paper presents a new sequential method for constrained non-linear optimization problems. The principal characteristics of these problems are very time-consuming function evaluations and the absence of derivative information. Such problems are common in design optimization, where time-consuming function evaluations are carried out by simulation tools (e.g., FEM, CFD). Classical optimization methods, based on derivatives, are not applicable because often derivative information is not available and is too expensive to approximate through finite differencing. The algorithm first creates an experimental design. In the design points the underlying functions are evaluated. Local linear approximations of the real model are obtained with help of weighted regression techniques. The approximating model is then optimized within a trust region to find the best feasible objective-improving point. This trust region moves along the most promising direction, which is determined on the basis of the evaluated objective values and constraint violations combined in a filter criterion. If the geometry of the points that determine the local approximations becomes bad, i.e. the points are located in such a way that they result in a bad approximation of the actual model, then we evaluate a geometry-improving instead of an objective-improving point. In each iteration a new local linear approximation is built, and either a new point is evaluated (objective or geometry improving) or the trust region is decreased. Convergence of the algorithm is guided by the size of this trust region. The focus of the approach is on getting good solutions with a limited number of function evaluations (not necessarily on reaching high accuracy).
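
The local-approximation step can be sketched as follows: fit a weighted linear model around the current trust-region center and minimize it over a box-shaped trust region, which is a small linear program. This covers only the approximate-and-optimize step; the full algorithm adds the filter criterion, geometry-improving evaluations and trust-region management, and every name and number below is illustrative.

import numpy as np
from scipy.optimize import linprog

def local_linear_fit(X, y, center, radius):
    # weighted least squares: points near the trust-region center get larger weights
    w = np.exp(-np.linalg.norm(X - center, axis=1) / radius)
    A = np.hstack([np.ones((len(X), 1)), X])
    coef, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
    return coef  # intercept followed by the local gradient

# "expensive" objective evaluated at a small experimental design
f = lambda x: (x[0] - 0.3) ** 2 + (x[1] + 0.2) ** 2
X = np.random.default_rng(0).uniform(-1.0, 1.0, size=(8, 2))
y = np.array([f(x) for x in X])

center, radius = np.zeros(2), 0.5
_, *grad = local_linear_fit(X, y, center, radius)

# minimize the local linear model within the box-shaped trust region (an LP)
bounds = [(center[i] - radius, center[i] + radius) for i in range(2)]
res = linprog(c=np.array(grad), bounds=bounds)
print("next objective-improving candidate:", res.x)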

Collaboration


An overview of Dick den Hertog's collaborations.

Top Co-Authors

Aharon Ben-Tal

Technion – Israel Institute of Technology

C. Roos

Delft University of Technology
