Alex Y. D. Siem
Tilburg University
Publications
Featured research published by Alex Y. D. Siem.
Journal of the Operational Research Society | 2006
Dick den Hertog; Jack P. C. Kleijnen; Alex Y. D. Siem
The classic Kriging variance formula is widely used in geostatistics and in the design and analysis of computer experiments. This paper proves that this formula is wrong when the correlation parameters of the underlying Gaussian process are estimated from the data, as they are in practice, and shows that the formula then underestimates the Kriging variance in expectation. The paper develops parametric bootstrapping to estimate the Kriging variance. The new method is tested on several artificial examples and a real-life case study; the results confirm that the classic formula underestimates the true Kriging variance.
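A minimal sketch of the parametric-bootstrap idea (my own illustration with a toy simulator and scikit-learn, not the authors' exact procedure): treat the fitted Kriging model as the truth, repeatedly sample synthetic design outputs together with the "true" value at the prediction point, re-estimate the hyperparameters, and compare the spread of the resulting prediction errors with the classic Kriging standard deviation.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    rng = np.random.default_rng(0)

    def simulator(x):                            # stand-in for an expensive simulation
        return np.sin(3 * x) + 0.5 * x

    X = np.linspace(0.0, 3.0, 8)[:, None]        # small design
    y = simulator(X).ravel()
    x_new = np.array([[1.3]])                    # prediction point of interest

    kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
    gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-10).fit(X, y)
    _, classic_sd = gp.predict(x_new, return_std=True)

    # Parametric bootstrap: sample pseudo-data and the "true" new output jointly
    # from the fitted (zero-mean) GP, refit, and record the prediction error.
    X_all = np.vstack([X, x_new])
    K_all = gp.kernel_(X_all) + 1e-10 * np.eye(len(X_all))
    errors = []
    for _ in range(200):
        y_all = rng.multivariate_normal(np.zeros(len(X_all)), K_all)
        gp_b = GaussianProcessRegressor(kernel=kernel, alpha=1e-10).fit(X, y_all[:-1])
        errors.append(gp_b.predict(x_new)[0] - y_all[-1])

    print("classic Kriging s.d.   :", classic_sd[0])
    print("bootstrap s.d. of error:", np.std(errors))

The bootstrap standard deviation accounts for the re-estimation of the hyperparameters in every replicate, which the classic formula ignores.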
Physics in Medicine and Biology | 2006
Aswin L. Hoffmann; Alex Y. D. Siem; Dick den Hertog; Johannes H.A.M. Kaanders; Henk Huizenga
In inverse treatment planning for intensity-modulated radiation therapy (IMRT), beamlet intensity levels in fluence maps of high-energy photon beams are optimized. Treatment plan evaluation criteria are used as objective functions to steer the optimization process. Fluence map optimization can be considered a multi-objective optimization problem, for which a set of Pareto optimal solutions exists: the Pareto efficient frontier (PEF). In this paper, a constrained optimization method is pursued to iteratively estimate the PEF up to some predefined error. We use the property that the PEF is convex for a convex optimization problem to construct piecewise-linear upper and lower bounds to approximate the PEF from a small initial set of Pareto optimal plans. A derivative-free Sandwich algorithm is presented in which these bounds are used with three strategies to determine the location of the next Pareto optimal solution such that the uncertainty in the estimated PEF is maximally reduced. We show that an intelligent initial solution for a new Pareto optimal plan can be obtained by interpolation of fluence maps from neighbouring Pareto optimal plans. The method has been applied to a simplified clinical test case using two convex objective functions to map the trade-off between tumour dose heterogeneity and critical organ sparing. All three strategies produce representative estimates of the PEF. The new algorithm is particularly suitable for dynamic generation of Pareto optimal plans in interactive treatment planning.
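As a hedged illustration of the constrained (epsilon-constraint) way of generating points on a convex Pareto efficient frontier, and of warm-starting a new plan by interpolating neighbouring solutions, the sketch below uses two toy convex objectives and SciPy's SLSQP solver; the objectives, dimensions, and solver are assumptions, not the clinical fluence-map problem.

    import numpy as np
    from scipy.optimize import minimize

    f1 = lambda x: np.sum((x - 1.0) ** 2)        # stand-in for "tumour dose heterogeneity"
    f2 = lambda x: np.sum((x + 1.0) ** 2)        # stand-in for "critical organ dose"

    def pareto_point(eps, x0):
        """Minimise f1 subject to f2(x) <= eps, starting from x0."""
        cons = [{"type": "ineq", "fun": lambda x: eps - f2(x)}]
        return minimize(f1, x0, method="SLSQP", constraints=cons).x

    x_f1 = minimize(f1, np.zeros(3)).x           # anchor plan: best possible f1
    x_f2 = minimize(f2, np.zeros(3)).x           # anchor plan: best possible f2
    plans = [x_f1]
    for eps in np.linspace(f2(x_f1), f2(x_f2), 7)[1:-1]:
        # warm start by linear interpolation between the anchor plans,
        # in the spirit of interpolating neighbouring fluence maps
        t = (eps - f2(x_f2)) / (f2(x_f1) - f2(x_f2))
        x0 = t * x_f1 + (1 - t) * x_f2
        plans.append(pareto_point(eps, x0))
    plans.append(x_f2)

    for x in plans:
        print(f"f1 = {f1(x):6.3f}   f2 = {f2(x):6.3f}")

Each solve traces one point of the trade-off curve; a sandwich algorithm would then choose the next value of eps where the bound gap on the frontier is largest.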
Physics in Medicine and Biology | 2008
Aswin L. Hoffmann; Dick den Hertog; Alex Y. D. Siem; Johannes H.A.M. Kaanders; Henk Huizenga
Finding fluence maps for intensity-modulated radiation therapy (IMRT) can be formulated as a multi-criteria optimization problem for which Pareto optimal treatment plans exist. To account for the dose-per-fraction effect of fractionated IMRT, it is desirable to exploit radiobiological treatment plan evaluation criteria based on the linear-quadratic (LQ) cell survival model as a means to balance the radiation benefits and risks in terms of biologic response. Unfortunately, the LQ-model-based radiobiological criteria are nonconvex functions, which make the optimization problem hard to solve. We apply the framework proposed by Romeijn et al (2004 Phys. Med. Biol. 49 1991-2013) to find transformations of LQ-model-based radiobiological functions and establish conditions under which transformed functions result in equivalent convex criteria that do not change the set of Pareto optimal treatment plans. The functions analysed are: the LQ-Poisson-based model for tumour control probability (TCP) with and without inter-patient heterogeneity in radiation sensitivity, the LQ-Poisson-based relative seriality s-model for normal tissue complication probability (NTCP), the equivalent uniform dose (EUD) under the LQ-Poisson model and the fractionation-corrected Probit-based model for NTCP according to Lyman, Kutcher and Burman. These functions differ from those analysed before in that they cannot be decomposed into elementary EUD or generalized-EUD functions. In addition, we show that applying increasing and concave transformations to the convexified functions is beneficial for the piecewise approximation of the Pareto efficient frontier.
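For reference, the standard textbook forms of the LQ cell-survival model and the LQ-Poisson tumour control probability mentioned above are (notation is my own; the paper's exact parameterisations may differ)

    S(n,d) = \exp\bigl(-n(\alpha d + \beta d^{2})\bigr), \qquad
    \mathrm{TCP} = \exp\bigl(-N_{0}\, S(n,d)\bigr),

where S(n,d) is the surviving fraction of the N_0 initial clonogenic cells after n fractions of dose d per fraction and alpha, beta are the tissue-specific LQ parameters. The paper studies increasing transformations of such criteria that render them convex in the dose variables without changing the set of Pareto optimal plans.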
European Journal of Operational Research | 2008
Alex Y. D. Siem; Dick den Hertog; Aswin L. Hoffmann
In the literature, methods have been proposed for constructing piecewise-linear upper and lower bounds that approximate univariate convex functions. We study the effect of increasing convex or increasing concave transformations on the approximation of univariate (convex) functions. In this paper, we show that such transformations can be used to construct upper and lower bounds for nonconvex functions. Moreover, we show that by transforming either the input variable or the output variable, we obtain tighter upper and lower bounds for the approximation of convex functions than without these transformations. Finally, we show that these transformations can be applied to the approximation of a (convex) Pareto curve associated with a (convex) bi-objective optimization problem.
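A tiny numerical illustration of the output-transformation idea (my own example, not taken from the paper): for f(x) = exp(x) on [0, 2], the secant through the endpoint values is an upper bound with a sizeable error, whereas applying the increasing concave transformation log makes the function linear, so the transformed secant maps back to an essentially exact bound.

    import numpy as np

    f = np.exp
    a, b = 0.0, 2.0
    x = np.linspace(a, b, 201)

    # direct piecewise-linear (secant) upper bound from two function values
    secant = f(a) + (f(b) - f(a)) / (b - a) * (x - a)
    print("max error, direct secant:", np.max(secant - f(x)))

    # bound the transformed function g = log(f), then map back with exp;
    # since log is increasing, exp(upper bound on g) is an upper bound on f
    g = lambda t: np.log(f(t))
    secant_g = g(a) + (g(b) - g(a)) / (b - a) * (x - a)
    print("max error, via log      :", np.max(np.exp(secant_g) - f(x)))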
Informs Journal on Computing | 2011
Alex Y. D. Siem; Dick den Hertog; Aswin L. Hoffmann
In this paper, piecewise-linear upper and lower bounds for univariate convex functions are derived that are only based on function value information. These upper and lower bounds can be used to approximate univariate convex functions. Furthermore, new sandwich algorithms are proposed that iteratively add new input data points in a systematic way until a desired accuracy of the approximation is obtained. We show that our new algorithms that use only function value evaluations converge quadratically under certain conditions on the derivatives. Under other conditions, linear convergence can be shown. Some numerical examples that illustrate the usefulness of the algorithm, including a strategic investment model, are given.
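The sketch below is a simplified, derivative-free sandwich-type refinement loop of my own devising (not the paper's exact algorithms): chords give upper bounds, extended secants of neighbouring segments give valid lower bounds for a convex function, and the next evaluation point is placed where the gap between the two is largest. For simplicity, only interior segments, where secants exist on both sides, are refined.

    import numpy as np

    f = lambda x: np.exp(x)                      # example convex function
    xs = [0.0, 0.7, 1.3, 2.0]                    # initial evaluation points (sorted)
    ys = [f(x) for x in xs]

    def next_point(xs, ys, grid=200):
        """Return (largest bound gap, point at which to evaluate next)."""
        best_gap, best_x = 0.0, None
        for i in range(1, len(xs) - 2):          # interior segments only
            t = np.linspace(xs[i], xs[i + 1], grid)
            upper = np.interp(t, [xs[i], xs[i + 1]], [ys[i], ys[i + 1]])   # chord
            s_left = (ys[i] - ys[i - 1]) / (xs[i] - xs[i - 1])
            s_right = (ys[i + 2] - ys[i + 1]) / (xs[i + 2] - xs[i + 1])
            lower = np.maximum(ys[i] + s_left * (t - xs[i]),
                               ys[i + 1] + s_right * (t - xs[i + 1]))
            gap = upper - lower
            j = int(np.argmax(gap))
            if gap[j] > best_gap:
                best_gap, best_x = float(gap[j]), float(t[j])
        return best_gap, best_x

    for _ in range(6):
        gap, x_new = next_point(xs, ys)
        xs, ys = map(list, zip(*sorted(zip(xs + [x_new], ys + [f(x_new)]))))
        print(f"max gap {gap:.4f}  ->  evaluate at x = {x_new:.4f}")

The printed gap shrinks as points are added, which is the behaviour the paper quantifies with its linear and quadratic convergence results.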
Archive | 2007
Alex Y. D. Siem; Dick den Hertog
In the field of the Design and Analysis of Computer Experiments (DACE), meta-models are used to approximate time-consuming simulations. These simulations often contain simulation-model errors in the output variables, and in the construction of meta-models these errors are often ignored. Simulation-model errors may be magnified by the meta-model. Therefore, in this paper, we study the construction of Kriging models that are robust with respect to simulation-model errors. We introduce a robustness criterion to quantify the robustness of a Kriging model. Based on this criterion, two new methods to find robust Kriging models are introduced. We illustrate these methods with the approximation of the Six-hump camel back function and a real-life example, and we validate the two methods by simulating artificial perturbations. Finally, we consider the influence of the Design of Computer Experiments (DoCE) on the robustness of Kriging models.
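For illustration of the setting only (the paper's robustness criterion and its two robust construction methods are not reproduced here), the hedged sketch below fits a Kriging (Gaussian process) meta-model to the six-hump camel back test function and measures how an artificial perturbation of the simulation outputs propagates into the meta-model predictions; the design, kernel, and perturbation size are assumptions.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def camelback(x):                      # six-hump camel back test function
        x1, x2 = x[:, 0], x[:, 1]
        return ((4 - 2.1 * x1**2 + x1**4 / 3) * x1**2
                + x1 * x2 + (-4 + 4 * x2**2) * x2**2)

    rng = np.random.default_rng(1)
    X = rng.uniform([-2, -1], [2, 1], size=(40, 2))     # simple random design
    y = camelback(X)
    y_pert = y + 0.05 * rng.standard_normal(len(y))     # artificial simulation-model error

    X_test = rng.uniform([-2, -1], [2, 1], size=(500, 2))
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)
    pred = gp.fit(X, y).predict(X_test)
    pred_pert = gp.fit(X, y_pert).predict(X_test)

    # How much does the meta-model change when the outputs are perturbed?
    print("max |prediction shift|:", np.max(np.abs(pred - pred_pert)))

A robust Kriging model in the paper's sense would keep this prediction shift small relative to the size of the output perturbation.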
International Conference on Computational Science and Its Applications | 2006
Alex Y. D. Siem; Dick den Hertog; Aswin L. Hoffmann
The contribution of this paper is two-fold. First, we present a method to approximate multivariate convex functions by piecewise-linear upper and lower bounds. The method is based on function evaluations only, but it requires the data to be convex; even if the underlying function is convex, this is not always the case due to (numerical) errors. Therefore, second, we present a multivariate data-smoothing method that smooths nonconvex data. We consider both the case in which only function evaluations are available and the case in which derivative information is also available. Furthermore, we show that our methods are polynomial-time methods. We illustrate this methodology by applying it to some examples.
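As an illustration of the kind of polynomial-time check behind handling (non)convex data (my own construction, not the paper's method): multivariate data points (x_i, y_i) can be interpolated by some convex function exactly when subgradients g_i exist with y_i + g_i^T (x_j - x_i) <= y_j for all i, j, which is a linear feasibility problem.

    import numpy as np
    from scipy.optimize import linprog

    def is_convex_data(X, y):
        """Check whether (X, y) admits a convex interpolant via an LP in the g_i."""
        n, d = X.shape
        rows, rhs = [], []
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                row = np.zeros(n * d)
                row[i * d:(i + 1) * d] = X[j] - X[i]    # coefficient of g_i
                rows.append(row)
                rhs.append(y[j] - y[i])
        res = linprog(c=np.zeros(n * d), A_ub=np.array(rows), b_ub=np.array(rhs),
                      bounds=[(None, None)] * (n * d), method="highs")
        return res.status == 0                           # 0 = feasible (optimal)

    X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
    y_ok = np.array([0.0, 1.0, 1.0, 2.0, 0.5])           # values of ||x||^2, convex
    y_bad = y_ok.copy()
    y_bad[-1] = 1.2                                      # centre pushed above the hull
    print(is_convex_data(X, y_ok), is_convex_data(X, y_bad))   # True False

When the check fails, a data-smoothing step (as studied in the paper) adjusts the outputs minimally until the system becomes feasible.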
Lecture Notes in Computer Science | 2006
Alex Y. D. Siem; Dick den Hertog; Aswin L. Hoffmann
Journal of Economic Behavior and Organization | 2011
Alex Y. D. Siem; Dick den Hertog; Aswin L. Hoffmann
Structural and Multidisciplinary Optimization | 2008
Alex Y. D. Siem; E. de Klerk; Dick den Hertog