Victor Picheny
University of Florida
Publications
Featured research published by Victor Picheny.
Journal of Mechanical Design | 2010
Victor Picheny; David Ginsbourger; Olivier Roustant; Raphael T. Haftka; Nam-Ho Kim
This paper addresses the issue of designing experiments for a metamodel that needs to be accurate for a certain level of the response value. Such a situation is encountered in particular in constrained optimization and reliability analysis. Here, we propose an iterative strategy to build designs of experiments, based on an explicit trade-off between reduction of global uncertainty and exploration of the regions of interest. The method is illustrated on several test problems. It is shown that a substantial reduction of error can be achieved in the crucial regions, with a reasonable loss of global accuracy. The method is finally applied to a reliability analysis problem; it is found that the adaptive designs significantly outperform classical space-filling designs.
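To make the targeted-design idea concrete, here is a minimal sketch assuming a toy one-dimensional function, a target response level T, and a tolerance eps (all illustrative, not taken from the paper): the kriging posterior variance is weighted by the probability that the response lies near T, and the point maximizing this targeted uncertainty is added to the design. This is a simplified stand-in for the paper's criterion, not its exact formulation.

```python
# Sketch of a targeted sequential design: add points where the kriging model is
# both uncertain and likely to lie near a target response level T.
# Toy function, target, and tolerance are assumptions; not the paper's exact criterion.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C

def f(x):                       # toy 1-D test function (assumed)
    return np.sin(3 * x) + 0.5 * x

T, eps = 0.5, 0.1               # target response level and tolerance (assumed)
rng = np.random.default_rng(0)
X = rng.uniform(0, 3, size=(5, 1))            # small initial design
y = f(X).ravel()
candidates = np.linspace(0, 3, 200).reshape(-1, 1)

for _ in range(10):             # sequential enrichment loop
    gp = GaussianProcessRegressor(C(1.0) * RBF(1.0), normalize_y=True).fit(X, y)
    mu, sd = gp.predict(candidates, return_std=True)
    sd = np.clip(sd, 1e-9, None)
    # probability that the response lies within +/- eps of the target level
    w = norm.cdf((T + eps - mu) / sd) - norm.cdf((T - eps - mu) / sd)
    crit = w * sd**2            # posterior variance weighted toward the region of interest
    x_new = candidates[np.argmax(crit)].reshape(1, -1)
    X = np.vstack([X, x_new])
    y = np.append(y, f(x_new).ravel())
```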
Technometrics | 2013
Victor Picheny; David Ginsbourger; Yann Richet; Gregory Caplin
This article addresses the issue of kriging-based optimization of stochastic simulators. Many of these simulators depend on factors that tune the level of precision of the response, with gains in accuracy coming at the price of computational time. The contribution of this work is twofold: first, we propose a quantile-based criterion for the sequential design of experiments, in the fashion of the classical expected improvement criterion, which allows an elegant treatment of heterogeneous response precisions. Second, we present a procedure for allocating the computational time given to each measurement, allowing a better distribution of the computational effort and increased efficiency. Finally, the optimization method is applied to an original application in nuclear criticality safety. This article has supplementary material available online. The proposed criterion is available in the R package DiceOptim.
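As a hedged illustration of the ingredients involved (not the DiceOptim implementation, whose criterion is more involved), the sketch below fits a kriging model with observation-specific noise variances and ranks candidate points with an expected-improvement-style quantity computed against the best predictive quantile among evaluated points. The test function, noise levels, and quantile level are assumptions.

```python
# Sketch: kriging with heterogeneous observation noise and an EI-style criterion
# computed against the best predictive quantile among evaluated points.
# Simplified stand-in; not the paper's exact criterion nor the DiceOptim code.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C

rng = np.random.default_rng(1)
X = np.linspace(0, 1, 8).reshape(-1, 1)
noise_var = rng.uniform(0.01, 0.1, size=8)       # per-observation noise variance (tunable precision)
y = np.sin(6 * X).ravel() + rng.normal(0, np.sqrt(noise_var))

# alpha accepts one noise variance per training point (heterogeneous precision)
gp = GaussianProcessRegressor(C(1.0) * RBF(0.2), alpha=noise_var, normalize_y=True).fit(X, y)

beta = 0.9                                       # quantile level (assumed), pessimistic for minimization
mu_d, sd_d = gp.predict(X, return_std=True)
q_best = np.min(mu_d + norm.ppf(beta) * sd_d)    # best pessimistic value reached so far

Xc = np.linspace(0, 1, 200).reshape(-1, 1)
mu, sd = gp.predict(Xc, return_std=True)
sd = np.clip(sd, 1e-9, None)
imp = q_best - mu
ei_q = imp * norm.cdf(imp / sd) + sd * norm.pdf(imp / sd)   # EI against the quantile-based incumbent
x_next = Xc[np.argmax(ei_q)]
print("next evaluation at x =", float(x_next[0]))
```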
AIAA Journal | 2010
Felipe A. C. Viana; Victor Picheny; Raphael T. Haftka
In this work we use safety margins to conservatively compensate for fitting errors associated with surrogates. We propose the use of cross-validation for estimating the required safety margin for a desired level of conservativeness (percentage of safe predictions). The approach was tested on three algebraic examples for two basic surrogates, namely kriging and polynomial response surfaces. For these examples we found that cross-validation is effective for selecting the safety margin. We also applied the approach to the probabilistic design optimization of a composite laminate. This design under uncertainty example showed that the approach can be successfully used in engineering applications.
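A minimal sketch of the cross-validation idea, assuming a toy test function and a 90% conservativeness target: leave-one-out errors of a kriging fit define an empirical error distribution, and the safety margin is the quantile of those errors that yields the desired fraction of safe (non-underestimating) predictions.

```python
# Sketch: estimate a safety margin from leave-one-out cross-validation errors so
# that a chosen fraction of predictions errs on the safe side.
# Toy function and 90% conservativeness target are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C
from sklearn.model_selection import LeaveOneOut

def f(x):
    return (6 * x - 2) ** 2 * np.sin(12 * x - 4)   # Forrester-type toy function (assumed)

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(20, 1))
y = f(X).ravel()

# leave-one-out prediction errors (true minus predicted)
errors = []
for tr, te in LeaveOneOut().split(X):
    gp = GaussianProcessRegressor(C(1.0) * RBF(0.2), normalize_y=True).fit(X[tr], y[tr])
    errors.append(y[te][0] - gp.predict(X[te])[0])
errors = np.asarray(errors)

# if "safe" means not underestimating the response, shift predictions up by the
# margin that covers the desired fraction of observed errors
target = 0.90                                      # desired conservativeness (assumed)
margin = np.quantile(errors, target)

gp_full = GaussianProcessRegressor(C(1.0) * RBF(0.2), normalize_y=True).fit(X, y)
conservative_pred = gp_full.predict(X) + margin
print("estimated safety margin:", margin)
```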
49th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, 16th AIAA/ASME/AHS Adaptive Structures Conference, 10t | 2008
Victor Picheny; Nam-Ho Kim; Raphael T. Haftka; Nestor V. Queipo
Conservative prediction refers to calculations or approximations that tend to estimate the response of a system safely. The aim of this study is to explore and compare alternatives for producing conservative predictions when using surrogate models. We propose four different approaches: empirical approaches (safety factors and margins); biased fitting approaches, which constrain the surrogate to lie on one side of the training points; statistics-based approaches, which use the prediction errors of the surrogates; and indicator kriging, which provides probabilities of exceeding some cut-off value. Since more conservative estimators tend to overestimate the true values, the problem can be considered as a multi-objective optimization, and results are presented in the form of Pareto fronts: accuracy vs. conservativeness. The best approach is the one that provides the best chance of being on the conservative side with the least impact on accuracy. Two surrogate models, polynomial response surface and universal kriging, are evaluated on two test problems: a simple analytical function and a structural analysis using finite element modeling. Results show that using safety factors is the least efficient method, while the other methods are equivalent. Using safety margins results in the least variability, but statistics-based methods provide better protection against large unconservative errors. The relative equivalence of safety margins and error distributions allows us to use the error distribution to accurately choose the margin corresponding to a certain level of conservativeness.
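To make the accuracy-versus-conservativeness trade-off tangible, the following sketch illustrates the statistics-based approach on an assumed toy problem: the kriging prediction is shifted by a multiple k of its own standard error, and for several values of k the fraction of safe predictions and the RMSE on a dense test grid are reported, tracing one point of the Pareto front per value of k.

```python
# Sketch of the statistics-based conservative estimator: prediction + k * kriging
# standard error, swept over k to trace an accuracy vs conservativeness curve.
# Toy problem and settings are assumptions; not the paper's exact study.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C

def f(x):
    return np.sin(8 * x) + x

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(15, 1))
y = f(X).ravel()
gp = GaussianProcessRegressor(C(1.0) * RBF(0.2), normalize_y=True).fit(X, y)

Xt = np.linspace(0, 1, 500).reshape(-1, 1)          # dense test grid
yt = f(Xt).ravel()
mu, sd = gp.predict(Xt, return_std=True)

for k in (0.0, 0.5, 1.0, 1.645, 2.0):               # multiples of the standard error
    pred = mu + k * sd                               # conservative = does not underestimate
    conservativeness = np.mean(pred >= yt)           # fraction of safe predictions
    rmse = np.sqrt(np.mean((pred - yt) ** 2))        # accuracy penalty of the shift
    print(f"k={k:5.3f}  safe={conservativeness:5.1%}  RMSE={rmse:.3f}")
```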
11th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference | 2006
Victor Picheny; Nam-Ho Kim; Raphael T. Haftka; Jörg Peters
The paper provides a review of how to estimate a probability of failure from a small sample of data, and shows that the usual estimators of the parameters of the cumulative distribution function are biased and can lead to unconservative estimates. It then explores different ways to make this estimation conservative: the first is based on adding constraints when distributions are fitted; the second is based on the use of bootstrap methods. We explore the relationship between the chance that the estimate is conservative and the accuracy of the estimate. In particular, we study the case where we want a 95% chance of obtaining conservative estimators. Finally, these methods are applied to the problem of a composite panel under thermal loading.
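A hedged sketch of the bootstrap route, with an assumed threshold, sample size, and underlying distribution: fit a normal distribution to a small sample, then take the 95th percentile of the bootstrap distribution of the plug-in failure probability as the conservative estimate.

```python
# Sketch: conservative estimate of a probability of failure from a small sample
# via the percentile bootstrap. Threshold, sample size, and distribution are illustrative.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
threshold = 3.0                                      # failure if the response exceeds this (assumed)
sample = rng.normal(loc=1.0, scale=1.0, size=15)     # small sample of the response (assumed)

def plugin_pf(s):
    """Plug-in estimate: fit a normal and read off P(response > threshold)."""
    return norm.sf(threshold, loc=s.mean(), scale=s.std(ddof=1))

pf_hat = plugin_pf(sample)                           # usual (often unconservative) estimate

# percentile bootstrap: resample, re-estimate, keep the 95th percentile as the
# conservative estimate (95% chance of not underestimating, in the bootstrap sense)
boot = np.array([plugin_pf(rng.choice(sample, size=sample.size, replace=True))
                 for _ in range(5000)])
pf_conservative = np.quantile(boot, 0.95)
print("plug-in Pf:", pf_hat, " conservative Pf:", pf_conservative)
```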
Design Automation Conference | 2009
Felipe A. C. Viana; Victor Picheny; Raphael T. Haftka
The use of surrogates for facilitating optimization and statistical analysis of computationally expensive simulations has become commonplace. Usually, surrogate models are fit to be unbiased (i.e., the error expectation is zero). However, in certain applications it might be important to estimate the response safely (e.g., in structural analysis, the maximum stress must not be underestimated in order to avoid failure). In this work we use safety margins to conservatively compensate for fitting errors associated with surrogates. We propose the use of cross-validation for estimating the required safety margin for a given desired level of conservativeness (percentage of safe predictions). We also check how well we can minimize the loss in accuracy associated with the conservative predictor by selecting between alternate surrogates. The approach was tested on two algebraic examples for ten basic surrogates, including different instances of kriging, polynomial response surfaces, radial basis neural networks, and support vector regression surrogates. For these examples we found that cross-validation (i) is effective for selecting the safety margin and (ii) allows us to select the surrogate with the best compromise between conservativeness and loss of accuracy. We then applied the approach to the probabilistic design optimization of a cryogenic tank. This design under uncertainty example showed that the approach can be successfully used in real-world applications.
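Building on the cross-validation idea above, the sketch below compares alternate surrogates by computing, for each, the margin required for a target conservativeness and the accuracy penalty the shift induces; the surrogate with the smallest penalty would be retained. The candidate models, toy function, and 95% target are assumptions.

```python
# Sketch: pick, among alternate surrogates, the one whose cross-validation-based
# safety margin costs the least accuracy at a fixed conservativeness target.
# Candidate models, toy function, and the 95% target are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

def f(x):
    return (6 * x - 2) ** 2 * np.sin(12 * x - 4)

rng = np.random.default_rng(5)
X = rng.uniform(0, 1, size=(30, 1))
y = f(X).ravel()

candidates = {
    "kriging": GaussianProcessRegressor(C(1.0) * RBF(0.2), normalize_y=True),
    "quadratic PRS": make_pipeline(PolynomialFeatures(2), LinearRegression()),
}

target = 0.95                                        # desired conservativeness (assumed)
for name, model in candidates.items():
    pred = cross_val_predict(model, X, y, cv=10)     # out-of-sample predictions
    err = y - pred
    margin = np.quantile(err, target)                # margin giving the target fraction of safe predictions
    rmse_shifted = np.sqrt(np.mean((pred + margin - y) ** 2))
    print(f"{name:14s} margin={margin:6.3f}  RMSE after shift={rmse_shifted:.3f}")
```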
Design Automation Conference | 2007
Victor Picheny; Nam-Ho Kim; Raphael T. Haftka
The objective of this paper is to provide a method for safely estimating reliability based on small samples. First, it is shown that the commonly used estimators of the parameters of the normal distribution function are biased and tend to lead to unconservative estimates of reliability. Then, two ways of making this estimation conservative are proposed: (1) adding constraints when a distribution is fitted to the data to bias it to be conservative, and (2) using the bootstrap method to estimate the bias needed for a given level of conservativeness. The relationship between the accuracy and the conservativeness of the estimates is explored for a normal distribution. In particular, detailed results are presented for the case when the goal is a 95% likelihood of being conservative. The bootstrap approach is found to be more accurate for this level of conservativeness. It is then applied to the reliability analysis of a composite panel under thermal loading. Finally, we explore the influence of sample size and target probability of failure on the quality of the estimates, and show that, for a constant level of conservativeness, small samples and low probabilities can lead to a high risk of large overestimation, while this risk remains very reasonable for larger samples.
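The claim that plug-in normal estimators tend to be unconservative can be checked with a small Monte Carlo experiment on an assumed true distribution, threshold, and sample size, counting how often the plug-in failure probability falls below the true value.

```python
# Sketch: Monte Carlo check of how often the plug-in normal estimate of a
# small-sample failure probability falls on the unconservative (low) side.
# True distribution, threshold, and sample size are assumed for illustration.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
mu_true, sigma_true, threshold, n = 0.0, 1.0, 2.5, 10
pf_true = norm.sf(threshold, mu_true, sigma_true)

under, trials = 0, 20000
for _ in range(trials):
    s = rng.normal(mu_true, sigma_true, size=n)
    pf_hat = norm.sf(threshold, s.mean(), s.std(ddof=1))
    under += pf_hat < pf_true                        # unconservative: failure probability underestimated

print(f"true Pf = {pf_true:.4f};  underestimated in {under / trials:.1%} of samples")
```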
Nuclear Science and Engineering | 2013
Yann Richet; Gregory Caplin; J. Crevel; David Ginsbourger; Victor Picheny
Nuclear criticality safety assessment often requires groupwise Monte Carlo simulations of k-effective in order to check the subcriticality of the system of interest. A typical task for safety assessors is hence to find the worst combination of input parameters of the criticality Monte Carlo code (i.e., the one leading to maximum reactivity) over the whole operating range. Checking subcriticality can then be done by solving a maximization problem in which the input-output map defined by the Monte Carlo code expectation (or an upper quantile) stands for the objective function or “parametric” model. This straightforward view of criticality parametric calculations is in line with recent work in the design of computer experiments, an active research field in applied statistics. This framework provides robust support to enhance and consolidate good practices in criticality safety assessment. Indeed, supplementing the standard “expert-driven” assessment with a suitable optimization algorithm may help increase the reliability of the whole process and the robustness of its conclusions. Such a new safety practice is intended to rely on both well-suited mathematical tools (compliant optimization algorithms) and computing infrastructure (a flexible grid-computing environment).
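A hedged sketch of the optimization view described above: treat the code's expected output as a black-box objective and run a small expected-improvement loop to search for the maximizing (worst-case) input combination. The toy objective and settings below are stand-ins, not a criticality code.

```python
# Sketch: expected-improvement loop searching for the worst case (maximum) of a
# black-box objective, as a stand-in for maximizing k-effective over the operating
# range. Toy objective and settings are assumptions.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C

def keff_like(x):                                    # toy stand-in for the simulator's expectation
    return 0.9 + 0.08 * np.exp(-20 * (x - 0.6) ** 2)

rng = np.random.default_rng(7)
X = rng.uniform(0, 1, size=(4, 1))
y = keff_like(X).ravel()
grid = np.linspace(0, 1, 400).reshape(-1, 1)

for _ in range(12):
    gp = GaussianProcessRegressor(C(1.0) * RBF(0.2), normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    sd = np.clip(sd, 1e-9, None)
    imp = mu - y.max()                               # improvement over the current worst case
    ei = imp * norm.cdf(imp / sd) + sd * norm.pdf(imp / sd)
    x_new = grid[np.argmax(ei)].reshape(1, -1)
    X = np.vstack([X, x_new])
    y = np.append(y, keff_like(x_new).ravel())

print("estimated worst-case input:", X[np.argmax(y), 0], " max response:", y.max())
```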
50th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference | 2009
Rodolphe Le Riche; Victor Picheny; André Meyer; David Ginsbourger; Nam-Ho Kim
This article presents an approach to the optimization of helical involute gears for geometrical feasibility, contact ratio, tooth sliding velocity, stresses, and static transmission error (STE). The tooth shape is subject to random perturbations due to wear (a randomized Archard's wear model). The consequences of shape inaccuracies are statistically expressed as the 90th percentile of the STE variation, which is optimized. However, estimating the 90th STE percentile by a Monte Carlo method is computationally too demanding to be included in the optimization iterations. A method is proposed in which the Monte Carlo simulations are replaced by a kriging metamodel during the optimization. A novel aspect of the method is that the noise in the empirical percentile, which is inherent to any statistical estimation, is taken into account in the kriging metamodel through an adequately sized nugget effect. The kriging approach is compared to a second method in which the STE variation for an average wear profile replaces the percentile estimation.
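A minimal sketch of the nugget idea under stated assumptions: at each design point, the sampling variance of the empirical 90th percentile is estimated (here by bootstrap) and passed to the kriging model as observation noise, so that the metamodel smooths rather than interpolates the Monte Carlo scatter. The simulation stand-in and settings are illustrative.

```python
# Sketch: kriging on empirical 90th percentiles, with a per-point nugget sized to
# the bootstrap variance of each percentile estimate. Toy noise model assumed.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C

rng = np.random.default_rng(8)
design = np.linspace(0, 1, 10).reshape(-1, 1)        # design variables (assumed 1-D)
n_mc = 200                                           # Monte Carlo replications per design point

q90, q90_var = [], []
for x in design.ravel():
    # stand-in for the STE-under-random-wear simulations at this design point
    samples = np.sin(5 * x) + 0.1 * (1 + x) * rng.standard_normal(n_mc)
    q90.append(np.quantile(samples, 0.9))
    # bootstrap the sampling variance of the empirical 90th percentile
    boot = [np.quantile(rng.choice(samples, n_mc, replace=True), 0.9) for _ in range(300)]
    q90_var.append(np.var(boot))

# alpha acts as a heterogeneous nugget: one noise variance per observation
gp = GaussianProcessRegressor(C(1.0) * RBF(0.2), alpha=np.asarray(q90_var),
                              normalize_y=True).fit(design, np.asarray(q90))
```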
Structural and Multidisciplinary Optimization | 2010
Victor Picheny; Nam H. Kim; Raphael T. Haftka