Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Ali Mehmani is active.

Publication


Featured research published by Ali Mehmani.


Journal of Mechanical Design | 2014

Characterizing Uncertainty Attributable to Surrogate Models

Jie Zhang; Souma Chowdhury; Ali Mehmani; Achille Messac

This paper investigates the characterization of the uncertainty in the prediction of surrogate models. In the practice of engineering, where predictive models are pervasively used, knowledge of the level of modeling error in any region of the design space is uniquely helpful for design exploration and model improvement. The lack of methods that can explore the spatial variation of surrogate error levels in a wide variety of surrogates (i.e., model-independent methods) leaves an important gap in our ability to perform design domain exploration. We develop a novel framework, called domain segmentation based on uncertainty in the surrogate (DSUS), to segregate the design domain based on the level of local errors. The errors in the surrogate estimation are classified into physically meaningful classes based on the user's understanding of the system and/or the accuracy requirements for the concerned system analysis. The leave-one-out cross-validation technique is used to quantify the local errors. A support vector machine (SVM) is implemented to determine the boundaries between error classes and to classify any new design point into the pertinent error class. We also investigate the effectiveness of the leave-one-out cross-validation technique in providing a local error measure, through comparison with actual local errors. The utility of the DSUS framework is illustrated using two different surrogate modeling methods: (i) the Kriging method and (ii) the adaptive hybrid functions (AHF). The DSUS framework is applied to a series of standard test problems and engineering problems. In these case studies, the DSUS framework is observed to provide reasonable accuracy in classifying the design space based on error levels. More than 90% of the test points are accurately classified into the appropriate error classes. [DOI: 10.1115/1.4026150]
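
The two ingredients described above, leave-one-out cross-validation errors at the training points and an SVM that assigns error classes, can be sketched as follows. This is a minimal illustration assuming a scikit-learn Gaussian process (Kriging-type) surrogate, a cheap analytic test function, and quantile-based error-class thresholds; none of these are the paper's exact settings.

```python
# Sketch of the DSUS ingredients described above: leave-one-out (LOO) errors at the
# training points, then an SVM that maps design-space locations to error classes.
# The surrogate choice (scikit-learn Gaussian process) and the class thresholds are
# illustrative assumptions, not the paper's exact settings.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(60, 2))      # training samples in a 2D design space
y = np.sin(X[:, 0]) * np.cos(X[:, 1])         # cheap stand-in for an expensive simulation

# 1) LOO cross-validation: refit the surrogate with one point held out, record its error.
loo_err = np.empty(len(X))
for train_idx, test_idx in LeaveOneOut().split(X):
    gp = GaussianProcessRegressor().fit(X[train_idx], y[train_idx])
    loo_err[test_idx] = np.abs(gp.predict(X[test_idx]) - y[test_idx])

# 2) Bin the LOO errors into physically meaningful classes (thresholds are user-defined).
thresholds = np.quantile(loo_err, [0.5, 0.9])  # e.g. "low" / "medium" / "high" error
classes = np.digitize(loo_err, thresholds)

# 3) Train an SVM so any new design point can be assigned to an error class.
svm = SVC(kernel="rbf", gamma="scale").fit(X, classes)
x_new = np.array([[0.3, -1.2]])
print("predicted error class at x_new:", svm.predict(x_new)[0])
```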


Design Automation Conference | 2014

Concurrent Surrogate Model Selection (COSMOS) Based on Predictive Estimation of Model Fidelity

Souma Chowdhury; Ali Mehmani; Achille Messac

One of the primary drawbacks plaguing wider acceptance of surrogate models is their generally low fidelity. This issue can in large part be attributed to the lack of automated model selection techniques, particularly ones that do not make limiting assumptions regarding the choice of model types and kernel types. A novel model selection technique was recently developed to perform optimal model search concurrently at three levels: (i) optimal model type (e.g., RBF), (ii) optimal kernel type (e.g., multiquadric), and (iii) optimal values of hyper-parameters (e.g., shape parameter) that are conventionally kept constant. The error measures to be minimized in this optimal model selection process are determined by the Predictive Estimation of Model Fidelity (PEMF) method, which has been shown to be significantly more accurate than typical cross-validation-based error metrics. In this paper, we make the following important advancements to the PEMF-based model selection framework, now called the Concurrent Surrogate Model Selection or COSMOS framework: (i) the optimization formulation is modified through binary coding to allow surrogates with differing numbers of candidate kernels and kernels with differing numbers of hyper-parameters (which was previously not allowed); (ii) a robustness criterion, based on the variance of errors, is added to the existing criteria for model selection; and (iii) a larger candidate pool of 16 surrogate-kernel combinations is considered for selection, possibly making COSMOS one of the most comprehensive surrogate model selection frameworks (in theory and implementation) currently available. The effectiveness of the COSMOS framework is demonstrated by successfully applying it to four benchmark problems (with 2–30 variables) and an airfoil design problem. The optimal model selection results illustrate how diverse models provide important tradeoffs for different problems.
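
The concurrent model/kernel selection idea can be illustrated with a small sketch that enumerates candidate surrogate-kernel pairs and ranks them by an error metric. Here a plain k-fold cross-validation error stands in for the PEMF metric, and the candidate pool is much smaller than the 16 combinations considered by COSMOS; the models and data are illustrative assumptions.

```python
# Hedged sketch of concurrent model/kernel selection: enumerate candidate
# surrogate-kernel pairs and rank them by an error metric. Plain k-fold
# cross-validation error is used here as a stand-in for PEMF, and the candidate
# pool is far smaller than COSMOS's 16 combinations.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(80, 3))
y = X[:, 0] ** 2 + np.sin(4.0 * X[:, 1]) + 0.5 * X[:, 2]

candidates = {
    "Kriging / RBF kernel":    GaussianProcessRegressor(kernel=RBF()),
    "Kriging / Matern kernel": GaussianProcessRegressor(kernel=Matern(nu=1.5)),
    "SVR / RBF kernel":        SVR(kernel="rbf", C=10.0),
    "SVR / polynomial kernel": SVR(kernel="poly", degree=3, C=10.0),
}

# Rank each surrogate-kernel pair by its 5-fold cross-validation mean squared error.
scores = {}
for name, model in candidates.items():
    mse = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error").mean()
    scores[name] = mse

for name, mse in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{name:28s}  CV-MSE = {mse:.4f}")
print("selected surrogate-kernel pair:", min(scores, key=scores.get))
```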


10th AIAA Multidisciplinary Design Optimization Conference | 2014

A Novel Approach to Simultaneous Selection of Surrogate Models, Constitutive Kernels, and Hyper-parameter Values

Ali Mehmani; Souma Chowdhury; Jie Zhang; Achille Messac

Owing to the multitude of surrogate modeling techniques developed in recent years, and the diverse characteristics they offer, automated adaptive model selection approaches could be helpful in selecting the most suitable surrogate for a given problem. Surrogate selection can be performed at three different levels: (i) model type selection, (ii) basis (or kernel) function selection, and (iii) hyper-parameter selection, where hyper-parameters are those kernel parameters that are generally specified by the user. Unlike the majority of existing model selection techniques, this paper explores the development of a method that performs selection coherently at all three levels. In this context, the REES method is used to provide measures of the median and maximum errors of a candidate surrogate model. Two approaches are used for the 3-level selection: (i) a cascaded approach performs each level in a nested loop, in the order model, then kernel, then hyper-parameters; (ii) a more advanced one-step approach solves a mixed-integer nonlinear programming (MINLP) problem to simultaneously optimize the model, kernel, and hyper-parameters. In both approaches, multiobjective optimization is performed to yield the best trade-offs between the estimated median and maximum errors. Candidate surrogates include (i) Kriging, (ii) radial basis functions (RBF), and (iii) support vector regression (SVR), and multiple candidate kernels are allowed within these surrogate models. The 3-level REES-based model selection is compared, for validation purposes, with model selection based on errors estimated on a large set of additional test points. Numerical experiments on 2-variable, 6-variable, and 18-variable test problems, and on a wind farm power generation problem, show that the proposed approach provides unique flexibility in model selection and is also reasonably accurate when compared with selection based on errors estimated on additional test points.
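
The innermost (hyper-parameter) level of the cascaded search described above can be sketched as a one-dimensional minimization of a surrogate error measure over a kernel shape parameter. The sketch below assumes a multiquadric RBF built with SciPy's RBFInterpolator and uses a plain leave-one-out error in place of the paper's REES-based error measures.

```python
# Minimal sketch of the innermost (hyper-parameter) level of the cascaded search:
# pick the shape parameter of a multiquadric RBF surrogate by minimizing its
# leave-one-out error. Uses scipy's RBFInterpolator; the REES error measures from
# the paper are replaced here by a plain LOO error.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, size=(40, 2))
y = np.exp(-np.sum(X ** 2, axis=1)) + 0.1 * X[:, 0]

def loo_error(epsilon: float) -> float:
    """Mean absolute leave-one-out error of a multiquadric RBF with shape parameter epsilon."""
    errs = []
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        rbf = RBFInterpolator(X[mask], y[mask], kernel="multiquadric", epsilon=epsilon)
        errs.append(abs(rbf(X[i:i + 1])[0] - y[i]))
    return float(np.mean(errs))

# Bounded 1-D search over the shape parameter (bounds are illustrative).
res = minimize_scalar(loo_error, bounds=(0.1, 10.0), method="bounded")
print(f"selected shape parameter: {res.x:.3f}  (LOO error {res.fun:.4f})")
```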


Scopus | 2013

Quantifying regional error in surrogates by modeling its relationship with sample density

Ali Mehmani; Souma Chowdhury; Jie Zhang; Weiyang Tong; Achille Messac

Approximation models (or surrogate models) provide an efficient substitute for expensive physical simulations and an efficient solution to the lack of physical models of system behavior. However, it is challenging to quantify the accuracy and reliability of such approximation models in a region of interest or over the overall domain without additional system evaluations. Standard error measures, such as the mean squared error, the cross-validation error, and the Akaike information criterion, provide limited (often inadequate) information regarding the accuracy of the final surrogate. This paper introduces a novel and model-independent concept to quantify the level of errors in the function value estimated by the final surrogate in any given region of the design domain. This method is called the Regional Error Estimation of Surrogate (REES). Assuming the full set of available sample points to be fixed, intermediate surrogates are iteratively constructed over a sample set comprising all samples outside the region of interest and heuristic subsets of samples inside the region of interest (i.e., intermediate training points). The intermediate surrogate is tested over the remaining sample points inside the region of interest (i.e., intermediate test points). The fraction of sample points inside the region of interest that are used as intermediate training points is fixed at each iteration, with the total number of iterations being pre-specified. The estimated median and maximum relative errors within the region of interest for the heuristic subsets at each iteration are used to fit distributions of the median and maximum error, respectively. The estimated statistical mode of the median and the maximum error, and the absolute maximum error, are then represented as functions of the density of intermediate training points, using regression models. The regression models are then used to predict the expected median and maximum regional errors when all the sample points are used as training points. Standard test functions and a wind farm power generation problem are used to illustrate the effectiveness and the utility of such a regional error quantification method.
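
A rough sketch of the REES workflow described above: intermediate surrogates are built from growing subsets of the in-region samples, the median error on the held-out in-region samples is recorded, and a regression of error against the number of in-region training points is extrapolated to the full sample set. The Gaussian process surrogate, the test function, and the power-law regression form below are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of the REES idea: intermediate surrogates over growing subsets of
# the in-region samples, median held-out error per subset size, then a regression
# of error vs. number of in-region training points extrapolated to the full set.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(3)
X = rng.uniform(0.0, 1.0, size=(100, 2))
y = np.sin(6.0 * X[:, 0]) + X[:, 1] ** 2

in_region = (X[:, 0] > 0.5) & (X[:, 1] > 0.5)   # region of interest
X_out, y_out = X[~in_region], y[~in_region]
X_in, y_in = X[in_region], y[in_region]

fractions = [0.3, 0.5, 0.7]                     # fixed fraction of in-region points per step
n_used, med_err = [], []
for frac in fractions:
    for _ in range(10):                         # heuristic subsets at each step
        idx = rng.permutation(len(X_in))
        k = int(frac * len(X_in))
        train = np.vstack([X_out, X_in[idx[:k]]])
        target = np.concatenate([y_out, y_in[idx[:k]]])
        gp = GaussianProcessRegressor().fit(train, target)
        err = np.abs(gp.predict(X_in[idx[k:]]) - y_in[idx[k:]])
        n_used.append(k)
        med_err.append(np.median(err))

# Power-law regression of median error vs. in-region training points: err ~ a * n^b.
b, log_a = np.polyfit(np.log(n_used), np.log(med_err), 1)
n_full = len(X_in)
predicted = np.exp(log_a) * n_full ** b
print(f"predicted median error in the region with all {n_full} points: {predicted:.4f}")
```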


53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference / 20th AIAA/ASME/AHS Adaptive Structures Conference / 14th AIAA | 2012

Surrogate-Based Design Optimization with Adaptive Sequential Sampling

Ali Mehmani; Jie Zhang; Souma Chowdhury; Achille Messac

Surrogate-based design is an effective approach for modeling computationally expensive system behavior. In such applications, it is often challenging to characterize the expected accuracy of the surrogate. In addition to global and local error measures, regional error measures can be used to understand and interpret the surrogate accuracy in the regions of interest. This paper develops the Regional Error Estimation of Surrogate (REES) method to quantify the level of error in any given subspace (or region) of the entire domain, when all the available training points have been invested to build the surrogate. In this approach, the accuracy of the surrogate in each subspace is estimated by modeling the variations of the mean and the maximum error in that subspace with an increasing number of training points (in an iterative process). A regression model is used for this purpose. At each iteration, the intermediate surrogate is constructed using a subset of the entire training data and tested over the remaining points. The errors evaluated at the intermediate test points at each iteration are used to train the regression model that represents the error variation with the number of sample points. The effectiveness of the proposed method is illustrated using standard test problems. To this end, the predicted regional errors of the surrogate constructed using all the training points are compared with the regional errors estimated over a large set of test points.


Scopus | 2015

Adaptive Switching of Variable-Fidelity Models in Population-Based Optimization

Ali Mehmani; Souma Chowdhury; Weiyang Tong; Achille Messac

This article presents a novel model management technique to be implemented in population-based heuristic optimization. This technique adaptively selects different computational models (both physics-based models and surrogate models) to be used during optimization, with the overall objective of obtaining optimal designs with high-fidelity function estimates at a reasonable computational expense. For example, in optimizing an aircraft wing to obtain the maximum lift-to-drag ratio, one can use a low-fidelity model such as that given by the vortex lattice method, a high-fidelity finite volume model, or a surrogate model that substitutes for the high-fidelity model. The information from these models with different levels of fidelity is integrated into the heuristic optimization process using the new adaptive model switching (AMS) technique. The model switching technique replaces the current model with the next higher-fidelity model when a stochastic switching criterion is met at a given iteration during the optimization process. The switching criterion is based on whether the uncertainty associated with the current model output dominates the latest improvement of the relative fitness function, where both the model output uncertainty and the function improvement (across the population) are expressed as probability distributions. For practical implementation, a measure of critical probability is used to regulate the degree of error that will be allowed, i.e., the fraction of instances where the improvement will be allowed to be lower than the model error without having to change the model. In the absence of this critical probability, model management might become too conservative, leading to premature model switching and thus higher computing expense. The proposed AMS-based optimization is applied, through particle swarm optimization, to two design problems: (i) airfoil design, and (ii) cantilever composite beam design. The application case studies illustrate: (i) the computational advantage of this method over purely high-fidelity model-based optimization, and (ii) the accuracy advantage of this method over purely low-fidelity model-based optimization.
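
The stochastic switching criterion can be sketched as a simple Monte Carlo test: draw samples of the current model's output uncertainty and of the latest fitness improvement, estimate the probability that the improvement falls below the model error, and switch when that probability exceeds a user-set critical value. The normal distributions and the numbers below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of the AMS switching test: treat the current model's output
# uncertainty and the latest fitness improvement (across the population) as
# probability distributions, and switch to the next-higher-fidelity model when the
# probability that the improvement is smaller than the model error exceeds a
# critical probability. Distributions and numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
n = 20000
model_error = rng.normal(loc=0.05, scale=0.01, size=n)   # uncertainty of the current (lower-fidelity) model
improvement = rng.normal(loc=0.03, scale=0.02, size=n)   # latest relative fitness improvement over the population

# Monte Carlo estimate of P(improvement < model error) from paired samples.
p_dominated = np.mean(improvement < model_error)
p_critical = 0.9                                          # allowed fraction of "improvement below error" instances

if p_dominated > p_critical:
    print(f"P(improvement < error) = {p_dominated:.2f} > {p_critical}: switch to the higher-fidelity model")
else:
    print(f"P(improvement < error) = {p_dominated:.2f} <= {p_critical}: keep the current model")
```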


Journal of Mechanical Design | 2015

Sensitivity of Wind Farm Output to Wind Conditions, Land Configuration, and Installed Capacity, Under Different Wake Models

Weiyang Tong; Souma Chowdhury; Ali Mehmani; Achille Messac; Jie Zhang

In conventional wind farm design and optimization, analytical wake models are generally used to estimate the wake-induced power losses. Different wake models often yield significantly dissimilar estimates of wake velocity deficit and wake width. In this context, the wake behavior, as well as the subsequent wind farm power generation, can be expressed as functions of a series of key factors. A quantitative understanding of the relative impact of each of these key factors, particularly under the application of different wake models, is paramount to reliable quantification of wind farm power generation. Such an understanding is however not readily evident in the current state of the art in wind farm design. To fill this important gap, this paper develops a comprehensive sensitivity analysis (SA) of wind farm performance with respect to the key natural and design factors. Specifically, the sensitivities of the estimated wind farm power generation and maximum farm output potential are investigated with respect to the following key factors: (i) incoming wind speed, (ii) ambient turbulence, (iii) land area per MW installed, (iv) land aspect ratio, and (v) nameplate capacity. The extended Fourier amplitude sensitivity test (e-FAST), which helpfully provides a measure of both first-order and total-order sensitivity indices, is used for this purpose. The impact of using four different analytical wake models (i.e., Jensen, Frandsen, Larsen, and Ishihara models) on the wind farm SA is also explored. By applying this new SA framework, it was observed that, when the incoming wind speed is below the turbine rated speed, the impact of incoming wind speed on the wind farm power generation is dominant, irrespective of the choice of wake models. Interestingly, for array-like wind farms, the relative importance of each input parameter was found to vary significantly with the choice of wake models, i.e., appreciable differences in the sensitivity indices (of up to 70%) were observed across the different wake models. In contrast, for optimized wind farm layouts, the choice of wake models was observed to have marginal impact on the sensitivity indices. [DOI: 10.1115/1.4029892]
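
An eFAST study in the spirit of the analysis above can be set up with the open-source SALib package. The toy power model below (one upstream turbine plus one downstream turbine in a Jensen-type wake) and the input bounds are illustrative assumptions, not the paper's wind farm model.

```python
# Hedged sketch of an eFAST sensitivity study using SALib. The toy power model
# (a single Jensen-type wake acting on one downstream turbine) and the input
# bounds are illustrative assumptions, not the paper's wind farm model.
import numpy as np
from SALib.sample import fast_sampler
from SALib.analyze import fast

problem = {
    "num_vars": 3,
    "names": ["wind_speed", "spacing", "wake_decay"],
    "bounds": [[5.0, 12.0],     # incoming wind speed, m/s
               [3.0, 10.0],     # downstream spacing, rotor diameters
               [0.02, 0.10]],   # wake decay constant (ambient-turbulence proxy)
}

def farm_power(sample: np.ndarray) -> float:
    """Power (arbitrary units) of one upstream turbine plus one waked downstream turbine."""
    u, spacing, k = sample
    ct = 0.8                                                              # assumed thrust coefficient
    deficit = (1.0 - np.sqrt(1.0 - ct)) / (1.0 + 2.0 * k * spacing) ** 2  # Jensen velocity deficit
    u_wake = u * (1.0 - deficit)
    return u ** 3 + u_wake ** 3                                           # power scales with the cube of wind speed

X = fast_sampler.sample(problem, 1000)          # eFAST sample
Y = np.array([farm_power(x) for x in X])
Si = fast.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:12s}  first-order = {s1:.3f}   total-order = {st:.3f}")
```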


ASME 2013 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference | 2013

SENSITIVITY OF ARRAY-LIKE AND OPTIMIZED WIND FARM OUTPUT TO KEY FACTORS AND CHOICE OF WAKE MODELS

Weiyang Tong; Souma Chowdhury; Ali Mehmani; Jie Zhang; Achille Messac

The creation of wakes, with unique turbulence characteristics, downstream of turbines significantly increases the complexity of the boundary layer flow within a wind farm. In conventional wind farm design, analytical wake models are generally used to compute the wake-induced power losses, with different wake models yielding significantly different estimates. In this context, the wake behavior, and subsequently the farm power generation, can be expressed as functions of a series of key factors. A quantitative understanding of the relative impact of each of these factors is paramount to the development of more reliable power generation models; such an understanding is however missing in the current state of the art in wind farm design. In this paper, we quantitatively explore how the farm …


12th AIAA Aviation Technology, Integration, and Operations (ATIO) Conference and 14th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference | 2012

Uncertainty Quantification in Surrogate Models Based on Pattern Classification of Cross-validation Errors

Jie Zhang; Souma Chowdhury; Ali Mehmani; Achille Messac

This paper advances the Domain Segmentation based on Uncertainty in the Surrogate (DSUS) framework, which is a novel approach to characterize the uncertainty in surrogates. The leave-one-out cross-validation technique is adopted in the DSUS framework to measure local errors of a surrogate. A method is proposed in this paper to evaluate the performance of the leave-one-out cross-validation errors as local error measures. This method evaluates local errors by comparing: (i) the leave-one-out cross-validation error with (ii) the actual local error estimated within a local hypercube for each training point. The comparison results show that the leave-one-out cross-validation strategy can capture the local errors of a surrogate. The DSUS framework is then applied to key aspects of wind resource assessment and wind farm cost modeling. The uncertainties in the wind farm cost and the wind power potential are successfully characterized, which provides designers/users more confidence when using these models.


Design Automation Conference | 2015

Variable-Fidelity Optimization With In-Situ Surrogate Model Refinement

Ali Mehmani; Souma Chowdhury; Achille Messac

Owing to the typically low fidelity of surrogate models, it is often challenging to achieve reliable optimum solutions in Surrogate-Based Optimization (SBO) for complex nonlinear problems. This paper addresses this challenge by developing a new model-independent approach to refine the surrogate model during optimization, with the objective of maintaining a desired level of fidelity and robustness “where” and “when” needed. The proposed approach, called Adaptive Model Refinement (AMR), is designed to work particularly with population-based optimization algorithms. In AMR, reconstruction of the model is performed by sequentially adding a batch of new samples at any given iteration (of SBO) when a refinement metric is met. This metric is formulated by comparing (1) the uncertainty associated with the outputs of the current model, and (2) the distribution of the latest fitness function improvement over the population of candidate designs. Whenever the model-refinement metric is met, the history of the fitness function improvement is used to determine the desired fidelity for the upcoming iterations of SBO. Predictive Estimation of Model Fidelity (an advanced surrogate model error metric) is applied to determine the model uncertainty and the batch size for the samples to be added. The location of the new samples in the input space is determined based on a hypercube enclosing promising candidate designs, and a distance-based criterion that minimizes the correlation between the current sample points and the new points. The powerful mixed-discrete PSO algorithm is used in conjunction with different surrogate models (e.g., Kriging, RBF, SVR) to apply the new AMR method. The performance of the proposed AMR-based SBO is investigated through three different benchmark functions.
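
One ingredient of AMR, namely placing a batch of new samples inside a hypercube enclosing the most promising candidate designs while keeping them far from the existing samples, can be sketched with a greedy maximin-distance selection. The batch size, the stand-in fitness function, and the candidate pool below are illustrative assumptions; the PEMF-based refinement trigger is not reproduced here.

```python
# Hedged sketch of one AMR ingredient: place a batch of new samples inside a
# hypercube enclosing the most promising candidate designs, choosing each new point
# to be far from all existing points (a greedy maximin-distance criterion).
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(5)
existing = rng.uniform(0.0, 1.0, size=(30, 2))      # current surrogate training points
population = rng.uniform(0.0, 1.0, size=(40, 2))    # current PSO population
fitness = np.sum((population - 0.7) ** 2, axis=1)   # stand-in fitness (lower is better)

# Hypercube enclosing the best 25% of the population (the "promising" designs).
promising = population[fitness <= np.quantile(fitness, 0.25)]
lower, upper = promising.min(axis=0), promising.max(axis=0)

# Greedy maximin selection: from random candidates in the hypercube, repeatedly take
# the one farthest from all points chosen so far (existing plus newly added).
batch_size = 5
candidates = rng.uniform(lower, upper, size=(500, 2))
chosen = existing.copy()
new_points = []
for _ in range(batch_size):
    d_min = cdist(candidates, chosen).min(axis=1)
    pick = candidates[np.argmax(d_min)]
    new_points.append(pick)
    chosen = np.vstack([chosen, pick])

print("new samples to evaluate with the high-fidelity model:\n", np.array(new_points))
```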

Collaboration


Dive into Ali Mehmani's collaborations.

Top Co-Authors

Jie Zhang

Office of Scientific and Technical Information


Mucun Sun

University of Texas at Dallas
