
Publications


Featured research published by Robert Michael Lewis.


SIAM Review | 2003

Optimization by Direct Search: New Perspectives on Some Classical and Modern Methods

Tamara G. Kolda; Robert Michael Lewis; Virginia Torczon

Direct search methods are best known as unconstrained optimization techniques that do not explicitly use derivatives. Direct search methods were formally proposed and widely applied in the 1960s but fell out of favor with the mathematical optimization community by the early 1970s because they lacked coherent mathematical analysis. Nonetheless, users remained loyal to these methods, most of which were easy to program, some of which were reliable. In the past fifteen years, these methods have seen a revival due, in part, to the appearance of mathematical analysis, as well as to interest in parallel and distributed computing. This review begins by briefly summarizing the history of direct search methods and considering the special properties of problems for which they are well suited. Our focus then turns to a broad class of methods for which we provide a unifying framework that lends itself to a variety of convergence results. The underlying principles allow generalization to handle bound constraints and linear constraints. We also discuss extensions to problems with nonlinear constraints.
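The flavor of the class of methods the review unifies can be conveyed by compass search, the simplest pattern search method: poll the coordinate directions for simple decrease and contract the step when no poll point improves. The following is an illustrative sketch under those assumptions, not code from the paper:

```python
# Minimal compass (coordinate) search sketch: no derivatives are used.
# Real pattern search implementations add scaling, richer patterns, and
# convergence safeguards; this only illustrates the polling/contraction idea.

def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=1000):
    """Minimize f by polling the 2n coordinate directions around x."""
    x = list(x0)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for sign in (1.0, -1.0):
                trial = list(x)
                trial[i] += sign * step
                ft = f(trial)
                if ft < fx:          # simple decrease: accept the poll point
                    x, fx = trial, ft
                    improved = True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5              # unsuccessful poll: contract the pattern
            if step < tol:
                break
    return x, fx

# Example: a smooth quadratic with minimizer (1, -2).
xmin, fmin = compass_search(lambda v: (v[0] - 1.0)**2 + (v[1] + 2.0)**2,
                            [0.0, 0.0])
```

The contraction of the step size is what drives the convergence theory: an unsuccessful poll at a small step certifies that the gradient is correspondingly small.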


Structural Optimization | 1997

A Trust Region Framework for Managing the Use of Approximation Models in Optimization

Natalia Alexandrov; John E. Dennis; Robert Michael Lewis

This paper presents an analytically robust, globally convergent approach to managing the use of approximation models of varying fidelity in optimization. By robust global behaviour we mean the mathematical assurance that the iterates produced by the optimization algorithm, started at an arbitrary initial iterate, will converge to a stationary point or local optimizer for the original problem. The approach presented is based on the trust region idea from nonlinear programming and is shown to be provably convergent to a solution of the original high-fidelity problem. The proposed method for managing approximations in engineering optimization suggests ways to decide when the fidelity, and thus the cost, of the approximations might be fruitfully increased or decreased in the course of the optimization iterations. The approach is quite general. We make no assumptions on the structure of the original problem, in particular, no assumptions of convexity and separability, and place only mild requirements on the approximations. The approximations used in the framework can be of any nature appropriate to an application; for instance, they can be represented by analyses, simulations, or simple algebraic models. This paper introduces the approach and outlines the convergence analysis.
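The trust region mechanism behind such model management can be sketched with the standard actual-versus-predicted ratio test. The toy functions, acceptance thresholds, and update factors below are illustrative assumptions, not the paper's algorithm:

```python
# Sketch of the trust-region ratio test used to manage a cheap low-fidelity
# model m against an expensive high-fidelity function f (1-D, toy example).

def trust_region_step(f, m, x, radius):
    """Propose a step from the model, then accept/reject via the ratio test."""
    # Crude "subproblem": sample the model within the trust region.
    candidates = [x - radius, x - radius / 2, x + radius / 2, x + radius]
    s = min(candidates, key=m) - x
    actual = f(x) - f(x + s)          # expensive evaluations
    predicted = m(x) - m(x + s)       # cheap model evaluations
    rho = actual / predicted if predicted > 0 else -1.0
    if rho >= 0.75:                   # model agrees well: accept, expand
        return x + s, min(2.0 * radius, 10.0)
    elif rho >= 0.1:                  # acceptable agreement: accept, keep
        return x + s, radius
    else:                             # poor agreement: reject, shrink
        return x, 0.5 * radius

f = lambda x: (x - 3.0)**2            # "high-fidelity" function
m = lambda x: (x - 3.0)**2 + 0.05*x   # slightly biased "low-fidelity" model
x, radius = 0.0, 1.0
for _ in range(50):
    x, radius = trust_region_step(f, m, x, radius)
```

The ratio test is exactly what lets the framework place only mild requirements on the approximation: a poor model merely shrinks the region in which it is trusted.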


SIAM Journal on Optimization | 1994

Problem Formulation for Multidisciplinary Optimization

Evin J. Cramer; John E. Dennis; Paul D. Frank; Robert Michael Lewis; Gregory R. Shubin

This paper is about multidisciplinary (design) optimization, or MDO, the coupling of two or more analysis disciplines with numerical optimization. The paper has three goals. First, it is an expository introduction to MDO aimed at those who do research on optimization algorithms, since the optimization community has much to contribute to this important class of computational engineering problems. Second, this paper presents to the MDO research community a new abstraction for multidisciplinary analysis and design problems as well as new decomposition formulations for these problems. Third, the “individual discipline feasible” (IDF) approaches introduced here make use of existing specialized analysis codes, and they introduce significant opportunities for coarse-grained computational parallelism particularly well suited to heterogeneous computing environments. The key distinguishing characteristic of the three fundamental approaches to MDO formulation discussed here is the kind of disciplinary feasibility that...
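The coupling that MDO formulations must manage can be illustrated with two hypothetical toy disciplines: a fully integrated analysis iterates them to mutual consistency at every optimization iterate, while an IDF-style formulation instead exposes coupling targets and consistency residuals to the optimizer so each discipline can run independently. The disciplines and coefficients below are invented for illustration, not taken from the paper:

```python
# Two hypothetical coupled "disciplines": discipline 1 computes y1 from
# (x, y2); discipline 2 computes y2 from (x, y1).

def discipline1(x, y2):
    return 0.5 * x + 0.25 * y2

def discipline2(x, y1):
    return 1.0 - 0.5 * y1

def mda_gauss_seidel(x, tol=1e-10, max_iter=100):
    """Multidisciplinary analysis: iterate the disciplines to consistency.
    This is the inner loop that fully integrated formulations must run
    at every optimization iterate."""
    y1, y2 = 0.0, 0.0
    for _ in range(max_iter):
        y1_new = discipline1(x, y2)
        y2_new = discipline2(x, y1_new)
        if abs(y1_new - y1) < tol and abs(y2_new - y2) < tol:
            return y1_new, y2_new
        y1, y2 = y1_new, y2_new
    return y1, y2

def idf_consistency(x, t1, t2):
    """IDF-style residuals: treat coupling targets (t1, t2) as optimization
    variables and ask the optimizer to drive these residuals to zero, so
    the disciplines can be evaluated independently (and in parallel)."""
    return (discipline1(x, t2) - t1, discipline2(x, t1) - t2)

y1, y2 = mda_gauss_seidel(2.0)
r1, r2 = idf_consistency(2.0, y1, y2)  # converged MDA output satisfies IDF
```

The trade is visible even in this toy: the fixed-point loop buys feasible coupling at every iterate, while IDF replaces it with extra variables and constraints that decouple the discipline evaluations.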


Journal of Computational and Applied Mathematics | 2000

Direct search methods: then and now

Robert Michael Lewis; Virginia Torczon; Michael W. Trosset

We discuss direct search methods for unconstrained optimization. We give a modern perspective on this classical family of derivative-free algorithms, focusing on the development of direct search methods during their golden age from 1960 to 1971. We discuss how direct search methods are characterized by the absence of the construction of a model of the objective. We then consider a number of the classical direct search methods and discuss what research in the intervening years has uncovered about these algorithms. In particular, while the original direct search methods were consciously based on straightforward heuristics, more recent analysis has shown that in most - but not all - cases these heuristics actually suffice to ensure global convergence of at least one subsequence of the sequence of iterates to a first-order stationary point of the objective function.


SIAM Journal on Optimization | 1999

Pattern Search Algorithms for Bound Constrained Minimization

Robert Michael Lewis; Virginia Torczon

We present a convergence theory for pattern search methods for solving bound constrained nonlinear programs. The analysis relies on the abstract structure of pattern search methods and an understanding of how the pattern interacts with the bound constraints. This analysis makes it possible to develop pattern search methods for bound constrained problems while only slightly restricting the flexibility present in pattern search methods for unconstrained problems. We prove global convergence despite the fact that pattern search methods do not have explicit information concerning the gradient and its projection onto the feasible region and consequently are unable to enforce explicitly a notion of sufficient feasible decrease.
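The interaction between the pattern and the bounds is easy to see in a sketch: coordinate directions naturally conform to the geometry of a box, so a compass-style search need only skip infeasible poll points. This is an illustrative sketch of that idea, not the paper's algorithm:

```python
# Compass-style pattern search restricted to a box: poll only feasible
# points. Because the coordinate directions conform to the geometry of
# bound constraints, simply skipping infeasible polls preserves convergence.

def bound_constrained_search(f, x0, lower, upper,
                             step=1.0, tol=1e-6, max_iter=1000):
    x = [min(max(xi, lo), up) for xi, lo, up in zip(x0, lower, upper)]
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for sign in (1.0, -1.0):
                trial = list(x)
                trial[i] += sign * step
                if not (lower[i] <= trial[i] <= upper[i]):
                    continue          # infeasible poll point: skip it
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    improved = True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5
            if step < tol:
                break
    return x, fx

# The unconstrained minimizer (1, -2) lies outside the box [0,2] x [0,2];
# the search should stop at the boundary point (1, 0).
xmin, fmin = bound_constrained_search(
    lambda v: (v[0] - 1.0)**2 + (v[1] + 2.0)**2,
    [0.5, 1.5], [0.0, 0.0], [2.0, 2.0])
```

No gradient or projected-gradient information appears anywhere, which is precisely the setting the convergence theory has to cover.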


SIAM Journal on Optimization | 1999

Pattern Search Methods for Linearly Constrained Minimization

Robert Michael Lewis; Virginia Torczon

We extend pattern search methods to linearly constrained minimization. We develop a general class of feasible point pattern search algorithms and prove global convergence to a Karush--Kuhn--Tucker point. As in the case of unconstrained minimization, pattern search methods for linearly constrained problems accomplish this without explicit recourse to the gradient or the directional derivative of the objective. Key to the analysis of the algorithms is the way in which the local search patterns conform to the geometry of the boundary of the feasible region.


SIAM Journal on Optimization | 2002

A globally convergent augmented Lagrangian pattern search algorithm for optimization with general constraints and simple bounds

Robert Michael Lewis; Virginia Torczon

We give a pattern search method for nonlinearly constrained optimization that is an adaptation of a bound constrained augmented Lagrangian method first proposed by Conn, Gould, and Toint [SIAM J. Numer. Anal., 28 (1991), pp. 545--572]. In the pattern search adaptation, we solve the bound constrained subproblem approximately using a pattern search method. The stopping criterion proposed by Conn, Gould, and Toint for the solution of the subproblem requires explicit knowledge of derivatives. Such information is presumed absent in pattern search methods; however, we show how we can replace this with a stopping criterion based on the pattern size in a way that preserves the convergence properties of the original algorithm. In this way we proceed by successive, inexact, bound constrained minimization without knowing exactly how inexact the minimization is. As far as we know, this is the first provably convergent direct search method for general nonlinear programming.
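The overall structure can be sketched on a toy equality-constrained problem: an outer augmented Lagrangian loop whose subproblems are solved by a derivative-free compass search stopped when the pattern size falls below a tolerance, which is the role the pattern size plays in place of a derivative-based test. The penalty schedule, tolerances, and toy problem below are illustrative assumptions, not the paper's algorithm:

```python
# Hedged sketch of the augmented Lagrangian / pattern search idea.
# Inner solver: compass search whose stopping criterion is the pattern size.

def compass_min(f, x, step, step_tol):
    fx = f(x)
    while step >= step_tol:              # pattern size as stopping criterion
        improved = False
        for i in range(len(x)):
            for sign in (1.0, -1.0):
                trial = list(x)
                trial[i] += sign * step
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5                  # unsuccessful poll: contract
    return x

def augmented_lagrangian(f, c, x, lam=0.0, mu=1.0, outer_iters=12):
    """Minimize f subject to c(x) = 0 via successive inexact subproblems."""
    for k in range(outer_iters):
        L = lambda z, lam=lam, mu=mu: f(z) + lam * c(z) + 0.5 * mu * c(z)**2
        x = compass_min(L, x, step=1.0, step_tol=10.0 ** (-(k + 2)))
        lam += mu * c(x)                 # first-order multiplier update
        mu *= 2.0                        # tighten the penalty
    return x, lam

# Toy problem: minimize x^2 + y^2 subject to x + y = 1; solution (0.5, 0.5).
x, lam = augmented_lagrangian(lambda z: z[0]**2 + z[1]**2,
                              lambda z: z[0] + z[1] - 1.0,
                              [0.0, 0.0])
```

Tightening the inner pattern-size tolerance along with the penalty parameter mirrors the "successive, inexact, bound constrained minimization" described above, even though the subproblems here carry no bounds.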


Journal of Aircraft | 2001

Approximation and Model Management in Aerodynamic Optimization with Variable-Fidelity Models

Natalia Alexandrov; Robert Michael Lewis; Clyde R. Gumbert; Lawrence L. Green; Perry A. Newman

This work discusses an approach, first-order approximation and model management optimization (AMMO), for solving design optimization problems that involve computationally expensive simulations. AMMO maximizes the use of lower-fidelity, cheaper models in iterative procedures with occasional, but systematic, recourse to higher-fidelity, more expensive models for monitoring the progress of design optimization. A distinctive feature of the approach is that it is globally convergent to a solution of the original, high-fidelity problem. Variants of AMMO based on three nonlinear programming algorithms are demonstrated on a three-dimensional aerodynamic wing optimization problem and a two-dimensional airfoil optimization problem. Euler analysis on meshes of varying degrees of refinement provides a suite of variable-fidelity models. Preliminary results indicate threefold savings in terms of high-fidelity analyses for the three-dimensional problem and twofold savings for the two-dimensional problem.


38th Aerospace Sciences Meeting and Exhibit | 1999

Optimization with variable-fidelity models applied to wing design

Natalia Alexandrov; Robert Michael Lewis; Clyde R. Gumbert; Larry L. Green; Perry A. Newman

This work discusses an approach, the Approximation Management Framework (AMF), for solving optimization problems that involve computationally expensive simulations. AMF aims to maximize the use of lower-fidelity, cheaper models in iterative procedures with occasional, but systematic, recourse to higher-fidelity, more expensive models for monitoring the progress of the algorithm. The method is globally convergent to a solution of the original, high-fidelity problem. Three versions of AMF, based on three nonlinear programming algorithms, are demonstrated on a 3D aerodynamic wing optimization problem and a 2D airfoil optimization problem. In both cases, Euler analysis on meshes of varying refinement provides a suite of variable-fidelity models. Preliminary results indicate threefold savings in terms of high-fidelity analyses for the 3D problem and twofold savings for the 2D problem.


AIAA Journal | 2002

Analytical and Computational Aspects of Collaborative Optimization for Multidisciplinary Design

Natalia Alexandrov; Robert Michael Lewis

Analytical features of multidisciplinary optimization (MDO) problem formulations have significant practical consequences for the ability of nonlinear programming algorithms to solve the resulting computational optimization problems reliably and efficiently. We explore this important but frequently overlooked fact using the notion of disciplinary autonomy. Disciplinary autonomy is a desirable goal in formulating and solving MDO problems; however, the resulting system optimization problems are frequently difficult to solve. We illustrate the implications of MDO problem formulation for the tractability of the resulting design optimization problem by examining a representative class of MDO problem formulations known as collaborative optimization. We also discuss an alternative problem formulation, distributed analysis optimization, that yields a more tractable computational optimization problem.

Collaboration

Top co-authors of Robert Michael Lewis:
Tamara G. Kolda (Sandia National Laboratories)


Anthony T. Patera (Massachusetts Institute of Technology)
