Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Charles Audet is active.

Publication


Featured research published by Charles Audet.


SIAM Journal on Optimization | 2002

Analysis of Generalized Pattern Searches

Charles Audet; John E. Dennis

This paper contains a new convergence analysis for the Lewis and Torczon generalized pattern search (GPS) class of methods for unconstrained and linearly constrained optimization. This analysis is motivated by a desire to understand the successful behavior of the algorithm under hypotheses that are satisfied by many practical problems. Specifically, even if the objective function is discontinuous or extended-valued, the methods find a limit point with some minimizing properties. Simple examples show that the strength of the optimality conditions at a limit point depends not only on the algorithm, but also on the directions it uses and on the smoothness of the objective at the limit point in question. The contribution of this paper is to provide a simple convergence analysis that supplies detail about the relation of optimality conditions to objective smoothness properties and to the defining directions for the algorithm, and it gives previous results as corollaries.
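To make the poll mechanism concrete, here is a minimal sketch of one GPS poll step in Python. It is an illustration under simplifying assumptions, not the paper's exact method: it uses only function values (no derivatives), assumes the directions form a positive spanning set, and omits the SEARCH step and the full mesh update rules.

```python
import numpy as np

def gps_poll(f, x, delta, directions):
    """One poll step of a generalized pattern search (illustrative sketch):
    evaluate f at mesh points x + delta*d; accept on simple decrease,
    otherwise refine the mesh. No derivatives of f are used."""
    fx = f(x)
    for d in directions:
        trial = x + delta * d
        if f(trial) < fx:            # success: move and keep the mesh size
            return trial, delta
    return x, delta / 2.0            # failure: shrink the mesh

# Coordinate directions and their negatives form a positive basis of R^2.
D = [np.array(v, dtype=float) for v in ((1, 0), (-1, 0), (0, 1), (0, -1))]
f = lambda z: abs(z[0]) + (z[1] - 1.0) ** 2   # nonsmooth test objective
x, delta = np.array([2.0, 1.0]), 1.0
for _ in range(50):
    x, delta = gps_poll(f, x, delta, D)
```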


SIAM Journal on Optimization | 2004

A Pattern Search Filter Method for Nonlinear Programming without Derivatives

Charles Audet; John E. Dennis

This paper formulates and analyzes a pattern search method for general constrained optimization based on filter methods for step acceptance. Roughly, a filter method accepts a step that improves either the objective function value or the value of some function that measures the constraint violation. The new algorithm does not compute or approximate any derivatives, penalty constants, or Lagrange multipliers. A key feature of the new algorithm is that it preserves the division into SEARCH and local POLL steps, which allows the explicit use of inexpensive surrogates or random search heuristics in the SEARCH step. It is shown here that the algorithm identifies limit points at which optimality conditions depend on local smoothness of the functions and, to a greater extent, on the choice of a certain set of directions. Stronger optimality conditions are guaranteed for smoother functions and, in the constrained case, for a fortunate choice of the directions on which the algorithm depends. These directional conditions generalize those given previously for linear constraints, but they do not require a feasible starting point. In the absence of general constraints, the proposed algorithm and its convergence analysis generalize previous work on unconstrained, bound constrained, and linearly constrained generalized pattern search. The algorithm is illustrated on some test examples and on an industrial wing planform engineering design application.
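A minimal sketch of the filter acceptance rule described above, assuming the common aggregation h(x) of squared constraint violations: a trial point's pair (f, h) is accepted only if no pair already in the filter dominates it.

```python
def violation(constraints, x):
    # Common aggregation h(x): sum of squared violations of c_i(x) <= 0.
    return sum(max(0.0, c(x)) ** 2 for c in constraints)

def filter_accepts(filt, f_new, h_new):
    """A trial point with values (f_new, h_new) is acceptable iff no stored
    pair dominates it, i.e. no (f, h) in the filter has f <= f_new and
    h <= h_new."""
    return all(not (f <= f_new and h <= h_new) for f, h in filt)

def filter_add(filt, f_new, h_new):
    # Keep only pairs not dominated by the new entry, then insert it.
    filt = [(f, h) for f, h in filt if not (f_new <= f and h_new <= h)]
    filt.append((f_new, h_new))
    return filt
```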


Mathematical Programming | 2000

A branch and cut algorithm for nonconvex quadratically constrained quadratic programming

Charles Audet; Pierre Hansen; Brigitte Jaumard; Gilles Savard

We present a branch and cut algorithm that yields, in finite time, a globally ε-optimal solution (with respect to feasibility and optimality) of the nonconvex quadratically constrained quadratic programming problem. The idea is to estimate all quadratic terms by successive linearizations within a branching tree using Reformulation-Linearization Techniques (RLT). To do so, four classes of linearizations (cuts), depending on one to three parameters, are detailed. For each class, we show how to select the best member with respect to a precise criterion. The cuts introduced at any node of the tree are valid in the whole tree, and not only within the subtree rooted at that node. In order to enhance the computational speed, the structure created at any node of the tree is flexible enough to be used at other nodes. Computational results are reported that include standard test problems taken from the literature. Some of these problems are solved for the first time with a proof of global optimality.
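The paper's four parameterized cut classes are more general, but the basic idea of linearly estimating a quadratic term can be illustrated by the simplest special case: the McCormick envelopes of a single bilinear term w = xy over a box.

```latex
% Simplest RLT-style linearization of one bilinear term w = xy,
% given bounds l_x <= x <= u_x and l_y <= y <= u_y (McCormick envelopes).
\begin{align*}
w &\ge l_y\,x + l_x\,y - l_x l_y, &
w &\ge u_y\,x + u_x\,y - u_x u_y,\\
w &\le u_y\,x + l_x\,y - l_x u_y, &
w &\le l_y\,x + u_x\,y - u_x l_y.
\end{align*}
```

Branching on the variable bounds tightens these estimators, which is what drives convergence to an ε-optimal point in such schemes.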


SIAM Journal on Optimization | 2000

Pattern Search Algorithms for Mixed Variable Programming

Charles Audet; John E. Dennis

Many engineering optimization problems involve a special kind of discrete variable that can be represented by a number, but this representation has no significance. Such variables arise when a decision involves some situation like a choice from an unordered list of categories. This has two implications: The standard approach of solving problems with continuous relaxations of discrete variables is not available, and the notion of local optimality must be defined through a user-specified set of neighboring points. We present a class of direct search algorithms to provide limit points that satisfy some appropriate necessary conditions for local optimality for such problems. We give a more expensive version of the algorithm that guarantees additional necessary optimality conditions. A small example illustrates the differences between the two versions. A real thermal insulation system design problem illustrates the efficacy of the user controls for this class of algorithms.
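A sketch of the user-specified neighborhood idea, using a hypothetical categorical variable: because the categories are unordered, a "neighbor" is whatever the user declares it to be, for instance the same continuous part with one category swapped.

```python
CATEGORIES = ("steel", "aluminum", "titanium")   # hypothetical unordered choices

def neighbors(x, material):
    """User-specified discrete neighborhood: keep the continuous part x and
    change the categorical variable to each alternative category. Local
    optimality is then defined relative to exactly this set."""
    return [(x, m) for m in CATEGORIES if m != material]

# A mixed-variable poll would compare f(x, material) against f at each of
# these neighbors (and at continuous poll points) before declaring a
# point locally optimal.
```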


SIAM Journal on Optimization | 2009

A Progressive Barrier for Derivative-Free Nonlinear Programming

Charles Audet; John E. Dennis

We propose a new constraint-handling approach for general constraints that is applicable to a widely used class of constrained derivative-free optimization methods. As in many methods that allow infeasible iterates, constraint violations are aggregated into a single constraint violation function. As in filter methods, a threshold, or barrier, is imposed on the constraint violation function, and any trial point whose constraint violation function value exceeds this threshold is discarded from consideration. In the new algorithm, unlike the filter method, the amount of constraint violation subject to the barrier is progressively decreased adaptively as the iteration evolves. We test this progressive barrier (PB) approach against the extreme barrier (EB) with the generalized pattern search (Gps) and the lower triangular mesh adaptive direct search (LTMads) methods for nonlinear derivative-free optimization. Tests are also conducted using the Gps-filter, which uses a version of the Fletcher-Leyffer filter approach. We know that Gps cannot be shown to yield KKT points with this strategy or the filter, but we use the Clarke nonsmooth calculus to prove Clarke stationarity of the sequences of feasible and infeasible trial points for LTMads-PB. Numerical experiments are conducted on three academic test problems with up to 50 variables and on a chemical engineering problem. The new LTMads-PB method generally outperforms our LTMads-EB in the case where no feasible initial points are known, and it does as well when feasible points are known, which leads us to recommend LTMads-PB. Thus LTMads-PB is a useful practical extension of our earlier LTMads-EB algorithm, particularly in the common case for real problems where no feasible point is known. The same conclusions hold for Gps-PB versus Gps-EB.
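A minimal sketch of the progressive-barrier screening, assuming the usual squared-hinge aggregation of constraint violations; the paper's actual barrier update is more refined than the simple rule used here.

```python
def violation(constraints, x):
    # Aggregate violation of c_i(x) <= 0 into one nonnegative number.
    return sum(max(0.0, c(x)) ** 2 for c in constraints)

def pb_screen(trials, constraints, h_max):
    """Discard trial points whose aggregate violation exceeds the barrier
    h_max, then tighten the barrier to the worst surviving infeasible
    violation, so the tolerated infeasibility shrinks as iterations
    evolve. (Illustrative only; the paper's update rule is subtler.)"""
    kept = [x for x in trials if violation(constraints, x) <= h_max]
    infeasible = [violation(constraints, x) for x in kept
                  if violation(constraints, x) > 0.0]
    return kept, (max(infeasible) if infeasible else 0.0)
```

Setting h_max = 0 from the start recovers the extreme barrier, which rejects every infeasible trial point outright.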


SIAM Journal on Optimization | 2009

OrthoMADS: A Deterministic MADS Instance with Orthogonal Directions

Mark A. Abramson; Charles Audet; John E. Dennis; Sébastien Le Digabel

The purpose of this paper is to introduce a new way of choosing directions for the mesh adaptive direct search (Mads) class of algorithms. The advantages of this new OrthoMads instantiation of Mads are that the polling directions are chosen deterministically, ensuring that the results of a given run are repeatable, and that they are orthogonal to each other, which yields convex cones of missed directions at each iteration that are minimal in a reasonable measure. Convergence results for OrthoMads follow directly from those already published for Mads, and they hold deterministically, rather than with probability one, as is the case for LtMads, the first Mads instance. The initial numerical results are quite good for the smooth and nonsmooth, constrained and unconstrained problems considered here.
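The orthogonality can be illustrated with a Householder construction: reflecting the identity about a unit vector q gives an orthogonal matrix H, and stacking H with -H yields 2n directions that positively span R^n. In OrthoMads the seed vector is produced deterministically (from the Halton sequence) and scaled to the mesh; in this sketch it is simply an input.

```python
import numpy as np

def ortho_poll_directions(v):
    """Orthogonal positive basis from a nonzero vector v via the Householder
    matrix H = I - 2 q q^T with q = v / ||v||: the columns of H are mutually
    orthogonal, and [H, -H] gives 2n poll directions positively spanning R^n."""
    q = v / np.linalg.norm(v)
    H = np.eye(v.size) - 2.0 * np.outer(q, q)
    return np.hstack([H, -H])

D = ortho_poll_directions(np.array([1.0, 2.0, 2.0]))
assert np.allclose(D[:, :3] @ D[:, :3].T, np.eye(3))   # orthogonality check
```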


SIAM Journal on Optimization | 2008

Multiobjective Optimization Through a Series of Single-Objective Formulations

Charles Audet; Gilles Savard; Walid Zghal

This work deals with bound constrained multiobjective optimization (MOP) of nonsmooth functions for problems where the structure of the objective functions either cannot be exploited or is absent. Typical situations arise when the functions are computed as the result of a computer simulation. We first present definitions and optimality conditions as well as two families of single-objective formulations of MOP. Next, we propose a new algorithm, called BiMads, for the biobjective optimization (BOP) problem (i.e., MOP with two objective functions). The property that Pareto points may be ordered in BOP and not in MOP is exploited by our algorithm. BiMads generates an approximation of the Pareto front by solving a series of single-objective formulations of BOP. These single-objective problems are solved using the recent Mads (mesh adaptive direct search) algorithm for nonsmooth optimization. The Pareto front approximation is shown to satisfy some first order necessary optimality conditions based on the Clarke calculus. Finally, BiMads is tested on problems from the literature designed to illustrate specific difficulties encountered in biobjective optimization, such as a nonconvex or disjoint Pareto front, local Pareto fronts, or a nonuniform Pareto front.
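As one concrete member of the single-objective families mentioned above, consider the classic weighted-sum formulation. It is shown only as an illustration: the paper develops its own parameterized formulations, which also handle the nonconvex fronts that weighted sums miss.

```python
def weighted_sum(F, w):
    """Scalarize a biobjective problem: minimize w*f1 + (1-w)*f2 for a fixed
    weight 0 <= w <= 1. Sweeping w traces Pareto points on convex parts of
    the front only, one motivation for richer formulations."""
    return lambda x: w * F(x)[0] + (1.0 - w) * F(x)[1]

# Each scalarized problem is handed to a single-objective solver (Mads in
# the paper); the resulting minimizers approximate the Pareto front.
```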


Journal of Optimization Theory and Applications | 1997

Links between linear bilevel and mixed 0-1 programming problems

Charles Audet; Pierre Hansen; Brigitte Jaumard; Gilles Savard

We study links between the linear bilevel and linear mixed 0–1 programming problems. A new reformulation of the linear mixed 0–1 programming problem into a linear bilevel programming one, which does not require the introduction of a large finite constant, is presented. We show that solving a linear mixed 0–1 problem by a classical branch-and-bound algorithm is equivalent in a strong sense to solving its bilevel reformulation by a bilevel branch-and-bound algorithm. The mixed 0–1 algorithm is embedded in the bilevel algorithm through the aforementioned reformulation; i.e., when applied to any mixed 0–1 instance and its bilevel reformulation, they generate sequences of subproblems which are identical via the reformulation.


SIAM Journal on Optimization | 2006

Convergence of Mesh Adaptive Direct Search to Second-Order Stationary Points

Mark A. Abramson; Charles Audet

A previous analysis of second-order behavior of generalized pattern search algorithms for unconstrained and linearly constrained minimization is extended to the more general class of mesh adaptive direct search (MADS) algorithms for general constrained optimization. Because of the ability of MADS to generate an asymptotically dense set of search directions, we are able to establish reasonable conditions under which a subsequence of MADS iterates converges to a limit point satisfying second-order necessary or sufficient optimality conditions for general set-constrained optimization problems.


SIAM Journal on Optimization | 2006

Finding Optimal Algorithmic Parameters Using Derivative-Free Optimization

Charles Audet; Dominique Orban

The objectives of this paper are twofold. We devise a general framework for identifying locally optimal algorithmic parameters. Algorithmic parameters are treated as decision variables in a problem for which no derivative knowledge or existence is assumed. A derivative-free method for optimization seeks to minimize some measure of performance of the algorithm being fine-tuned. This measure is treated as a black box and may be chosen by the user. Examples are given in the text. The second objective is to illustrate this framework by specializing it to the identification of locally optimal trust-region parameters in unconstrained optimization. The derivative-free method chosen to guide the process is the mesh adaptive direct search, a generalization of pattern search methods. We illustrate the flexibility of the latter and in particular make provision for surrogate objectives. Locally optimal parameters with respect to overall computational time on a set of test problems are identified. Each function call may take several hours and may not always return a predictable result. A tailored surrogate function is used to guide the search towards a local solution. The parameters thus identified differ from traditionally used values, and allow one to solve a problem that remained otherwise unsolved in a reasonable time using traditional values.
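The framework treats the performance measure as a black box. A minimal sketch, assuming a hypothetical solve(problem, params) routine and total wall-clock time as the user-chosen measure:

```python
import time

def performance(params, problems, solve):
    """Black-box performance measure for parameter tuning: total wall-clock
    time to solve a batch of test problems with the candidate parameters.
    'solve' is a hypothetical solver; any user-chosen measure would do,
    and it may be noisy or fail, so no smoothness or determinism is
    assumed."""
    start = time.perf_counter()
    for problem in problems:
        solve(problem, params)
    return time.perf_counter() - start

# This scalar function of 'params' is what the derivative-free optimizer
# (Mads in the paper) minimizes, possibly via a cheaper surrogate.
```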

Collaboration


Dive into Charles Audet's collaborations.

Top Co-Authors

Sébastien Le Digabel, École Polytechnique de Montréal
Gilles Savard, École Polytechnique de Montréal
Warren Hare, University of British Columbia
Mark A. Abramson, Air Force Institute of Technology
Christophe Tribes, École Polytechnique de Montréal