Publications


Featured research published by Dominique Orban.


Acta Numerica | 2005

Numerical methods for large-scale nonlinear optimization

Nicholas I. M. Gould; Dominique Orban; Philippe L. Toint

Recent developments in numerical methods for solving large differentiable nonlinear optimization problems are reviewed. State-of-the-art algorithms for solving unconstrained, bound-constrained, linearly constrained and non-linearly constrained problems are discussed. As well as important conceptual advances and theoretical aspects, emphasis is also placed on more practical issues, such as software availability.


SIAM Journal on Optimization | 2006

Finding Optimal Algorithmic Parameters Using Derivative-Free Optimization

Charles Audet; Dominique Orban

The objectives of this paper are twofold. We devise a general framework for identifying locally optimal algorithmic parameters. Algorithmic parameters are treated as decision variables in a problem for which no derivative knowledge or existence is assumed. A derivative-free method for optimization seeks to minimize some measure of performance of the algorithm being fine-tuned. This measure is treated as a black box and may be chosen by the user. Examples are given in the text. The second objective is to illustrate this framework by specializing it to the identification of locally optimal trust-region parameters in unconstrained optimization. The derivative-free method chosen to guide the process is the mesh adaptive direct search, a generalization of pattern search methods. We illustrate the flexibility of the latter and in particular make provision for surrogate objectives. Locally optimal parameters with respect to overall computational time on a set of test problems are identified. Each function call may take several hours and may not always return a predictable result. A tailored surrogate function is used to guide the search towards a local solution. The parameters thus identified differ from traditionally used values, and allow one to solve a problem that remained otherwise unsolved in a reasonable time using traditional values.
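
A minimal sketch of the idea in Python, under stated assumptions: the solver being tuned and its performance measure are collapsed into a synthetic black-box function performance (hypothetical; in practice it would run the solver on a test set and return total CPU time, with a penalty for failures), and a simple coordinate pattern search stands in for the mesh adaptive direct search used in the paper.

    def performance(params):
        """Hypothetical black-box performance measure for a pair of
        trust-region parameters (eta1, eta2).  In practice this would run the
        solver being tuned on a problem set and return total CPU time; a
        synthetic landscape is used here so the sketch runs on its own."""
        eta1, eta2 = params
        if not (0.0 < eta1 < eta2 < 1.0):   # parameters must stay meaningful
            return float("inf")
        return (eta1 - 0.1) ** 2 + (eta2 - 0.9) ** 2 + 1.0

    def pattern_search(x0, step=0.1, tol=1e-3):
        """Derivative-free minimization of the black box by coordinate polling,
        a simple stand-in for mesh adaptive direct search (MADS)."""
        x, fx = list(x0), performance(x0)
        while step > tol:
            improved = False
            for i in range(len(x)):
                for d in (step, -step):
                    trial = list(x)
                    trial[i] += d
                    ft = performance(trial)
                    if ft < fx:             # accept any improving poll point
                        x, fx, improved = trial, ft, True
            if not improved:
                step *= 0.5                 # refine the mesh after a failed poll
        return x, fx

    best, measure = pattern_search([0.25, 0.75])
    print("locally optimal parameters:", best, "measure:", measure)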


Mathematical Programming | 2000

A primal-dual trust-region algorithm for non-convex nonlinear programming

Andrew R. Conn; Nicholas I. M. Gould; Dominique Orban; Philippe L. Toint

A new primal-dual algorithm is proposed for the minimization of non-convex objective functions subject to general inequality and linear equality constraints. The method uses a primal-dual trust-region model to ensure descent on a suitable merit function. Convergence is proved to second-order critical points from arbitrary starting points. Numerical results are presented for general quadratic programs.
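
As a point of reference, the basic subproblem such a method solves at each iteration can be sketched in standard trust-region notation (a generic sketch, not the paper's exact model): given a merit function \phi, a step s is computed from

    \min_{s \in \mathbb{R}^n} \; m_k(s) = \phi(x_k) + \nabla\phi(x_k)^T s + \tfrac{1}{2}\, s^T H_k s
    \quad \text{subject to} \quad \|s\| \le \Delta_k,

where H_k is a primal-dual approximation to the Hessian of the merit function and \Delta_k is the trust-region radius, updated according to how well m_k predicts the actual change in \phi.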


SIAM Journal on Optimization | 2000

Superlinear Convergence of Primal-Dual Interior Point Algorithms for Nonlinear Programming

Nicholas I. M. Gould; Dominique Orban; Annick Sartenaer; Philippe L. Toint

The local convergence properties of a class of primal-dual interior point methods are analyzed. These methods are designed to minimize a nonlinear, nonconvex objective function subject to linear equality constraints and general inequalities. They involve an inner iteration in which the log-barrier merit function is approximately minimized subject to satisfying the linear equality constraints, and an outer iteration that specifies both the decrease in the barrier parameter and the level of accuracy for the inner minimization. Under nondegeneracy assumptions, it is shown that, asymptotically, for each value of the barrier parameter, solving a single primal-dual linear system is enough to produce an iterate that already matches the barrier subproblem accuracy requirements. The asymptotic rate of convergence of the resulting algorithm is Q-superlinear and may be chosen arbitrarily close to quadratic. Furthermore, this rate applies componentwise. These results hold in particular for the method described in [A. R. Conn, N. I. M. Gould, D. Orban, and P. L. Toint, Math. Program. Ser. B, 87 (2000), pp. 215--249] and indicate that the details of its inner minimization are irrelevant in the asymptotics, except for its accuracy requirements.
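
To fix notation, a standard form of the barrier subproblem and of the primal-dual system referred to above is sketched here (textbook notation; the paper's formulation differs in detail). The inner iteration approximately solves

    \min_{x} \; f(x) - \mu \sum_{i} \log c_i(x) \quad \text{subject to} \quad A x = b,

and a primal-dual step is one Newton step on the perturbed optimality conditions

    \nabla f(x) - A^T y - J(x)^T z = 0, \qquad A x = b, \qquad C(x)\, Z\, e = \mu e,

where J(x) = \nabla c(x), C(x) = \mathrm{diag}(c(x)), Z = \mathrm{diag}(z), and (y, z) are the multipliers. The superlinear result says that, asymptotically, a single such Newton step per value of \mu already meets the inner accuracy requirement.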


Mathematical Programming Computation | 2012

A primal-dual regularized interior-point method for convex quadratic programs

Michael P. Friedlander; Dominique Orban

Interior-point methods in augmented form for linear and convex quadratic programming require the solution of a sequence of symmetric indefinite linear systems which are used to derive search directions. Safeguards are typically required in order to handle free variables or rank-deficient Jacobians. We propose a consistent framework and accompanying theoretical justification for regularizing these linear systems. Our approach can be interpreted as a simultaneous proximal-point regularization of the primal and dual problems. The regularization is termed exact to emphasize that, although the problems are regularized, the algorithm recovers a solution of the original problem, for appropriate values of the regularization parameters.
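
A sketch of the regularization in standard augmented-system notation (the paper's exact systems may differ): with primal and dual regularization parameters \rho, \delta > 0, the symmetric indefinite system solved at each interior-point iteration takes the form

    \begin{bmatrix} H + D + \rho I & A^T \\ A & -\delta I \end{bmatrix}
    \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix}
    =
    \begin{bmatrix} r_{\mathrm{dual}} \\ r_{\mathrm{primal}} \end{bmatrix},

where H is the Hessian of the quadratic objective, D the diagonal barrier contribution, and A the constraint Jacobian. The shifts \rho I and -\delta I keep the matrix uniformly nonsingular, which is what handles free variables and rank-deficient A, while exactness means a solution of the original, unregularized problem is still recovered for suitable parameter values.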


4OR: A Quarterly Journal of Operations Research | 2005

Sensitivity of trust-region algorithms to their parameters

Nicholas I. M. Gould; Dominique Orban; Annick Sartenaer; Philippe L. Toint

In this paper, we examine the sensitivity of trust-region algorithms to the parameters related to the step acceptance and update of the trust region. We show, in the context of unconstrained programming, that the numerical efficiency of these algorithms can easily be improved by choosing appropriate parameters. Recommended ranges of values for these parameters are exhibited on the basis of extensive numerical tests.
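
For reference, the step acceptance and radius update rules whose parameters are at issue, written in their generic textbook form as a small Python sketch (the parameter names and default values are conventional illustrations, not the ranges recommended in the paper):

    def update_trust_region(ratio, radius,
                            eta1=0.1, eta2=0.75, gamma1=0.5, gamma2=2.0):
        """Generic step acceptance and radius update of a trust-region method.

        ratio  : (actual reduction) / (predicted reduction) for the trial step
        radius : current trust-region radius
        eta1, eta2, gamma1, gamma2 are the parameters whose sensitivity the
        paper studies; the defaults are illustrative only."""
        accept = ratio >= eta1              # step acceptance threshold
        if ratio >= eta2:                   # very successful step: enlarge
            radius = gamma2 * radius
        elif ratio < eta1:                  # unsuccessful step: shrink
            radius = gamma1 * radius
        # eta1 <= ratio < eta2: successful step, radius left unchanged
        return accept, radius

    accept, radius = update_trust_region(ratio=0.9, radius=1.0)  # -> (True, 2.0)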


Les Cahiers du GERAD | 2015

An Interior-Point l1-Penalty Method for Nonlinear Optimization

Nicholas I. M. Gould; Dominique Orban; Philippe L. Toint

We describe a mixed interior/exterior-point method for nonlinear programming that handles constraints by way of an l1-penalty function. The penalty problem is reformulated as a smooth inequality-constrained problem that always possesses bounded multipliers, and that may be solved using interior-point techniques as finding a strictly feasible point is trivial. If finite multipliers exist for the original problem, exactness of the penalty function eliminates the need to drive the penalty parameter to infinity. If the penalty parameter needs to increase without bound and if feasibility is ultimately attained, a certificate of degeneracy is delivered. Global and fast local convergence of the proposed scheme are established and practical aspects of the method are discussed.
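
The reformulation can be sketched as follows (standard elastic-variable form for inequality constraints c(x) \ge 0; the paper also treats equality constraints and differs in detail). The nonsmooth penalty problem

    \min_{x} \; f(x) + \nu \sum_{i} \max\{0,\, -c_i(x)\}

is rewritten with elastic variables s as the smooth problem

    \min_{x,\, s} \; f(x) + \nu \sum_{i} s_i
    \quad \text{subject to} \quad c_i(x) + s_i \ge 0, \quad s_i \ge 0,

which is strictly feasible for any x (take s large enough) and has multipliers bounded by the penalty parameter \nu, so it can be treated directly with interior-point techniques.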


Computational Optimization and Applications | 2008

Dynamic updates of the barrier parameter in primal-dual methods for nonlinear programming

Paul Armand; Joël Benoist; Dominique Orban

We introduce a framework in which updating rules for the barrier parameter in primal-dual interior-point methods become dynamic. The original primal-dual system is augmented to incorporate explicitly an updating function. A Newton step for the augmented system gives a primal-dual Newton step and also a step in the barrier parameter. Based on local information and a line search, the decrease of the barrier parameter is automatically adjusted. We analyze local convergence properties, report numerical experiments on a standard collection of nonlinear problems and compare our results to a state-of-the-art interior-point implementation. In many instances, the adaptive algorithm reduces the number of iterations and of function evaluations. Its design guarantees a better fit between the magnitudes of the primal-dual residual and of the barrier parameter along the iterations.
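
To fix notation, a sketch of the augmentation in a standard primal-dual setting (the updating function is written here as a hypothetical \theta; the article's exact formulation differs): for an inequality-constrained problem with c(x) \ge 0 and multipliers z, the perturbed optimality conditions

    \nabla f(x) - J(x)^T z = 0, \qquad C(x)\, Z\, e = \mu e

are augmented with the extra equation \mu = \theta(x, z), and one Newton step on the augmented system produces (\Delta x, \Delta z, \Delta \mu) simultaneously; a line search then controls how fast \mu is allowed to decrease.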


SIAM Journal on Optimization | 2014

Bounds on Eigenvalues of Matrices Arising from Interior-Point Methods

Chen Greif; Erin Moulding; Dominique Orban

Interior-point methods feature prominently among numerical methods for inequality-constrained optimization problems, and involve the need to solve a sequence of linear systems that typically become...


Computers & Operations Research | 2010

A new version of the Improved Primal Simplex for degenerate linear programs

Vincent Raymond; François Soumis; Dominique Orban

The Improved Primal Simplex (IPS) algorithm [Elhallaoui I, Metrane A, Desaulniers G, Soumis F. An Improved Primal Simplex algorithm for degenerate linear programs. SIAM Journal on Optimization, submitted for publication] is a dynamic constraint reduction method particularly effective on degenerate linear programs. It is able to achieve a reduction in CPU time of over a factor of three on some problems compared with the commercial simplex implementation CPLEX. We present a number of further improvements and effective parameter choices for IPS. On certain types of degenerate problems, our improvements yield CPU times lower than those of CPLEX by a factor of 12.

Collaboration


Dive into Dominique Orban's collaborations.

Top Co-Authors

Nicholas I. M. Gould (École Polytechnique de Montréal)
Mario Arioli (École Polytechnique de Montréal)
Philippe L. Toint (Desautels Faculty of Management)
Charles Audet (École Polytechnique de Montréal)
Michael A. Saunders (École Polytechnique de Montréal)
Ahad Dehghani (École Polytechnique de Montréal)
Jean-Louis Goffin (École Polytechnique de Montréal)