Publication


Featured research published by M. J. D. Powell.


Mathematical Programming | 1977

Restart procedures for the conjugate gradient method

M. J. D. Powell

The conjugate gradient method is particularly useful for minimizing functions of very many variables because it does not require the storage of any matrices. However, the rate of convergence of the algorithm is only linear unless the iterative procedure is “restarted” occasionally. At present it is usual to restart every n or (n + 1) iterations, where n is the number of variables, but it is known that the frequency of restarts should depend on the objective function. Therefore the main purpose of this paper is to provide an algorithm with a restart procedure that takes account of the objective function automatically. Another purpose is to study a multiplying factor that occurs in the definition of the search direction of each iteration. Various expressions for this factor have been proposed, and often it does not matter which one is used; however, some reasons are now given in favour of one of these expressions. Several numerical examples are reported in support of the conclusions of this paper.
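For concreteness, below is a minimal sketch of a Polak–Ribière conjugate gradient loop with an automatic restart test of the kind discussed in this paper: the iteration falls back to steepest descent whenever successive gradients are far from orthogonal. The 0.2 threshold, the crude backtracking line search, and the test problem are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def cg_with_restarts(f, grad, x0, max_iter=500, tol=1e-8):
    """Polak-Ribiere nonlinear conjugate gradients with an automatic
    restart test: fall back to steepest descent whenever successive
    gradients lose orthogonality.  Illustrative sketch only."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # crude backtracking (Armijo) line search along d; a stand-in for
        # the more careful line searches used in practice
        alpha, fx, slope = 1.0, f(x), np.dot(g, d)
        while f(x + alpha * d) > fx + 1e-4 * alpha * slope and alpha > 1e-12:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        # restart test: if successive gradients are far from orthogonal,
        # discard the accumulated conjugacy information
        if abs(np.dot(g_new, g)) >= 0.2 * np.dot(g_new, g_new):
            d = -g_new
        else:
            beta = np.dot(g_new, g_new - g) / np.dot(g, g)  # Polak-Ribiere factor
            d = -g_new + beta * d
            if np.dot(g_new, d) >= 0.0:   # safeguard: keep d a descent direction
                d = -g_new
        x, g = x_new, g_new
    return x

# usage: minimize the Rosenbrock function
from scipy.optimize import rosen, rosen_der
print(cg_with_restarts(rosen, rosen_der, np.array([-1.2, 1.0])))
```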


Archive | 1994

A Direct Search Optimization Method That Models the Objective and Constraint Functions by Linear Interpolation

M. J. D. Powell

An iterative algorithm is proposed for nonlinearly constrained optimization calculations when there are no derivatives. Each iteration forms linear approximations to the objective and constraint functions by interpolation at the vertices of a simplex and a trust region bound restricts each change to the variables. Thus a new vector of variables is calculated, which may replace one of the current vertices, either to improve the shape of the simplex or because it is the best vector that has been found so far, according to a merit function that gives attention to the greatest constraint violation. The trust region radius ρ is never increased, and it is reduced when the approximations of a well-conditioned simplex fail to yield an improvement to the variables, until ρ reaches a prescribed value that controls the final accuracy. Some convergence properties and several numerical results are given, but there are no more than 9 variables in these calculations because linear approximations can be highly inefficient. Nevertheless, the algorithm is easy to use for small numbers of variables.
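This algorithm is available in SciPy as the COBYLA method of scipy.optimize.minimize, which wraps Powell's implementation. A small usage sketch on a toy inequality-constrained problem; the objective, constraint, starting point, and tolerances are chosen purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# toy problem: minimize x^2 + y^2 subject to x + y >= 1 (solution x = y = 0.5)
objective = lambda v: v[0] ** 2 + v[1] ** 2
constraints = [{"type": "ineq", "fun": lambda v: v[0] + v[1] - 1.0}]  # value >= 0 when feasible

result = minimize(objective,
                  x0=np.array([2.0, 0.0]),
                  method="COBYLA",
                  constraints=constraints,
                  tol=1e-8,                       # roughly the final trust-region radius
                  options={"rhobeg": 0.5,         # initial trust-region radius
                           "maxiter": 200})
print(result.x, result.fun)
```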


Mathematical Programming | 1978

Algorithms for nonlinear constraints that use Lagrangian functions

M. J. D. Powell

Lagrangian functions are the basis of many of the more successful methods for nonlinear constraints in optimization calculations. Sometimes they are used in conjunction with linear approximations to the constraints and sometimes penalty terms are included to allow the use of algorithms for unconstrained optimization. Much has been discovered about these techniques during the last eight years and this paper gives a view of the progress and understanding that has been achieved and its relevance to practical algorithms. A particular method is recommended that seems to be more powerful than the author believed to be possible at the beginning of 1976.
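One of the techniques covered, the multiplier (augmented Lagrangian) approach, adds a quadratic penalty to the Lagrangian so that software for unconstrained optimization can be applied. Below is a minimal sketch of that idea for equality constraints; the penalty parameter, the multiplier update, and the inner solver (SciPy's BFGS) are choices made for the sketch, not details taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, c, x0, rho=10.0, outer_iters=20, tol=1e-8):
    """Sketch of a multiplier (augmented Lagrangian) method for
    minimize f(x) subject to c(x) = 0.  Each outer iteration minimizes
    f(x) - lam^T c(x) + (rho/2) ||c(x)||^2 without constraints, then
    updates the multiplier estimate lam <- lam - rho * c(x)."""
    x = np.asarray(x0, dtype=float)
    lam = np.zeros(len(np.atleast_1d(c(x))))
    for _ in range(outer_iters):
        def L_aug(z):
            cz = np.atleast_1d(c(z))
            return f(z) - lam @ cz + 0.5 * rho * cz @ cz
        x = minimize(L_aug, x, method="BFGS").x      # inner unconstrained solve
        cx = np.atleast_1d(c(x))
        if np.linalg.norm(cx) < tol:
            break
        lam = lam - rho * cx                          # multiplier update
    return x, lam

# usage: minimize x^2 + y^2 subject to x + y = 1 (solution x = y = 0.5)
x_opt, lam_opt = augmented_lagrangian(lambda v: v @ v,
                                      lambda v: np.array([v[0] + v[1] - 1.0]),
                                      x0=np.array([3.0, -1.0]))
print(x_opt, lam_opt)
```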


Archive | 1983

Variable Metric Methods for Constrained Optimization

M. J. D. Powell

Variable metric methods solve nonlinearly constrained optimization problems, using calculated first derivatives and a single positive definite matrix, which holds second derivative information that is obtained automatically. The theory of these methods is shown by analysing the global and local convergence properties of a basic algorithm, and we find that superlinear convergence requires less second derivative information than in the unconstrained case. Moreover, in order to avoid the difficulties of inconsistent linear approximations to constraints, careful consideration is given to the calculation of search directions by unconstrained minimization subproblems. The Maratos effect and relations to reduced gradient algorithms are studied briefly.
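As a rough illustration of the ingredients described above, the sketch below performs equality-constrained SQP-type iterations: each step minimizes a quadratic model built from a single positive definite matrix B subject to linearized constraints (by solving the KKT system), and B is then revised by a damped BFGS-style update so that it stays positive definite. The damping threshold, the unit step length, and the toy problem are assumptions made for the sketch, not details taken from the chapter.

```python
import numpy as np

def sqp_step(grad_f, jac_c, c, B, x):
    """One equality-constrained SQP step: minimize the quadratic model
    0.5 d^T B d + grad_f(x)^T d subject to the linearized constraints
    jac_c(x) d + c(x) = 0, by solving the KKT system directly."""
    g, A, cx = grad_f(x), np.atleast_2d(jac_c(x)), np.atleast_1d(c(x))
    m = A.shape[0]
    K = np.block([[B, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, -np.concatenate([g, cx]))
    return sol[:len(x)], -sol[len(x):]          # step d and new multipliers

def damped_bfgs_update(B, s, y):
    """BFGS update with damping so that B stays positive definite
    even when s^T y is not sufficiently positive."""
    Bs = B @ s
    sBs, sy = s @ Bs, s @ y
    theta = 1.0 if sy >= 0.2 * sBs else 0.8 * sBs / (sBs - sy)
    r = theta * y + (1.0 - theta) * Bs
    return B - np.outer(Bs, Bs) / sBs + np.outer(r, r) / (s @ r)

# usage on: minimize x^2 + y^2 subject to x + y = 1 (solution x = y = 0.5)
grad_f = lambda v: 2.0 * v
jac_c  = lambda v: np.array([[1.0, 1.0]])
c      = lambda v: np.array([v[0] + v[1] - 1.0])

x, B, lam = np.array([2.0, -3.0]), np.eye(2), np.zeros(1)
for _ in range(15):
    d, lam = sqp_step(grad_f, jac_c, c, B, x)
    if np.linalg.norm(d) < 1e-10:
        break
    gL_old = grad_f(x) - jac_c(x).T @ lam
    x = x + d                                    # unit step; no line search in this sketch
    gL_new = grad_f(x) - jac_c(x).T @ lam
    B = damped_bfgs_update(B, d, gL_new - gL_old)
print(x, lam)
```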


ACM Transactions on Mathematical Software | 1977

Piecewise Quadratic Approximations on Triangles

M. J. D. Powell; Malcolm A. Sabin

The problem of constructing a function φ(x, y) of two variables on a triangle, such that φ(x, y) and its first derivatives take given values at the vertices, where φ(x, y) is composed of quadratic pieces, is considered. Two methods of constructing piecewise quadratic approximations are described which have the property that, if they are applied on each triangle of a triangulation, then φ(x, y) and its first derivatives are continuous everywhere.
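The building block here is a single quadratic piece on a triangle. Below is a minimal sketch that evaluates such a piece in Bernstein–Bézier form from barycentric coordinates; it illustrates only the representation of one quadratic piece, not the subdivision and continuity construction of the paper, and the coefficients and test point are arbitrary.

```python
import numpy as np

def barycentric(p, tri):
    """Barycentric coordinates of point p in triangle tri (3 x 2 array of vertices)."""
    T = np.column_stack((tri[1] - tri[0], tri[2] - tri[0]))
    b1, b2 = np.linalg.solve(T, np.asarray(p, float) - tri[0])
    return np.array([1.0 - b1 - b2, b1, b2])

def quadratic_piece(p, tri, bez):
    """Evaluate a quadratic polynomial on the triangle in Bernstein-Bezier form.
    bez holds the six Bezier ordinates ordered as (b200, b020, b002, b110, b101, b011)."""
    l1, l2, l3 = barycentric(p, tri)
    b200, b020, b002, b110, b101, b011 = bez
    return (b200 * l1**2 + b020 * l2**2 + b002 * l3**2
            + 2.0 * (b110 * l1 * l2 + b101 * l1 * l3 + b011 * l2 * l3))

# usage: the quadratic x*y on the triangle (0,0), (1,0), (0,1)
tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
bez = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.5])   # only the b011 ordinate is nonzero
print(quadratic_piece([0.25, 0.5], tri, bez))     # prints 0.125 = 0.25 * 0.5
```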


Acta Numerica | 1998

Direct search algorithms for optimization calculations

M. J. D. Powell

Many different procedures have been proposed for optimization calculations when first derivatives are not available. Further, several researchers have contributed to the subject, including some who wish to prove convergence theorems, and some who wish to make any reduction in the least calculated value of the objective function. There is not even a key idea that can be used as a foundation of a review, except for the problem itself, which is the adjustment of variables so that a function becomes least, where each value of the function is returned by a subroutine for each trial vector of variables. Therefore the paper is a collection of essays on particular strategies and algorithms, in order to consider the advantages, limitations and theory of several techniques. The subjects addressed are line search methods, the restriction of vectors of variables to discrete grids, the use of geometric simplices, conjugate direction procedures, trust region algorithms that form linear or quadratic approximations to the objective function, and simulated annealing. We study the main features of the methods themselves, instead of providing a catalogue of references to published work, because an understanding of these features may be very helpful to future research.
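As one concrete instance of the simple line-search and grid ideas mentioned in the survey, here is a tiny compass (coordinate) search; it is a generic textbook procedure shown only for orientation, not an algorithm attributed to the paper, and the step sizes and test function are arbitrary.

```python
import numpy as np

def compass_search(f, x0, step=1.0, step_tol=1e-6, max_evals=10_000):
    """Minimal compass (coordinate) direct search: try +/- step along each
    coordinate; keep any improving point, otherwise halve the step."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    evals = 1
    while step > step_tol and evals < max_evals:
        improved = False
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                ft = f(trial)
                evals += 1
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5          # refine the grid once no move along it helps
    return x, fx

# usage on a smooth two-variable test function (minimum at (1, -2))
print(compass_search(lambda v: (v[0] - 1.0) ** 2 + 10.0 * (v[1] + 2.0) ** 2,
                     x0=[5.0, 5.0]))
```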


Archive | 2006

The NEWUOA software for unconstrained optimization without derivatives

M. J. D. Powell

The NEWUOA software seeks the least value of a function F(x), x ∈ ℝⁿ, when F(x) can be calculated for any vector of variables x. The algorithm is iterative, a quadratic model Q ≈ F being required at the beginning of each iteration, which is used in a trust region procedure for adjusting the variables. When Q is revised, the new Q interpolates F at m points, the value m = 2n + 1 being recommended. The remaining freedom in the new Q is taken up by minimizing the Frobenius norm of the change to ∇²Q. Only one interpolation point is altered on each iteration. Thus, except for occasional origin shifts, the amount of work per iteration is only of order (m + n)², which allows n to be quite large. Many questions were addressed during the development of NEWUOA, for the achievement of good accuracy and robustness. They include the choice of the initial quadratic model, the need to maintain enough linear independence in the interpolation conditions in the presence of computer rounding errors, and the stability of the updating of certain matrices that allow the fast revision of Q. Details are given of the techniques that answer all the questions that occurred. The software was tried on several test problems. Numerical results for nine of them are reported and discussed, in order to demonstrate the performance of the software for up to 160 variables.
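NEWUOA itself is distributed as Fortran software. One way to experiment with the algorithm from Python is through the NLopt library, which includes a NEWUOA implementation; the sketch below assumes NLopt's Python bindings (the unbounded LN_NEWUOA variant) and uses an arbitrary smooth test function and tolerances.

```python
import numpy as np
import nlopt

def objective(x, grad):
    # grad is unused: NEWUOA is derivative-free, so NLopt leaves it empty here
    return (x[0] - 3.0) ** 2 + 10.0 * (x[1] + 1.0) ** 4

n = 2
opt = nlopt.opt(nlopt.LN_NEWUOA, n)      # unconstrained, derivative-free
opt.set_min_objective(objective)
opt.set_initial_step(0.5)                # plays the role of the initial trust-region radius
opt.set_xtol_rel(1e-8)
opt.set_maxeval(500)

x_opt = opt.optimize(np.zeros(n))
print(x_opt, opt.last_optimum_value())   # expect roughly (3, -1)
```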


Nonlinear Programming: Proceedings of a Symposium Conducted by the Mathematics Research Center, the University of Wisconsin–Madison, May 4–6, 1970 | 1970

A New Algorithm for Unconstrained Optimization

M. J. D. Powell

A new algorithm is described for calculating the least value of a given differentiable function of several variables. The user must program the evaluation of the function and its first derivatives. Some convergence theorems are given that impose very mild conditions on the objective function. These theorems, together with some numerical results, indicate that the new method may be preferable to current algorithms for solving many unconstrained minimization problems.


Mathematical Programming | 2002

UOBYQA: unconstrained optimization by quadratic approximation

M. J. D. Powell

UOBYQA is a new algorithm for general unconstrained optimization calculations, that takes account of the curvature of the objective function, F say, by forming quadratic models by interpolation. Therefore, because no first derivatives are required, each model is defined by ½(n+1)(n+2) values of F, where n is the number of variables, and the interpolation points must have the property that no nonzero quadratic polynomial vanishes at all of them. A typical iteration of the algorithm generates a new vector of variables, x̃, …
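The ½(n+1)(n+2) count is simply the number of coefficients of a quadratic in n variables. The sketch below fits such a fully determined quadratic model by interpolation for n = 2; the sample points and test function are arbitrary illustrations, and a real implementation manages the interpolation set and the linear algebra far more carefully than a dense solve.

```python
import numpy as np
from itertools import combinations_with_replacement

def quadratic_model(points, values):
    """Fit a quadratic model by interpolation at exactly 0.5*(n+1)*(n+2)
    points, the number of coefficients of a quadratic in n variables."""
    points = np.asarray(points, dtype=float)
    n = points.shape[1]
    m = (n + 1) * (n + 2) // 2
    assert len(points) == m, f"need exactly {m} interpolation points for n = {n}"
    # monomial basis: 1, x_i, x_i * x_j (i <= j)
    def basis(x):
        row = [1.0] + list(x)
        row += [x[i] * x[j] for i, j in combinations_with_replacement(range(n), 2)]
        return row
    A = np.array([basis(p) for p in points])
    coeffs = np.linalg.solve(A, values)   # unique if no nonzero quadratic vanishes at all points
    return coeffs, basis

# usage: recover F(x, y) = 1 + 2x - y + x^2 + 3xy + 0.5 y^2 from 6 samples
F = lambda p: 1 + 2 * p[0] - p[1] + p[0] ** 2 + 3 * p[0] * p[1] + 0.5 * p[1] ** 2
pts = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
coeffs, basis = quadratic_model(pts, [F(p) for p in pts])
print(coeffs)                                                 # model coefficients
print(np.dot(basis((0.3, -0.7)), coeffs), F((0.3, -0.7)))     # model reproduces F exactly
```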


Optimization Letters | 1975

Convergence properties of a class of minimization algorithms

M. J. D. Powell


Collaboration


Dive into M. J. D. Powell's collaborations.

Top Co-Authors

Ya-Xiang Yuan, Chinese Academy of Sciences

R. K. Beatson, University of Canterbury

J. K. Reid, Rutherford Appleton Laboratory

Ge Ren-pu, University of Cambridge

A. M. Tan, University of Canterbury