Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where C. J. Price is active.

Publication


Featured research published by C. J. Price.


SIAM Journal on Optimization | 2000

On the Convergence of Grid-Based Methods for Unconstrained Optimization

I. D. Coope; C. J. Price

The convergence of direct search methods for unconstrained minimization is examined in the case where the underlying method can be interpreted as a grid or pattern search over successively refined meshes. An important aspect of the main convergence result is that translation, rotation, scaling, and shearing of the successive grids are allowed.
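To make the grid idea concrete, here is a minimal sketch of a coordinate pattern search that polls the grid neighbours of the current point and halves the mesh when no neighbour improves. This is only an illustration of the mesh-refinement principle, not the paper's method, which also permits translated, rotated, scaled, and sheared grids:

```python
import numpy as np

def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    """Minimal coordinate pattern search: poll the 2n grid neighbours
    x +/- step * e_i; refine the grid (halve the step) when no
    neighbour improves on the current point."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    for _ in range(max_iter):
        improved = False
        for i in range(n):
            for s in (+step, -step):
                trial = x.copy()
                trial[i] += s
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5          # refine the mesh
            if step < tol:
                break
    return x, fx

# usage: minimise a simple quadratic with minimiser (1, -2)
xbest, fbest = pattern_search(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2,
                              [0.0, 0.0])
```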


Journal of Optimization Theory and Applications | 2002

A convergent variant of the Nelder-Mead algorithm

C. J. Price; I. D. Coope; D. Byatt

The Nelder–Mead algorithm (1965) for unconstrained optimization has been used extensively to solve parameter estimation and other problems. Despite its age, it is still the method of choice for many practitioners in the fields of statistics, engineering, and the physical and medical sciences because it is easy to code and very easy to use. It belongs to a class of methods which do not require derivatives and which are often claimed to be robust for problems with discontinuities or where the function values are noisy. Recently (1998), it has been shown that the method can fail to converge or converge to nonsolutions on certain classes of problems. Only very limited convergence results exist for a restricted class of problems in one or two dimensions. In this paper, a provably convergent variant of the Nelder–Mead simplex method is presented and analyzed. Numerical results are included to show that the modified algorithm is effective in practice.
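For contrast, below is a compact implementation of the classic 1965 simplex scheme (reflection, expansion, contraction, shrink). This is the unmodified textbook method whose convergence can fail on some problems, not the provably convergent variant presented in the paper; the initial-simplex offset of 0.1 is an arbitrary choice for illustration:

```python
import numpy as np

def nelder_mead(f, x0, tol=1e-10, max_iter=10_000):
    """Textbook Nelder-Mead simplex method (1965 scheme), sketched
    with fixed coefficients: reflection 1, expansion 2, contraction
    and shrink 0.5. Not the convergent variant of the paper."""
    n = len(x0)
    simplex = [np.array(x0, dtype=float)]
    for i in range(n):                      # initial simplex around x0
        v = np.array(x0, dtype=float)
        v[i] += 0.1
        simplex.append(v)
    fvals = [f(v) for v in simplex]
    for _ in range(max_iter):
        order = np.argsort(fvals)           # sort vertices best-to-worst
        simplex = [simplex[i] for i in order]
        fvals = [fvals[i] for i in order]
        if fvals[-1] - fvals[0] < tol:
            break
        centroid = np.mean(simplex[:-1], axis=0)
        xr = 2.0 * centroid - simplex[-1]   # reflect worst vertex
        fr = f(xr)
        if fr < fvals[0]:
            xe = 3.0 * centroid - 2.0 * simplex[-1]   # try expansion
            fe = f(xe)
            simplex[-1], fvals[-1] = (xe, fe) if fe < fr else (xr, fr)
        elif fr < fvals[-2]:
            simplex[-1], fvals[-1] = xr, fr
        else:
            xc = 0.5 * (centroid + simplex[-1])       # inside contraction
            fc = f(xc)
            if fc < fvals[-1]:
                simplex[-1], fvals[-1] = xc, fc
            else:                                     # shrink towards best
                simplex = [simplex[0]] + [0.5 * (simplex[0] + v)
                                          for v in simplex[1:]]
                fvals = [fvals[0]] + [f(v) for v in simplex[1:]]
    return simplex[0], fvals[0]

# usage: a smooth convex quadratic, minimum value 1 at (3, -1)
quad = lambda v: (v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2 + 1.0
xmin, fmin = nelder_mead(quad, [0.0, 0.0])
```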


Journal of Optimization Theory and Applications | 2000

Frame based methods for unconstrained optimization

I. D. Coope; C. J. Price

This paper describes a wide class of direct search methods for unconstrained optimization, which make use of fragments of grids called frames. Convergence is shown under mild conditions which allow successive frames to be rotated, translated, and scaled relative to one another.


SIAM Journal on Optimization | 2003

Frames and Grids in Unconstrained and Linearly Constrained Optimization: A Nonsmooth Approach

C. J. Price; I. D. Coope

This paper describes a class of frame-based direct search methods for unconstrained and linearly constrained optimization. A template is described and analyzed using Clarke's nonsmooth calculus. This provides a unified and simple approach to earlier results for grid- and frame-based methods, and also provides partial convergence results when the objective function is not smooth, undefined in some places, or both. The template also covers many new methods which combine elements of previous ideas using frames and grids. These new methods include grid-based simple descent algorithms which allow moving to points off the grid at every iteration and can automatically control the grid size, provided function values are available. The concept of a grid is also generalized to that of an admissible set, which allows sets, for example, with circular symmetries. The method is applied to linearly constrained problems using a simple barrier approach.


Computational Optimization and Applications | 2002

Positive Bases in Numerical Optimization

I. D. Coope; C. J. Price

The theory of positive bases introduced by C. Davis in 1954 does not appear in most modern texts on linear algebra but has re-emerged in publications in optimization journals. In this paper some simple properties of this highly useful theory are highlighted and applied to both theoretical and practical aspects of the design and implementation of numerical algorithms for nonlinear optimization.
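The property that makes positive bases useful in derivative-free optimization is that every nonzero vector has a positive inner product with at least one basis element, so a poll over the basis always contains a descent direction for any gradient. A small sketch of this with the minimal positive basis consisting of the coordinate directions plus their negative sum:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
# minimal positive basis of R^n: e_1, ..., e_n and -(e_1 + ... + e_n)
basis = np.vstack([np.eye(n), -np.ones((1, n))])

# key property: for any nonzero v, some basis vector has a positive
# inner product with v (so polling the basis always finds a direction
# that makes an acute angle with -grad f)
for _ in range(1000):
    v = rng.standard_normal(n)
    assert (basis @ v).max() > 0
```

The coordinate basis alone does not have this property (take v with all components negative and a zero sum is impossible, but v = -e_1 already defeats {e_1, e_2, e_3} on its own); appending the negative sum is the cheapest fix, giving n + 1 poll directions instead of 2n.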


Computational Optimization and Applications | 1996

Numerical experiments in semi-infinite programming

C. J. Price; I. D. Coope

A quasi-Newton algorithm for semi-infinite programming using an L∞ exact penalty function is described, and numerical results are presented. Comparisons with three Newton algorithms and one other quasi-Newton algorithm show that the algorithm is very promising in practice.
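The L∞ exact penalty idea can be illustrated on a toy semi-infinite programme. The paper's quasi-Newton machinery is omitted here; the index set is simply discretised and the penalty minimised by brute force over a grid, and the problem data below are invented for the sketch:

```python
import numpy as np

# toy semi-infinite programme: minimise f(x) = (x - 2)^2
# subject to g(x, t) = x - 1 - t^2 <= 0 for every t in [0, 1].
# The constraint is equivalent to x <= 1, so the solution is x* = 1.
f = lambda x: (x - 2.0) ** 2
g = lambda x, t: x - 1.0 - t ** 2

mu = 4.0                            # penalty weight; exactness needs mu > |f'(x*)| = 2
ts = np.linspace(0.0, 1.0, 201)     # discretisation of the index set

def penalty(x):
    """L-infinity exact penalty: f plus mu times the worst violation."""
    return f(x) + mu * max(0.0, float(np.max(g(x, ts))))

# brute-force minimisation of the (now unconstrained) penalty function
xs = np.linspace(-1.0, 3.0, 4001)
xstar = xs[np.argmin([penalty(x) for x in xs])]
```

Because the penalty is exact, its unconstrained minimiser coincides with the constrained solution x* = 1 once mu is large enough, with no need to drive mu to infinity.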


Journal of Non-Newtonian Fluid Mechanics | 2016

An accelerated dual proximal gradient method for applications in viscoplasticity

Timm Treskatis; Miguel Moyers-Gonzalez; C. J. Price

We present a very simple and fast algorithm for the numerical solution of viscoplastic flow problems without prior regularisation. Compared to the widespread alternating direction method of multipliers (ADMM / ALG2), the new method features three key advantages: firstly, it accelerates the worst-case convergence rate from O(1/√k) to O(1/k), where k is the iteration counter. Secondly, even for nonlinear constitutive models like those of Casson or Herschel–Bulkley, no nonlinear systems of equations have to be solved in the subproblems of the algorithm. Thirdly, there is no need to augment the Lagrangian, which eliminates the difficulty of choosing a penalty parameter heuristically. In this paper, we transform the usual velocity-based formulation of viscoplastic flow problems to a dual formulation in terms of the stress. For the numerical solution of this dual problem we apply FISTA, an accelerated first-order optimisation algorithm from the class of so-called proximal gradient methods. Finally, we conduct a series of numerical experiments, focussing on stationary flow in two-dimensional square cavities. Our results confirm that Algorithm FISTA*, the new dual-based FISTA, outperforms state-of-the-art algorithms such as ADMM / ALG2 by several orders of magnitude. We demonstrate how this speedup can be exploited to identify the free boundary between yielded and unyielded regions with previously unknown accuracy. Since the accelerated algorithm relies solely on Stokes-type subproblems and nonlinear function evaluations, existing code based on augmented Lagrangians would require only a few minor adaptations to obtain an implementation of FISTA*.
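FISTA itself is a generic accelerated proximal gradient scheme. The sketch below applies it to a standard LASSO toy problem rather than the paper's dual stress formulation, to show the two ingredients the abstract alludes to: a gradient step followed by a proximal map, plus a momentum extrapolation. All problem data here are made up for illustration:

```python
import numpy as np

def fista(grad_f, prox_g, L, x0, iters=500):
    """FISTA (Beck-Teboulle) for min f(x) + g(x): f smooth with
    L-Lipschitz gradient, g with an inexpensive proximal operator."""
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(iters):
        x_new = prox_g(y - grad_f(y) / L, 1.0 / L)     # forward-backward step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x

# toy LASSO: min 0.5 * ||A x - b||^2 + lam * ||x||_1
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
b = A @ np.array([1.0, -2.0] + [0.0] * 8)      # sparse ground truth, no noise
lam = 0.1
soft = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - s * lam, 0.0)
L = np.linalg.norm(A, 2) ** 2                  # spectral norm squared
x = fista(lambda v: A.T @ (A @ v - b), soft, L, np.zeros(10))
```

Swapping the soft-thresholding prox for the projection arising from the dual viscoplastic problem is, schematically, all that changes in the paper's FISTA* setting.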


Journal of Optimization Theory and Applications | 2003

Frame-Based Ray Search Algorithms in Unconstrained Optimization

C. J. Price; I. D. Coope

This paper describes a class of frame-based direct search methods for unconstrained optimization without derivatives. A template for convergent direct search methods is developed, some requiring only the relative ordering of function values. At each iteration, the template considers a number of search steps which form a positive basis and conducts a ray search along a step giving adequate decrease. Various ray search strategies are possible, including discrete equivalents of the Goldstein–Armijo and one-sided Wolfe–Powell ray searches. Convergence is shown under mild conditions which allow successive frames to be rotated, translated, and scaled relative to one another.
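The simplest ray-search flavour, forward-tracking from a frame step that already gives descent, can be sketched as follows; the discrete Goldstein–Armijo and one-sided Wolfe–Powell conditions mentioned in the abstract are more refined acceptance rules than the plain "keep doubling while it decreases" used here:

```python
def forward_ray_search(f, x, d, max_doublings=30):
    """Derivative-free forward-tracking ray search: given a frame step d
    that gives descent from x, keep doubling the step length while the
    function keeps decreasing, then return the best point found."""
    best_x, best_f = x + d, f(x + d)
    step = 2.0
    for _ in range(max_doublings):
        trial = x + step * d
        ft = f(trial)
        if ft >= best_f:          # decrease has stopped; accept best so far
            break
        best_x, best_f = trial, ft
        step *= 2.0
    return best_x, best_f

# usage on a 1-D quadratic from x = 0 along the descent step d = 0.1
fq = lambda v: (v - 8.0) ** 2
x1, f1 = forward_ray_search(fq, 0.0, 0.1)
```

Starting from a unit frame step of 0.1, the search accepts 0.2, 0.4, ..., 6.4 and stops when 12.8 overshoots, so a single frame evaluation can be parlayed into a much longer productive step.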


BIT Numerical Mathematics | 1990

An exact penalty function algorithm for semi-infinite programmes

C. J. Price; I. D. Coope

An algorithm for semi-infinite programming using sequential quadratic programming techniques together with an L∞ exact penalty function is presented, and global convergence is shown. An important feature of the convergence proof is that it does not require an implicit function theorem to be applicable to the semi-infinite constraints; a much weaker assumption concerning the finiteness of the number of global maximizers of each semi-infinite constraint is sufficient. In contrast to proofs based on an implicit function theorem, this result is also valid for a large class of C¹ problems.


Optimization Methods & Software | 2006

Exploiting problem structure in pattern search methods for unconstrained optimization

C. J. Price; Philippe L. Toint

A direct search method for unconstrained optimization is described. The method makes use of any partial separability structure that the objective function may have. The method uses successively finer nested grids, and minimizes the objective function over each grid in turn. All grids are aligned with the coordinate directions, which allows the partial separability structure of the objective function to be exploited. This has two advantages: it reduces the work needed to calculate function values at the points required and it provides function values at other points as a free by-product. Numerical results show that using partial separability can dramatically reduce the number of function evaluations needed to minimize a function, in some cases allowing problems with thousands of variables to be solved. Results show that the algorithm is effective on strictly C¹ problems and on a class of non-smooth problems.
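A sketch of why partial separability pays off: when the objective is a sum of element functions, each depending on only a few coordinates, a move along one coordinate axis requires re-evaluating only the elements that touch that coordinate. The chained-quadratic objective below is a made-up example of such structure:

```python
# partially separable objective: f(x) = sum_i (x_i - x_{i+1})^2,
# where element i depends only on coordinates {i, i+1}
def element(i, x):
    return (x[i] - x[i + 1]) ** 2

n = 1000
x = [0.0] * n
elems = [element(i, x) for i in range(n - 1)]
f_total = sum(elems)

# a grid move along coordinate j touches only elements j-1 and j,
# so updating the total costs O(1) element evaluations instead of O(n)
j = 500
x[j] += 1.0
for i in (j - 1, j):
    f_total += element(i, x) - elems[i]
    elems[i] = element(i, x)

# the incremental total matches a full recomputation
assert abs(f_total - sum(element(i, x) for i in range(n - 1))) < 1e-12
```

With coordinate-aligned grids every poll step is exactly such a single-coordinate move, which is why the alignment restriction in the paper buys the evaluation savings.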

Collaboration


Dive into C. J. Price's collaboration.

Top Co-Authors

I. D. Coope (University of Canterbury)
Marco Reale (University of Canterbury)
Jennifer Brown (University of Canterbury)
D. Byatt (University of Canterbury)
Timm Treskatis (University of Canterbury)
Les Oxley (University of Canterbury)
Vicki Cowley (University of Canterbury)