Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where John A. Ford is active.

Publications


Featured research published by John A. Ford.


Engineering Computations | 2004

Hybrid estimation of distribution algorithm for global optimization

Qingfu Zhang; Jianyong Sun; Edward P. K. Tsang; John A. Ford

This paper introduces a new hybrid evolutionary algorithm (EA) for continuous global optimization, called the estimation of distribution algorithm with local search (EDA/L). Like other EAs, EDA/L maintains and improves a population of solutions in the feasible region. Initial candidate solutions are generated by uniform design, so that they scatter evenly over the feasible region. To generate a new population, a marginal histogram model is built from the global statistical information extracted from the current population, and new solutions are sampled from this model. The incomplete simplex method is applied to every new solution, whether generated by uniform design or sampled from the histogram model. At each generation, unconstrained optimization by diagonal quadratic approximation is applied to several selected solutions produced by the incomplete simplex method. We study the effectiveness of the main components of EDA/L. The experimental results demonstrate that EDA/L outperforms four other recent EAs in terms of both solution quality and computational cost.
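The sampling step described above is easy to sketch. Below is a minimal, illustrative Python version of a marginal-histogram EDA; it is not the authors' implementation, and the population size, bin count, elite fraction, and plain-uniform initialisation (where the paper uses uniform design) are all assumptions. The local-search components of EDA/L are only indicated by a comment.

import numpy as np

def marginal_histogram_eda(f, bounds, pop_size=50, n_bins=10,
                           n_gens=100, elite_frac=0.3, seed=0):
    """Minimal univariate marginal-histogram EDA (illustrative only).

    f      : objective to minimise, maps a (dim,) array to a float
    bounds : (dim, 2) array of [low, high] limits per variable
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    for _ in range(n_gens):
        fitness = np.apply_along_axis(f, 1, pop)
        # Select the better part of the population as the model data.
        elite = pop[np.argsort(fitness)[: max(2, int(elite_frac * pop_size))]]
        new_pop = np.empty_like(pop)
        for d in range(dim):
            # Build a marginal histogram for variable d from the elite set,
            # then sample new values: pick a bin, then a point inside it.
            counts, edges = np.histogram(elite[:, d], bins=n_bins,
                                         range=(lo[d], hi[d]))
            probs = (counts + 1e-9) / (counts + 1e-9).sum()
            bins = rng.choice(n_bins, size=pop_size, p=probs)
            new_pop[:, d] = rng.uniform(edges[bins], edges[bins + 1])
        pop = new_pop
        # EDA/L would now refine each new solution with the incomplete
        # simplex method, then apply a diagonal quadratic-approximation
        # minimiser to a few selected results; both are omitted here.
    fitness = np.apply_along_axis(f, 1, pop)
    return pop[np.argmin(fitness)]

# e.g. marginal_histogram_eda(lambda x: (x ** 2).sum(),
#                             np.array([[-5.0, 5.0]] * 10))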


Journal of Computational and Applied Mathematics | 1994

Multi-step quasi-Newton methods for optimization

John A. Ford; I.A. Moghrabi

Quasi-Newton methods update, at each iteration, the existing Hessian approximation (or its inverse) by means of data deriving from the step just completed. We show how “multi-step” methods (employing, in addition, data from previous iterations) may be constructed by means of interpolating polynomials, leading to a generalization of the “secant” (or “quasi-Newton”) equation. The issue of positive-definiteness in the Hessian approximation is addressed and shown to depend on a generalized version of the condition which is required to hold in the original “single-step” methods. The results of extensive numerical experimentation indicate strongly that computational advantages can accrue from such an approach (by comparison with “single-step” methods), particularly as the dimension of the problem increases.
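For reference, the generalization can be written down compactly. The standard ("single-step") secant equation requires the updated Hessian approximation to satisfy

\[ B_{k+1} s_k = y_k, \qquad s_k = x_{k+1} - x_k, \quad y_k = g_{k+1} - g_k. \]

In the two-step case, fitting a quadratic interpolating curve through x_{k-1}, x_k, x_{k+1} and differentiating at the newest iterate yields (in one representative parametrization) the generalized equation

\[ B_{k+1} r_k = w_k, \qquad r_k = s_k - \frac{\delta^2}{1 + 2\delta}\, s_{k-1}, \qquad w_k = y_k - \frac{\delta^2}{1 + 2\delta}\, y_{k-1}, \]

where \delta = (\tau_2 - \tau_1)/(\tau_1 - \tau_0) is the relative spacing of the parameter values assigned to the three iterates on the interpolating curve; \delta \to 0 recovers the single-step equation. The positive-definiteness condition mentioned in the abstract is then the generalized curvature condition r_k^{\top} w_k > 0, in place of s_k^{\top} y_k > 0.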


Siam Journal on Optimization | 2011

A Three-Term Conjugate Gradient Method with Sufficient Descent Property for Unconstrained Optimization

Yasushi Narushima; Hiroshi Yabe; John A. Ford

Conjugate gradient methods are widely used for solving large-scale unconstrained optimization problems because they do not require the storage of matrices. In this paper, we propose a general form of three-term conjugate gradient methods which always generate a sufficient descent direction. We give a sufficient condition for the global convergence of the proposed method. Moreover, we present a specific three-term conjugate gradient method based on the multistep quasi-Newton method. Finally, some numerical results for the proposed method are given.
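The mechanism behind the sufficient descent property is worth recording; the following is our paraphrase of the general form, not a quotation from the paper. With g_k the gradient at the k-th iterate, take

\[ d_0 = -g_0, \qquad d_k = -g_k + \beta_k \left( d_{k-1} - \frac{g_k^{\top} d_{k-1}}{g_k^{\top} p_{k-1}}\, p_{k-1} \right), \]

for any scalar \beta_k and any vector p_{k-1} with g_k^{\top} p_{k-1} \neq 0. The third term cancels the \beta_k\, g_k^{\top} d_{k-1} contribution exactly, so

\[ g_k^{\top} d_k = -\|g_k\|^2 \]

for every k, whatever \beta_k and the line search do; this is sufficient descent with constant 1. The specific method of the paper chooses these quantities from multistep quasi-Newton data.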


Annals of Operations Research | 2003

Applying an Extended Guided Local Search to the Quadratic Assignment Problem

Patrick Mills; Edward P. K. Tsang; John A. Ford

In this paper, we show how an extended Guided Local Search (GLS) can be applied to the Quadratic Assignment Problem (QAP). GLS is a general, penalty-based meta-heuristic which sits on top of local search algorithms to help guide them out of local minima. We present empirical results of applying several extended versions of GLS to the QAP, and show that these extensions can improve the range of parameter settings within which Guided Local Search performs well. Finally, we compare the results of running our extended GLS with some state-of-the-art algorithms for the QAP.
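The penalty mechanism at the core of GLS is worth stating (this is the standard formulation of Voudouris and Tsang; the extensions studied in the paper modify how the penalties are applied). Local search minimises an augmented objective

\[ h(s) = g(s) + \lambda \sum_i p_i\, I_i(s), \]

where g is the original cost (here the QAP objective), I_i(s) \in \{0, 1\} indicates whether solution s exhibits feature i (for the QAP, a particular facility placed at a particular location), p_i is the penalty on feature i (initially zero), and \lambda controls the penalty strength. Each time local search reaches a local minimum s^*, GLS increments the penalties of the features present in s^* that maximise the utility

\[ \mathrm{util}(s^*, i) = I_i(s^*)\, \frac{c_i}{1 + p_i}, \]

with c_i the cost assigned to feature i, and then resumes local search on the modified h.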


Archive | 2006

Estimation of Distribution Algorithm with 2-opt Local Search for the Quadratic Assignment Problem

Qingfu Zhang; Jianyong Sun; Edward P. K. Tsang; John A. Ford

This chapter proposes a combination of estimation of distribution algorithm (EDA) and the 2-opt local search algorithm (EDA/LS) for the quadratic assignment problem (QAP). In EDA/LS, a new operator, called guided mutation, is employed for generating new solutions. This operator uses both global statistical information collected from the previous search and the location information of solutions found so far. The 2-opt local search algorithm is applied to each new solution generated by guided mutation. A restart strategy based on statistical information is used when the search is trapped in a local area. Experimental results on a set of QAP test instances show that EDA/LS is comparable with the memetic algorithm of Merz and Freisleben and outperforms estimation of distribution algorithm with guided local search (EDA/GLS). The proximate optimality principle on the QAP is verified experimentally to justify the rationale behind heuristics (including EDA/GLS) for the QAP.
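As an illustration of the local-search component, here is a minimal first-improvement 2-opt (pairwise swap) routine for the QAP in Python. It is not the authors' code: it recomputes the full objective after each swap for clarity, where an efficient implementation would use an incremental delta, and the guided-mutation and restart components of EDA/LS are not shown.

import numpy as np

def qap_cost(F, D, perm):
    """QAP objective: sum over i, j of F[i, j] * D[perm[i], perm[j]]."""
    return (F * D[np.ix_(perm, perm)]).sum()

def two_opt(F, D, perm):
    """First-improvement 2-opt: keep applying improving pairwise swaps
    until no swap of two assignments reduces the cost."""
    perm = perm.copy()
    n = len(perm)
    best = qap_cost(F, D, perm)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 1, n):
                perm[i], perm[j] = perm[j], perm[i]      # try the swap
                cost = qap_cost(F, D, perm)
                if cost < best:
                    best, improved = cost, True          # keep it
                else:
                    perm[i], perm[j] = perm[j], perm[i]  # undo it
    return perm, best

# e.g., on random instance data:
# rng = np.random.default_rng(0); n = 12
# F, D = rng.integers(0, 10, (n, n)), rng.integers(0, 10, (n, n))
# perm, cost = two_opt(F, D, rng.permutation(n))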


Journal of Computational and Applied Mathematics | 1996

Using function-values in multi-step quasi-Newton methods

John A. Ford; I.A. Moghrabi

In previous work, the authors (1993, 1994) developed the concept of multi-step quasi-Newton methods, based on the use of interpolating polynomials determined by data from the m most recent steps. Different methods for parametrizing these polynomials were studied by the authors (1993), and several methods were shown (empirically) to yield substantial gains over the standard (one-step) BFGS method for unconstrained optimization. In this paper, we will consider the issue of how to incorporate function-value information within the framework of such multi-step methods. This is achieved, in the case of two-step methods, through the use of a carefully chosen rational form to interpolate the three most recent iterates. The results of numerical experiments on the new methods are reported.


Journal of Computational and Applied Mathematics | 1997

Alternating multi-step quasi-Newton methods for unconstrained optimization

John A. Ford; I.A. Moghrabi

We consider multi-step quasi-Newton methods for unconstrained optimization. These methods were introduced by the authors (Ford and Moghrabi [5, 6, 8]), who showed how an interpolating curve in the variable-space could be used to derive an appropriate generalization of the Secant Equation normally employed in the construction of quasi-Newton methods. One of the most successful of these multi-step methods employs the current approximation to the Hessian to determine the parametrization of the interpolating curve and, hence, the derivatives which are required in the generalized updating formula. However, certain approximations were found to be necessary in the process, in order to reduce the level of computation required (which must be repeated at each iteration) to acceptable levels. In this paper, we show how a variant of this algorithm, which avoids the need for such approximations, may be obtained. This is accomplished by alternating, on successive iterations, a single-step and a two-step method. The results of a series of experiments, which show that the new algorithm exhibits a clear improvement in numerical performance, are reported.
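In terms of the data pairs used by the updating formula, the alternation is simple to state schematically, with (r_k, w_k) denoting the two-step combinations of the most recent steps and gradient differences appearing in the generalized secant equation: at each iteration a BFGS-type update

\[ B_{k+1} = B_k - \frac{B_k u_k u_k^{\top} B_k}{u_k^{\top} B_k u_k} + \frac{v_k v_k^{\top}}{v_k^{\top} u_k} \]

is applied with (u_k, v_k) = (s_k, y_k) on single-step iterations and (u_k, v_k) = (r_k, w_k) on two-step iterations, the two kinds of iteration alternating.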


Journal of Computational and Applied Mathematics | 1989

On the use of function-values in unconstrained optimisation

John A. Ford; R.-A. Ghandhari

By the use of a nonlinear model for the gradient of the objective function along a chosen direction, we show how information available via values of the objective function may be efficiently utilised in an optimisation method of “quasi-Newton” type. Numerical experiments indicate that computational gains are possible by such means.


Journal of Computational and Applied Mathematics | 1987

On the construction of minimisation methods of quasi-Newton type

John A. Ford; Adel F. Saadallah

The secant equation, which underlies all standard ‘quasi-Newton’ minimisation methods, arises from the use of a linear function to model the gradient along a chosen direction. We present new minimisation algorithms, derived by replacing this linear model with a more general one involving a free parameter, which is determined by using information contained in the current approximate Hessian. The use of such a model can give more flexibility in the criteria to be satisfied during the line-search. The new methods can operate as soon as a reasonable approximation to the Hessian has been accumulated and may, in one sense, be viewed as acceleration techniques for quasi-Newton methods.
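The derivation referred to in the opening sentence is short. Model the gradient near the new iterate as a linear function,

\[ g(x) \approx g(x_{k+1}) + B_{k+1}\,(x - x_{k+1}), \]

and require the model to reproduce the gradient actually observed at the previous iterate x_k:

\[ g_k = g_{k+1} + B_{k+1}\,(x_k - x_{k+1}) \quad \Longleftrightarrow \quad B_{k+1} s_k = y_k, \]

with s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k: the secant equation. The methods of this paper replace the linear model with a one-parameter nonlinear model of the gradient along the chosen direction, the free parameter being fixed from the current approximate Hessian.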


Annals of Operations Research | 2007

Towards a practical engineering tool for rostering

Edward P. K. Tsang; John A. Ford; Patrick Mills; Richard Bradwell; Richard Williams; Paul D. Scott

The profitability and morale of many organizations (such as factories, hospitals and airlines) are affected by their ability to schedule their personnel properly. Sophisticated and powerful constraint solvers such as ILOG, CHIP, ECLiPSe, etc. have been demonstrated to be extremely effective on scheduling. Unfortunately, they require non-trivial expertise to use. This paper describes ZDC-rostering, a constraint-based tool for personnel scheduling that addresses the software crisis and fills a void in the space of solvers. ZDC-rostering is easier to use than the above constraint-based solvers and more effective than Microsoft’s Excel Solver. ZDC-rostering is based on an open-source computer-aided constraint programming package called ZDC, which decouples problem formulation (or modelling) from solution generation in constraint satisfaction. ZDC is equipped with a set of constraint algorithms, including Extended Guided Local Search, whose efficiency and effectiveness have been demonstrated in a wide range of applications. Our experiments show that ZDC-rostering is capable of solving realistic-sized and very tightly-constrained problems efficiently. ZDC-rostering demonstrates the feasibility of applying constraint satisfaction techniques to solving rostering problems, without having to acquire deep knowledge in constraint technology.
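To make the separation of modelling from solving concrete, here is a toy rostering model written as a plain constraint-satisfaction problem in Python. Everything in it is an illustrative assumption: the constraints are a simplistic stand-in for real rostering rules, the naive backtracking solver stands in for the propagation-based and local-search algorithms a tool like ZDC-rostering actually uses, and no ZDC API is shown.

from itertools import product

def roster(workers, days, shifts, max_shifts):
    """Toy rostering CSP: assign a worker to every (day, shift) slot,
    subject to: at most one shift per worker per day, and at most
    max_shifts shifts per worker overall. Naive backtracking search."""
    slots = list(product(range(days), range(shifts)))
    assignment = {}

    def consistent(slot, w):
        day = slot[0]
        on_same_day = any(v == w and d == day
                          for (d, _s), v in assignment.items())
        total = sum(1 for v in assignment.values() if v == w)
        return not on_same_day and total < max_shifts

    def backtrack(i):
        if i == len(slots):
            return True
        for w in workers:                      # try each value in turn
            if consistent(slots[i], w):
                assignment[slots[i]] = w
                if backtrack(i + 1):
                    return True
                del assignment[slots[i]]       # undo and backtrack
        return False

    return dict(assignment) if backtrack(0) else None

# e.g. roster(["ann", "bob", "cal"], days=5, shifts=2, max_shifts=4)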

Collaboration


Dive into John A. Ford's collaborations.

Top Co-Authors

Qingfu Zhang, City University of Hong Kong
Hiroshi Yabe, Tokyo University of Science
Yasushi Narushima, Yokohama National University