
Publications


Featured research published by Julian Hall.


Computational Management Science | 2010

Towards a practical parallelisation of the simplex method

Julian Hall

The simplex method is frequently the most efficient method of solving linear programming (LP) problems. This paper reviews previous attempts to parallelise the simplex method in relation to efficient serial simplex techniques and the nature of practical LP problems. For the major challenge of solving general large sparse LP problems, there has been no parallelisation of the simplex method that offers significantly improved performance over a good serial implementation. However, there has been some success in developing parallel solvers for LPs that are dense or have particular structural properties. As an outcome of the review, this paper identifies scope for future work towards the goal of developing parallel implementations of the simplex method that are of practical value.
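For orientation on the serial method whose parallelisation is reviewed above, the basic simplex iteration (pricing, ratio test, pivot) can be sketched in a few lines. This is a minimal dense tableau sketch with a hypothetical helper name `simplex`, purely for illustration: practical revised simplex codes instead factorise a sparse basis matrix and never maintain a dense tableau.

```python
# A minimal dense tableau simplex for max c^T x s.t. Ax <= b, x >= 0 (b >= 0).
# Illustrative only: serious implementations use the *revised* simplex method
# with sparse basis factorisations rather than dense tableau updates.

def simplex(c, A, b):
    m, n = len(A), len(c)
    # Tableau rows: constraints with slack columns, then the objective row.
    T = [A[i][:] + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]]
         for i in range(m)]
    T.append([-cj for cj in c] + [0.0] * m + [0.0])
    basis = list(range(n, n + m))          # start from the all-slack basis
    while True:
        # Dantzig pricing: the column with most negative reduced cost enters.
        q = min(range(n + m), key=lambda j: T[m][j])
        if T[m][q] >= -1e-9:
            break                          # optimal
        # Ratio test: the row attaining the minimum ratio leaves.
        rows = [i for i in range(m) if T[i][q] > 1e-9]
        if not rows:
            raise ValueError("LP is unbounded")
        r = min(rows, key=lambda i: T[i][-1] / T[i][q])
        piv = T[r][q]
        T[r] = [v / piv for v in T[r]]
        for i in range(m + 1):
            if i != r and T[i][q] != 0.0:
                f = T[i][q]
                T[i] = [a - f * p for a, p in zip(T[i], T[r])]
        basis[r] = q
    x = [0.0] * n
    for i, j in enumerate(basis):
        if j < n:
            x[j] = T[i][-1]
    return x, T[m][-1]                     # optimal point and objective value
```

For example, `simplex([3.0, 2.0], [[1.0, 1.0], [1.0, 3.0]], [4.0, 6.0])` maximises 3x + 2y over x + y <= 4, x + 3y <= 6 and returns the vertex (4, 0) with objective 12.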


Computational Optimization and Applications | 2005

Hyper-Sparsity in the Revised Simplex Method and How to Exploit it

Julian Hall; K. I. M. McKinnon

The revised simplex method is often the method of choice when solving large scale sparse linear programming problems, particularly when a family of closely-related problems is to be solved. Each iteration of the revised simplex method requires the solution of two linear systems and a matrix vector product. For a significant number of practical problems the result of one or more of these operations is usually sparse, a property we call hyper-sparsity. Analysis of the commonly-used techniques for implementing each step of the revised simplex method shows them to be inefficient when hyper-sparsity is present. Techniques to exploit hyper-sparsity are developed and their performance is compared with the standard techniques. For the subset of our test problems that exhibits hyper-sparsity, the average speedup in solution time is 5.2 when these techniques are used. For this problem set our implementation of the revised simplex method which exploits hyper-sparsity is shown to be competitive with the leading commercial solver and significantly faster than the leading public-domain solver.
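The core trick for exploiting hyper-sparsity is that when the right-hand side of a triangular solve is sparse, the nonzero pattern of the solution can be predicted symbolically (a depth-first search over the matrix's column graph, in the spirit of Gilbert and Peierls) so that numerical work touches only those entries. A minimal sketch, with a hypothetical helper name and data layout:

```python
# Sketch of the hyper-sparsity idea: solving Lx = b for unit-lower-triangular
# L when b has very few nonzeros. A symbolic DFS finds the nonzero pattern of
# x; the numerical phase then visits only that pattern instead of all n rows.

def sparse_lower_solve(Lcols, b):
    """Lcols[j] = list of (i, v) with i > j, the sub-diagonal entries of
    column j of a unit-lower-triangular L; b = dict {row: value}."""
    # Symbolic phase: rows reachable from the nonzeros of b.
    reach, seen = [], set()
    def dfs(j):
        if j in seen:
            return
        seen.add(j)
        for i, _ in Lcols.get(j, []):
            dfs(i)
        reach.append(j)        # reverse post-order is a valid solve order
    for j in b:
        dfs(j)
    # Numerical phase: column-oriented forward solve on the pattern only.
    x = dict(b)
    for j in reversed(reach):
        xj = x.get(j, 0.0)
        if xj == 0.0:
            continue
        for i, v in Lcols.get(j, []):
            x[i] = x.get(i, 0.0) - v * xj
    return x
```

For a chain L with L[1][0] = 2 and L[2][1] = 3 and b having a single nonzero b[0] = 1, only three entries are ever touched, however large n is; this is the source of the large speedups reported for hyper-sparse problems.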


Computers & Chemical Engineering | 1991

Flexible retrofit design of multiproduct batch plants

Roger Fletcher; Julian Hall; W.R. Johns

We treat the addition of new equipment to an existing multiproduct batch plant for which new production targets and selling profits have been specified. This optimal retrofit design problem has been considered by Vaselenak et al. (Ind. Engng Chem. Res. 26, 718–728, 1987). Their constraint that new units must be used in the same manner for all products places a restriction on the design which could readily be overcome in practice. We present a mixed-integer nonlinear programming (MINLP) formulation which eliminates this constraint. A series of examples is presented which demonstrates greater profitability for plants designed with our formulation. The examples also bring to light a further unwanted constraint in the Vaselenak, Grossmann and Westerberg formulation: they limit batch size to the smallest unit at a stage, even when that unit is not needed. It is noted that, at the expense of some additional mathematical complexity, our formulation could be enhanced to allow reconnection of existing units and alternate use of multiple additional units.


Optimization Methods & Software | 2008

Preconditioning indefinite systems in interior point methods for large scale linear optimisation

Ghussoun Al-Jeiroudi; Jacek Gondzio; Julian Hall

We discuss the use of the preconditioned conjugate gradients (CG) method for solving the reduced KKT systems arising in interior point algorithms for linear programming. The (indefinite) augmented system form of this linear system has a number of advantages, notably a higher degree of sparsity than the (positive definite) normal equations form. Therefore, we use the CG method to solve the augmented system and look for a suitable preconditioner. An explicit null space representation of linear constraints is constructed by using a nonsingular basis matrix identified from an estimate of the optimal partition in the linear program. This is achieved by means of recently developed efficient basis matrix factorisation techniques which exploit hyper-sparsity and are used in implementations of the revised simplex method. The approach has been implemented within the HOPDM interior point solver and applied to medium and large-scale problems from public domain test collections. Computational experience is encouraging.
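The preconditioned CG loop at the heart of this approach is standard and can be sketched compactly. The sketch below uses a plain Jacobi (diagonal) preconditioner on a small SPD system purely for illustration; it is not the null-space preconditioner the paper constructs, and the function name is hypothetical.

```python
# Minimal preconditioned conjugate gradient (PCG) sketch. matvec(v) returns
# A v; precond(r) applies the preconditioner M^{-1} to a residual.

def pcg(matvec, b, precond, tol=1e-10, maxit=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                                   # residual of x = 0
    z = precond(r)
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(maxit):
        Ap = matvec(p)
        alpha = rz / sum(pi * qi for pi, qi in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, Ap)]
        if max(abs(ri) for ri in r) < tol:
            break                              # converged
        z = precond(r)
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        beta = rz_new / rz
        rz = rz_new
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x
```

A good preconditioner is what makes this viable for KKT systems: the iteration count depends on the clustering of the preconditioned eigenvalues, which is exactly what the basis-matrix construction in the paper is designed to achieve.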


Computational Optimization and Applications | 2013

Parallel distributed-memory simplex for large-scale stochastic LP problems

Miles Lubin; Julian Hall; Cosmin G. Petra; Mihai Anitescu

We present a parallelization of the revised simplex method for large extensive forms of two-stage stochastic linear programming (LP) problems. These problems have been considered too large to solve with the simplex method; instead, decomposition approaches based on Benders decomposition or, more recently, interior-point methods are generally used. However, these approaches do not provide optimal basic solutions, which allow for efficient hot-starts (e.g., in a branch-and-bound context) and can provide important sensitivity information. Our approach exploits the dual block-angular structure of these problems inside the linear algebra of the revised simplex method in a manner suitable for high-performance distributed-memory clusters or supercomputers. While this paper focuses on stochastic LPs, the work is applicable to all problems with a dual block-angular structure. Our implementation is competitive in serial with highly efficient sparsity-exploiting simplex codes and achieves significant relative speed-ups when run in parallel. Additionally, very large problems with hundreds of millions of variables have been successfully solved to optimality. This is the largest-scale parallel sparsity-exploiting revised simplex implementation that has been developed to date and the first truly distributed solver. It is built on novel analysis of the linear algebra for dual block-angular LP problems when solved by using the revised simplex method and a novel parallel scheme for applying product-form updates.
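The dual block-angular structure referred to above can be made concrete: the extensive-form constraint matrix has the first-stage constraints A across the top and one scenario block W_s on the diagonal, coupled to the first stage through technology matrices T_s. The sketch below (hypothetical helper name) assembles the dense pattern purely for illustration; a structure-exploiting solver keeps the blocks separate and distributes them across processes.

```python
# Assemble the extensive-form constraint matrix of a two-stage stochastic LP:
#   [ A            ]
#   [ T_1  W_1     ]
#   [ T_2      W_2 ]
# Dense assembly is for illustration only.

def dual_block_angular(A, blocks):
    """blocks = list of (T_s, W_s) row-major matrices, one pair per scenario."""
    n0 = len(A[0])
    widths = [len(W[0]) for _, W in blocks]
    total = n0 + sum(widths)
    rows = [row[:] + [0.0] * sum(widths) for row in A]   # first-stage rows
    offset = n0
    for (T, W), w in zip(blocks, widths):
        for tr, wr in zip(T, W):
            row = [0.0] * total
            row[:n0] = tr                     # coupling to first-stage columns
            row[offset:offset + w] = wr       # scenario's own diagonal block
            rows.append(row)
        offset += w
    return rows
```

With many scenarios the scenario blocks dominate the matrix, which is why distributing them (and the corresponding pieces of the basis factorisation) across a cluster scales.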


Mathematical Programming | 2004

The simplest examples where the simplex method cycles and conditions where EXPAND fails to prevent cycling

Julian Hall; K. I. M. McKinnon

This paper introduces a class of linear programming examples that cause the simplex method to cycle and that are the simplest possible examples showing this behaviour. The structure of examples from this class repeats after two iterations. Cycling is shown to occur for both the most-negative-reduced-cost and steepest-edge column selection criteria. In addition, it is shown that the EXPAND anti-cycling procedure of Gill et al. is not guaranteed to prevent cycling.
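The Hall–McKinnon examples themselves are not reproduced here; for orientation, the best-known earlier cycling instance is Beale's 1955 example (coefficients as commonly quoted), which cycles under the most-negative-reduced-cost rule with standard lowest-index tie-breaking:

```latex
\begin{align*}
\min\quad & -\tfrac{3}{4}x_1 + 150\,x_2 - \tfrac{1}{50}x_3 + 6\,x_4 \\
\text{s.t.}\quad & \tfrac{1}{4}x_1 - 60\,x_2 - \tfrac{1}{25}x_3 + 9\,x_4 \le 0 \\
& \tfrac{1}{2}x_1 - 90\,x_2 - \tfrac{1}{50}x_3 + 3\,x_4 \le 0 \\
& x_3 \le 1, \qquad x_j \ge 0 .
\end{align*}
```

Starting from the all-slack basis, the simplex method returns to that basis after six degenerate pivots without improving the objective. The Hall–McKinnon class is smaller still, with a structure that repeats after only two iterations.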


Annals of Operations Research | 1998

ASYNPLEX, an asynchronous parallel revised simplex algorithm

Julian Hall; K. I. M. McKinnon

This paper describes ASYNPLEX, an asynchronous variant of the revised simplex method which is suitable for parallel implementation on a shared memory multiprocessor or MIMD computer with fast inter-processor communication. The method overlaps simplex iterations on different processors. Candidates to enter the basis are tentatively selected using reduced costs which may be out of date. Later, the up-to-date reduced costs of the tentative candidates are calculated and candidates are either discarded or accepted to enter the basis. The implementation of this algorithm on a Cray T3D is described and results demonstrating significant speed-up are presented.


Parallel Computing | 1996

PARSMI, a Parallel Revised Simplex Algorithm Incorporating Minor Iterations and Devex Pricing

Julian Hall; K. I. M. McKinnon

When solving linear programming problems using the revised simplex method, two common variants are the incorporation of minor iterations of the standard simplex method applied to a small subset of the variables and the use of Devex pricing. Although the extra work per iteration required when updating Devex weights removes the advantage of using minor iterations in a serial computation, the extra work parallelises readily. An asynchronous parallel algorithm, PARSMI, is presented in which the computational components of the revised simplex method with Devex pricing are either overlapped or parallelised internally. Minor iterations are used to achieve good load balance and to tackle problems caused by limited candidate persistence. Initial computational results for a six-processor implementation on a Cray T3D indicate that the algorithm has a significantly higher iteration rate than an efficient sequential implementation.


Mathematical Programming Computation | 2018

Parallelizing the dual revised simplex method

Qi Huangfu; Julian Hall

This paper introduces the design and implementation of two parallel dual simplex solvers for general large scale sparse linear programming problems. One approach, called PAMI, extends a relatively unknown pivoting strategy called suboptimization and exploits parallelism across multiple iterations. The other, called SIP, exploits purely single iteration parallelism by overlapping computational components when possible. Computational results show that the performance of PAMI is superior to that of the leading open-source simplex solver, and that SIP complements PAMI in achieving speedup when PAMI results in slowdown. One of the authors has implemented the techniques underlying PAMI within the FICO Xpress simplex solver and this paper presents computational results demonstrating their value. In developing the first parallel revised simplex solver of general utility, this work represents a significant achievement in computational optimization.


Parallel Processing and Applied Mathematics | 2011

GPU acceleration of the matrix-free interior point method

Edmund Smith; Jacek Gondzio; Julian Hall

The matrix-free technique is an iterative approach to interior point methods (IPM), so named because both the solution procedure and the computation of an appropriate preconditioner require only the results of the operations Ax and Aᵀy, where A is the matrix of constraint coefficients. This paper demonstrates its overwhelmingly superior performance on two classes of linear programming (LP) problems relative to both the simplex method and to IPM with equations solved directly. It is shown that the reliance of this technique on sparse matrix-vector operations enables further, significant performance gains from the use of a GPU, and from multi-core processors.
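The "matrix-free" interface is narrow by design: the solver only ever calls the two products Ax and Aᵀy. A minimal sketch, assuming A is held in coordinate (COO) form; the class name and layout are hypothetical, and on a GPU the same two kernels would be implemented as sparse matrix-vector products.

```python
# A matrix-free operator: exposes only y = A x and z = A^T y for a sparse
# matrix stored as coordinate (COO) triples. The IPM never sees A itself.

class CooOperator:
    def __init__(self, m, n, entries):
        """entries: list of (row, col, value) triples of an m-by-n matrix."""
        self.m, self.n, self.entries = m, n, entries

    def apply(self, x):        # y = A x
        y = [0.0] * self.m
        for i, j, v in self.entries:
            y[i] += v * x[j]
        return y

    def apply_t(self, y):      # z = A^T y
        z = [0.0] * self.n
        for i, j, v in self.entries:
            z[j] += v * y[i]
        return z
```

Because both products are a single pass over the nonzeros, they map directly onto GPU sparse kernels, which is the source of the performance gains reported above.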

Collaboration

Julian Hall's top co-authors:

Qi Huangfu (University of Edinburgh)
Miles Lubin (Argonne National Laboratory)
Cosmin G. Petra (Argonne National Laboratory)
L. G. Barioni (Empresa Brasileira de Pesquisa Agropecuária)
Edmund Smith (University of Edinburgh)