Publication


Featured research published by Barry W. Peyton.


Archive | 1993

An Introduction to Chordal Graphs and Clique Trees

Jean R. S. Blair; Barry W. Peyton

Clique trees and chordal graphs have carved out a niche for themselves in recent work on sparse matrix algorithms, due primarily to research questions associated with advanced computer architectures. This paper is a unified and elementary introduction to the standard characterizations of chordal graphs and clique trees. The pace is leisurely, as detailed proofs of all results are included. We also briefly discuss applications of chordal graphs and clique trees in sparse matrix computations.
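
To make the standard characterizations concrete, here is a brief Python sketch (illustrative, not code from the paper): maximum cardinality search (MCS) produces a candidate perfect elimination ordering, and the classical zero-fill test on that ordering recognizes chordal graphs. The dict-of-sets graph format and the function names are assumptions.

```python
# Illustrative sketch: recognizing a chordal graph with maximum cardinality
# search (MCS) followed by a perfect-elimination-ordering test.
# The adjacency format (vertex -> set of neighbours) is an assumption.

def mcs_order(adj):
    """Return a candidate perfect elimination ordering: the reverse of the
    MCS visiting order is a PEO whenever the graph is chordal."""
    weight = {v: 0 for v in adj}
    visited = set()
    order = []
    for _ in range(len(adj)):
        # pick an unvisited vertex adjacent to the most visited vertices
        v = max((u for u in adj if u not in visited), key=lambda u: weight[u])
        order.append(v)
        visited.add(v)
        for w in adj[v]:
            if w not in visited:
                weight[w] += 1
    return order[::-1]

def is_chordal(adj):
    """Zero-fill test: the ordering is perfect iff for every vertex v, the
    earliest later neighbour of v is adjacent to all other later neighbours."""
    order = mcs_order(adj)
    pos = {v: i for i, v in enumerate(order)}
    for v in order:
        later = [u for u in adj[v] if pos[u] > pos[v]]
        if not later:
            continue
        u = min(later, key=lambda x: pos[x])
        if any(w != u and w not in adj[u] for w in later):
            return False
    return True

# tiny usage example: a 4-cycle is not chordal; adding a chord makes it chordal
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
assert not is_chordal(cycle)
cycle[0].add(2)
cycle[2].add(0)
assert is_chordal(cycle)
```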


SIAM Review | 1991

Parallel algorithms for sparse linear systems

Michael T. Heath; Esmond G. Ng; Barry W. Peyton

This paper surveys recent progress in the development of parallel algorithms for solving sparse linear systems on computer architectures having multiple processors. Attention is focused on direct methods for solving sparse symmetric positive definite systems, specifically by Cholesky factorization. Recent progress on parallel algorithms is surveyed for all phases of the solution process, including ordering, symbolic factorization, numeric factorization, and triangular solution.
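
As a small illustration of the symbolic machinery underlying these phases, the sketch below computes the elimination tree of a sparse symmetric matrix from its strictly lower triangular row patterns with the standard path-compression algorithm; the tree is what exposes the task parallelism exploited in the ordering, symbolic, and numeric phases. This is a minimal sketch, not code from the survey, and the input format is an assumption.

```python
# Minimal sketch (not from the survey): elimination tree of a sparse symmetric
# matrix, computed from the nonzero pattern of its strict lower triangle.
# rows[k] is assumed to hold the column indices j < k with A[k, j] != 0.

def elimination_tree(rows):
    n = len(rows)
    parent = [-1] * n
    ancestor = [-1] * n                  # path compression for near-linear time
    for k in range(n):
        for j in rows[k]:
            # climb from j toward its current root, redirecting ancestors to k
            while j != -1 and j < k:
                nxt = ancestor[j]
                ancestor[j] = k
                if nxt == -1:
                    parent[j] = k        # k becomes the parent of this subtree root
                j = nxt
    return parent

# tiny usage example: only the last row/column is dense, so every earlier
# column hangs directly off column 3
rows = [[], [], [], [0, 1, 2]]
print(elimination_tree(rows))            # [3, 3, 3, -1]
```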


SIAM Journal on Scientific Computing | 1993

Block sparse Cholesky algorithms on advanced uniprocessor computers

Esmond G. Ng; Barry W. Peyton

As with many other linear algebra algorithms, devising a portable implementation of sparse Cholesky factorization that performs well on the broad range of computer architectures currently available is a formidable challenge. Even after limiting the attention to machines with only one processor, as has been done in this paper, there are still several interesting issues to consider. For dense matrices, it is well known that block factorization algorithms are the best means of achieving this goal. This approach is taken for sparse factorization as well. This paper has two primary goals. First, two sparse Cholesky factorization algorithms, the multifrontal method and a blocked left-looking sparse Cholesky method, are examined in a systematic and consistent fashion, both to illustrate the strengths of the blocking techniques in general and to obtain a fair evaluation of the two approaches. Second, the impact of various implementation techniques on time and storage efficiency is assessed, paying particularly clo...
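
A dense analogue makes the blocking idea easy to see: in a left-looking block algorithm, each block column is first updated with matrix-matrix products from previously factored block columns, then its diagonal block is factored and its panel solved. The NumPy sketch below is illustrative only and is not the sparse code evaluated in the paper.

```python
import numpy as np

def blocked_left_looking_cholesky(A, nb=64):
    """Dense left-looking Cholesky by block columns (illustrative sketch).
    Returns the lower triangular factor L with A = L @ L.T."""
    L = np.array(A, dtype=float, copy=True)
    n = L.shape[0]
    for j0 in range(0, n, nb):
        j1 = min(j0 + nb, n)
        # left-looking update: apply all previously factored block columns
        if j0 > 0:
            L[j0:, j0:j1] -= L[j0:, :j0] @ L[j0:j1, :j0].T
        # factor the diagonal block
        L[j0:j1, j0:j1] = np.linalg.cholesky(L[j0:j1, j0:j1])
        # triangular solve for the panel below the diagonal block
        if j1 < n:
            L[j1:, j0:j1] = np.linalg.solve(L[j0:j1, j0:j1], L[j1:, j0:j1].T).T
    return np.tril(L)

# tiny check on a random symmetric positive definite matrix
rng = np.random.default_rng(0)
B = rng.standard_normal((200, 200))
A = B @ B.T + 200 * np.eye(200)
L = blocked_left_looking_cholesky(A, nb=32)
assert np.allclose(L @ L.T, A)
```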


IEEE International Conference on High Performance Computing Data and Analytics | 1987

Progress in Sparse Matrix Methods for Large Linear Systems on Vector Supercomputers

C. Cleveland Ashcraft; Roger G. Grimes; John G. Lewis; Barry W. Peyton; Horst D. Simon; Petter E. Bjørstad

This paper summarizes progress in the use of direct methods for solving very large sparse symmetric positive definite systems of linear equations on vector supercomputers. Sparse direct solvers based on the multifrontal method or the general sparse method now outperform band or envelope solvers on vector supercomputers such as the CRAY X-MP. This departure from conventional wisdom is due to several advances. The hardware gather/scatter feature or indirect address feature of some recent vector supercomputers permits vectorization of the general sparse factorization. Other advances are algorithmic. The new multiple minimum degree algorithm calculates a powerful ordering much faster than its predecessors. Exploiting the supernode structure of the factored matrix provides vectorization over nested loops, giving greater speed in the factorization module for the multifrontal and general sparse methods. Out-of-core versions of both methods are now available. Numerical results on the CRAY X-MP for several structural engineering problems demonstrate the impact of these improvements.
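
A minimal sketch of the indirect addressing that hardware gather/scatter vectorizes: a column update in general sparse factorization gathers the target positions of the source column's pattern and scatters the scaled values back. The helper name cmod, the data layout, and the numbers are hypothetical; NumPy fancy indexing stands in for the hardware feature.

```python
import numpy as np

def cmod(target_vals, source_vals, source_rows, row_to_pos, multiplier):
    """Subtract multiplier * (source column) from the target column, where the
    two columns have different sparsity patterns (hypothetical helper)."""
    pos = row_to_pos[source_rows]                 # gather: target positions
    target_vals[pos] -= multiplier * source_vals  # scatter: indexed store

# tiny illustration with made-up data
target_rows = np.array([2, 5, 7, 9])              # pattern of the target column
target_vals = np.array([4.0, 1.0, 3.0, 2.0])
row_to_pos = np.full(10, -1)
row_to_pos[target_rows] = np.arange(4)
source_rows = np.array([5, 9])                    # pattern of the source column
source_vals = np.array([0.5, 2.0])
cmod(target_vals, source_vals, source_rows, row_to_pos, multiplier=2.0)
print(target_vals)                                # target column is now [4, 0, 3, -2]
```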


SIAM Journal on Scientific and Statistical Computing | 1989

A fast algorithm for reordering sparse matrices for parallel factorization

John G. Lewis; Barry W. Peyton; Alex Pothen

Jess and Kees [IEEE Trans. Comput., C-31 (1982), pp. 231–239] introduced a method for ordering a sparse symmetric matrix A for efficient parallel factorization. The parallel ordering is computed in two steps. First, the matrix A is ordered by some fill-reducing ordering. Second, a parallel ordering of A is computed from the filled graph that results from symbolically factoring A using the initial fill-reducing ordering. Among all orderings whose fill lies in the filled graph, this parallel ordering achieves the minimum number of parallel steps in the factorization of A. Jess and Kees did not specify the implementation details of an algorithm for either step of this scheme. Liu and Mirzaian [SIAM J. Discrete Math., 2 (1989), pp. 100–107] designed an algorithm implementing the second step, but it has time and space requirements higher than the cost of computing common fill-reducing orderings. A new fast algorithm that implements the parallel ordering step by exploiting the clique tree representation of a chordal graph is presented. The cost of the parallel ordering step is reduced well below that of the fill-reducing step. This algorithm has time and space complexity linear in the number of compressed subscripts for L, i.e., the sum of the sizes of the maximal cliques of the filled graph. Running times nearly identical to Liu's heuristic composite rotations algorithm, which approximates the minimum number of parallel steps, are demonstrated empirically.
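
For context, the quantity minimized by the Jess and Kees parallel ordering is the number of parallel elimination steps, which equals the height of the elimination tree. The sketch below simply measures that height from a parent array (illustrative; it assumes an elimination tree in which every parent index exceeds its child's index, with -1 marking roots).

```python
def parallel_steps(parent):
    """Height of the elimination tree = number of parallel elimination steps,
    counting each level of independent columns as one step (sketch)."""
    n = len(parent)
    level = [1] * n                       # a leaf column is its own step
    for j in range(n):                    # parents always have larger indices
        p = parent[j]
        if p != -1:
            level[p] = max(level[p], level[j] + 1)
    return max(level, default=0)

# example: a star-shaped etree factors in 2 parallel steps, a chain needs 4
print(parallel_steps([3, 3, 3, -1]))      # 2
print(parallel_steps([1, 2, 3, -1]))      # 4
```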


SIAM Journal on Matrix Analysis and Applications | 1993

On finding supernodes for sparse matrix computations

Joseph W. H. Liu; Esmond G. Ng; Barry W. Peyton

A simple characterization of fundamental supernodes is given in terms of the row subtrees of sparse Cholesky factors in the elimination tree. Using this characterization, an efficient algorithm is presented that determines the set of such supernodes in time proportional to the number of nonzeros and equations in the original matrix. Experimental results verify the practical efficiency of this algorithm.
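
A sketch of supernode detection in this spirit (illustrative; it applies the common column-count test rather than the paper's row-subtree characterization, and assumes the elimination-tree parent array and the per-column nonzero counts of L are already available):

```python
def fundamental_supernode_starts(parent, colcount):
    """Return the first column of each fundamental supernode (sketch).
    parent[j]  : elimination-tree parent of column j (-1 for a root)
    colcount[j]: nonzeros in column j of the Cholesky factor L, diagonal included
    Column j joins the supernode of column j-1 only if j-1 is the sole child of
    j in the elimination tree and the two column structures nest exactly."""
    n = len(parent)
    nchild = [0] * n
    for j in range(n):
        if parent[j] != -1:
            nchild[parent[j]] += 1
    starts = []
    for j in range(n):
        merge = (j > 0 and parent[j - 1] == j and nchild[j] == 1
                 and colcount[j - 1] == colcount[j] + 1)
        if not merge:
            starts.append(j)
    return starts

# example: column 0 hangs off a dense trailing block {1, 2, 3};
# that trailing block forms a single fundamental supernode
parent = [1, 2, 3, -1]
colcount = [2, 3, 2, 1]
print(fundamental_supernode_starts(parent, colcount))   # [0, 1]
```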


Algorithmica | 2004

Maximum Cardinality Search for Computing Minimal Triangulations of Graphs

Anne Berry; Jean R. S. Blair; Pinar Heggernes; Barry W. Peyton

We present a new algorithm, called MCS-M, for computing minimal triangulations of graphs. Lex-BFS, a seminal algorithm for recognizing chordal graphs, was the genesis for two other classical algorithms: LEX M and MCS. LEX M extends the fundamental concept used in Lex-BFS, resulting in an algorithm that not only recognizes chordality, but also computes a minimal triangulation of an arbitrary graph. MCS simplifies the fundamental concept used in Lex-BFS, resulting in a simpler algorithm for recognizing chordal graphs. The new algorithm MCS-M combines the extension of LEX M with the simplification of MCS, achieving all the results of LEX M in the same time complexity.
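
A compact sketch of MCS-M as described above (illustrative; the dict-of-sets graph format is an assumption, and the minimax search used here to test the "all intermediate weights smaller" path condition is one possible implementation, not necessarily the authors'):

```python
import heapq

def mcs_m(adj):
    """MCS-M sketch: returns (order, fill), where eliminating the vertices in
    'order' produces exactly the fill edges in 'fill', and adding 'fill' to
    the graph gives a minimal triangulation.
    adj is assumed to map each vertex to the set of its neighbours."""
    n = len(adj)
    vertices = list(adj)
    weight = {v: 0 for v in vertices}
    numbered = set()
    order = [None] * n
    fill = set()
    for i in range(n - 1, -1, -1):                  # number vertices n .. 1
        v = max((u for u in vertices if u not in numbered),
                key=lambda u: weight[u])
        order[i] = v
        # Find every unnumbered u reachable from v along a path whose
        # intermediate vertices are unnumbered and all lighter than u.
        # best[x] = smallest achievable maximum intermediate weight on a
        # path from v to x (-1 stands for "no intermediate vertices").
        best = {}
        heap = []
        for y in adj[v]:
            if y not in numbered:
                best[y] = -1
                heapq.heappush(heap, (-1, y))
        reached = []
        while heap:
            d, x = heapq.heappop(heap)
            if d > best.get(x, float("inf")):
                continue                            # stale heap entry
            if d < weight[x]:
                reached.append(x)                   # the path condition holds
            for y in adj[x]:
                if y in numbered or y == v:
                    continue
                nd = max(d, weight[x])              # x becomes an intermediate
                if nd < best.get(y, float("inf")):
                    best[y] = nd
                    heapq.heappush(heap, (nd, y))
        for u in reached:
            weight[u] += 1
            if u not in adj[v]:
                fill.add(frozenset((u, v)))         # fill edge of the triangulation
        numbered.add(v)
    return order, fill

# tiny usage example: a 4-cycle needs exactly one fill edge (a chord)
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
order, fill = mcs_m(cycle)
print(order, [tuple(e) for e in fill])              # one chord is added
```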


SIAM Journal on Scientific Computing | 1993

A supernodal Cholesky factorization algorithm for shared-memory multiprocessors

Esmond G. Ng; Barry W. Peyton

This paper presents a parallel sparse Cholesky factorization algorithm for shared-memory MIMD multiprocessors. The algorithm is particularly well suited for vector supercomputers with multiple processors, such as the Cray Y-MP. The new algorithm is a straightforward parallelization of the left-looking supernodal sparse Cholesky factorization algorithm. Like its sequential predecessor, it improves performance by reducing indirect addressing and memory traffic. Experimental results on a Cray Y-MP demonstrate the effectiveness of the new algorithm. On eight processors of a Cray Y-MP, the new routine performs the factorization at rates exceeding one Gflop for several test problems from the Harwell–Boeing sparse matrix collection.


SIAM Journal on Matrix Analysis and Applications | 2001

Minimal Orderings Revisited

Barry W. Peyton

When minimum orderings proved too difficult to deal with, Rose, Tarjan, and Lueker instead studied minimal orderings and how to compute them [SIAM J. Comput., 5 (1976), pp. 266--283]. This paper introduces an algorithm that is capable of computing much better minimal orderings much more efficiently than the algorithm of Rose, Tarjan, and Lueker. The new insight is a way to use certain structures and concepts from modern sparse Cholesky solvers to reexpress one of the basic results of Rose, Tarjan, and Lueker. The new algorithm begins with any initial ordering and then refines it until a minimal ordering is obtained. It is simple to obtain high-quality low-cost minimal orderings by using fill-reducing heuristic orderings as initial orderings for the algorithm. We examine several such initial orderings in some detail. Our results here and previous work by others indicate that the improvements obtained over the initial heuristic orderings are relatively small because the initial orderings are minimal or nearly minimal. Nested dissection orderings provide some significant exceptions to this rule.
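
For context, one classical characterization commonly credited to Rose, Tarjan, and Lueker states that a triangulation H = G + F is minimal exactly when every fill edge is the unique chord of a 4-cycle in H. The sketch below checks that condition for a given chordal filled graph; it is illustrative only and is not the refinement algorithm introduced in the paper. The graph format and the function name are assumptions.

```python
def is_minimal_triangulation(adj, fill):
    """Unique-chord check: each fill edge (u, v) must be the unique chord of
    some 4-cycle in the filled graph H, i.e. u and v must have two common
    H-neighbours that are not adjacent in H (sketch; assumes H = adj + fill
    is already chordal)."""
    # build the filled graph H
    H = {v: set(ns) for v, ns in adj.items()}
    for u, v in fill:
        H[u].add(v)
        H[v].add(u)
    for u, v in fill:
        common = H[u] & H[v]
        # look for two common neighbours a, b with a-b not an edge of H
        ok = any(a != b and b not in H[a] for a in common for b in common)
        if not ok:
            return False
    return True

# example: a 4-cycle plus one chord is a minimal triangulation of the cycle,
# but adding both chords is not
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_minimal_triangulation(cycle, [(1, 3)]))          # True
print(is_minimal_triangulation(cycle, [(1, 3), (0, 2)]))  # False
```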


Nordic Journal of Computing | 1994

On finding minimum-diameter clique trees

Jean R. S. Blair; Barry W. Peyton

A clique-tree representation of a chordal graph often reduces the size of the data structure needed to store the graph, permitting the use of extremely efficient algorithms that take advantage of the compactness of the representation. Since some chordal graphs have many distinct clique-tree representations, it is interesting to consider which one is most desirable under various circumstances. A clique tree of minimum diameter (or height) is sometimes a natural candidate when choosing clique trees to be processed in a parallel-computing environment. This paper introduces a linear-time algorithm for computing a minimum-diameter clique tree.
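
For context, the diameter being minimized is the length of a longest path in the clique tree; the sketch below measures that diameter for a given clique tree with the standard two-pass breadth-first search (illustrative; it is not the paper's linear-time construction algorithm, and the adjacency-map tree format is an assumption).

```python
from collections import deque

def tree_diameter(tree):
    """Diameter (number of edges on a longest path) of a tree given as an
    adjacency map, via the classical double-BFS trick (sketch)."""
    def farthest(src):
        dist = {src: 0}
        queue = deque([src])
        far = src
        while queue:
            x = queue.popleft()
            for y in tree[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    if dist[y] > dist[far]:
                        far = y
                    queue.append(y)
        return far, dist[far]

    a, _ = farthest(next(iter(tree)))    # endpoint of some longest path
    _, diameter = farthest(a)            # longest path from that endpoint
    return diameter

# example: a path on four cliques has diameter 3; a star has diameter 2
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(tree_diameter(path), tree_diameter(star))   # 3 2
```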

Collaboration


Dive into Barry W. Peyton's collaboration.

Top Co-Authors

Esmond G. Ng

Lawrence Berkeley National Laboratory

Jean R. S. Blair

United States Military Academy

George Al Geist

Oak Ridge National Laboratory

Roger G. Grimes

Washington University in St. Louis

Anne Berry

Centre national de la recherche scientifique
