Alex Pothen
Purdue University
Publications
Featured research published by Alex Pothen.
SIAM Journal on Matrix Analysis and Applications | 1990
Alex Pothen; Horst D. Simon; Kang-Pu Liou
The problem of computing a small vertex separator in a graph arises in the context of computing a good ordering for the parallel factorization of sparse, symmetric matrices. An algebraic approach for computing vertex separators is considered in this paper. It is shown that lower bounds on separator sizes can be obtained in terms of the eigenvalues of the Laplacian matrix associated with a graph. The Laplacian eigenvectors of grid graphs can be computed from Kronecker products involving the eigenvectors of path graphs, and these eigenvectors can be used to compute good separators in grid graphs. A heuristic algorithm is designed to compute a vertex separator in a general graph by first computing an edge separator in the graph from an eigenvector of the Laplacian matrix, and then using a maximum matching in a subgraph to compute the vertex separator. Results on the quality of the separators computed by the spectral algorithm are presented and compared with separators obtained from other algorithms. Finally, the time required to compute the Laplacian eigenvector is reported, and the accuracy with which the eigenvector must be computed to obtain good separators is considered. The spectral algorithm has the advantage that it can be implemented on a medium-size multiprocessor in a straightforward manner. Key words: graph partitioning, graph spectra, Laplacian matrix, ordering algorithms, parallel orderings, sparse matrix, vertex separator. AMS(MOS) subject classifications: 65F50, 65F05, 65F15, 68R10.
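The spectral edge-separator step can be sketched in a few lines, assuming a SciPy environment; the grid example, helper names, and the median split are illustrative, and the paper's refinement of the edge separator into a vertex separator via maximum matching is only noted in a comment. This is not the authors' implementation.

```python
# Sketch: spectral edge separator from the Fiedler vector of the graph Laplacian.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import laplacian

def spectral_edge_separator(adj):
    """Split vertices into two halves using the second Laplacian eigenvector."""
    L = laplacian(sp.csr_matrix(adj, dtype=float))
    # Dense eigensolve for clarity on a toy graph; a Lanczos solver would be used at scale.
    vals, vecs = np.linalg.eigh(L.toarray())
    fiedler = vecs[:, 1]                      # eigenvector of the second-smallest eigenvalue
    median = np.median(fiedler)
    part_a = np.flatnonzero(fiedler <= median)
    part_b = np.flatnonzero(fiedler > median)
    # The paper then refines the resulting edge separator into a small vertex
    # separator via a maximum matching in the subgraph of cut edges (omitted here).
    return part_a, part_b

# Toy example: a 4 x 4 grid graph.
n = 4
grid = sp.lil_matrix((n * n, n * n))
for i in range(n):
    for j in range(n):
        v = i * n + j
        if i + 1 < n:
            grid[v, v + n] = grid[v + n, v] = 1
        if j + 1 < n:
            grid[v, v + 1] = grid[v + 1, v] = 1
a, b = spectral_edge_separator(grid)
print("part sizes:", len(a), len(b))
```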
ACM Transactions on Mathematical Software | 1990
Alex Pothen; Chin-Ju Fan
We consider the problem of permuting the rows and columns of a rectangular or square, unsymmetric sparse matrix to compute its block triangular form. This block triangular form is based on a canonical decomposition of bipartite graphs induced by a maximum matching and was discovered by Dulmage and Mendelsohn. We describe implementations of algorithms to compute the block triangular form and provide computational results on sparse matrices from test collections. Several applications of the block triangular form are also included.
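As a small, hedged illustration (not the implementation described above), the first ingredient of the Dulmage-Mendelsohn decomposition, a maximum matching in the bipartite graph of the matrix, can be obtained with SciPy; the example matrix is made up, and the coarse and fine decompositions that yield the block triangular form itself are omitted.

```python
# Sketch: maximum matching in the bipartite row-column graph of a sparse matrix,
# the combinatorial core underlying the Dulmage-Mendelsohn block triangular form.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import maximum_bipartite_matching

A = sp.csr_matrix(np.array([
    [1, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 1],
]))

# With perm_type='row', entry j is the row matched to column j (-1 if unmatched).
match = maximum_bipartite_matching(A, perm_type='row')
print("column -> row matching:", match)
print("structural rank:", np.sum(match >= 0))
```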
SIAM Review | 2005
Assefaw Hadish Gebremedhin; Fredrik Manne; Alex Pothen
Graph coloring has been employed since the 1980s to efficiently compute sparse Jacobian and Hessian matrices using either finite differences or automatic differentiation. Several coloring problems occur in this context, depending on whether the matrix is a Jacobian or a Hessian, and on the specifics of the computational techniques employed. We consider eight variant vertex coloring problems here. This article begins with a gentle introduction to the problem of computing a sparse Jacobian, followed by an overview of the historical development of the research area. Then we present a unifying framework for the graph models of the variant matrix estimation problems. The framework is based upon the viewpoint that a partition of a matrix into structurally orthogonal groups of columns corresponds to a distance-2 coloring of an appropriate graph representation. The unified framework helps integrate earlier work and leads to fresh insights; enables the design of more efficient algorithms for many problems; leads to new algorithms for others; and eases the task of building graph models for new problems. We report computational results on two of the coloring problems to support our claims. Most of the methods for these problems treat a column or a row of a matrix as an atomic entity, and partition the columns or rows (or both). A brief review of methods that do not fit these criteria is provided. We also discuss results in discrete mathematics and theoretical computer science that intersect with the topics considered here.
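To make the column-partitioning viewpoint concrete, here is a minimal, hedged sketch of a greedy partition of the columns of a sparse Jacobian into structurally orthogonal groups (equivalently, a distance-2 coloring in the bipartite graph model); the matrix, the natural column ordering, and the SciPy-based implementation are illustrative assumptions, not the software behind the reported results.

```python
# Sketch: greedy structurally orthogonal column partition (distance-2 coloring).
# Two columns may share a color only if they have no nonzero row in common,
# so each color group can be estimated with one finite-difference evaluation.
import numpy as np
import scipy.sparse as sp

def greedy_column_coloring(A):
    """Return one color per column; same-colored columns share no nonzero row."""
    A_csc = sp.csc_matrix(A)
    A_csr = sp.csr_matrix(A)
    n_cols = A_csc.shape[1]
    colors = np.full(n_cols, -1)
    for j in range(n_cols):                  # natural ordering; smarter orderings exist
        rows_j = A_csc.indices[A_csc.indptr[j]:A_csc.indptr[j + 1]]
        forbidden = set()
        for i in rows_j:                     # columns within distance 2 of column j
            cols_i = A_csr.indices[A_csr.indptr[i]:A_csr.indptr[i + 1]]
            forbidden.update(int(colors[k]) for k in cols_i if colors[k] >= 0)
        c = 0
        while c in forbidden:
            c += 1
        colors[j] = c
    return colors

# Toy Jacobian sparsity pattern: tridiagonal, which needs only three groups.
J = sp.diags([1.0, 1.0, 1.0], [-1, 0, 1], shape=(8, 8), format='csr')
colors = greedy_column_coloring(J)
print("colors:", colors, "groups used:", colors.max() + 1)
```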
SIAM Journal on Scientific Computing | 2000
David Hysom; Alex Pothen
We describe a parallel algorithm for computing incomplete factor (ILU) preconditioners. The algorithm attains a high degree of parallelism through graph partitioning and a two-level ordering strategy. Both the subdomains and the nodes within each subdomain are ordered to preserve concurrency. We show through an algorithmic analysis and through computational results that this algorithm is scalable. Experimental results include timings on three parallel platforms for problems with up to 20 million unknowns running on up to 216 processors. The resulting preconditioned Krylov solvers have the desirable property that the number of iterations required for convergence is insensitive to the number of processors.
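The end use can be illustrated with a small serial example, assuming SciPy: an incomplete LU factor used to precondition a Krylov solver on a model Poisson problem. This is only a stand-in for the serial building block; the parallel graph partitioning and two-level ordering that are the subject of the paper are not shown.

```python
# Sketch: serial ILU-preconditioned Krylov solve on a 2-D Poisson model problem.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu, LinearOperator, gmres

n = 32                                          # grid dimension (1024 unknowns)
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = sp.csc_matrix(sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n)))
b = np.ones(A.shape[0])

ilu = spilu(A, drop_tol=1e-4)                   # incomplete LU factorization
M = LinearOperator(A.shape, matvec=ilu.solve)   # ILU solve applied as preconditioner

residual_norms = []
x, info = gmres(A, b, M=M, callback=lambda rk: residual_norms.append(rk))
print("converged:", info == 0, "iterations:", len(residual_norms))
print("residual norm:", np.linalg.norm(b - A @ x))
```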
Archive | 1997
Alex Pothen
Identifying the parallelism in a problem by partitioning its data and tasks among the processors of a parallel computer is a fundamental issue in parallel computing. This problem can be modeled as a graph partitioning problem in which the vertices of a graph are divided into a specified number of subsets such that few edges join two vertices in different subsets. Several new graph partitioning algorithms have been developed in the past few years, and we survey some of this activity. We describe the terminology associated with graph partitioning, the complexity of computing good separators, and graphs that have good separators. We then discuss early algorithms for graph partitioning, followed by three new algorithms based on geometric, algebraic, and multilevel ideas. The algebraic algorithm relies on an eigenvector of a Laplacian matrix associated with the graph to compute the partition. The algebraic algorithm is justified by formulating graph partitioning as a quadratic assignment problem. We list several papers that describe applications of graph partitioning to parallel scientific computing and other applications.
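For context, the algebraic justification mentioned above rests on a standard identity relating the Laplacian quadratic form to the edge cut (stated here in a common form, not verbatim from the paper):

```latex
% For a graph G = (V, E) with Laplacian L and partition indicator x in {-1, +1}^n,
% the number of cut edges is a quadratic form in x:
\[
  x^{\mathsf T} L x \;=\; \sum_{(i,j) \in E} (x_i - x_j)^2
  \;=\; 4 \, \bigl|\{ (i,j) \in E : x_i \neq x_j \}\bigr| .
\]
% Minimizing x^T L x over balanced +/-1 vectors (sum_i x_i = 0) therefore minimizes
% the edge cut; relaxing x to real vectors orthogonal to the all-ones vector gives
% the second Laplacian eigenvector used by the spectral (algebraic) algorithm.
```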
SIAM Journal on Algebraic and Discrete Methods | 1987
Thomas F. Coleman; Alex Pothen
The Null Space Problem is that of finding a sparsest basis for the null space (null basis) of a t × n matrix of rank t. This problem was shown to be NP-hard in Coleman and Pothen (1985). In this paper we develop heuristic algorithms to find sparse null bases. These algorithms have two phases: in the first, combinatorial phase, a minimal dependent set of columns is identified by finding a matching in the bipartite graph of the matrix; in the second, numerical phase, a null vector is computed from this dependent set. We describe an implementation of our algorithms and provide computational results on several large sparse constraint matrices from linear programs. One of our algorithms compares favorably with previously reported algorithms in the sparsity of the computed null bases and in running times. Unlike the latter, our algorithm does not require any intermediate dense matrix storage. This advantage should make our algorithm an attractive candidate for large sparse null basis computations. A matching-based algorithm is designed to find orthogonal null bases, but we present some theoretical evidence that such bases are unlikely to be sparse. Finally, we show how sparsest orthogonal null bases may be found for an …
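A naive illustration, on made-up data, of how a null basis with small support can be assembled column by column: here the candidate dependent sets are simply a fixed column basis plus one extra column, solved densely, rather than the matching-based minimal dependent sets and sparse numerics of the heuristics described above.

```python
# Sketch: build a null basis of a full-row-rank t x n matrix, one vector per
# dependent set consisting of the first t columns plus one remaining column.
import numpy as np

rng = np.random.default_rng(0)
t, n = 4, 7
A = rng.standard_normal((t, n))          # t x n, rank t with probability 1

basis_cols = list(range(t))              # assume the first t columns are independent
null_vectors = []
for j in range(t, n):
    # Solve A[:, basis_cols] @ y = A[:, j]; then (y, -1) is a null vector
    # supported on basis_cols plus column j.
    y = np.linalg.solve(A[:, basis_cols], A[:, j])
    v = np.zeros(n)
    v[basis_cols] = y
    v[j] = -1.0
    null_vectors.append(v)

N = np.column_stack(null_vectors)        # n x (n - t) null basis
print("residual norm of A @ N:", np.linalg.norm(A @ N))
```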
BMC Bioinformatics | 2004
Michael Wagner; Dayanand N. Naik; Alex Pothen; Srinivas Kasukurti; Raghu Ram Devineni; Bao-Ling Adam; O. John Semmes; George L. Wright
SIAM Journal on Algebraic and Discrete Methods | 1986
Thomas F. Coleman; Alex Pothen
SIAM Journal on Scientific and Statistical Computing | 1989
John G. Lewis; Barry W. Peyton; Alex Pothen
Conference on High Performance Computing (Supercomputing) | 1992
Alex Pothen; Horst D. Simon; Lie Wang; Stephen T. Barnard