Publication


Featured research published by David S. Scott.


SIAM Journal on Scientific and Statistical Computing | 1986

Generalizations of Davidson's method for computing eigenvalues of sparse symmetric matrices

Ronald B. Morgan; David S. Scott

This paper analyzes Davidson’s method for computing a few eigenpairs of large sparse symmetric matrices. An explanation is given for why Davidson’s method often performs well but occasionally performs very badly. Davidson’s method is then generalized to a method which offers a powerful way of applying preconditioning techniques developed for solving systems of linear equations to solving eigenvalue problems.
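
As a concrete picture of where preconditioning enters, here is a minimal dense sketch of a Davidson-type iteration for the smallest eigenpair of a symmetric matrix. It is illustrative only, not the paper's implementation; the operator passed as precondition (our name) applies an approximate inverse of (A - theta*I) to the residual, and the diagonal choice in the trailing comment recovers Davidson's original method.

```python
import numpy as np

def davidson(A, precondition, max_iter=50, tol=1e-8):
    """Sketch of a Davidson-type iteration for the smallest eigenpair
    of symmetric A.  precondition(r, theta) applies an approximate
    inverse of (A - theta*I) to the residual r; the choice of this
    operator is exactly what the generalization is about."""
    n = A.shape[0]
    rng = np.random.default_rng(0)
    v = rng.standard_normal(n)
    V = (v / np.linalg.norm(v))[:, None]  # current orthonormal basis
    theta, x = 0.0, V[:, 0]
    for _ in range(max_iter):
        H = V.T @ A @ V                   # Rayleigh-Ritz projection
        evals, S = np.linalg.eigh(H)
        theta, x = evals[0], V @ S[:, 0]  # smallest Ritz pair
        r = A @ x - theta * x             # residual of the Ritz pair
        if np.linalg.norm(r) < tol:
            break
        t = precondition(r, theta)        # preconditioned correction
        t = t - V @ (V.T @ t)             # orthogonalize against V
        norm_t = np.linalg.norm(t)
        if norm_t < 1e-12:                # expansion has stagnated
            break
        V = np.hstack([V, (t / norm_t)[:, None]])
    return theta, x

# Davidson's original choice is the diagonal preconditioner:
#   precondition = lambda r, theta: r / (np.diag(A) - theta)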


SIAM Journal on Scientific Computing | 1993

Preconditioning the Lanczos algorithm for sparse symmetric eigenvalue problems

Ronald B. Morgan; David S. Scott

A method for computing a few eigenpairs of sparse symmetric matrices is presented and analyzed that combines the power of preconditioning techniques with the efficiency of the Lanczos algorithm. The method is related to Davidson’s method and its generalizations, but can be less expensive for matrices that are fairly sparse. A double iteration is used. An effective termination criterion is given for the inner iteration. Quadratic convergence with respect to the outer loop is shown.
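
The double-iteration structure can be illustrated with a toy inner-outer loop, sketched below under simplifying assumptions (a fixed shift sigma with A - sigma*I positive definite, plain shifted inverse iteration rather than the paper's Lanczos coupling, and our function names throughout). The capped inner iteration count echoes the paper's point that an inexact inner solve with a sensible termination criterion is enough for fast outer convergence.

```python
import numpy as np
from scipy.sparse.linalg import cg

def inner_outer_eig(A, sigma, M=None, outer=20, tol=1e-8):
    """Toy inner-outer iteration for the eigenvalue of symmetric A
    nearest a fixed shift sigma, with sigma chosen so that
    A - sigma*I is positive definite.  The inner solve is
    preconditioned CG run only loosely."""
    n = A.shape[0]
    S = A - sigma * np.eye(n)
    x = np.ones(n) / np.sqrt(n)
    theta = x @ (A @ x)
    for _ in range(outer):
        y, _ = cg(S, x, M=M, maxiter=25)  # approximate inner solve
        x = y / np.linalg.norm(y)
        theta = x @ (A @ x)               # Rayleigh-quotient estimate
        if np.linalg.norm(A @ x - theta * x) < tol:
            break
    return theta, x
```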


International Conference on Parallel Processing | 1996

A TeraFLOP supercomputer in 1996: the ASCI TFLOP system

Timothy G. Mattson; David S. Scott; Stephen R. Wheat

To maintain the integrity of the US nuclear stockpile without detonating nuclear weapons, the US Department of Energy (DOE) needs the results of computer simulations that overwhelm the world's most powerful supercomputers. Responding to this need, the DOE initiated the Accelerated Strategic Computing Initiative (ASCI). This program accelerates the development of new scalable supercomputers, with the goal of a TeraFLOP computer before the end of 1996. In September 1995, the DOE announced that it would work with Intel Corporation to build the ASCI TFLOP supercomputer. This system would use commodity commercial off-the-shelf (CCOTS) components to keep the price under control and would contain over 9000 Intel Pentium Pro processors. In this paper, we describe the hardware and software design of this supercomputer.


Communications of the ACM | 1986

TID—a translation invariant data structure for storing images

David S. Scott; S. Sitharama Iyengar

There are a number of techniques for representing pictorial information, among them borders, arrays, and skeletons. Quadtrees are often used to store black-and-white picture information. A variety of techniques have been suggested for improving quadtrees, including linear quadtrees, QMATs (quadtree medial axis transforms), forests of quadtrees, etc. The major purpose of these improvements is to reduce the storage required without greatly increasing the processing costs. All of these methods suffer from the fact that the structure of the underlying quadtree can be very sensitive to the placement of the origin. In this paper we discuss a translation invariant data structure (which we name TID) for storing and processing images, based on the medial axis transform of the image, which consists of all the maximal black squares contained in the image. We also compare the performance of TID with that of other existing structures such as QMATs, forests of quadtrees, and normalized quadtrees. Some discussion of the union and intersection of images using TID is included.
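
The building block of TID is this set of maximal black squares. As a concrete illustration, here is a small sketch (the function name and the naive maximality filter are ours, not the paper's) that enumerates them with the classic largest-square dynamic program; because the squares move rigidly with the image, the resulting set is translation invariant, unlike a quadtree tied to a fixed origin.

```python
import numpy as np

def maximal_black_squares(img):
    """Enumerate the maximal black squares of a binary image (1 = black).

    dp[i, j] is the side of the largest black square whose bottom-right
    corner is (i, j).  A square is maximal if it is not contained in a
    strictly larger black square."""
    img = np.asarray(img, dtype=int)
    m, n = img.shape
    dp = np.zeros((m, n), dtype=int)
    for i in range(m):
        for j in range(n):
            if img[i, j]:
                dp[i, j] = 1 if min(i, j) == 0 else \
                    1 + min(dp[i-1, j], dp[i, j-1], dp[i-1, j-1])
    cands = [(i, j, int(dp[i, j]))
             for i in range(m) for j in range(n) if dp[i, j]]

    def inside(a, b):                      # is square a inside square b?
        (ia, ja, sa), (ib, jb, sb) = a, b
        return (a != b and ia <= ib and ja <= jb
                and ia - sa >= ib - sb and ja - sa >= jb - sb)

    return [s for s in cands if not any(inside(s, t) for t in cands)]

img = [[1, 1, 0],
       [1, 1, 1],
       [0, 1, 1]]
print(maximal_black_squares(img))  # [(1, 1, 2), (2, 2, 2)]
```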


Journal of Approximation Theory | 1984

The complexity of interpolating given data in three space with a convex function of two variables

David S. Scott

Three questions concerning the interpolation of a set (x_i, y_i, z_i), i = 1, …, N, in R^3 by a convex function of two variables (z = f(x, y)) are examined. A given set of N points can be interpolated by a convex function if and only if it can be interpolated by a convex piecewise planar function. Every possible piecewise planar interpolation of the data is determined by a triangulation of the (x, y) points in the plane. An algorithm is presented which builds up an acceptable triangulation by sequentially adding points; it terminates as soon as it discovers that the interpolation is impossible, or terminates with the desired triangulation. Numerical experiments are presented which indicate the effectiveness of the algorithm.

Suppose that the points in the plane, (x_i, y_i), i = 1, …, N, are fixed and it is desired to minimize a function f(z_1, z_2, …, z_N) subject to the constraint that the triples (x_i, y_i, z_i) can be interpolated by a convex function. Convexity can be represented by a set of linear inequality constraints among the z's, but many of these constraints may be redundant. For efficiency it is important to reduce the list to the independent constraints, and some minimization algorithms actually require independent constraints. An efficient algorithm for generating the set of independent linear inequalities is given.

Finally, it is shown that the number of independent constraints depends on the location of the (x, y) points and varies from zero to O(N^3). It is conjectured that the expected number of independent constraints for N points chosen randomly from a uniform distribution on a square is O(N^2 log N). Both theoretical and numerical justification for the conjecture are given. It is also shown that there are O(N^2) independent constraints when the points are arranged in a square (or triangular) lattice.
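
The existence question has a geometric reading that can be tested directly: the data admit a convex interpolant exactly when every point lies on the lower convex hull of the 3-D point set, i.e. when no point floats strictly above it. Below is a minimal feasibility check along those lines using scipy's Qhull wrapper; it is not the paper's incremental triangulation algorithm, and the function name and tolerance handling are ours. It assumes enough points in general position for a full-dimensional hull.

```python
import numpy as np
from scipy.spatial import ConvexHull

def convex_interpolable(pts, tol=1e-9):
    """Can the triples (x_i, y_i, z_i) be interpolated by a convex
    function z = f(x, y)?  Yes iff no point lies strictly above the
    lower convex hull of the 3-D point set."""
    pts = np.asarray(pts, dtype=float)
    hull = ConvexHull(pts)
    # Rows of hull.equations are (a, b, c, d) with a*x + b*y + c*z + d <= 0
    # inside the hull; lower-hull facets are those with c < 0.
    low = hull.equations[hull.equations[:, 2] < -tol]
    for x, y, z in pts:
        # The lower hull, as a convex piecewise-linear function of (x, y),
        # is the pointwise max of its facet planes.
        z_hull = (-(low[:, 0]*x + low[:, 1]*y + low[:, 3]) / low[:, 2]).max()
        if z > z_hull + tol:
            return False      # this point floats above the lower hull
    return True

corners = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
print(convex_interpolable(corners + [(0.5, 0.5, -1.0)]))  # True: a convex dip
print(convex_interpolable(corners + [(0.5, 0.5, 1.0)]))   # False: a bump
```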


Linear Algebra and its Applications | 1983

On the Accuracy of the Gerschgorin Circle Theorem for Bounding the Spread of a Symmetric Matrix

David S. Scott

The spread of a matrix with real eigenvalues is the difference between its largest and smallest eigenvalues. The Gerschgorin circle theorem can be used to bound the extreme eigenvalues of the matrix and hence its spread. For nonsymmetric matrices the Gerschgorin bound on the spread may be larger by an arbitrary factor than the actual spread, even if the matrix is symmetrizable. This is not true for real symmetric matrices. It is shown that for real symmetric matrices (or complex Hermitian matrices) the ratio between the bound and the spread is bounded by p + 1, where p is the maximum number of off-diagonal nonzeros in any row of the matrix. For full matrices this is just n. This bound is not quite sharp for n greater than 2, but examples with ratios of n − 1 for all n are given. For banded matrices with m nonzero bands the maximum ratio is bounded by m, independent of the matrix order n. This bound is sharp provided only that n is at least 2m. For sparse matrices, p may be quite small and the Gerschgorin bound may be surprisingly accurate.
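
A small worked example (the function name is ours) shows the bound in action on a symmetric tridiagonal matrix, where p = 2 and the theorem guarantees a ratio of at most 3.

```python
import numpy as np

def gerschgorin_spread_bound(A):
    """Bound spread(A) = lambda_max - lambda_min via Gerschgorin discs."""
    d = np.diag(A)
    r = np.abs(A).sum(axis=1) - np.abs(d)   # off-diagonal row sums (radii)
    return (d + r).max() - (d - r).min()

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
eigs = np.linalg.eigvalsh(A)
spread = eigs[-1] - eigs[0]                  # 2*sqrt(3) ~ 3.464
print(gerschgorin_spread_bound(A), spread)   # bound 4.0, ratio well under 3
```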


Linear Algebra and its Applications | 1985

Parallel Block Jacobi Eigenvalue Algorithms Using Systolic Arrays

David S. Scott; Michael T. Heath; Robert C. Ward

Systolic arrays have become established in principle, if not yet in practice, as a way of increasing computational speed for problems in linear algebra, such as computing eigenvalues, by exploiting low-level parallelism. Any systolic device that is actually implemented in silicon will necessarily have an upper limit on the size of problem it can solve. In this paper we consider the questions of whether such a systolic device for symmetric eigenvalue problems can be used to solve larger problems, and if so, whether such use would place any constraints on the design of the underlying systolic array. We will see that the answer to both questions is yes, although the restrictions on the systolic array are mild. Our approach is to partition a large matrix into blocks that are small enough to be processed by the systolic device, and a Jacobi-like process is then carried out at the block level. If multiple copies of the systolic device are available, then this block Jacobi process can itself be implemented with a higher-level parallelism. To make any progress it is necessary to reorder the matrix between calls to the systolic arrays. We consider several possible reordering strategies.
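
To make the block-level process concrete, the following serial sketch performs one block Jacobi sweep, with np.linalg.eigh standing in for the systolic eigensolver on each 2b-by-2b subproblem; the parallel scheduling and inter-call reordering strategies the paper studies are omitted, and the block size is assumed to divide the matrix order.

```python
import numpy as np

def block_jacobi_sweep(A, b):
    """One serial sweep of a block Jacobi eigenvalue iteration on
    symmetric A.  Each 2b-by-2b subproblem is diagonalized exactly
    (np.linalg.eigh stands in for the systolic device) and the
    resulting orthogonal transform is applied to the full matrix.
    Assumes the block size b divides the matrix order."""
    A = np.array(A, dtype=float)          # work on a copy
    n = A.shape[0]
    for p in range(n // b):
        for q in range(p + 1, n // b):
            idx = np.r_[p*b:(p+1)*b, q*b:(q+1)*b]
            sub = A[np.ix_(idx, idx)]
            _, V = np.linalg.eigh(sub)    # diagonalize the subproblem
            A[:, idx] = A[:, idx] @ V     # update the two block columns ...
            A[idx, :] = V.T @ A[idx, :]   # ... then the matching block rows
    return A
```

Repeated sweeps drive the off-diagonal blocks toward zero, leaving approximations to the eigenvalues on the diagonal.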


SIAM Journal on Scientific and Statistical Computing | 1982

Solving Symmetric-Definite Quadratic λ-Matrix Problems without Factorization

David S. Scott; Robert C. Ward

Algorithms are presented for computing some of the eigenvalues and their associated eigenvectors of the quadratic λ-matrix problem.
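
For context on the problem being solved (not the paper's factorization-free algorithm): a quadratic λ-matrix problem, written here as (λ²M + λC + K)x = 0, is conventionally reduced to a linear generalized eigenproblem of twice the order. A minimal sketch with hypothetical small dense matrices follows; the paper's contribution is precisely to avoid the factorizations this reduction invites in the large sparse case.

```python
import numpy as np

# Hypothetical small symmetric test matrices; the paper targets the
# large sparse case where factoring these is the bottleneck.
M = np.eye(3)
C = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
K = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])

n = M.shape[0]
Z, I = np.zeros((n, n)), np.eye(n)
# Companion linearization: (lam**2 M + lam C + K) x = 0 becomes
# L z = lam R z with z = [lam*x; x].
L = np.block([[-C, -K], [I, Z]])
R = np.block([[M, Z], [Z, I]])
lam = np.linalg.eigvals(np.linalg.solve(R, L))
print(np.sort_complex(lam))
```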


Communications in Statistics - Theory and Methods | 1987

λ

Toby J. Mitchell; David S. Scott

λ


Computer Physics Communications | 1989

λ-Matrix Problems without Factorization

David S. Scott

λ-matrix

Collaboration


Dive into David S. Scott's collaborations.

Top Co-Authors

Robert C. Ward, Oak Ridge National Laboratory
S. Sitharama Iyengar, Florida International University
Stephen R. Wheat, Sandia National Laboratories
A.K. Rajagopal, Louisiana State University
Mohan Yellayi, Louisiana State University
Robert E. Wyatt, University of Texas at Austin