Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Elizabeth R. Jessup is active.

Publication


Featured research published by Elizabeth R. Jessup.


SIAM Review | 1999

Matrices, Vector Spaces, and Information Retrieval

Michael W. Berry; Zlatko Drmac; Elizabeth R. Jessup

The evolution of digital libraries and the Internet has dramatically transformed the processing, storage, and retrieval of information. Efforts to digitize text, images, video, and audio now consume a substantial portion of both academic and industrial activity. Even when there is no shortage of textual materials on a particular topic, procedures for indexing or extracting the knowledge or conceptual information contained in them can be lacking. Recently developed information retrieval technologies are based on the concept of a vector space. Data are modeled as a matrix, and a user's query of the database is represented as a vector. Relevant documents in the database are then identified via simple vector operations. Orthogonal factorizations of the matrix provide mechanisms for handling uncertainty in the database itself. The purpose of this paper is to show how such fundamental mathematical concepts from linear algebra can be used to manage and index large text collections.
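The vector space model the abstract describes can be sketched in a few lines of NumPy: documents are columns of a term-document matrix, the query is a vector in the same term space, and relevance is scored by cosine similarity. The terms and weights below are purely hypothetical, chosen to make the example small.

```python
import numpy as np

# Hypothetical term-document matrix: rows are terms, columns are documents,
# and entry (i, j) is the weight of term i in document j.
A = np.array([
    [1.0, 0.0, 1.0],   # "matrix"
    [1.0, 1.0, 0.0],   # "vector"
    [0.0, 1.0, 1.0],   # "retrieval"
])

# A query about "matrix" and "vector", expressed in the same term space.
q = np.array([1.0, 1.0, 0.0])

# Cosine similarity between the query and each document column.
scores = (A.T @ q) / (np.linalg.norm(A, axis=0) * np.linalg.norm(q))
ranked = np.argsort(-scores)   # best-matching documents first
```

Here document 0 contains both query terms and scores highest; the orthogonal factorizations the paper discusses (e.g., a truncated SVD of A) refine this same similarity computation.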


SIAM Journal on Matrix Analysis and Applications | 2005

A Technique for Accelerating the Convergence of Restarted GMRES

Allison H. Baker; Elizabeth R. Jessup; Thomas A. Manteuffel

We have observed that the residual vectors at the end of each restart cycle of restarted GMRES often alternate direction in a cyclic fashion, thereby slowing convergence. We present a new technique for accelerating the convergence of restarted GMRES by disrupting this alternating pattern. The new algorithm resembles a full conjugate gradient method with polynomial preconditioning, and its implementation requires minimal changes to the standard restarted GMRES algorithm.
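The baseline algorithm being accelerated here, restarted GMRES(m), can be sketched in plain NumPy. This minimal version (not the authors' accelerated variant) builds an m-step Arnoldi basis, solves the small least-squares problem for the residual-minimizing correction, and restarts from the updated iterate:

```python
import numpy as np

def gmres_restarted(A, b, m=5, max_restarts=100, tol=1e-10):
    """Minimal GMRES(m): Arnoldi to dimension m, small least-squares
    solve, then restart from the updated iterate."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_restarts):
        r = b - A @ x
        beta = np.linalg.norm(r)
        if beta < tol:
            break
        Q = np.zeros((n, m + 1))      # orthonormal Krylov basis
        H = np.zeros((m + 1, m))      # Hessenberg projection of A
        Q[:, 0] = r / beta
        k = m
        for j in range(m):
            w = A @ Q[:, j]
            for i in range(j + 1):    # modified Gram-Schmidt
                H[i, j] = Q[:, i] @ w
                w -= H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-14:   # lucky breakdown: solution lies in span
                k = j + 1
                break
            Q[:, j + 1] = w / H[j + 1, j]
        # Minimize ||beta*e1 - H*y|| over the Krylov subspace.
        e1 = np.zeros(k + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
        x = x + Q[:, :k] @ y
    return x
```

The alternating-residual stagnation the paper addresses arises precisely because each restart discards the Krylov basis Q and begins again from r alone.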


SIAM Journal on Scientific and Statistical Computing | 1990

Solving the symmetric tridiagonal eigenvalue problem on the hypercube

Ilse C. F. Ipsen; Elizabeth R. Jessup

This paper describes implementations of Cuppen's method, bisection, and multisection for the computation of all eigenvalues and eigenvectors of a real symmetric tridiagonal matrix on a distributed-memory hypercube multiprocessor. Numerical results and timings for Intel's iPSC-1 are presented. Cuppen's method is found to be the numerically most accurate of the three methods, while bisection with inverse iteration is observed experimentally to be the fastest method.
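The bisection approach mentioned here rests on a Sturm-sequence count: for a symmetric tridiagonal matrix, the number of negative pivots in the LDLᵀ factorization of T − xI equals the number of eigenvalues below x. A minimal serial sketch (not the parallel hypercube implementation) might look like:

```python
import numpy as np

def count_below(d, e, x):
    """Number of eigenvalues less than x for the symmetric tridiagonal
    matrix with diagonal d and off-diagonal e (Sturm-sequence count)."""
    count, q = 0, 1.0
    for i in range(len(d)):
        off = e[i - 1] ** 2 / q if i > 0 else 0.0
        q = d[i] - x - off
        if q == 0.0:                  # simple safeguard against breakdown
            q = -np.finfo(float).eps
        if q < 0:
            count += 1
    return count

def kth_eigenvalue(d, e, k, tol=1e-12):
    """Bisection for the k-th smallest eigenvalue (k = 1, 2, ...)."""
    # Gershgorin discs enclose the whole spectrum; widen slightly for safety.
    r = np.abs(np.concatenate(([0.0], e))) + np.abs(np.concatenate((e, [0.0])))
    lo, hi = np.min(d - r) - 1.0, np.max(d + r) + 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_below(d, e, mid) >= k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

The method parallelizes naturally because disjoint subintervals of the spectrum can be searched independently, which is what makes it attractive on a hypercube.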


SIAM Journal on Matrix Analysis and Applications | 1994

A Parallel Algorithm for Computing the Singular Value Decomposition of a Matrix

Elizabeth R. Jessup; D. C. Sorensen

A parallel algorithm for computing the singular value decomposition of a matrix is presented. The algorithm uses a divide and conquer procedure based on a rank one modification of a bidiagonal matrix. Numerical difficulties associated with forming the product of a matrix with its transpose are avoided, and numerically stable formulae for obtaining the left singular vectors after computing updated right singular vectors are derived. A deflation technique is described that, together with a robust root finding method, assures computation of the singular values to full accuracy in the residual and also assures orthogonality of the singular vectors.
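The numerical difficulty of forming the product of a matrix with its transpose, which this algorithm avoids, is easy to reproduce: squaring the matrix squares its singular values, so any singular value below roughly the square root of machine epsilon is destroyed. A small demonstration with a Läuchli-type matrix (the matrix is just an illustrative choice):

```python
import numpy as np

eps = 1e-10                      # representable, but eps**2 vanishes next to 1
A = np.array([[1.0, 1.0],
              [eps, 0.0],
              [0.0, eps]])       # exact singular values: sqrt(2 + eps**2), eps

# Stable route: the SVD of A itself recovers the small singular value.
sigma_direct = np.linalg.svd(A, compute_uv=False)

# Unstable route: in floating point 1 + eps**2 rounds to 1, so the cross
# product A.T @ A becomes [[1, 1], [1, 1]] and the small eigenvalue eps**2
# is lost before the eigensolver ever sees it.
evals = np.linalg.eigvalsh(A.T @ A)
sigma_cross = np.sqrt(np.maximum(evals, 0.0))[::-1]   # descending order
```

The direct SVD returns the small singular value to near machine accuracy, while the cross-product route returns it with a relative error of order one.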


IEEE International Conference on High Performance Computing, Data and Analytics | 2009

Automating the generation of composed linear algebra kernels

Geoffrey Belter; Elizabeth R. Jessup; Ian Karlin; Jeremy G. Siek

Memory bandwidth limits the performance of important kernels in many scientific applications. Such applications often use sequences of Basic Linear Algebra Subprograms (BLAS), and highly efficient implementations of those routines enable scientists to achieve high performance at little cost. However, tuning the BLAS in isolation misses opportunities for memory optimization that result from composing multiple subprograms. Because it is not practical to create a library of all BLAS combinations, we have developed a domain-specific compiler that generates them on demand. In this paper, we describe a novel algorithm for compiling linear algebra kernels and searching for the best combination of optimization choices. We also present a new hybrid analytic/empirical method for quickly evaluating the profitability of each optimization. We report experimental results showing speedups of up to 130% relative to the GotoBLAS on an AMD Opteron and up to 137% relative to MKL on an Intel Core 2.
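The memory optimization that motivates composing BLAS calls can be illustrated with loop fusion: computing a residual and its norm as two separate kernels streams the intermediate vector through memory twice, while a fused kernel consumes each element while it is still in cache. A schematic in pure Python (it shows the structure of the transformation, not the speed; the function names are this sketch's own):

```python
import numpy as np

def residual_then_norm(A, x, b):
    """Two separate 'BLAS calls': a matrix-vector product, then a dot
    product. The intermediate vector r is written out and re-read."""
    r = b - A @ x
    return r, r @ r

def fused_residual_norm(A, x, b):
    """Fused version: each component of r feeds the dot product in the
    same pass over A's rows, so r is traversed only once."""
    n = A.shape[0]
    r = np.empty(n)
    rho = 0.0
    for i in range(n):
        ri = b[i] - A[i, :] @ x
        r[i] = ri
        rho += ri * ri
    return r, rho

# Small illustrative operands.
A = np.arange(12.0).reshape(3, 4) % 5
x = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([1.0, 1.0, 1.0])
r_two, rho_two = residual_then_norm(A, x, b)
r_one, rho_one = fused_residual_norm(A, x, b)
```

A compiler like the one described here searches among many such fusions of a whole kernel sequence, which is impractical to enumerate in a fixed library.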


SIAM Journal on Scientific and Statistical Computing | 1992

Improving the accuracy of inverse iteration

Elizabeth R. Jessup; Ilse C. F. Ipsen

The EISPACK routine TINVIT is an implementation of inverse iteration for computing eigenvectors of real symmetric tridiagonal matrices. Experiments have shown that the eigenvectors computed with TINVIT are numerically less accurate than those from implementations of Cuppen’s divide-and-conquer method (TREEQL) and of the QL method (TQL2). The loss of accuracy can be attributed to TINVIT’s choice of starting vectors and to its iteration stopping criterion. This paper introduces a new implementation of TINVIT that computes each eigenvector from a different random starting vector and performs an additional iteration after the stopping criterion is satisfied. A statistical analysis and the results of numerical experiments with matrices of order up to 525 are presented to show that the numerical accuracy of this new implementation is competitive with that of the implementations of the divide-and-conquer and QL methods. The extension of this implementation to larger-order matrices is discussed, albeit in less detail.
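Inverse iteration itself is short: given an accurate eigenvalue estimate mu, repeatedly solve (T − mu·I)v = v and normalize; each solve amplifies the component along the nearby eigenvector. A dense NumPy sketch incorporating the two fixes the paper describes, a random starting vector and an extra iteration (for clarity it uses a general solver rather than a tridiagonal one, and the tiny shift is this sketch's own safeguard):

```python
import numpy as np

def inverse_iteration(T, mu, iters=3, seed=0):
    """Eigenvector of symmetric T for the eigenvalue nearest mu."""
    n = T.shape[0]
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)       # random starting vector, per the paper
    v /= np.linalg.norm(v)
    # A tiny shift keeps T - mu*I numerically nonsingular when mu is
    # (nearly) exact; the resulting ill-conditioning is what inverse
    # iteration exploits.
    M = T - (mu + 1e-12) * np.eye(n)
    for _ in range(iters):           # iters > needed gives the 'extra' step
        v = np.linalg.solve(M, v)
        v /= np.linalg.norm(v)
    return v
```

The closer mu is to a true eigenvalue relative to the gap to its neighbors, the faster the amplification, which is why a good eigenvalue from bisection makes one or two iterations suffice.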


Physics Today | 1996

An introduction to high-performance scientific computing

Lloyd D. Fosdick; Elizabeth R. Jessup; Carolyn J. C. Schauble; Gitta Domik

An overview of scientific computing: introduction; large-scale scientific problems; the scientific computing environment; workstations; supercomputers; further reading.
Part 1, Background: a review of selected topics from numerical analysis (notation, error, floating-point numbers, Taylor series, linear algebra, differential equations, Fourier series); IEEE arithmetic short reference (single precision, double precision, rounding, infinity, NaN, and zero; of things not said; further reading); UNIX, vi, and ftp, a quick review (UNIX short reference, vi short reference, ftp short reference); elements of UNIX make (introduction, an example of using make, some advantages of make, the makefile, further examples, dynamic macros, user-defined macros, additional features, other examples, a makefile for C, creating your own makefile, further information, a makefile for Fortran modules, a makefile for C modules); elements of Fortran (introduction, overview, definitions and basic rules, description of statements, reading and writing, examples).
Part 2, Tools: elements of MATLAB (what is MATLAB?, getting started, some examples, short outline of the language, built-in functions, MATLAB scripts and user-defined functions, input/output, graphics, that's it!); elements of IDL (getting started, exploring the basic concepts, plotting, programming in IDL, input/output, using IDL efficiently, summary); elements of AVS (basic concepts, AVS graphical programming with the Network editor, the geometry viewer, AVS applications, further reading).
Part 3, Scientific visualization: definitions and goals of scientific visualization; history of scientific visualization; example of scientific visualization; concepts of scientific visualization; visual cues; characterization of scientific data; visualization techniques; annotations; interactivity; interpretation goals to pursue with visualization; quantitative versus qualitative data interpretation.
Part 4, Architectures: computer performance (introduction and background, computer performance, benchmarks, the effect of optimizing compilers, other architectural factors, vector and parallel computers, summary). (Part contents.)


International Parallel and Distributed Processing Symposium | 2008

Build to order linear algebra kernels

Jeremy G. Siek; Ian Karlin; Elizabeth R. Jessup

The performance bottleneck for many scientific applications is the cost of memory access inside linear algebra kernels. Tuning such kernels for memory efficiency is a complex task that reduces the productivity of computational scientists. Software libraries such as the Basic Linear Algebra Subprograms (BLAS) ameliorate this problem by providing a standard interface for which computer scientists and hardware vendors have created highly tuned implementations. Scientific applications often require a sequence of BLAS operations, which presents further opportunities for memory optimization. However, because BLAS are tuned in isolation they do not take advantage of these opportunities. This phenomenon motivated the recent addition to the BLAS of several routines that perform sequences of operations. Unfortunately, the exact sequence of operations needed in a given situation is highly application dependent, so many more routines are needed. In this paper we present preliminary work on a domain-specific compiler that generates implementations for arbitrary sequences of basic linear algebra operations and tunes them for memory efficiency. We report experimental results for dense kernels and show speedups of 25% to 120% relative to sequences of calls to GotoBLAS and vendor-tuned BLAS on Intel Xeon and IBM PowerPC platforms.


SIAM Journal on Scientific Computing | 2005

On Improving Linear Solver Performance: A Block Variant of GMRES

Allison H. Baker; John M. Dennis; Elizabeth R. Jessup

The increasing gap between processor performance and memory access time warrants the re-examination of data movement in iterative linear solver algorithms. For this reason, we explore and establish the feasibility of modifying a standard iterative linear solver algorithm in a manner that reduces the movement of data through memory. In particular, we present an alternative to the restarted GMRES algorithm for solving a single right-hand side linear system Ax = b.
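The data-movement argument behind a block variant is that the matrix must be streamed from memory once per single-vector product, but only once for a whole block of vectors: grouping s vectors turns s BLAS-2 passes over A into one BLAS-3 pass in which each matrix entry is reused while in cache. A small sketch of the equivalence (pure NumPy, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 200))
V = rng.standard_normal((200, 4))    # a block of 4 Krylov-style vectors

# Four separate matrix-vector products: A is read from memory four times.
W_single = np.column_stack([A @ V[:, j] for j in range(V.shape[1])])

# One block (matrix-matrix) product: A is read once, and each entry is
# applied to all four vectors before being evicted from cache.
W_block = A @ V
```

The two results are mathematically identical; the difference lies entirely in how many times A moves through the memory hierarchy.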


Applied Numerical Mathematics | 1993

A case against a divide and conquer approach to the nonsymmetric eigenvalue problem

Elizabeth R. Jessup


Collaboration


Dive into Elizabeth R. Jessup's collaboration.

Top Co-Authors

Silvia A. Crivelli, Lawrence Berkeley National Laboratory
Allison H. Baker, National Center for Atmospheric Research
John M. Dennis, National Center for Atmospheric Research
Ian Karlin, University of Colorado Boulder
Geoffrey Belter, University of Colorado Boulder
Lloyd D. Fosdick, University of Colorado Boulder
Bruce Hendrickson, Sandia National Laboratories