Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Andreas Stathopoulos is active.

Publication


Featured research published by Andreas Stathopoulos.


Computer Physics Communications | 1994

A Davidson program for finding a few selected extreme eigenpairs of a large, sparse, real, symmetric matrix

Andreas Stathopoulos; Charlotte Froese Fischer

A program is presented for determining a few selected eigenvalues and their eigenvectors on either end of the spectrum of a large, real, symmetric matrix. Based on the Davidson method, which is extensively used in quantum chemistry/physics, the current implementation improves the power of the original algorithm by adopting several extensions. The matrix-vector multiplication routine that it requires is to be provided by the user. Different matrix formats and optimizations are thus feasible. Examples of an efficient sparse matrix representation and a matrix-vector multiplication are given. Some comparisons with the Lanczos method demonstrate the efficiency of the program.
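
The Davidson iteration the paper implements can be sketched in a few lines. The code below is only a minimal dense illustration, not the paper's program: it targets the smallest eigenpairs, uses the matrix diagonal as the classical Davidson preconditioner, and omits the restarting and sparse-format machinery the article describes. All names are illustrative.

```python
import numpy as np

def davidson(A, k=1, tol=1e-8, max_iter=100):
    """Minimal Davidson sketch for the k smallest eigenpairs of a real
    symmetric matrix A (dense here for clarity; the paper's code only
    needs a user-supplied matrix-vector product)."""
    n = A.shape[0]
    d = np.diag(A).astype(float)        # diagonal used as preconditioner
    V = np.eye(n, k)                    # initial search space: unit vectors
    for _ in range(max_iter):
        V, _ = np.linalg.qr(V)          # keep the search basis orthonormal
        H = V.T @ A @ V                 # Rayleigh-Ritz projection
        theta, s = np.linalg.eigh(H)
        theta, s = theta[:k], s[:, :k]  # k smallest Ritz values
        X = V @ s                       # Ritz vectors
        R = A @ X - X * theta           # residuals, one column per pair
        if np.linalg.norm(R) < tol:
            return theta, X
        # Davidson correction: diagonal (Jacobi) preconditioning of residuals
        for j in range(k):
            denom = d - theta[j]
            denom[np.abs(denom) < 1e-12] = 1e-12   # guard tiny denominators
            V = np.column_stack([V, R[:, j] / denom])
    return theta, X
```

Diagonal preconditioning works best for diagonally dominant matrices, which is the regime the original Davidson method was designed for.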


symposium on the theory of computing | 2000

Smoothing and cleaning up slivers

Herbert Edelsbrunner; Xiang-Yang Li; Gary L. Miller; Andreas Stathopoulos; Dafna Talmor; Shang-Hua Teng; Alper Üngör; Noel J. Walkington

A sliver is a tetrahedron whose four vertices lie close to a plane and whose perpendicular projection to that plane is a convex quadrilateral with no short edge. Slivers are both undesirable and ubiquitous in 3-dimensional Delaunay triangulations. Even when the point set is well-spaced, slivers may result. This paper shows that such a point set permits a small perturbation whose Delaunay triangulation contains no slivers. It also gives deterministic algorithms that compute the perturbation of n points in time O(n log n) with one processor and in time O(log n) with O(n) processors.
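
The sliver definition suggests a simple numerical quality measure. The function below is an illustrative check, not the paper's algorithm: it computes the tetrahedron's volume normalized by its edge lengths, which is near zero for flat tetrahedra; a tetrahedron with tiny normalized volume but no short edge is a sliver in the paper's sense. The classification threshold is a free parameter, not fixed here.

```python
import numpy as np

def sliver_measure(p0, p1, p2, p3):
    """Normalized volume of a tetrahedron: 6*V divided by the cube of the
    RMS edge length. Illustrative quality measure; values near zero
    indicate a degenerate (possibly sliver) tetrahedron."""
    pts = np.array([p0, p1, p2, p3], dtype=float)
    vol6 = abs(np.linalg.det(pts[1:] - pts[0]))      # 6 * unsigned volume
    edges = [np.linalg.norm(pts[i] - pts[j])
             for i in range(4) for j in range(i + 1, 4)]
    lrms = np.sqrt(np.mean(np.square(edges)))
    return vol6 / lrms**3
```

For example, a regular tetrahedron scores well, while four nearly coplanar points arranged as a square (all six edges of comparable length, hence no short edge) score near zero.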


SIAM Journal on Scientific Computing | 2010

Computing and Deflating Eigenvalues While Solving Multiple Right-Hand Side Linear Systems with an Application to Quantum Chromodynamics

Andreas Stathopoulos; Konstantinos Orginos

We present a new algorithm that computes eigenvalues and eigenvectors of a Hermitian positive definite matrix while solving a linear system of equations with conjugate gradient (CG). Traditionally, all the CG iteration vectors could be saved and recombined through the eigenvectors of the tridiagonal projection matrix, which is equivalent theoretically to unrestarted Lanczos. Our algorithm capitalizes on the iteration vectors produced by CG to update only a small window of vectors that approximate the eigenvectors. While this window is restarted in a locally optimal way, the CG algorithm for the linear system is unaffected. Yet, in all our experiments, if the window has more than a properly chosen but small number of vectors, it converges to the required eigenvectors at a rate identical to unrestarted Lanczos. After the solution of the linear system, eigenvectors that have not accurately converged can be improved in an incremental fashion by solving additional linear systems. In this case, eigenvectors identified in earlier systems can be used to deflate, and thus accelerate, the convergence of subsequent systems. We have used this algorithm with excellent results in lattice quantum chromodynamics applications, where hundreds of right-hand sides may be needed. Specifically, about 70 eigenvectors are obtained to full accuracy after solving 24 right-hand sides. Deflating these from the large number of subsequent right-hand sides removes the dreaded critical slowdown, where the conditioning of the matrix increases as the quark mass reaches a critical value. Our experiments show almost a constant number of iterations for our method, regardless of quark mass, and speedups of 8 over original CG for light quark masses.
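
The deflation half of the algorithm is easy to illustrate. The sketch below is not the authors' method (the incremental eigenvector computation inside CG is omitted); it only shows how eigenvectors recovered from earlier solves can be projected out of the initial guess of a later right-hand side, which is the source of the speedups the abstract reports. The function name and the Galerkin-projection initial guess are illustrative choices.

```python
import numpy as np

def deflated_cg(A, b, U, tol=1e-10, max_iter=500):
    """Plain CG started from a deflated initial guess. U holds (approximate)
    eigenvectors of the SPD matrix A obtained from earlier solves; the
    Galerkin projection removes their components from the initial error.
    Returns the solution and the iteration count."""
    x = U @ np.linalg.solve(U.T @ A @ U, U.T @ b) if U.size else np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rr = r @ r
    for it in range(max_iter):
        if np.sqrt(rr) < tol * np.linalg.norm(b):
            break
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rr_new = r @ r
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x, it
```

With the smallest eigenvectors deflated, the effective condition number of the remaining error is much smaller, so CG needs far fewer iterations, mirroring the removal of critical slowdown described above.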


ACM Transactions on Mathematical Software | 2010

PRIMME: preconditioned iterative multimethod eigensolver—methods and software description

Andreas Stathopoulos; James R. McCombs

This article describes the PRIMME software package for solving large, sparse Hermitian standard eigenvalue problems. The difficulty and importance of these problems have increased over the years, necessitating the use of preconditioning and near optimally converging iterative methods. However, the complexity of tuning or even using such methods has kept them outside the reach of many users. Responding to this problem, we have developed PRIMME, a comprehensive package that brings state-of-the-art methods from “bleeding edge” to production, with the best possible robustness, efficiency, and a flexible, yet highly usable interface that requires minimal or no tuning. We describe (1) the PRIMME multimethod framework that implements a variety of algorithms, including the near optimal methods GD+k and JDQMR; (2) a host of algorithmic innovations and implementation techniques that endow the software with its robustness and efficiency; (3) a multilayer interface that captures our experience and addresses the needs of both expert and end users.


SIAM Journal on Scientific Computing | 2001

A Block Orthogonalization Procedure with Constant Synchronization Requirements

Andreas Stathopoulos; Kesheng Wu

We propose an alternative orthonormalization method that computes the orthonormal basis from the right singular vectors of a matrix. Its advantages are: (a) all operations are matrix-matrix multiplications and thus cache-efficient, (b) only one synchronization point is required in parallel implementations, (c) it could be more stable than Gram-Schmidt. In addition, we consider the problem of incremental orthonormalization, where a block of vectors is orthonormalized against a previously orthonormal set of vectors and among itself. We solve this problem by alternating iteratively between a phase of Gram-Schmidt and a phase of the new method. We provide error analysis and use it to derive bounds on how accurately the two successive orthonormalization phases should be performed to minimize total work performed. Our experiments confirm the favorable numerical behavior of the new method and its effectiveness on modern parallel computers.
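
The core idea of computing the orthonormal basis from the right singular vectors can be sketched directly via the small Gram matrix V^T V, whose single global reduction is what gives the constant synchronization count. The drop tolerance and the initial column scaling below are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def svqb(V, eps=1e-15):
    """Orthonormalize the columns of V from the eigendecomposition of the
    small Gram matrix V^T V (equivalently, the right singular vectors of V).
    Only matrix-matrix products and one k-by-k eigenproblem are needed, so a
    parallel implementation requires a single reduction (the Gram matrix)."""
    D = 1.0 / np.sqrt(np.diag(V.T @ V))     # scale columns to unit norm first
    S = (V * D).T @ (V * D)                 # small k-by-k Gram matrix
    lam, U = np.linalg.eigh(S)
    keep = lam > eps * lam[-1]              # drop numerically dependent dirs
    return (V * D) @ U[:, keep] / np.sqrt(lam[keep])
```

For ill-conditioned blocks, one or two repetitions of this step restore orthonormality to machine precision, which is the role the alternating Gram-Schmidt/new-method iteration plays in the paper.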


SIAM Journal on Scientific Computing | 2007

Nearly Optimal Preconditioned Methods for Hermitian Eigenproblems under Limited Memory. Part I: Seeking One Eigenvalue

Andreas Stathopoulos

Large, sparse, Hermitian eigenvalue problems are still some of the most computationally challenging tasks. Despite the need for a robust, nearly optimal preconditioned iterative method that can operate under severe memory limitations, no such method has surfaced as a clear winner. In this research we approach the eigenproblem from the nonlinear perspective, which helps us develop two nearly optimal methods. The first extends the recent Jacobi-Davidson conjugate gradient (JDCG) method to JDQMR, improving robustness and efficiency. The second method, generalized-Davidson+1 (GD+1), utilizes the locally optimal conjugate gradient recurrence as a restarting technique to achieve almost optimal convergence. We describe both methods within a unifying framework and provide theoretical justification for their near optimality. A choice between the most efficient of the two can be made at runtime. Our extensive experiments confirm the robustness, the near optimality, and the efficiency of our multimethod over other state-of-the-art methods.
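
The locally optimal recurrence behind GD+1 can be illustrated for a single sought eigenvalue. The sketch below performs Rayleigh-Ritz over span{x, preconditioned residual, previous direction} (essentially single-vector LOBPCG); it is not the full GD+1 or JDQMR method, and all names are illustrative.

```python
import numpy as np

def lo_min_eig(A, M_inv=None, tol=1e-9, max_iter=200):
    """Locally optimal three-term recurrence for the smallest eigenpair of a
    symmetric matrix A. M_inv applies a preconditioner to the residual
    (identity if None). Returns the Rayleigh quotient and the iterate."""
    n = A.shape[0]
    if M_inv is None:
        M_inv = lambda r: r
    x = np.random.default_rng(1).standard_normal(n)
    x /= np.linalg.norm(x)
    p = None                                  # previous search direction
    for _ in range(max_iter):
        r = A @ x - (x @ A @ x) * x           # eigenresidual
        if np.linalg.norm(r) < tol:
            break
        cols = [x, M_inv(r)] + ([p] if p is not None else [])
        V, _ = np.linalg.qr(np.column_stack(cols))
        H = V.T @ A @ V                       # Rayleigh-Ritz on the 3-space
        _, S = np.linalg.eigh(H)
        x_new = V @ S[:, 0]                   # minimizer of Rayleigh quotient
        p = x_new - (x @ x_new) * x           # step component orthogonal to x
        x = x_new
    return x @ A @ x, x
```

GD+1 keeps this recurrence inside a restarted Davidson basis, which is how it achieves nearly unrestarted convergence under a fixed memory budget.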


Bit Numerical Mathematics | 1996

Solution of large eigenvalue problems in electronic structure calculations

Yousef Saad; Andreas Stathopoulos; James R. Chelikowsky; Kesheng Wu; Serdar Ogut

Predicting the structural and electronic properties of complex systems is one of the outstanding problems in condensed matter physics. Central to most methods used in molecular dynamics is the repeated solution of large eigenvalue problems. This paper reviews the source of these eigenvalue problems, describes some techniques for solving them, and addresses the difficulties and challenges which are faced. Parallel implementations are also discussed.


Journal of Computational and Applied Mathematics | 1995

Robust preconditioning of large, sparse, symmetric eigenvalue problems

Andreas Stathopoulos; Yousef Saad; Charlotte Froese Fischer

Iterative methods for solving large, sparse, symmetric eigenvalue problems often encounter convergence difficulties because of ill-conditioning. The generalized Davidson method is a well-known technique which uses eigenvalue preconditioning to surmount these difficulties. Preconditioning the eigenvalue problem entails more subtleties than for linear systems. In addition, the use of an accurate conventional preconditioner (i.e., as used in linear systems) may cause deterioration of convergence or convergence to the wrong eigenvalue. The purpose of this paper is to assess the quality of eigenvalue preconditioning and to propose strategies to improve robustness. Numerical experiments for some ill-conditioned cases confirm the robustness of the approach.


Computing in Science and Engineering | 2000

Parallel methods and tools for predicting material properties

Andreas Stathopoulos; Serdar Ogut; Yousef Saad; James R. Chelikowsky; Hanchul Kim

The authors present a parallel implementation of an electronic-structure application on the Cray T3D and T3E. This implementation has enabled the authors to perform some breakthrough calculations, for example, predicting the optical properties of systems on the order of 1,000 atoms from first principles.


Physica Status Solidi B-basic Solid State Physics | 2000

Electronic Structure Methods for Predicting the Properties of Materials: Grids in Space

James R. Chelikowsky; Yousef Saad; Serdar Ogut; Igor Vasiliev; Andreas Stathopoulos

If the electronic structure of a given material is known, then many physical and chemical properties can be accurately determined without resorting to experiment. However, determining the electronic structure of a realistic material is a difficult numerical problem. The chief obstacle faced by computational materials and computer scientists is obtaining a highly accurate solution to a complex eigenvalue problem. We illustrate a new numerical method for calculating the electronic structure of materials. The method is based on discretizing the pseudopotential density functional method (PDFM) in real space. The eigenvalue problem within this method can involve large, sparse matrices with up to thousands of eigenvalues required. An efficient and accurate solution depends increasingly on complex data structures that reduce memory and time requirements, and on parallel computing. This approach has many advantages over traditional plane wave solutions, e.g., no Fast Fourier Transforms (FFTs) are needed and, consequently, the method is easy to implement on parallel platforms. We demonstrate this approach for localized systems such as atomic clusters.
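
The structure of the eigenvalue problem that real-space discretization produces is easy to reproduce in one dimension. The sketch below is an assumption for illustration only: it uses a harmonic potential rather than a pseudopotential, and SciPy's ARPACK wrapper rather than the solvers discussed in these papers, but the resulting matrix has the same character (sparse, symmetric, banded from the finite-difference Laplacian plus a diagonal potential).

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# 1-D model: H = -1/2 d^2/dx^2 + x^2/2 on a uniform real-space grid.
# The exact eigenvalues of the harmonic oscillator are n + 1/2.
n, L = 2000, 20.0
h = L / (n + 1)
x = np.linspace(-L / 2 + h, L / 2 - h, n)
lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
H = (-0.5 * lap + sp.diags(0.5 * x**2)).tocsc()
# Shift-invert around 0 finds the lowest states of the positive definite H.
vals = eigsh(H, k=4, sigma=0, which='LM', return_eigenvectors=False)
print(np.sort(vals))    # ≈ 0.5, 1.5, 2.5, 3.5 up to O(h^2) discretization error
```

In three dimensions with thousands of required eigenpairs, the same matrix structure motivates the specialized parallel eigensolvers described in the publications above.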

Collaboration


Dive into Andreas Stathopoulos's collaborations.

Top Co-Authors

Yousef Saad

University of Minnesota

Kesheng Wu

Lawrence Berkeley National Laboratory

James R. Chelikowsky

University of Texas at Austin

Richard Tran Mills

Oak Ridge National Laboratory

Charlotte Froese Fischer

National Institute of Standards and Technology

Serdar Ogut

University of Illinois at Chicago
