
Publication


Featured research published by Andrew V. Knyazev.


SIAM Journal on Scientific Computing | 2001

Principal Angles between Subspaces in an A-Based Scalar Product: Algorithms and Perturbation Estimates

Andrew V. Knyazev; Merico E. Argentati

Computation of principal angles between subspaces is important in many applications, e.g., in statistics and information retrieval. In statistics, the angles are closely related to measures of dependency and covariance of random variables. When applied to column spaces of matrices, the principal angles describe canonical correlations of a matrix pair. We highlight that all popular software codes for canonical correlations compute only the cosine of the principal angles, which makes it impossible, because of round-off errors, to find small angles accurately. We review a combination of sine- and cosine-based algorithms that provides accurate results for all angles. We generalize the method to the computation of principal angles in an A-based scalar product for a symmetric positive definite matrix A. We provide a comprehensive overview of interesting properties of principal angles. We prove basic perturbation theorems for absolute errors in the sine and cosine of principal angles, with improved constants. Numerical examples and a detailed description of our code are given.


Computational Materials Science | 2008

Large scale ab initio calculations based on three levels of parallelization

François Bottin; Stéphane Leroux; Andrew V. Knyazev; Gilles Zerah

We suggest and implement a parallelization scheme based on an efficient multiband eigenvalue solver, the locally optimal block preconditioned conjugate gradient (LOBPCG) method, and on an optimized three-dimensional (3D) fast Fourier transform (FFT) in the ab initio plane-wave code ABINIT. In addition to the standard data partitioning over processors corresponding to different k-points, we introduce data partitioning with respect to blocks of bands, as well as spatial partitioning in Fourier space of the coefficients over the plane-wave basis set used in ABINIT. This k-point-multiband-FFT parallelization avoids any collective communications on the whole set of processors, relying instead on one-dimensional communications only. For a single k-point, super-linear scaling is achieved for up to 100 processors, due to extensive use of hardware-optimized BLAS, LAPACK, and ScaLAPACK routines, mainly in the LOBPCG routine. We observe good performance up to 200 processors. With 10 k-points, our three-way data partitioning results in linear scaling up to 1000 processors for a practical system used for testing.
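The LOBPCG eigensolver at the core of this scheme has a serial implementation in SciPy (scipy.sparse.linalg.lobpcg). A minimal sketch on a 1-D discrete Laplacian, standing in here as a made-up toy for the Hamiltonian blocks the paper solves, illustrates the block iteration that the band-parallel partitioning distributes over processors:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import lobpcg

# 1-D discrete Laplacian, a small stand-in for one Hamiltonian block.
n = 128
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

# LOBPCG iterates on a whole block of trial vectors at once; in the
# paper each processor group owns a block of bands like this one.
rng = np.random.default_rng(0)
X = rng.standard_normal((n, 4))              # block of 4 trial vectors
vals, vecs = lobpcg(A, X, largest=False, tol=1e-8, maxiter=500)
```

The four returned values approximate the four smallest eigenvalues of A; a preconditioner (argument M) would accelerate convergence exactly as the multigrid preconditioners do at scale.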


Linear Algebra and its Applications | 2001

A Geometric Theory for Preconditioned Inverse Iteration. III: A Short and Sharp Convergence Estimate for Generalized Eigenvalue Problems.

Andrew V. Knyazev; Klaus Neymeyr

In two previous papers by Neymeyr (A geometric theory for preconditioned inverse iteration I: Extrema of the Rayleigh quotient, LAA 322(1-3):61-85, 2001, and A geometric theory for preconditioned inverse iteration II: Convergence estimates, LAA 322(1-3):87-104, 2001), a sharp but cumbersome convergence rate estimate was proved for a simple preconditioned eigensolver, which computes the smallest eigenvalue together with the corresponding eigenvector of a symmetric positive definite matrix using a preconditioned gradient minimization of the Rayleigh quotient. In the present paper, we discover and prove a much shorter and more elegant convergence rate estimate of the same method, still sharp in the decisive quantities, that also holds for a generalized symmetric definite eigenvalue problem. The new estimate is simple enough to stimulate a search for a more straightforward proof technique that could help investigate such a practically important method as the locally optimal block preconditioned conjugate gradient eigensolver. We demonstrate the practical effectiveness of the latter for a model problem, where it compares favorably with two well-known Jacobi-Davidson type methods, JDQR and JDCG.
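The preconditioned gradient iteration analyzed in these papers can be sketched in a few lines of NumPy. This is a simplified serial illustration; the 1-D Laplacian test matrix and the shifted-inverse preconditioner are made-up stand-ins for a practical problem and preconditioner:

```python
import numpy as np

def pinvit(A, T, x, iters=50):
    """Preconditioned gradient iteration for the smallest eigenpair:
    x <- x - T (A x - rho(x) x), with rho the Rayleigh quotient."""
    rho = x @ A @ x / (x @ x)
    for _ in range(iters):
        x = x - T @ (A @ x - rho * x)    # preconditioned gradient step
        x /= np.linalg.norm(x)
        rho = x @ A @ x / (x @ x)        # Rayleigh quotient update
    return rho, x

# 1-D Laplacian test matrix; a shifted inverse serves as a
# high-quality (made-up) preconditioner T ~ A^{-1}.
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
T = np.linalg.inv(A + 1e-3 * np.eye(n))
rng = np.random.default_rng(1)
rho, x = pinvit(A, T, rng.standard_normal(n))
```

With a spectrally equivalent preconditioner such as this one, rho decreases monotonically to the smallest eigenvalue, which is the behavior the geometric theory quantifies.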


Advances in Computational Mathematics | 1995

A Subspace Preconditioning Algorithm For Eigenvector/Eigenvalue Computation

James H. Bramble; Andrew V. Knyazev; Joseph E. Pasciak

We consider the problem of computing a modest number of the smallest eigenvalues along with orthogonal bases for the corresponding eigenspaces of a symmetric positive definite operator A defined on a finite dimensional real Hilbert space V. In our applications, the dimension of V is large and the cost of inverting A is prohibitive. In this paper, we shall develop an effective parallelizable technique for computing these eigenvalues and eigenvectors utilizing subspace iteration and preconditioning for A. Estimates will be provided which show that the preconditioned method converges linearly when used with a uniform preconditioner under the assumption that the approximating subspace is close enough to the span of desired eigenvectors.
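A toy version of this idea, preconditioned subspace iteration with a Rayleigh-Ritz extraction, fits in a short NumPy sketch. The diagonal test matrix and shifted-inverse preconditioner below are illustrative assumptions, not the paper's setting:

```python
import numpy as np

def preconditioned_subspace_iteration(A, T, X, sweeps=60):
    """Preconditioned block iteration with a Rayleigh-Ritz step."""
    for _ in range(sweeps):
        X, _ = np.linalg.qr(X)            # orthonormalize the block
        H = X.T @ A @ X                   # Rayleigh-Ritz projection
        theta, S = np.linalg.eigh(H)      # Ritz values, ascending
        X = X @ S                         # Ritz vectors
        X = X - T @ (A @ X - X * theta)   # preconditioned correction
    return theta, X

n = 80
A = np.diag(np.arange(1.0, n + 1))        # SPD toy matrix, spectrum 1..n
T = np.linalg.inv(A + 1e-2 * np.eye(n))   # made-up uniform preconditioner
rng = np.random.default_rng(2)
theta, X = preconditioned_subspace_iteration(A, T, rng.standard_normal((n, 3)))
```

The three Ritz values converge linearly to the three smallest eigenvalues, matching the linear convergence the paper's estimates predict for a uniform preconditioner.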


SIAM Journal on Scientific Computing | 2007

Block Locally Optimal Preconditioned Eigenvalue Xolvers (BLOPEX) in Hypre and PETSc

Andrew V. Knyazev; Merico E. Argentati; Ilya Lashuk; Evgueni E. Ovtchinnikov

We describe our software package Block Locally Optimal Preconditioned Eigenvalue Xolvers (BLOPEX), recently publicly released. BLOPEX is available as a stand-alone serial library, as an external package to PETSc (Portable, Extensible Toolkit for Scientific Computation, a general-purpose suite of tools developed by Argonne National Laboratory for the scalable solution of partial differential equations and related problems), and is also built into hypre (High Performance Preconditioners, a scalable linear solvers package developed by Lawrence Livermore National Laboratory). The present BLOPEX release includes only one solver: the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) method for symmetric eigenvalue problems. hypre provides users with advanced high-quality parallel multigrid preconditioners for linear systems. With BLOPEX, the same preconditioners can now be efficiently used for symmetric eigenvalue problems. PETSc facilitates the integration of independently developed application modules, with strict attention to component interoperability, and makes BLOPEX extremely easy to compile and use with preconditioners that are available via PETSc. We present the LOBPCG algorithm in BLOPEX for hypre and PETSc. We demonstrate numerically the scalability of BLOPEX by testing it on a number of distributed and shared memory parallel systems, including a Beowulf system, a SUN Fire 880, an AMD dual-core Opteron workstation, and an IBM BlueGene/L supercomputer, using PETSc domain decomposition and hypre multigrid preconditioning. We test BLOPEX on a model problem, the standard 7-point finite-difference approximation of the 3-D Laplacian, with the problem size in the range of 10^5 - 10^8.


SIAM Journal on Matrix Analysis and Applications | 2007

Steepest Descent and Conjugate Gradient Methods with Variable Preconditioning

Andrew V. Knyazev; Ilya Lashuk

We analyze the conjugate gradient (CG) method with variable preconditioning for solving a linear system with a real symmetric positive definite (SPD) matrix of coefficients


Mathematics of Computation | 1997

New estimates for Ritz vectors

Andrew V. Knyazev


SIAM Journal on Numerical Analysis | 2006

New A Priori FEM Error Estimates for Eigenvalues

Andrew V. Knyazev; John E. Osborn


International Conference on Computational Science | 2005

Towards a dynamic data driven application system for wildfire simulation

Jan Mandel; Lynn S. Bennethum; Mingshi Chen; Janice L. Coen; Craig C. Douglas; Leopoldo P. Franca; Craig J. Johns; Minjeong Kim; Andrew V. Knyazev; Robert Kremens; Vaibhav V. Kulkarni; Guan Qin; Anthony Vodacek; Jianjia Wu; Wei Zhao; Adam Zornes


SIAM Journal on Numerical Analysis | 1994

Preconditioned gradient-type iterative methods in a subspace for partial generalized symmetric eigenvalue problems

Andrew V. Knyazev; Alexander L. Skorokhodov
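The BLOPEX model problem above, the standard 7-point finite-difference 3-D Laplacian, is easy to reproduce at toy scale with SciPy's lobpcg. In this sketch a sparse LU factorization is a made-up serial stand-in for the hypre multigrid preconditioners used in the paper, and the grid is tiny compared with the paper's 10^5 - 10^8 unknowns:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, lobpcg, splu

# 7-point 3-D Laplacian on an m x m x m grid via Kronecker sums.
m = 10
L1 = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m))
I = sp.identity(m)
A = (sp.kron(sp.kron(L1, I), I)
     + sp.kron(sp.kron(I, L1), I)
     + sp.kron(sp.kron(I, I), L1)).tocsc()

# Direct factorization as the preconditioner M ~ A^{-1} (stand-in
# for multigrid at this toy size).
lu = splu(A)
M = LinearOperator(A.shape, matvec=lu.solve, dtype=np.float64)

rng = np.random.default_rng(3)
X = rng.standard_normal((m**3, 4))
vals, vecs = lobpcg(A, X, M=M, largest=False, tol=1e-8, maxiter=100)

# Exact eigenvalues are sums of three 1-D Laplacian eigenvalues.
mu = 4 * np.sin(np.arange(1, m + 1) * np.pi / (2 * (m + 1))) ** 2
exact = np.sort(np.add.outer(np.add.outer(mu, mu), mu).ravel())[:4]
```

With a good preconditioner the block converges in a handful of iterations, which is the effect the paper measures at scale on parallel hardware.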

Collaboration


Dive into Andrew V. Knyazev's collaborations.

Top Co-Authors

Merico E. Argentati, University of Colorado Denver

Eugene Vecharynski, University of Colorado Denver

Dong Tian, Mitsubishi Electric Research Laboratories

N. S. Bakhvalov, Oak Ridge National Laboratory

Evgueni E. Ovtchinnikov, Rutherford Appleton Laboratory

Akshay Gadde, University of Southern California

Hassan Mansour, University of British Columbia

Ilya Lashuk, University of Colorado Denver