Evgueni E. Ovtchinnikov
University of Westminster
Publications
Featured research published by Evgueni E. Ovtchinnikov.
SIAM Journal on Scientific Computing | 2007
Andrew V. Knyazev; Merico E. Argentati; Ilya Lashuk; Evgueni E. Ovtchinnikov
We describe our recently released software package, Block Locally Optimal Preconditioned Eigenvalue Xolvers (BLOPEX). BLOPEX is available as a stand-alone serial library, as an external package to PETSc (Portable, Extensible Toolkit for Scientific Computation, a general-purpose suite of tools developed by Argonne National Laboratory for the scalable solution of partial differential equations and related problems), and is also built into hypre (High Performance Preconditioners, a scalable linear solvers package developed by Lawrence Livermore National Laboratory). The present BLOPEX release includes only one solver: the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) method for symmetric eigenvalue problems. hypre provides users with advanced high-quality parallel multigrid preconditioners for linear systems; with BLOPEX, the same preconditioners can now be used efficiently for symmetric eigenvalue problems. PETSc facilitates the integration of independently developed application modules, with strict attention to component interoperability, and makes BLOPEX easy to compile and use with preconditioners available via PETSc. We present the LOBPCG algorithm in BLOPEX for hypre and PETSc. We demonstrate numerically the scalability of BLOPEX by testing it on a number of distributed and shared memory parallel systems, including a Beowulf cluster, a SUN Fire 880, an AMD dual-core Opteron workstation, and an IBM BlueGene/L supercomputer, using PETSc domain decomposition and hypre multigrid preconditioning. We test BLOPEX on a model problem, the standard 7-point finite-difference approximation of the 3-D Laplacian, with the problem size in the range of 10^5 to 10^8.
SIAM Journal on Numerical Analysis | 2008
Evgueni E. Ovtchinnikov
SIAM Journal on Numerical Analysis | 2006
Evgueni E. Ovtchinnikov
SIAM Journal on Numerical Analysis | 2003
Evgueni E. Ovtchinnikov
Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences | 1998
Evgueni E. Ovtchinnikov; Leonidas Xanthis
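The LOBPCG solver described in the BLOPEX abstract above is also available in SciPy as scipy.sparse.linalg.lobpcg, which implements the same algorithm. As a rough, small-scale sketch of the abstract's model problem (the 7-point 3-D Laplacian), here is a Python example; the grid size, block size, and tolerances are illustrative choices, and no preconditioner is passed, whereas the paper's experiments rely on hypre multigrid or PETSc domain decomposition preconditioning:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg

# Standard 7-point finite-difference approximation of the 3-D Laplacian
# on an n x n x n grid (the model problem from the abstract, at a size
# far below the 10^5 to 10^8 range used in the paper's tests).
def laplacian_3d(n):
    d1 = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
    I = sp.identity(n)
    return (sp.kron(sp.kron(d1, I), I)
            + sp.kron(sp.kron(I, d1), I)
            + sp.kron(sp.kron(I, I), d1)).tocsr()

n = 20
A = laplacian_3d(n)                       # 8000 x 8000 sparse matrix

rng = np.random.default_rng(0)
X = rng.standard_normal((A.shape[0], 4))  # block of 4 initial vectors

# Smallest eigenpairs; no preconditioner is used in this toy run.
eigvals, eigvecs = lobpcg(A, X, largest=False, tol=1e-8, maxiter=200)
print(eigvals)
```

In a setting closer to the paper's, an approximate inverse of A (e.g., a multigrid cycle) would be supplied through lobpcg's M argument.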
Computer Methods in Applied Mechanics and Engineering | 1996
Evgueni E. Ovtchinnikov; Leonidas Xanthis
This paper is concerned with the convergence properties of iterative algorithms of conjugate gradient type applied to the computation of an extreme eigenvalue of a Hermitian operator via the optimization of the Rayleigh quotient functional. The algorithms in focus employ a line search for the extremum of the Rayleigh quotient functional in a direction that is a linear combination of the gradient and the previous search direction. An asymptotically explicit equation for the reduction in the eigenvalue error after the line search step as a function of the search direction is derived and applied to the analysis of the local convergence properties (i.e., the error reduction after an iteration) of the algorithms in question. The local (stepwise) asymptotic equivalence of various conjugate gradient algorithms and the generalized Davidson method is proved, and a new convergence estimate for conjugate gradient iterations is derived, showing the reduction in the eigenvalue error after any two consecutive iterations. The paper's analysis extensively employs remarkable properties of the operator of the so-called Jacobi orthogonal complement correction equation.
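As a concrete (and deliberately simplified) illustration of this class of methods, the following sketch minimizes the Rayleigh quotient with a Fletcher-Reeves-type direction update and an exact line search carried out by Rayleigh-Ritz in the two-dimensional subspace span{x, p}; the coefficient choice and the test matrix are assumptions for the example, not the paper's construction:

```python
import numpy as np
from scipy.linalg import eigh

def rq_cg(A, x, iters=300, tol=1e-8):
    """Minimize the Rayleigh quotient by a conjugate-gradient-type iteration.

    The search direction combines the current gradient (the residual
    A x - rho x for normalized x) with the previous direction; the line
    search for the extremum of rho along x + alpha*p is done exactly
    via Rayleigh-Ritz in span{x, p}.
    """
    x = x / np.linalg.norm(x)
    p, g_old_sq = None, None
    for _ in range(iters):
        rho = x @ A @ x
        g = A @ x - rho * x                  # gradient direction of rho at x
        if np.linalg.norm(g) < tol:
            break
        if p is None:
            p = -g
        else:
            p = -g + (g @ g) / g_old_sq * p  # Fletcher-Reeves-type coefficient
        g_old_sq = g @ g
        S = np.column_stack([x, p])          # exact line search: 2x2 Rayleigh-Ritz
        vals, V = eigh(S.T @ A @ S, S.T @ S)
        x = S @ V[:, 0]
        x /= np.linalg.norm(x)
    return x @ A @ x, x

rng = np.random.default_rng(0)
B = rng.standard_normal((200, 200))
A = B + B.T                                  # random symmetric test matrix
lam, v = rq_cg(A, rng.standard_normal(200))
print(lam, np.linalg.eigvalsh(A)[0])         # the two values should roughly agree
```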
Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences | 2001
Evgueni E. Ovtchinnikov; Leonidas Xanthis
The paper is concerned with convergence estimates for the preconditioned steepest descent method for the computation of the smallest eigenvalue of a Hermitian operator. Available estimates are reviewed and new estimates are introduced that improve on the known ones in certain respects. In addition to the estimates for the error reduction after one iteration, we consider estimates for the so-called asymptotic convergence factor defined as the upper limit of the average error reduction per iteration. The paper focuses on sharp estimates, i.e., those that cannot be improved without using additional information.
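A minimal dense-matrix sketch of the method being analyzed: each step applies a preconditioner T to the residual and takes the Rayleigh-Ritz minimizer over span{x, T(r)}. The 1-D Laplacian test problem and the ideal preconditioner T = A^{-1} are illustrative assumptions (in practice T would be, e.g., a multigrid cycle approximating the inverse):

```python
import numpy as np
from scipy.linalg import eigh

def psd_smallest(A, T, x, iters=500, tol=1e-8):
    """Preconditioned steepest descent for the smallest eigenvalue.

    T applies the preconditioner. Each step minimizes the Rayleigh
    quotient over span{x, T(r)}, where r = A x - rho x is the residual
    (the gradient direction of the Rayleigh quotient).
    """
    x = x / np.linalg.norm(x)
    for _ in range(iters):
        rho = x @ A @ x
        r = A @ x - rho * x
        if np.linalg.norm(r) < tol:
            break
        S = np.column_stack([x, T(r)])       # iterate + preconditioned residual
        vals, V = eigh(S.T @ A @ S, S.T @ S)
        x = S @ V[:, 0]
        x /= np.linalg.norm(x)
    return x @ A @ x, x

# Usage on the 1-D discrete Laplacian with an ideal preconditioner:
n = 400
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Ainv = np.linalg.inv(A)                      # T = A^{-1}, a hypothetical best case
lam, v = psd_smallest(A, lambda r: Ainv @ r,
                      np.random.default_rng(1).standard_normal(n))
print(lam, 2 - 2*np.cos(np.pi/(n + 1)))      # both ~ the smallest eigenvalue
```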
Journal of Computational Physics | 2008
Evgueni E. Ovtchinnikov
The generalized Davidson (GD) method can be viewed as a generalization of the preconditioned steepest descent (PSD) method for solving symmetric eigenvalue problems. In the GD method, the new approximation is sought in the subspace that spans all the previous approximate eigenvectors, in addition to the current one and the preconditioned residual thereof used in PSD. In this respect, the relation between the GD and PSD methods is similar to that between the standard steepest descent method for linear systems and methods in Krylov subspaces. This paper presents convergence estimates for the (restarted) GD method that demonstrate convergence acceleration compared to the PSD method, similar to that achieved by methods in Krylov subspaces compared to the standard steepest descent.
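The PSD/GD relation in this abstract can be made concrete in a few lines: the sketch below keeps an accumulating orthonormal basis, expands it with the preconditioned residual at each step, and collapses it to the current Ritz vector every `restart` steps. The restart length and test setup are illustrative assumptions, not the paper's:

```python
import numpy as np
from scipy.linalg import eigh

def gd_smallest(A, T, x0, restart=20, iters=300, tol=1e-8):
    """Restarted generalized Davidson iteration (sketch).

    Unlike PSD, which works in span{x, T(r)}, the GD subspace also
    retains the previous approximations; it is collapsed back to the
    current Ritz vector whenever it reaches `restart` dimensions.
    """
    Q = (x0 / np.linalg.norm(x0))[:, None]
    for _ in range(iters):
        vals, W = eigh(Q.T @ A @ Q)          # Rayleigh-Ritz on current subspace
        rho, x = vals[0], Q @ W[:, 0]
        r = A @ x - rho * x
        if np.linalg.norm(r) < tol:
            break
        if Q.shape[1] >= restart:
            Q = x[:, None]                   # restart: keep only the Ritz vector
        Q, _ = np.linalg.qr(np.column_stack([Q, T(r)]))  # expand, reorthonormalize
    return rho, x

# Same 1-D Laplacian test problem as in the PSD sketch above:
n = 400
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Ainv = np.linalg.inv(A)
lam, v = gd_smallest(A, lambda r: Ainv @ r,
                     np.random.default_rng(2).standard_normal(n))
print(lam)
```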
Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences | 1997
Evgueni E. Ovtchinnikov; Leonidas Xanthis
We present a methodology for the efficient solution of three-dimensional problems for thin elastic structures, such as shells, plates, rods, arches and beams, based on the synergy of two fundamental concepts: subspace correction and the Korn's type inequality in subspaces. The former provides the theoretical background that enables the development of modern iterative methods for large-scale problems, such as domain decomposition and multilevel methods, which are a sine qua non for high-performance scientific and engineering computing; the latter is responsible for the design of iterative methods for thin elastic structures whose convergence is uniform with respect to the thickness. Subspace correction methods are based on the decomposition of the space where the solution is sought into a sum of subspaces. In this paper we show that, using the Korn's type inequality in subspaces, we can introduce subspace decompositions for which the convergence rate of the corresponding subspace correction methods is independent of both the thickness and the discretization parameters.
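The subspace correction idea itself is easy to demonstrate outside the thin-structure setting: the sketch below solves a linear system by damped additive subspace correction (block Jacobi/additive Schwarz) over two overlapping index sets. The 1-D Laplacian, the decomposition, and the damping factor are illustrative assumptions, not the paper's elasticity construction:

```python
import numpy as np

def additive_subspace_correction(A, f, blocks, omega=0.5, iters=500, tol=1e-8):
    """Solve A u = f by damped additive subspace correction (sketch).

    Each block of indices defines a subspace spanned by coordinate
    vectors; corrections are solved independently in every subspace and
    summed, the basic pattern behind the domain decomposition and
    multilevel methods the abstract refers to.
    """
    u = np.zeros_like(f)
    for _ in range(iters):
        r = f - A @ u
        if np.linalg.norm(r) < tol:
            break
        du = np.zeros_like(u)
        for idx in blocks:
            Ai = A[np.ix_(idx, idx)]                # restriction of A to the subspace
            du[idx] += np.linalg.solve(Ai, r[idx])  # local correction
        u += omega * du                             # damped additive update
    return u

# Usage: 1-D discrete Laplacian split into two overlapping "subdomains"
n = 40
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = np.ones(n)
blocks = [np.arange(0, 24), np.arange(16, n)]       # overlap of 8 points
u = additive_subspace_correction(A, f, blocks)
print(np.linalg.norm(A @ u - f))                    # small residual
```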
Comptes Rendus De L Academie Des Sciences Serie I-mathematique | 1997
Evgueni E. Ovtchinnikov; Leonidas Xanthis
We present a new Korn's type inequality which estimates the ratio of the energy norm to the Sobolev norm in a given subspace in terms of the angle it forms with an explicitly extracted finite-dimensional subspace. This inequality provides crucial information for improving the convergence of various iterative algorithms for elasticity problems in thin domains. This is demonstrated in the case of the semi-discrete iterative algorithm EDRA (see E.E. Ovtchinnikov and L.S. Xanthis, Effective dimensional reduction for elliptic problems, C.R. Acad. Sci. Paris, Series I, 320: 879–884, 1995). We show both theoretically and numerically that the convergence of the modified EDRA is independent of the thickness of the domain and of the semi-discretisation parameters.