Yvan Notay
Université libre de Bruxelles
Publications
Featured research published by Yvan Notay.
SIAM Journal on Scientific Computing | 2000
Yvan Notay
We analyze the conjugate gradient (CG) method with a preconditioner that varies slightly from one iteration to the next. To maintain the optimal convergence properties, we consider a variant proposed by Axelsson that performs an explicit orthogonalization of the search direction vectors. For this method, which we refer to as flexible CG, we develop a theoretical analysis that shows that the convergence rate is essentially independent of the variations in the preconditioner as long as the latter are kept sufficiently small. We further discuss the convergence rate observed in practice on the basis of some heuristic arguments supported by numerical experiments. Depending on the eigenvalue distribution corresponding to the fixed reference preconditioner, several situations have to be distinguished. In some cases, the convergence is as fast with truncated versions of the algorithm or even with the standard CG method, and quite large variations are allowed without much penalty. In other cases, the flexible variant effectively outperforms the standard method, while the need for truncation limits the size of the variations that can reasonably be allowed.
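The flexible CG variant described above admits a compact sketch. The version below is illustrative only: the truncation length `m_max`, the interface `apply_prec(r, k)` for an iteration-dependent preconditioner, and the tolerance handling are assumptions made for the example, not choices taken from the paper.

```python
import numpy as np

def flexible_cg(A, b, apply_prec, x0=None, m_max=5, tol=1e-8, max_it=200):
    """Flexible CG sketch: the preconditioner may change from one iteration to
    the next, so each new search direction is explicitly A-orthogonalized
    against the last `m_max` stored directions (truncated version).
    `apply_prec(r, k)` returns the preconditioned residual at iteration k."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    bnorm = np.linalg.norm(b)
    dirs, Adirs = [], []                      # stored d_i and A d_i
    for k in range(max_it):
        if np.linalg.norm(r) <= tol * bnorm:
            break
        z = apply_prec(r, k)
        d = z.copy()
        for d_i, Ad_i in zip(dirs, Adirs):    # explicit orthogonalization of the new direction
            d -= (z @ Ad_i) / (d_i @ Ad_i) * d_i
        Ad = A @ d
        alpha = (d @ r) / (d @ Ad)
        x += alpha * d
        r -= alpha * Ad
        dirs.append(d); Adirs.append(Ad)
        if len(dirs) > m_max:                 # truncation: drop the oldest direction
            dirs.pop(0); Adirs.pop(0)
    return x
```

With a strictly fixed SPD preconditioner the orthogonalization coefficients vanish in exact arithmetic and the loop reduces to standard PCG; the stored directions only matter when the preconditioner actually varies.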
SIAM Journal on Scientific Computing | 2012
Artem Napov; Yvan Notay
We consider the iterative solution of large sparse symmetric positive definite linear systems. We present an algebraic multigrid method which has a guaranteed convergence rate for the class of nonsingular symmetric M-matrices with nonnegative row sum. The coarsening is based on the aggregation of the unknowns. A key ingredient is an algorithm that builds the aggregates while ensuring that the corresponding two-grid convergence rate is bounded by a user-defined parameter. For a sensible choice of this parameter, it is shown that the recursive use of the two-grid procedure yields a convergence independent of the number of levels, provided that one uses a proper AMLI-cycle. On the other hand, the computational cost per iteration step is of optimal order if the mean aggregate size is large enough. This cannot be guaranteed in all cases but is analytically shown to hold for the model Poisson problem. For more general problems, a wide range of experiments suggests that there are no complexity issues and further demonstrates the robustness of the method. The experiments are performed on systems obtained from low order finite difference or finite element discretizations of second order elliptic partial differential equations (PDEs). The set includes two- and three-dimensional problems, with both structured and unstructured grids, some of them with local refinement and/or reentrant corners, and possibly with jumps or anisotropies in the PDE coefficients.
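To make the aggregation idea concrete, here is a minimal coarsening sketch. It uses a plain greedy pairwise matching on the strongest negative couplings; it is not the quality-controlled aggregation algorithm of the paper, which builds aggregates so that a bound on the two-grid convergence rate is enforced. Function names and the matching heuristic are assumptions for illustration only.

```python
import numpy as np
import scipy.sparse as sp

def pairwise_aggregation(A):
    """Greedy pairwise aggregation sketch: match each unmatched unknown with
    the neighbour to which it has the strongest negative coupling, then build
    a piecewise-constant prolongation P and the coarse matrix P^T A P."""
    A = sp.csr_matrix(A)
    n = A.shape[0]
    agg = -np.ones(n, dtype=int)     # aggregate index of each unknown
    n_agg = 0
    for i in range(n):
        if agg[i] != -1:
            continue
        row = A.getrow(i)
        best, best_val = -1, 0.0
        for j, v in zip(row.indices, row.data):
            if j != i and agg[j] == -1 and v < best_val:   # strongest negative coupling
                best, best_val = j, v
        agg[i] = n_agg
        if best != -1:
            agg[best] = n_agg
        n_agg += 1
    P = sp.csr_matrix((np.ones(n), (np.arange(n), agg)), shape=(n, n_agg))
    Ac = (P.T @ A @ P).tocsr()
    return P, Ac
```

In an actual multilevel method this coarsening would be applied recursively to `Ac` (possibly combining two pairwise passes to reach aggregates of size about four), together with smoothing and an AMLI- or Krylov-accelerated cycle as described in the abstract.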
Numerical Linear Algebra With Applications | 2002
Yvan Notay
To compute the smallest eigenvalues and associated eigenvectors of a real symmetric matrix, we consider the Jacobi–Davidson method with inner preconditioned conjugate gradient iterations for the arising linear systems. We show that the coefficient matrix of these systems is indeed positive definite with the smallest eigenvalue bounded away from zero. We also establish a relation between the residual norm reduction in these inner linear systems and the convergence of the outer process towards the desired eigenpair. From a theoretical point of view, this allows us to prove the optimality of the method, in the sense that solving the eigenproblem implies only a moderate overhead compared with solving a linear system. From a practical point of view, this allows us to set up a stopping strategy for the inner iterations that minimizes this overhead by exiting precisely at the moment when further progress would be useless with respect to the convergence of the outer process. These results are numerically illustrated on a model example. Direct comparisons with some other eigensolvers are also provided.
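The inner–outer structure the abstract refers to can be sketched as follows. This is a deliberately stripped-down loop: there is no subspace acceleration, the eigenpair found is whatever lies near the initial Rayleigh quotient rather than specifically the smallest one, and a fixed cap on inner CG steps stands in for the adaptive stopping strategy derived in the paper. All names and parameter values are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def jd_style_eig(A, v0, tol=1e-8, max_outer=50, inner_its=20):
    """Stripped-down Jacobi-Davidson-type loop for a real symmetric A.
    Each outer step solves the projected correction equation approximately
    with a capped number of CG iterations."""
    n = A.shape[0]
    u = v0 / np.linalg.norm(v0)
    theta = u @ (A @ u)
    for _ in range(max_outer):
        theta = u @ (A @ u)                      # Rayleigh quotient
        r = A @ u - theta * u                    # eigenresidual (orthogonal to u)
        if np.linalg.norm(r) < tol:
            break
        # correction equation: (I - u u^T)(A - theta I)(I - u u^T) t = -r,  t orthogonal to u
        def matvec(v):
            w = v - (u @ v) * u
            w = A @ w - theta * w
            return w - (u @ w) * u
        op = LinearOperator((n, n), matvec=matvec)
        t, _ = cg(op, -r, maxiter=inner_its)     # inexact inner solve
        u = u + t
        u = u / np.linalg.norm(u)
    return theta, u
```

The essential point carried over from the paper is the form of the correction equation: the shifted matrix is projected onto the complement of the current approximate eigenvector, which is what makes an inner CG iteration legitimate once the outer process targets the smallest eigenvalue.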
Numerical Linear Algebra With Applications | 2008
Yvan Notay; Panayot S. Vassilevski
We consider multigrid (MG) cycles based on the recursive use of a two-grid method, in which the coarse-grid system is solved by μ > 1 steps of a Krylov subspace iterative method. The approach is further extended by allowing such inner iterations only at levels of a given multiplicity, whereas a V-cycle formulation is used at all other levels. For symmetric positive definite systems and symmetric MG schemes, we consider a flexible (or generalized) conjugate gradient method as Krylov subspace solver for both inner and outer iterations. Then, based on some algebraic (block matrix) properties of the V-cycle MG viewed as a preconditioner, we show that the method can have optimal convergence properties if μ is chosen to be sufficiently large. We also formulate conditions that guarantee both optimal complexity and convergence bounds independent of the number of levels. Our analysis shows that the method is at least as effective as the standard W-cycle, whereas numerical results illustrate that it can be much faster than the latter, and actually more robust than predicted by the theory.
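A recursive sketch of such a Krylov-accelerated cycle is given below. The level data structure, the damped Jacobi smoother and its damping factor, and the use of flexible-CG steps on every coarse level (rather than only at levels of a given multiplicity) are all simplifying assumptions for the example; the paper's precise cycle and analysis are not reproduced.

```python
import numpy as np
import scipy.sparse.linalg as spla

def jacobi_smooth(A, b, x, omega=0.7, sweeps=1):
    """Damped Jacobi sweeps used here as a generic pre-/post-smoother."""
    Dinv = 1.0 / A.diagonal()
    for _ in range(sweeps):
        x = x + omega * (Dinv * (b - A @ x))
    return x

def k_cycle(levels, l, b, mu=2):
    """Illustrative Krylov-accelerated multigrid cycle.  `levels[l]` is a dict
    with a scipy sparse matrix 'A' and, on all but the coarsest level, a
    prolongation 'P'.  The coarse residual equation is solved by `mu`
    flexible-CG-type steps, each preconditioned by one recursive call to the
    cycle on the next level."""
    A = levels[l]['A']
    P = levels[l].get('P')
    if P is None:                                   # coarsest level: direct solve
        return spla.spsolve(A.tocsc(), b)
    x = jacobi_smooth(A, b, np.zeros_like(b))       # pre-smoothing
    r_c = P.T @ (b - A @ x)                         # restricted residual
    A_c = levels[l + 1]['A']
    x_c = np.zeros_like(r_c)
    r, dirs, Adirs = r_c.copy(), [], []
    for _ in range(mu):                             # inner Krylov iterations
        z = k_cycle(levels, l + 1, r, mu)           # the cycle acts as preconditioner
        d = z.copy()
        for d_i, Ad_i in zip(dirs, Adirs):          # explicit orthogonalization (flexible CG)
            d -= (z @ Ad_i) / (d_i @ Ad_i) * d_i
        Ad = A_c @ d
        alpha = (d @ r) / (d @ Ad)
        x_c += alpha * d
        r -= alpha * Ad
        dirs.append(d); Adirs.append(Ad)
    x = x + P @ x_c                                 # coarse-grid correction
    return jacobi_smooth(A, b, x)                   # post-smoothing
```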
SIAM Journal on Scientific Computing | 2012
Yvan Notay
We consider the iterative solution of large sparse linear systems arising from the upwind finite difference discretization of convection-diffusion equations. The system matrix is then an M-matrix with nonnegative row sum, and, further, when the convective flow has zero divergence, the column sum is also nonnegative, possibly up to a small correction term. We investigate aggregation-based algebraic multigrid methods for this class of matrices. A theoretical analysis is developed for a simplified two-grid scheme with one damped Jacobi postsmoothing step. An uncommon feature of this analysis is that it applies directly to problems with variable coefficients; e.g., to problems with recirculating convective flow. On the basis of this theory, we develop an approach in which a guarantee is given on the convergence rate thanks to an aggregation algorithm that allows an explicit control of the location of the eigenvalues of the preconditioned matrix. Some issues that remain beyond the analysis are discussed in the...
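For reference, the simplified two-grid scheme mentioned above (coarse-grid correction followed by a single damped Jacobi post-smoothing step) can be written in a few lines. The prolongation P is assumed to come from some aggregation of the unknowns, and the damping factor below is a generic illustrative value, not one taken from the paper.

```python
import numpy as np
import scipy.sparse.linalg as spla

def two_grid_step(A, P, b, x, omega=0.7):
    """One step of a simplified two-grid scheme: exact coarse-grid correction
    through a piecewise-constant prolongation P, followed by a single damped
    Jacobi post-smoothing sweep (omega is an illustrative damping factor)."""
    A_c = (P.T @ A @ P).tocsc()
    r = b - A @ x
    x = x + P @ spla.spsolve(A_c, P.T @ r)       # coarse-grid correction
    r = b - A @ x
    return x + omega * (r / A.diagonal())        # damped Jacobi post-smoothing
```

Iterating `x = two_grid_step(A, P, b, x)` gives the stationary scheme whose convergence rate the aggregation algorithm of the paper is designed to control, via the location of the eigenvalues of the preconditioned matrix.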
Numerische Mathematik | 1993
Yvan Notay
We investigate here rounding error effects on the convergence rate of the conjugate gradient method. More precisely, we analyse, on both a theoretical and an experimental basis, how finite precision arithmetic affects known bounds on iteration numbers when the spectrum of the system matrix presents small or large isolated eigenvalues.
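The exact-arithmetic bounds at stake are the textbook ones recalled below (standard forms, not copied from the paper); the question studied is how far they remain descriptive once rounding errors enter. With the eigenvalues ordered as 0 < λ_1 ≤ … ≤ λ_n:

```latex
% Standard CG error bound in the A-norm:
\|x - x_k\|_A \;\le\; 2\left(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right)^{k}\|x - x_0\|_A,
\qquad \kappa = \frac{\lambda_n}{\lambda_1}.
% When the p largest eigenvalues are isolated, a polynomial that annihilates them
% gives the usual "effective" bound after p extra iterations:
\|x - x_{k+p}\|_A \;\le\; 2\left(\frac{\sqrt{\kappa_{\mathrm{eff}}}-1}{\sqrt{\kappa_{\mathrm{eff}}}+1}\right)^{k}\|x - x_0\|_A,
\qquad \kappa_{\mathrm{eff}} = \frac{\lambda_{n-p}}{\lambda_1}.
```

An analogous bound exists for small isolated eigenvalues, at the price of an extra constant factor that can be large.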
Computer Physics Communications | 2007
Matthias Bollhöfer; Yvan Notay
A new software code for computing selected eigenvalues and associated eigenvectors of a real symmetric matrix is described. The eigenvalues are either the smallest or those closest to some specified target, which may be in the interior of the spectrum. The underlying algorithm combines the Jacobi–Davidson method with efficient multilevel incomplete LU (ILU) preconditioning. Key features are modest memory requirements and robust convergence to accurate solutions. Parameters needed for incomplete LU preconditioning are automatically computed and may be updated at run time depending on the convergence pattern. The software is easy for non-experts to use, and its top-level routines are written in FORTRAN 77. Its capabilities are demonstrated on a few applications taken from computational physics.
SIAM Journal on Matrix Analysis and Applications | 2002
Yvan Notay
We consider the computation of the smallest eigenvalue and associated eigenvector of a Hermitian positive definite pencil. Rayleigh quotient iteration (RQI) is known to converge cubically, and we first analyze how this convergence is affected when the arising linear systems are solved only approximately. We introduce a special measure of the relative error made in the solution of these systems and derive a sharp bound on the convergence factor of the eigenpair as a function of this quantity. This analysis holds independently of the way the linear systems are solved and applies to any type of error. For instance, it applies to rounding errors as well. We next consider the Jacobi--Davidson method. It acts as an inexact RQI method in which the use of iterative solvers is made easier because the arising linear systems involve a projected matrix that is better conditioned than the shifted matrix arising in classical RQI. We show that our general convergence result straightforwardly applies in this context and permits us to trace the convergence of the eigenpair as a function of the number of inner iterations performed at each step. On this basis, we also compare this method with some form of inexact inverse iteration, as recently analyzed by Neymeyr and Knyazev.
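The inexact RQI setting analysed above can be mimicked in a few lines. The sketch below is not the paper's algorithm: it uses a capped number of MINRES steps as the source of inexactness, a simple B-normalization, and illustrative parameter values.

```python
import numpy as np
from scipy.sparse.linalg import minres

def inexact_rqi(A, B, v0, n_outer=8, inner_its=20):
    """Minimal inexact Rayleigh quotient iteration for a Hermitian positive
    definite pencil (A, B): each shifted system is solved only approximately,
    here by a capped number of MINRES steps."""
    v = v0 / np.sqrt(v0 @ (B @ v0))              # B-normalize the start vector
    rho = (v @ (A @ v)) / (v @ (B @ v))
    for _ in range(n_outer):
        rho = (v @ (A @ v)) / (v @ (B @ v))      # Rayleigh quotient
        # approximate solve of (A - rho*B) w = B v  (inexact inner iteration)
        w, _ = minres(A - rho * B, B @ v, maxiter=inner_its)
        v = w / np.sqrt(abs(w @ (B @ w)))
    return rho, v
```

With exact inner solves this is classical RQI with its cubic convergence; capping `inner_its` makes the per-step solution error tangible, and it is this kind of error that the paper's bound relates to the convergence factor of the eigenpair.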
International Journal for Numerical Methods in Engineering | 1996
Pascal Saint-Georges; Guy Warzée; Robert Beauwens; Yvan Notay
The preconditioned conjugate gradient algorithm is a well-known and powerful method used to solve large sparse symmetric positive definite linear systems. Such systems are generated by the finite element discretization in structural analysis, but users of finite elements in this context generally still rely on direct methods. It is our purpose in the present work to highlight the improvement brought by some new preconditioning techniques and to show that the preconditioned conjugate gradient method performs better than efficient direct methods.
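As a small illustration of the workflow advocated above (an iterative PCG solve instead of a direct factorization), the sketch below solves a model SPD system with scipy's conjugate gradient preconditioned by an incomplete LU factorization. The preconditioners studied in the paper are not reproduced; the ILU stand-in, the drop tolerance, and the test matrix are assumptions for the example (for an SPD matrix an incomplete Cholesky factorization would be the more natural counterpart, but scipy does not provide one).

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def pcg_ilu(A, b, maxiter=500):
    """PCG with an incomplete-factorization preconditioner (illustrative:
    scipy's incomplete LU is used as a stand-in, with an arbitrary drop
    tolerance)."""
    ilu = spla.spilu(sp.csc_matrix(A), drop_tol=1e-4, fill_factor=10)
    M = spla.LinearOperator(A.shape, matvec=ilu.solve)
    return spla.cg(A, b, M=M, maxiter=maxiter)

# Small structured SPD test problem (2-D Laplacian), just to show the call.
n = 50
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = sp.csr_matrix(sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n)))
b = np.ones(A.shape[0])
x, info = pcg_ilu(A, b)    # info == 0 signals convergence
```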
SIAM Journal on Scientific Computing | 2008
Adrian C. Muresan; Yvan Notay
Aggregation-based multigrid with standard piecewise constant prolongation is investigated. Unknowns are aggregated either by pairs or by quadruplets; in the latter case the grouping may be either linewise or boxwise. A Fourier analysis is developed for a model two-dimensional anisotropic problem. Most of the results are stated for an arbitrary smoother (compatible with the Fourier analysis framework). It turns out that the convergence factor of two-grid schemes can be bounded independently of the grid size. With a sensible choice of the (linewise or boxwise) coarsening, the bound is also uniform with respect to the anisotropy ratio, without requiring a specialized smoother. The bound is too large to guarantee optimal convergence properties with the V-cycle or the standard W-cycle, but a W-cycle scheme accelerated by the recursive use of the conjugate gradient method exhibits nearly grid-independent convergence.
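The boxwise coarsening by quadruplets mentioned above corresponds to a particularly simple prolongation operator on a structured grid; a sketch is given below. The lexicographic ordering and the helper name are assumptions of the example; linewise aggregation would instead group unknowns along the direction of strong coupling.

```python
import numpy as np
import scipy.sparse as sp

def boxwise_prolongation(nx, ny):
    """Piecewise-constant prolongation for boxwise aggregation by quadruplets
    on an nx-by-ny structured grid (nx and ny even, lexicographic ordering):
    every 2x2 box of fine unknowns is mapped to one coarse unknown, so P has
    a single unit entry per row."""
    fine = np.arange(nx * ny)
    ix, iy = fine % nx, fine // nx                    # fine-grid coordinates
    coarse = (iy // 2) * (nx // 2) + (ix // 2)        # index of the 2x2 box
    data = np.ones(nx * ny)
    shape = (nx * ny, (nx // 2) * (ny // 2))
    return sp.csr_matrix((data, (fine, coarse)), shape=shape)

# Coarse-level operator corresponding to this aggregation: A_c = P.T @ A @ P
```

The two-grid convergence factors studied in the paper refer to the iteration built from this coarse-grid correction together with the chosen smoother.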