Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Youcef Saad is active.

Publication


Featured research published by Youcef Saad.


SIAM Journal on Scientific and Statistical Computing | 1986

GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems

Youcef Saad; Martin H. Schultz

We present an iterative method for solving linear systems, which has the property of minimizing at every step the norm of the residual vector over a Krylov subspace. The algorithm is derived from t...
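
The minimal-residual property described in this abstract is easy to see in code. Below is a minimal restarted GMRES(m) sketch in Python/NumPy (an illustration written for this page, not the authors' reference implementation): it builds an Arnoldi basis of the Krylov subspace and solves the small least-squares problem whose solution minimizes the residual norm over that subspace.

```python
import numpy as np

def gmres_restarted(A, b, x0=None, m=30, tol=1e-8, max_restarts=50):
    """Minimal GMRES(m) sketch: each cycle minimizes ||b - A x|| over x0 + K_m(A, r0)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    for _ in range(max_restarts):
        r = b - A @ x
        beta = np.linalg.norm(r)
        if beta < tol:
            break
        V = np.zeros((n, m + 1))          # Arnoldi basis of the Krylov subspace
        H = np.zeros((m + 1, m))          # upper Hessenberg projection of A
        V[:, 0] = r / beta
        k = m
        for j in range(m):
            w = A @ V[:, j]
            for i in range(j + 1):        # modified Gram-Schmidt orthogonalization
                H[i, j] = V[:, i] @ w
                w = w - H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-14:       # happy breakdown: exact solution in the subspace
                k = j + 1
                break
            V[:, j + 1] = w / H[j + 1, j]
        # Small least-squares problem min_y || beta*e1 - H y ||, then x := x + V_k y
        e1 = np.zeros(k + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
        x = x + V[:, :k] @ y
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((80, 80)) + 20 * np.eye(80)   # nonsymmetric, well conditioned
    b = rng.standard_normal(80)
    x = gmres_restarted(A, b)
    print(np.linalg.norm(b - A @ x))
```

The restart parameter m trades memory and orthogonalization cost against convergence speed, which is the usual practical compromise with GMRES(m).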


IEEE Transactions on Computers | 1988

Topological properties of hypercubes

Youcef Saad; Martin H. Schultz

The n-dimensional hypercube is a highly concurrent loosely coupled multiprocessor based on the binary n-cube topology. Machines based on the hypercube topology have been advocated as ideal parallel architectures for their powerful interconnection features. The authors examine the hypercube from the graph-theory point of view and consider those features that make its connectivity so appealing. Among other things, they propose a theoretical characterization of the n-cube as a graph and show how to map various other topologies into a hypercube.
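
One of the classical mapping results alluded to above is that a ring of 2^n processors embeds in the n-cube with dilation 1 via the binary reflected Gray code. The short sketch below (illustrative helper names, not code from the paper) checks that consecutive ring nodes are mapped to hypercube nodes differing in exactly one bit.

```python
def gray(i: int) -> int:
    """Binary reflected Gray code of i."""
    return i ^ (i >> 1)

def ring_to_hypercube(n: int) -> list[int]:
    """Map a ring of 2**n nodes onto the n-cube: ring node k -> hypercube node gray(k)."""
    return [gray(k) for k in range(2 ** n)]

def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

if __name__ == "__main__":
    n = 4
    ring = ring_to_hypercube(n)
    # Consecutive ring nodes (including the wrap-around edge) land on adjacent cube vertices.
    assert all(hamming_distance(ring[k], ring[(k + 1) % len(ring)]) == 1
               for k in range(len(ring)))
    print(ring)
```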


SIAM Journal on Scientific and Statistical Computing | 1990

Hybrid Krylov methods for nonlinear systems of equations

Peter N. Brown; Youcef Saad

Several implementations of Newton-like iteration schemes based on Krylov subspace projection methods for solving nonlinear equations are considered. The simplest such class of methods is Newton's algorithm, in which a (linear) Krylov method is used to solve the Jacobian system approximately. A method in this class is referred to as a Newton–Krylov algorithm. To improve the global convergence properties of these basic algorithms, hybrid methods based on Powell's dogleg strategy are proposed, as well as linesearch backtracking procedures. The main advantage of the class of methods considered in this paper is that the Jacobian matrix is never needed explicitly.
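
The "Jacobian-free" aspect can be sketched compactly: a Krylov solver only needs products J(x) v, and these can be approximated by a directional finite difference of F. The following Python sketch (assuming SciPy's gmres and LinearOperator; the function names and the simple backtracking rule are illustrative, not the paper's exact algorithms) shows one way to realize a Newton–Krylov iteration without ever forming the Jacobian.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_krylov(F, x0, tol=1e-8, max_iter=50):
    """Sketch of a matrix-free Newton-Krylov iteration with backtracking.

    The Jacobian J(x) is never formed: J(x) @ v is approximated by a
    finite difference of F, which is all a Krylov solver like GMRES needs.
    """
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        eps = np.sqrt(np.finfo(float).eps) * max(1.0, np.linalg.norm(x))

        def jv(v):
            # Directional finite difference: J(x) v ~ (F(x + eps*v) - F(x)) / eps
            return (F(x + eps * v) - Fx) / eps

        J = LinearOperator((len(x), len(x)), matvec=jv)
        d, _ = gmres(J, -Fx)                 # approximate Newton direction
        t = 1.0                              # simple backtracking linesearch
        while np.linalg.norm(F(x + t * d)) > (1 - 1e-4 * t) * np.linalg.norm(Fx) and t > 1e-4:
            t *= 0.5
        x = x + t * d
    return x

# Example: solve x0^2 + x1^2 - 1 = 0, x0 - x1 = 0 (intersection of a circle and a line)
if __name__ == "__main__":
    F = lambda z: np.array([z[0] ** 2 + z[1] ** 2 - 1.0, z[0] - z[1]])
    print(newton_krylov(F, np.array([2.0, 0.5])))
```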


SIAM Journal on Scientific and Statistical Computing | 1989

Krylov subspace methods on supercomputers

Youcef Saad

This paper presents a short survey of recent research on Krylov subspace methods with emphasis on implementation on vector and parallel computers. Conjugate gradient methods have proven very useful on traditional scalar computers, and their popularity is likely to increase as three-dimensional models gain importance. A conservative approach to derive effective iterative techniques for supercomputers has been to find efficient parallel/vector implementations of the standard algorithms. The main source of difficulty in the incomplete factorization preconditionings is in the solution of the triangular systems at each step. A few approaches consisting of implementing efficient forward and backward triangular solutions are described in detail. Then polynomial preconditioning as an alternative to standard incomplete factorization techniques is discussed. Another efficient approach is to reorder the equations so as to improve the structure of the matrix to achieve better parallelism or vectorization. An overview of these ideas and others is given in this article, as well as an attempt to comment on their effectiveness or potential for different types of architectures.
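
As an illustration of the triangular-solve issue mentioned in this abstract, one standard remedy is level scheduling: rows of the sparse triangular factor that do not depend on one another are grouped into levels, and each level can be processed in parallel. The sketch below (an illustrative serial Python version with a simple row-wise sparse format; not code from the paper) computes the levels and performs a forward solve level by level.

```python
import numpy as np

def level_schedule(L_rows):
    """Group rows of a sparse lower triangular matrix into dependency levels.

    L_rows[i] is a list of (j, value) pairs with j <= i (diagonal included);
    rows in the same level have no dependencies on each other, so they could
    be solved simultaneously on a vector or parallel machine.
    """
    n = len(L_rows)
    level = [0] * n
    for i in range(n):
        dep = [level[j] for j, _ in L_rows[i] if j < i]
        level[i] = 1 + max(dep) if dep else 0
    levels = {}
    for i, lev in enumerate(level):
        levels.setdefault(lev, []).append(i)
    return [levels[k] for k in sorted(levels)]

def forward_solve_by_levels(L_rows, b):
    """Solve L x = b, sweeping one level at a time."""
    x = np.array(b, dtype=float)
    for rows in level_schedule(L_rows):
        for i in rows:                      # this inner loop is the parallel part
            s = sum(v * x[j] for j, v in L_rows[i] if j < i)
            diag = next(v for j, v in L_rows[i] if j == i)
            x[i] = (x[i] - s) / diag
    return x

if __name__ == "__main__":
    # L = [[2,0,0],[1,3,0],[0,1,4]] stored as rows of (column, value) pairs
    L_rows = [[(0, 2.0)], [(0, 1.0), (1, 3.0)], [(1, 1.0), (2, 4.0)]]
    b = [2.0, 5.0, 9.0]
    print(forward_solve_by_levels(L_rows, b))   # expect [1.0, 1.333..., 1.916...]
```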


SIAM Journal on Scientific and Statistical Computing | 1985

Practical Use of Polynomial Preconditionings for the Conjugate Gradient Method

Youcef Saad

This paper presents some practical ways of using polynomial preconditionings for solving large sparse linear systems of equations arising from discretizations of partial differential equations. For a symmetric positive definite matrix A these techniques are based on least squares polynomials on the interval [0, b], where b is the Gershgorin estimate of the largest eigenvalue. Therefore, as opposed to previous work in the field, there is no need for computing eigenvalues of A. We formulate a version of the conjugate gradient algorithm that is more suitable for parallel architectures and discuss the advantages of polynomial preconditioning in the context of these architectures.
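
To show what "applying a polynomial preconditioner" means operationally, the sketch below uses a Gershgorin bound b and a truncated Neumann-series polynomial as a simple stand-in for the least-squares polynomials of the paper; only matrix-vector products with A are needed, which is exactly what makes the approach attractive on parallel machines. All function names are illustrative.

```python
import numpy as np

def gershgorin_upper_bound(A):
    """Upper bound b on the spectrum of a symmetric matrix A (Gershgorin circles)."""
    A = np.asarray(A)
    return float(np.max(np.diag(A) + (np.abs(A).sum(axis=1) - np.abs(np.diag(A)))))

def poly_prec_apply(A, v, b_est, degree=8):
    """Apply a simple polynomial approximation of A^{-1} to v.

    Uses the truncated Neumann series A^{-1} ~ c * sum_k (I - c A)^k with
    c = 1/b_est, valid for SPD A whose spectrum lies in (0, b_est].  (The
    paper builds better, least-squares polynomials on [0, b]; this is only
    a simpler stand-in showing that p(A) v needs matrix-vector products alone.)
    """
    c = 1.0 / b_est
    z = c * v.copy()          # k = 0 term
    w = v.copy()
    for _ in range(degree):
        w = w - c * (A @ w)   # w <- (I - c A) w
        z = z + c * w         # accumulate c * (I - c A)^k v
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    B = rng.standard_normal((50, 50))
    A = B @ B.T + 50 * np.eye(50)          # SPD test matrix
    v = rng.standard_normal(50)
    b_est = gershgorin_upper_bound(A)
    z = poly_prec_apply(A, v, b_est, degree=20)
    print(np.linalg.norm(A @ z - v) / np.linalg.norm(v))   # quality of the approximate inverse
```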


Mathematics of Computation | 1984

Chebyshev acceleration techniques for solving nonsymmetric eigenvalue problems

Youcef Saad

The present paper deals with the problem of computing a few of the eigenvalues with largest (or smallest) real parts of a large sparse nonsymmetric matrix. We present a general acceleration technique based on Chebyshev polynomials and discuss its practical application to Arnoldi's method and the subspace iteration method. The resulting algorithms are compared with the classical ones in a few experiments which exhibit a sharp superiority of the Arnoldi–Chebyshev approach.
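
A minimal way to see the acceleration mechanism is the three-term Chebyshev recurrence used as a polynomial filter. The sketch below assumes, for simplicity, a real interval [a, b] enclosing the unwanted part of the spectrum (the paper works with ellipses in the complex plane for nonsymmetric matrices); eigencomponents outside the interval are amplified relative to those inside. Illustrative code, not the paper's.

```python
import numpy as np

def chebyshev_filter(A, x, a, b, degree):
    """Apply the degree-`degree` Chebyshev polynomial mapped to [a, b] to x.

    The polynomial stays bounded by 1 on [a, b] and grows fast outside it,
    so eigencomponents outside the interval are amplified relative to the
    rest, which is the basic mechanism behind Chebyshev acceleration.
    """
    e = (b - a) / 2.0            # half-width of the interval
    c = (b + a) / 2.0            # center of the interval
    y = (A @ x - c * x) / e      # degree-1 term
    for _ in range(2, degree + 1):
        y_new = 2.0 * (A @ y - c * y) / e - x   # three-term recurrence
        x, y = y, y_new
    return y

if __name__ == "__main__":
    # Diagonal test matrix: eigenvalues 1..10; damp everything in [0, 8].
    A = np.diag(np.arange(1.0, 11.0))
    v = np.ones(10)
    w = chebyshev_filter(A, v, 0.0, 8.0, degree=12)
    print(np.round(w / np.linalg.norm(w), 3))   # mass concentrates on eigenvalues 9 and 10
```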


Mathematics of Computation | 1985

Conjugate gradient-like algorithms for solving nonsymmetric linear systems

Youcef Saad; Martin H. Schultz

This paper presents a unified formulation of a class of conjugate gradient-like algorithms for solving nonsymmetric linear systems. The common framework is the Petrov–Galerkin method on Krylov subspaces. We discuss some practical points concerning the methods and point out some of the interrelations between them. In recent years, a large number of generalizations of the conjugate gradient and conjugate residual methods, which are very successful in solving symmetric positive-definite linear systems, have been proposed for solving nonsymmetric linear systems [3], [6], [5], [7], [12]. In this paper we present an abstract framework which includes most of these methods and many new ones. Our goal is to understand the relationships among the methods and to synthesize. Consider the general linear system ...
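
The Petrov–Galerkin framework mentioned above reduces, for one step, to the following: pick a search subspace spanned by the columns of V and a constraint subspace spanned by the columns of W, and force the new residual to be orthogonal to the latter. A small dense sketch (illustrative, not from the paper):

```python
import numpy as np

def projection_step(A, b, x0, V, W):
    """One oblique projection step: x = x0 + V y with residual b - A x orthogonal to range(W).

    Imposing W^T (b - A (x0 + V y)) = 0 gives the small system (W^T A V) y = W^T r0.
    Choosing W = V gives a Galerkin (FOM-like) step; W = A V gives a minimal-residual
    (GMRES-like) step.
    """
    r0 = b - A @ x0
    y = np.linalg.solve(W.T @ A @ V, W.T @ r0)
    return x0 + V @ y

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((6, 6)) + 6 * np.eye(6)
    b = rng.standard_normal(6)
    x0 = np.zeros(6)
    # Krylov basis of dimension 3 (not orthonormalized; acceptable for a tiny dense sketch)
    V = np.column_stack([b, A @ b, A @ (A @ b)])
    x = projection_step(A, b, x0, V, A @ V)   # minimal-residual choice W = A V
    print(np.linalg.norm(b - A @ x))
```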


SIAM Journal on Optimization | 1994

Convergence Theory of Nonlinear Newton–Krylov Algorithms

Peter N. Brown; Youcef Saad

This paper presents some convergence theory for nonlinear Krylov subspace methods. The basic idea of these methods, which have been described by the authors in an earlier paper, is to use variants of Newton’s iteration in conjunction with a Krylov subspace method for solving the Jacobian linear systems. These methods are variants of inexact Newton methods where the approximate Newton direction is taken from a subspace of small dimension. The main focus of this paper is to analyze these methods when they are combined with global strategies such as linesearch techniques and model trust region algorithms. Most of the convergence results are formulated for projection onto general subspaces rather than just Krylov subspaces.
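
A compact way to illustrate the trust-region side of the globalization discussed here is Powell's dogleg step on the local model 0.5*||F + J d||^2. The sketch below uses a small dense Jacobian and a deliberately crude radius update for brevity; in the Newton–Krylov setting of the paper, the Newton step would come from an approximate Krylov solve rather than a direct one. Names are illustrative.

```python
import numpy as np

def dogleg_step(F, J, delta):
    """Powell dogleg step for the local model m(d) = 0.5 * ||F + J d||^2.

    Combines the (Gauss-)Newton step with the steepest-descent (Cauchy) step,
    staying inside a trust region of radius delta.
    """
    d_newton = np.linalg.solve(J, -F)            # full Newton step
    if np.linalg.norm(d_newton) <= delta:
        return d_newton
    g = J.T @ F                                  # gradient of 0.5*||F(x+d)||^2 at d = 0
    d_cauchy = -(g @ g) / (np.linalg.norm(J @ g) ** 2) * g
    if np.linalg.norm(d_cauchy) >= delta:
        return -delta * g / np.linalg.norm(g)    # scaled steepest descent
    # Walk along the dogleg path from d_cauchy toward d_newton until ||d|| = delta.
    p = d_newton - d_cauchy
    a, b, c = p @ p, 2 * d_cauchy @ p, d_cauchy @ d_cauchy - delta ** 2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return d_cauchy + tau * p

if __name__ == "__main__":
    # Crude trust-region loop on F(x) = (x0^2 + x1^2 - 1, x0 - x1), started far from a root.
    x = np.array([3.0, -2.0])
    F = lambda z: np.array([z[0] ** 2 + z[1] ** 2 - 1.0, z[0] - z[1]])
    Jac = lambda z: np.array([[2 * z[0], 2 * z[1]], [1.0, -1.0]])
    delta = 1.0
    for _ in range(30):
        d = dogleg_step(F(x), Jac(x), delta)
        if np.linalg.norm(F(x + d)) < np.linalg.norm(F(x)):
            x = x + d
            if np.linalg.norm(d) > 0.8 * delta:
                delta *= 2.0                     # step hit the boundary: widen the region
        else:
            delta *= 0.25                        # step rejected: shrink the region
    print(x, np.linalg.norm(F(x)))
```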


Mathematics of Computation | 1987

On the Lanczos Method for Solving Symmetric Linear Systems with Several Right-Hand Sides

Youcef Saad



SIAM Journal on Scientific and Statistical Computing | 1984

Practical Use of Some Krylov Subspace Methods for Solving Indefinite and Nonsymmetric Linear Systems

Youcef Saad


Collaboration


Dive into Youcef Saad's collaborations.

Top Co-Authors

Tony F. Chan

Hong Kong University of Science and Technology


Peter N. Brown

Lawrence Livermore National Laboratory


Ilse C. F. Ipsen

North Carolina State University
