Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Daniel B. Szyld is active.

Publication


Featured research published by Daniel B. Szyld.


Numerical Linear Algebra With Applications | 2007

Recent computational developments in Krylov subspace methods for linear systems

Valeria Simoncini; Daniel B. Szyld

Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are methods specifically tailored to systems with special properties, such as special forms of symmetry, and those depending on one or more parameters.
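
For readers who want to experiment with one of the method classes surveyed here, the short Python sketch below runs restarted GMRES through SciPy for several restart lengths; the tridiagonal test matrix and all parameters are illustrative choices, not taken from the paper.

```python
# Minimal illustration of restarted GMRES: the restart length trades
# memory and orthogonalization cost against convergence speed.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 400
# Nonsymmetric tridiagonal test matrix (illustrative only).
A = sp.diags([-1.3, 2.0, -0.7], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

for restart in (5, 20, 80):
    its = {"k": 0}
    def cb(rk):                 # called once per inner iteration
        its["k"] += 1
    x, info = spla.gmres(A, b, restart=restart, maxiter=1000,
                         callback=cb, callback_type="pr_norm")
    print(f"restart={restart:3d}  iterations={its['k']:4d}  "
          f"residual={np.linalg.norm(b - A @ x):.2e}")
```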


Journal of Computational and Applied Mathematics | 2000

On asynchronous iterations

Andreas Frommer; Daniel B. Szyld

Asynchronous iterations arise naturally on parallel computers if one wants to minimize idle times. This paper reviews certain models of asynchronous iterations, using a common theoretical framework. The corresponding convergence theory and various domains of applications are presented. These include nonsingular linear systems, nonlinear systems, and initial value problems.
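
The following sketch is a serial toy simulation of an asynchronous (free-steering) iteration: at each step a randomly chosen subset of components of a Jacobi-type iteration is relaxed using whatever values are currently available. It models only the free-steering aspect, with no communication delays, and the diagonally dominant test matrix is an illustrative assumption.

```python
# Serial simulation of a free-steering Jacobi iteration: at each step a
# random subset of components is relaxed; every component is updated
# infinitely often (with probability 1), which is the key requirement
# in the convergence theory for asynchronous iterations.
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Strictly diagonally dominant matrix, so the iteration converges.
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = rng.standard_normal(n)
x_exact = np.linalg.solve(A, b)

x = np.zeros(n)
for step in range(4000):
    S = rng.choice(n, size=n // 4, replace=False)        # components relaxed now
    x[S] = (b[S] - A[S] @ x + A[S, S] * x[S]) / A[S, S]  # Jacobi-type relaxation
    if step % 500 == 0:
        print(f"step {step:4d}  error {np.linalg.norm(x - x_exact):.2e}")
```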


Numerische Mathematik | 1992

H-Splittings and two-stage iterative methods

Andreas Frommer; Daniel B. Szyld

Convergence of two-stage iterative methods for the solution of linear systems is studied. Convergence of the non-stationary method is shown if the number of inner iterations becomes sufficiently large. The R1-factor of the two-stage method is related to the spectral radius of the iteration matrix of the outer splitting. Convergence is further studied for splittings of H-matrices. These matrices are not necessarily monotone. Conditions on the splittings are given so that the two-stage method is convergent for any number of inner iterations.
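
As a concrete illustration of a two-stage method (not the paper's exact setting), the sketch below uses the tridiagonal part of an M-matrix as the outer splitting and a few point-Jacobi sweeps as the inner solver, and runs the iteration for several inner-iteration counts s; the test matrix is an illustrative assumption.

```python
# Two-stage iterative method: outer splitting A = M - N with M the
# tridiagonal part of A; the inner systems M y = N x_k + b are solved
# only approximately, by s point-Jacobi sweeps.
import numpy as np

n = 120
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) \
    - 0.5 * np.eye(n, k=5) - 0.5 * np.eye(n, k=-5)
b = np.ones(n)
x_exact = np.linalg.solve(A, b)

M = np.triu(np.tril(A, 1), -1)      # tridiagonal part: outer splitting A = M - N
N = M - A
D = np.diag(np.diag(M))             # inner splitting M = D - (D - M): point Jacobi

for s in (1, 3, 10):
    x = np.zeros(n)
    for _ in range(60):             # outer iterations
        rhs = N @ x + b
        y = x.copy()
        for _ in range(s):          # s inner Jacobi sweeps for M y = rhs
            y = np.linalg.solve(D, rhs - M @ y + D @ y)
        x = y
    print(f"inner iterations s={s:2d}  error {np.linalg.norm(x - x_exact):.2e}")
```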


SIAM Journal on Scientific Computing | 2003

Theory of Inexact Krylov Subspace Methods and Applications to Scientific Computing

Valeria Simoncini; Daniel B. Szyld

We provide a general framework for the understanding of inexact Krylov subspace methods for the solution of symmetric and nonsymmetric linear systems of equations, as well as for certain eigenvalue calculations. This framework allows us to explain the empirical results reported in a series of CERFACS technical reports by Bouras, Fraysse, and Giraud in 2000. Furthermore, assuming exact arithmetic, our analysis can be used to produce computable criteria to bound the inexactness of the matrix-vector multiplication in such a way as to maintain the convergence of the Krylov subspace method. The theory developed is applied to several problems including the solution of Schur complement systems, linear systems which depend on a parameter, and eigenvalue problems. Numerical experiments for some of these scientific applications are reported.
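
The toy example below mimics the relaxation strategy in an empirical way: the matrix-vector product handed to SciPy's GMRES is perturbed by noise whose size grows roughly like a fixed epsilon divided by the current residual norm. The noise model, tolerance, and test matrix are illustrative assumptions, not the computable criteria derived in the paper.

```python
# Toy illustration of relaxed (inexact) matrix-vector products in a Krylov
# method: the perturbation of A @ v is allowed to grow as the residual
# norm reported by GMRES decreases, roughly like eps / ||r_k||.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(1)
n = 500
A = sp.diags([-1.0, 3.0, -1.2], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

state = {"rnorm": 1.0, "products": 0}
eps = 1e-10                                   # illustrative inexactness level

def inexact_matvec(v):
    state["products"] += 1
    noise = rng.standard_normal(n)
    noise *= (eps / max(state["rnorm"], eps)) / np.linalg.norm(noise)
    return A @ v + noise                      # perturbation ~ eps / ||r_k||

def track(rk):                                # GMRES reports the residual norm here
    state["rnorm"] = rk

Aop = spla.LinearOperator((n, n), matvec=inexact_matvec)
x, info = spla.gmres(Aop, b, restart=50, maxiter=20,
                     callback=track, callback_type="pr_norm")
print("matvecs:", state["products"],
      " true residual:", np.linalg.norm(b - A @ x))
```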


SIAM Journal on Scientific Computing | 1999

Orderings for Incomplete Factorization Preconditioning of Nonsymmetric Problems

Michele Benzi; Daniel B. Szyld; Arno C. N. van Duin

Numerical experiments are presented whereby the effect of reorderings on the convergence of preconditioned Krylov subspace methods for the solution of nonsymmetric linear systems is shown. The preconditioners used in this study are different variants of incomplete factorizations. It is shown that certain reorderings for direct methods, such as reverse Cuthill-McKee, can be very beneficial. The benefit can be seen in the reduction of the number of iterations and also in measuring the deviation of the preconditioned operator from the identity.
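
A rough, self-contained version of this kind of experiment is sketched below: GMRES preconditioned by SciPy's incomplete LU (spilu) is applied to a randomly permuted convection-diffusion-type matrix, with and without a reverse Cuthill-McKee reordering, and the iteration counts are compared. The matrix, drop tolerance, and fill factor are illustrative, and the observed benefit depends on these choices.

```python
# Compare GMRES iteration counts with an incomplete-LU preconditioner,
# with and without a reverse Cuthill-McKee (RCM) reordering.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla
from scipy.sparse.csgraph import reverse_cuthill_mckee

rng = np.random.default_rng(2)
n = 900
grid = int(np.sqrt(n))
# 2-D diffusion stencil plus a nonsymmetric term, then randomly permuted
# so that the original banded structure is hidden.
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(grid, grid))
A = (sp.kronsum(T, T) + 0.4 * sp.diags([1], [1], shape=(n, n))).tocsc()
perm0 = rng.permutation(n)
A = A[perm0][:, perm0].tocsc()
b = np.ones(n)

def gmres_with_ilu(A, b):
    # permc_spec="NATURAL" keeps the given ordering, so the effect of the
    # reordering on the incomplete factorization is actually visible.
    ilu = spla.spilu(A.tocsc(), drop_tol=1e-3, fill_factor=2.0,
                     permc_spec="NATURAL")
    M = spla.LinearOperator(A.shape, ilu.solve)
    its = {"k": 0}
    def cb(rk): its["k"] += 1
    x, info = spla.gmres(A, b, M=M, restart=50, maxiter=20,
                         callback=cb, callback_type="pr_norm")
    return its["k"]

p = reverse_cuthill_mckee(A, symmetric_mode=False)
A_rcm = A[p][:, p].tocsc()
print("iterations, random ordering :", gmres_with_ilu(A, b))
print("iterations, RCM reordering  :", gmres_with_ilu(A_rcm, b[p]))
```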


Numerische Mathematik | 1990

Convergence of nested classical iterative methods for linear systems

Paul J. Lanzkron; Donald J. Rose; Daniel B. Szyld

Classical iterative methods for the solution of algebraic linear systems of equations proceed by solving at each step a simpler system of equations. When this system is itself solved by an (inner) iterative method, the global method is called a two-stage iterative method. If this process is repeated, then the resulting method is called a nested iterative method. We study the convergence of such methods and present conditions on the splittings corresponding to the iterative methods to guarantee convergence for any number of inner iterations. We also show that under the conditions presented, the spectral radii of the global iteration matrices decrease when the number of inner iterations increases. The proof uses a new comparison theorem for weak regular splittings. We extend our results to larger classes of iterative methods, which include iterative block Gauss-Seidel. We develop a theory for the concatenation of such iterative methods. This concatenation appears when different numbers of inner iterations are performed at each outer step. We also analyze block methods, where different numbers of inner iterations are performed for different diagonal blocks.
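
The monotonicity statement (spectral radii decrease as the number of inner iterations grows) can be checked numerically. The sketch below builds the global iteration matrix T_s for a small two-stage example, with the tridiagonal part of an M-matrix as the outer splitting and point Jacobi as the inner splitting, and prints rho(T_s) for increasing s; the test matrix is an illustrative assumption.

```python
# For a two-stage method with outer splitting A = M - N and inner splitting
# M = F - G, the global iteration matrix for s inner steps is
#     T_s = (F^{-1}G)^s + (I + F^{-1}G + ... + (F^{-1}G)^{s-1}) F^{-1} N,
# and its spectral radius decreases toward rho(M^{-1}N) as s grows.
import numpy as np

n = 60
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) - 0.8 * np.eye(n, k=-3)
M = np.triu(np.tril(A, 1), -1)          # outer splitting: tridiagonal part
N = M - A
F = np.diag(np.diag(M))                 # inner splitting: point Jacobi on M
G = F - M

rho = lambda T: max(abs(np.linalg.eigvals(T)))
P = np.linalg.solve(F, G)               # F^{-1} G
Finv_N = np.linalg.solve(F, N)

for s in (1, 2, 4, 8, 16):
    S = sum(np.linalg.matrix_power(P, j) for j in range(s))
    T_s = np.linalg.matrix_power(P, s) + S @ Finv_N
    print(f"s={s:2d}  rho(T_s) = {rho(T_s):.4f}")
print(f"limit rho(M^{{-1}}N) = {rho(np.linalg.solve(M, N)):.4f}")
```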


SIAM Journal on Numerical Analysis | 2002

Flexible Inner-Outer Krylov Subspace Methods

Valeria Simoncini; Daniel B. Szyld

Flexible Krylov methods refer to a class of methods in which the preconditioner is allowed to change from one step to the next. Given a Krylov subspace method, such as CG, GMRES, or QMR, for the solution of a linear system Ax = b, instead of having a fixed preconditioner M and the (right) preconditioned equation AM^{-1}y = b (Mx = y), one may have a different matrix, say Mk, at each step. In this paper, the case where the preconditioner itself is a Krylov subspace method is studied. There are several papers in the literature where such a situation is presented and numerical examples given. A general theory is provided encompassing many of these cases, including truncated methods. The overall space where the solution is approximated is no longer a Krylov subspace but a subspace of a larger Krylov space. We show how this subspace keeps growing as the outer iteration progresses, thus providing a convergence theory for these inner-outer methods. Numerical tests illustrate some important implementation aspects that make the discussed inner-outer methods very appealing in practical circumstances.
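
A bare-bones flexible GMRES (FGMRES) in which the "preconditioner" applied at each outer step is itself a few GMRES iterations is sketched below. It is an illustration of the inner-outer setting analyzed in the paper, not the authors' code, and it omits restarting and breakdown handling; the 2-D Laplacian test problem and the inner iteration count are illustrative assumptions.

```python
# Compact flexible GMRES: the preconditioned directions Z must be stored
# because the preconditioner (here, 10 inner GMRES steps) changes with j.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def fgmres(A, b, apply_prec, maxiter=100, tol=1e-8):
    n = b.shape[0]
    r0 = b.copy()                          # zero initial guess
    beta = np.linalg.norm(r0)
    V = np.zeros((n, maxiter + 1))
    Z = np.zeros((n, maxiter))             # preconditioned directions
    H = np.zeros((maxiter + 1, maxiter))
    V[:, 0] = r0 / beta
    for j in range(maxiter):
        Z[:, j] = apply_prec(V[:, j])      # preconditioner may change with j
        w = A @ Z[:, j]
        for i in range(j + 1):             # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
        e1 = np.zeros(j + 2); e1[0] = beta
        y = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)[0]
        rnorm = np.linalg.norm(H[:j + 2, :j + 1] @ y - e1)
        if rnorm < tol * beta:
            break
    return Z[:, :j + 1] @ y, j + 1

grid = 30
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(grid, grid))
A = sp.kronsum(T, T).tocsr()               # 2-D Laplacian, 900 unknowns
b = np.ones(A.shape[0])

def inner_gmres(v):                        # inner solver: 10 unpreconditioned
    z, _ = spla.gmres(A, v, restart=10, maxiter=1)   # GMRES steps
    return z

x, its = fgmres(A, b, inner_gmres)
print("outer iterations:", its, " residual:", np.linalg.norm(b - A @ x))
```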


Numerische Mathematik | 1999

Weighted max norms, splittings, and overlapping additive Schwarz iterations

Andreas Frommer; Daniel B. Szyld

Weighted max-norm bounds are obtained for Algebraic Additive Schwarz Iterations with overlapping blocks for the solution of Ax = b, when the coefficient matrix A is an M-matrix. The case of inexact local solvers is also covered. These bounds are analogous to those that exist using A-norms when the matrix A is symmetric positive definite. A new theorem concerning P-regular splittings is presented which provides a useful tool for the A-norm bounds. Furthermore, a theory of splittings is developed to represent Algebraic Additive Schwarz Iterations. This representation makes a connection with multisplitting methods. With this representation, and using a comparison theorem, it is shown that a coarse grid correction improves the convergence of Additive Schwarz Iterations when measured in weighted max norm.
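
The sketch below builds a one-level algebraic additive Schwarz preconditioner from overlapping blocks of a 1-D M-matrix and uses it inside conjugate gradients. Block sizes and overlap are illustrative assumptions, and the stationary-iteration and weighted max-norm analysis of the paper is not reproduced here.

```python
# One-level algebraic additive Schwarz with overlapping blocks, used as a
# preconditioner for CG on a 1-D M-matrix.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, nblocks, overlap = 400, 8, 10
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Overlapping index sets covering {0, ..., n-1}.
size = n // nblocks
subdomains = []
for k in range(nblocks):
    lo = max(0, k * size - overlap)
    hi = min(n, (k + 1) * size + overlap)
    subdomains.append(np.arange(lo, hi))
local_lu = [spla.splu(sp.csc_matrix(A[idx][:, idx])) for idx in subdomains]

def additive_schwarz(r):
    z = np.zeros_like(r)
    for idx, lu in zip(subdomains, local_lu):
        z[idx] += lu.solve(r[idx])        # sum of local solves R_k^T A_k^{-1} R_k r
    return z

M = spla.LinearOperator((n, n), matvec=additive_schwarz)
its = {"k": 0}
x, info = spla.cg(A, b, M=M, maxiter=500,
                  callback=lambda xk: its.__setitem__("k", its["k"] + 1))
print("CG iterations with overlapping additive Schwarz:", its["k"],
      " residual:", np.linalg.norm(b - A @ x))
```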


Numerische Mathematik | 1990

Comparison theorems for weak splittings of bounded operators

Ivo Marek; Daniel B. Szyld

Comparison theorems for weak splittings of bounded operators are presented. These theorems extend the classical comparison theorem for regular splittings of matrices by Varga, the less known result by Woźnicki, and the recent results for regular and weak regular splittings of matrices by Neumann and Plemmons, Elsner, and Lanzkron, Rose and Szyld. The hypotheses of the theorems presented here are weaker and the theorems hold for general Banach spaces and rather general cones. Hypotheses are given which provide strict inequalities for the comparisons. It is also shown that the comparison theorem by Alefeld and Volkmann applies exclusively to monotone sequences of iterates and is not equivalent to the comparison of the spectral radius of the iteration operators.
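
A finite-dimensional special case of such comparisons is easy to check numerically: for an M-matrix, the Jacobi and Gauss-Seidel splittings are both regular, their N parts are ordered entrywise, and the classical Varga-type comparison theorem then orders the spectral radii, as verified by the sketch below. The matrix is an illustrative assumption.

```python
# Illustration of the matrix case generalized by the paper: for a regular
# splitting comparison N_GS <= N_J (entrywise) of an M-matrix,
# rho(M_GS^{-1} N_GS) <= rho(M_J^{-1} N_J).
import numpy as np

n = 50
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) - 0.6 * np.eye(n, k=2)

D = np.diag(np.diag(A))
L = -np.tril(A, -1)                 # strictly lower part (nonnegative)
U = -np.triu(A, 1)                  # strictly upper part (nonnegative)

rho = lambda T: max(abs(np.linalg.eigvals(T)))
rho_jacobi = rho(np.linalg.solve(D, L + U))        # A = D - (L + U)
rho_gs = rho(np.linalg.solve(D - L, U))            # A = (D - L) - U
print(f"rho(Jacobi)       = {rho_jacobi:.4f}")
print(f"rho(Gauss-Seidel) = {rho_gs:.4f}  (smaller, as the comparison theorem predicts)")
```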


SIAM Journal on Matrix Analysis and Applications | 1992

Two-stage and multisplitting methods for the parallel solution of linear systems

Daniel B. Szyld; Mark T. Jones

Two-stage and multisplitting methods for the parallel solution of linear systems are studied. A two-stage multisplitting method is presented that reduces to each of the others in particular cases. Conditions for its convergence are given. In the particular case of a multisplitting method related to block Jacobi, it is shown that it is equivalent to a two-stage method with only one inner iteration per outer iteration. A fixed number of iterations of this method, say, p, is compared with a two-stage method with p inner iterations. The asymptotic rate of convergence of the first method is faster, but, depending on the structure of the matrix and the parallel architecture, it takes more time to converge. This is illustrated with numerical experiments.
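
The sketch below runs a small weighted multisplitting iteration of this flavor: two overlapping block-diagonal splittings of an M-matrix are applied and their results combined with nonnegative diagonal weights summing to the identity. The serial loop only simulates the parallel update, and the splittings and weights are illustrative choices, not those of the paper's experiments.

```python
# Weighted multisplitting iteration: two block-diagonal splittings
# A = M_l - N_l are applied "in parallel" and combined with diagonal
# weights E_1 + E_2 = I.
import numpy as np

n = 100
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) - 0.5 * np.eye(n, k=4)
b = np.ones(n)
x_exact = np.linalg.solve(A, b)

def block_diag_part(A, blocks):
    M = np.zeros_like(A)
    for idx in blocks:
        M[np.ix_(idx, idx)] = A[np.ix_(idx, idx)]
    return M

blocks1 = [np.arange(0, 60), np.arange(60, 100)]    # splitting 1
blocks2 = [np.arange(0, 40), np.arange(40, 100)]    # splitting 2 (different cut)
M1, M2 = block_diag_part(A, blocks1), block_diag_part(A, blocks2)
N1, N2 = M1 - A, M2 - A
E1 = np.diag(np.concatenate([np.full(50, 1.0), np.full(50, 0.0)]))
E2 = np.eye(n) - E1                                  # weights: E1 + E2 = I

x = np.zeros(n)
for k in range(200):
    x = E1 @ np.linalg.solve(M1, N1 @ x + b) + E2 @ np.linalg.solve(M2, N2 @ x + b)
print("multisplitting error after 200 iterations:", np.linalg.norm(x - x_exact))
```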

Collaboration


Dive into Daniel B. Szyld's collaborations.

Top Co-Authors

Ivo Marek, Czech Technical University in Prague

Fei Xue, University of Louisiana at Lafayette

Reinhard Nabben, Technical University of Berlin

Rafael Bru, Polytechnic University of Valencia