Andreas Frommer
Karlsruhe Institute of Technology
Publications
Featured research published by Andreas Frommer.
Journal of Computational and Applied Mathematics | 2000
Andreas Frommer; Daniel B. Szyld
Asynchronous iterations arise naturally on parallel computers if one wants to minimize idle times. This paper reviews certain models of asynchronous iterations, using a common theoretical framework. The corresponding convergence theory and various domains of applications are presented. These include nonsingular linear systems, nonlinear systems, and initial value problems.
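The asynchronous model surveyed above can be illustrated with a small toy (an illustrative sketch, not the paper's framework): worker threads each own a block of components of the linear fixed-point iteration x = Mx + c and keep updating them with whatever values the other workers have most recently written, with no synchronization barrier. All names here are hypothetical.

```python
import threading
import numpy as np

def async_fixed_point(M, c, n_workers=2, tol=1e-12):
    """Toy asynchronous (chaotic) iteration for x = M x + c."""
    n = len(c)
    x = np.zeros(n)  # shared iterate, read and written concurrently
    blocks = np.array_split(np.arange(n), n_workers)

    def worker(block):
        # Sweep this block until the *global* residual is small; reads of
        # other components use whatever values are currently in x.
        while np.linalg.norm(x - (M @ x + c)) > tol:
            for i in block:
                x[i] = M[i] @ x + c[i]

    threads = [threading.Thread(target=worker, args=(b,)) for b in blocks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return x
```

As in the classical theory, this sketch can only be expected to converge when the spectral radius of |M| is below one; the point is that no worker ever waits for the others to finish a sweep.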
Numerische Mathematik | 1989
Andreas Frommer
Summary: Linear multisplitting methods are known as parallel iterative methods for solving a linear system Ax = b. We extend the idea of multisplittings to the problem of solving a nonlinear system of equations F(x) = 0. Our nonlinear multisplittings are based on several nonlinear splittings of the function F. In a parallel computing environment, each processor would have to calculate the exact solution of an individual nonlinear system belonging to its nonlinear multisplitting, and these solutions are combined to yield the next iterate. Although the individual systems are usually much less involved than the original system, the exact solutions will in general not be available. Therefore, we consider important variants where the exact solutions of the individual systems are approximated by some standard method such as Newton's method. Several methods proposed in the literature may be regarded as special nonlinear multisplitting methods. As an application of our systematic approach we present a local convergence analysis of the nonlinear multisplitting methods and their variants. One result is that the local convergence of these methods is determined by an induced linear multisplitting of the Jacobian of F.
parallel computing | 1990
U. Block; Andreas Frommer; Günter Mayer
Abstract: We introduce the concept of a block colouring, which we use to obtain parallel versions of the SOR method. Block colourings are particularly well suited for banded linear systems and finite difference discretizations. We show that on a local memory parallel computer, block colour SOR often induces less communication overhead than the traditional multicolour SOR method. This is confirmed by numerical experiments with two simple examples on a 64-processor binary tree computer.
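The simplest instance of colouring-based parallel SOR is the classical two-colour (red-black) sweep for a 1-D finite difference discretization: points of one colour depend only on points of the other colour, so each colour class can be updated simultaneously. Block colourings generalise this to blocks of contiguous unknowns. A minimal sketch of the two-colour case (illustrative only, not the paper's code):

```python
import numpy as np

def redblack_sor(f, h, omega=1.5, sweeps=200):
    """Red-black SOR for -u_{i-1} + 2 u_i - u_{i+1} = h^2 f_i, u_0 = u_{n+1} = 0."""
    n = len(f)
    u = np.zeros(n + 2)            # interior points 1..n, homogeneous boundary
    rhs = h * h * f
    for _ in range(sweeps):
        for colour in (1, 2):      # odd-indexed points, then even-indexed points
            idx = np.arange(colour, n + 1, 2)
            # Gauss-Seidel value; neighbours idx-1, idx+1 are the other colour,
            # so all points of one colour can be updated in parallel.
            gs = 0.5 * (u[idx - 1] + u[idx + 1] + rhs[idx - 1])
            u[idx] = (1 - omega) * u[idx] + omega * gs
    return u[1:-1]
```

Each colour update is a vectorized (and hence parallelizable) operation; on a distributed-memory machine only the colour-boundary values need to be communicated between the two half-sweeps.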
Numerische Mathematik | 1988
Andreas Frommer
Summary: Applying Newton's method to a particular system of nonlinear equations, we derive methods for the simultaneous computation of all zeros of generalized polynomials. These generalized polynomials are from a function space satisfying a condition similar to Haar's condition. By this approach we bring together recent methods for trigonometric and exponential polynomials and a well-known method for ordinary polynomials. The quadratic convergence of these methods is an immediate consequence of our approach and need not be proved explicitly. Moreover, our approach yields interesting new methods for ordinary, trigonometric and exponential polynomials, as well as methods for other functions occurring in approximation theory.
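For ordinary polynomials, the best-known method of this simultaneous type is the Durand-Kerner (Weierstrass) iteration, which can be derived by applying Newton's method to the system relating a polynomial's coefficients to its roots. A minimal sketch (illustrative, not taken from the paper; the starting points follow the usual recommendation of powers of a fixed complex number):

```python
import numpy as np

def durand_kerner(coeffs, iters=100):
    """Simultaneously approximate all roots of a monic polynomial.

    coeffs: coefficients, highest degree first, leading coefficient 1.
    """
    n = len(coeffs) - 1
    p = np.poly1d(coeffs)
    z = 0.4 + 0.9j
    x = z ** np.arange(n)          # distinct, non-symmetric starting points
    for _ in range(iters):
        for i in range(n):
            # Weierstrass correction: p(x_i) / prod_{j != i} (x_i - x_j)
            denom = np.prod(x[i] - np.delete(x, i))
            x[i] = x[i] - p(x[i]) / denom
    return x
```

All n approximations are updated in each pass, so the method lends itself naturally to the parallel, all-zeros-at-once setting the abstract describes.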
Numerical Functional Analysis and Optimization | 1991
Andreas Frommer
We extend the idea of asynchronous iterations to self-mappings of product spaces with infinitely many components. In addition to giving a rather general convergence theorem, we study in some detail the case of isotone and isotonically decomposable mappings in partially ordered spaces. In particular, we obtain relationships between asynchronous iterations and the total step method, as well as results on enclosures for fixed points. These results appear to be new, even for mappings defined on a product space with only finitely many components.
Journal of Computational and Applied Mathematics | 1991
Andreas Frommer
Abstract: We introduce a concept of generalized diagonal dominance for nonlinear functions. As in the linear case, this brings together several apparently different classes of nonlinear functions, such as strictly diagonally dominant functions and certain M-functions. With our concept we easily obtain a quite far-reaching result on the global convergence of asynchronous iterative methods for finding zeros of nonlinear functions. Special cases include some known and several new convergence results for particular iterative methods such as the nonlinear JOR, SOR, and SSOR methods.
Journal of Computational and Applied Mathematics | 1995
Andreas Frommer; Hartmut Schwandt
We consider parallel methods based on interval arithmetic for enclosing solutions of nonlinear systems of equations, where processors are allowed to proceed asynchronously. We present a general study of the convergence of asynchronous iterations in interval spaces and apply the results to a variety of methods for nonlinear equations. The convergence results turn out to be very similar to those known for synchronous methods. Several practical examples on shared memory architectures are included. The asynchronous methods sometimes perform substantially better than their synchronous counterparts.
Numerische Mathematik | 1987
Andreas Frommer
Summary: In many cases when Newton's method, applied to a nonlinear system F(x) = 0, produces a monotonically decreasing sequence of iterates, Brown's method converges monotonically, too. We compare the iterates of Brown's and Newton's methods in these monotone cases with respect to the natural partial ordering. It turns out that in most of the cases arising in applications, Brown's method then produces "better" iterates than Newton's method.
Computing | 1989
Andreas Frommer; Günter Mayer
For some systems of nonlinear equations F(x) = 0 we derive an algorithm which iteratively constructs tight lower and upper bounds for the zeros of F. The algorithm is based on a multisplitting of certain matrices, thus exhibiting natural parallelism. We prove criteria for the convergence of the bounds towards the zeros, and we investigate the speed of convergence.
Computing | 1990
Andreas Frommer; Günter Mayer
We consider modifications of the interval Newton method which combine two ideas: reusing the same evaluation of the Jacobian several (say s) times, and approximately solving the Newton equation by some 'linear' iterative process. We show in particular that the R-order of these methods may become s + 1. We illustrate our results with a numerical example.
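A scalar toy version of the "reuse the derivative s times" idea can make the scheme concrete. The sketch below (an illustration under simplifying assumptions, not the algorithm of the paper) encloses a zero of an increasing function f on [lo, hi]: one derivative-enclosure evaluation is reused for several Newton-type contraction steps, each intersecting the Newton image with the current interval.

```python
def interval_newton(f, df_enclosure, lo, hi, reuse=3, steps=20):
    """Enclose the zero of an increasing f in [lo, hi].

    df_enclosure(a, b) must return (dlo, dhi) with 0 < dlo <= f'(x) <= dhi
    on [a, b].  Each outer step evaluates the derivative enclosure once
    and reuses it `reuse` times (the stale, wider enclosure stays valid
    as the interval shrinks).
    """
    for _ in range(steps):
        dlo, dhi = df_enclosure(lo, hi)       # one derivative evaluation ...
        for _ in range(reuse):                # ... reused several times
            mid = 0.5 * (lo + hi)
            fm = f(mid)
            # Newton image N = mid - f(mid) / [dlo, dhi], then intersect.
            if fm > 0:
                cand_lo, cand_hi = mid - fm / dlo, mid - fm / dhi
            else:
                cand_lo, cand_hi = mid - fm / dhi, mid - fm / dlo
            lo, hi = max(lo, cand_lo), min(hi, cand_hi)
    return lo, hi
```

Because the zero stays inside every Newton image, the bounds tighten monotonically; refreshing the derivative enclosure less often trades per-step cost against the attainable order of convergence, which is the trade-off the abstract quantifies.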