Matthias Hocks
Karlsruhe Institute of Technology
Publication
Featured research published by Matthias Hocks.
Archive | 1995
Ulrich W. Kulisch; Rolf Hammer; Matthias Hocks; Dietmar Ratz
One of the most important tasks in scientific computing is the problem of finding zeros (or roots) of nonlinear functions. In classical numerical analysis, root-finding methods for nonlinear functions begin with an approximation and apply an iterative method (such as Newton’s or Halley’s methods), which hopefully improves the approximation. It is a myth that no numerical algorithm is able to compute all zeros of a nonlinear equation with guaranteed error bounds, or even more, that no method is able to give concrete information about the existence and uniqueness of solutions of such a problem.
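For illustration, one standard device behind such verified root-finding is the interval Newton operator (a generic formulation, not necessarily the exact variant used in this chapter). For a continuously differentiable f on an interval X with midpoint m(X) and 0 \notin f'(X), define

N(X) = m(X) - \frac{f(m(X))}{f'(X)}.

Every zero of f in X also lies in N(X) \cap X; if N(X) \subseteq X, then X contains exactly one zero of f, and if N(X) \cap X = \emptyset, then X contains none. Iterating X \leftarrow N(X) \cap X therefore produces enclosures with guaranteed error bounds.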
Archive | 1995
Ulrich W. Kulisch; Rolf Hammer; Matthias Hocks; Dietmar Ratz
We consider the complex polynomial p: C → C defined by

p(z) = \sum\limits_{i=0}^{n} p_i z^i, \quad p_i \in \mathbb{C}, \quad i = 0, \ldots, n, \quad p_n \ne 0. \qquad (9.1)

The Fundamental Theorem of Algebra asserts that this polynomial has n zeros, counted by multiplicity. Finding these roots is a non-trivial problem in numerical mathematics. Most algorithms deliver only approximations of the exact zeros, without any or with only weak statements concerning the accuracy.
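As a small, self-contained illustration (not part of the book's verified algorithms), the following C++ sketch evaluates such a polynomial at a point by Horner's scheme with ordinary std::complex arithmetic; the example polynomial z^2 + 1 and all names are chosen for this sketch only.

#include <complex>
#include <iostream>
#include <vector>

// Evaluate p(z) = p_0 + p_1 z + ... + p_n z^n by Horner's scheme.
// coeff[i] holds p_i; plain floating-point arithmetic, so the result
// carries no guaranteed error bound.
std::complex<double> horner(const std::vector<std::complex<double>>& coeff,
                            std::complex<double> z) {
    std::complex<double> result(0.0, 0.0);
    for (auto it = coeff.rbegin(); it != coeff.rend(); ++it)
        result = result * z + *it;
    return result;
}

int main() {
    // p(z) = z^2 + 1, whose exact zeros are +i and -i.
    std::vector<std::complex<double>> p = { {1.0, 0.0}, {0.0, 0.0}, {1.0, 0.0} };
    std::cout << horner(p, {0.0, 1.0}) << '\n';   // prints (0,0) up to rounding
    return 0;
}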
Archive | 1993
Ulrich W. Kulisch; Rolf Hammer; Dietmar Ratz; Matthias Hocks
The evaluation of arithmetic expressions using floating-point arithmetic may lead to unpredictable results due to an accumulation of roundoff errors. As an example, the evaluation of x + 1 - x for x > 10^20 using the standard floating-point format on almost every digital computer yields the wrong result 0. Since the evaluation of arithmetic expressions is a basic task in digital computing, we should have a method to evaluate a given expression to an arbitrary accuracy. We will develop such a method for real expressions composed of the operations +, -, ·, /, and ↑, where ↑ denotes exponentiation by an integer. The method is based on the principle of iterative refinement as discussed in Section 3.8.
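A minimal sketch of the cancellation effect described above, assuming IEEE 754 double precision:

#include <cstdio>

int main() {
    double x = 1.0e20;
    // The summand 1 lies far below the spacing of floating-point numbers
    // near 1e20, so x + 1.0 rounds back to x and the difference becomes 0
    // instead of the exact value 1.
    std::printf("%g\n", (x + 1.0) - x);   // prints 0
    return 0;
}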
Archive | 1993
Ulrich W. Kulisch; Rolf Hammer; Dietmar Ratz; Matthias Hocks
In this chapter, we present a brief review of the C-XSC library, a C++ class library for eXtended Scientific Computation. For a complete library reference and examples, refer to [49].
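A minimal usage sketch, assuming the interval class, header name, and stream output described in the C-XSC documentation (consult [49] for the authoritative interface; details may differ between library versions):

#include <iostream>
#include "interval.hpp"   // C-XSC interval type (header name assumed as above)

using namespace cxsc;

int main() {
    interval x(1.0, 2.0);           // the machine interval [1, 2]
    interval two(2.0);              // point interval containing exactly 2
    interval y = x * x - two * x;   // enclosure of { t*t - 2t : t in [1, 2] }
    std::cout << "enclosure: " << y << std::endl;
    return 0;
}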
Archive | 1995
Ulrich W. Kulisch; Rolf Hammer; Matthias Hocks; Dietmar Ratz
In Chapter 6, we considered the problem of finding zeros (or roots) of nonlinear functions of a single variable. Now, we consider its generalization, the problem of finding the solution vectors of a system of nonlinear equations. We give a method for finding all solutions of a nonlinear system of equations f(x) = 0 for a continuously differentiable function f: ℝn→ ℝn in a given interval vector (box). Our method computes close bounds on the solution vectors, and it delivers information about existence and uniqueness of the computed solutions. The method we present is a variant of the interval Gauss-Seidel method based on the method of Hansen and Sengupta [3], [32], and a modification of Ratz [79]. Our method makes use of the extended interval operations defined in Section 3.3.
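For orientation, one common formulation of a single interval Gauss-Seidel step (the Hansen-Sengupta variant used here may differ in details) is the following. With the current box [x], its midpoint c = m([x]), and an interval enclosure A of the Jacobian f'([x]), each component is updated by

{[x]_i}' := \left( c_i - \frac{ f_i(c) + \sum\limits_{j<i} A_{ij}\left({[x]_j}' - c_j\right) + \sum\limits_{j>i} A_{ij}\left([x]_j - c_j\right) }{ A_{ii} } \right) \cap [x]_i, \quad i = 1, \ldots, n,

where divisions with 0 \in A_{ii} are handled by the extended interval operations mentioned above and may split the box. If a component becomes empty, [x] contains no solution; if the updated box lies in the interior of [x] and no extended division was needed, existence and uniqueness of a solution in [x] follow under the usual regularity assumptions.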
Archive | 2011
Rolf Hammer; Matthias Hocks; Ulrich W. Kulisch; Dietmar Ratz
Finding the solution of a linear system of equations is one of the basic problems in numerical algebra. We will develop a verification algorithm for square systems with a full matrix, based on a Newton-like method for an equivalent fixed-point problem.
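A standard form of such a verification step, in the spirit of the Krawczyk/Rump approach (the chapter's algorithm may differ in details): compute an approximate inverse R of A and an approximate solution \tilde{x} of Ax = b in floating point, and iterate on an enclosure of the error,

{[y]}^{(k+1)} := R(b - A\tilde{x}) + (I - RA)\,{[y]}^{(k)},

evaluated in interval arithmetic, usually with a small inflation of {[y]}^{(k)} before each step. If {[y]}^{(k+1)} is contained in the interior of {[y]}^{(k)}, then A (and R) is nonsingular and the unique solution of Ax = b lies in \tilde{x} + {[y]}^{(k+1)}.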