Publication


Featured research published by Siegfried M. Rump.


Developments in Reliable Computing | 1999

INTLAB — INTerval LABoratory

Siegfried M. Rump

INTLAB is a toolbox for Matlab supporting real and complex intervals, and vectors, full matrices, and sparse matrices over those. It is designed to be very fast; in fact, it is not much slower than the fastest pure floating-point algorithms using the fastest compilers available (the latter, of course, without verification of the result). Besides the basic arithmetic operations, rigorous input and output, rigorous standard functions, gradients, slopes, and multiple precision arithmetic are included in INTLAB. Portability is assured by implementing all algorithms in Matlab itself, with the exception of exactly one routine for switching the rounding mode downwards, upwards, and to nearest. Timing comparisons show that this concept achieves the anticipated speed with identical code on a variety of computers, ranging from PCs to parallel computers. INTLAB is freeware and may be copied from our home page.
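INTLAB's rigor rests on that single routine for switching the rounding direction, so that every lower bound is rounded downwards and every upper bound upwards. Plain Python cannot switch the FPU rounding mode, but the effect can be sketched with `math.nextafter`, widening each computed bound one ulp outward. The helper `iv_add` below is a hypothetical illustration of the idea, not INTLAB code:

```python
import math

def iv_add(a, b):
    """Enclosure of [a] + [b] for intervals given as (lo, hi) tuples.

    Directed rounding is emulated by widening each computed bound one
    ulp outward, so the result is a valid (slightly pessimistic)
    enclosure of the exact sum of the two intervals.
    """
    lo = math.nextafter(a[0] + b[0], -math.inf)
    hi = math.nextafter(a[1] + b[1], math.inf)
    return (lo, hi)

# Enclose 0.1 + 0.2, whose double-precision sum is not exact:
x = iv_add((0.1, 0.1), (0.2, 0.2))
```

Whatever rounding errors occur in the two additions, the returned interval is guaranteed to contain the exact real sum; a real rounding-mode switch, as in INTLAB, avoids the one-ulp pessimism.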


SIAM Journal on Scientific Computing | 2005

Accurate Sum and Dot Product

Takeshi Ogita; Siegfried M. Rump; Shin'ichi Oishi

Algorithms for summation and dot product of floating-point numbers are presented which are fast in terms of measured computing time. We show that the computed results are as accurate as if computed in twice or K-fold working precision, K ≥ 3. For twice the working precision our algorithms for summation and dot product are some 40% faster than the corresponding XBLAS routines while sharing similar error estimates. Our algorithms are widely applicable because they require only addition, subtraction, and multiplication of floating-point numbers in the same working precision as the given data. Higher precision is unnecessary, the algorithms are straight loops without branches, and no access to mantissa or exponent is necessary.


Archive | 1994

Verification methods for dense and sparse systems of equations

Siegfried M. Rump

In this paper we describe verification methods for dense and large sparse systems of linear and nonlinear equations. Most of the methods described have been developed by the author. Other methods are mentioned, but it is not intended to give an overview of existing methods. Many of the results are published in similar form in research papers or books. In this monograph we want to give a concise and compact treatment of some fundamental concepts of the subject. Moreover, many new results are included that have not been published elsewhere. Among them are the following. A new test for regularity of an interval matrix is given, which is shown to be significantly better for certain classes of matrices. Inclusion theorems are formulated for continuous functions that are not necessarily differentiable. Some extension of a nonlinear function w.r.t. a point x̃ is used, which may be a slope, Jacobian or other. Narrower inclusions and a wider range of applicability (significantly wider input tolerances) are achieved by (i) using slopes rather than Jacobians, (ii) improving slopes for transcendental functions, (iii) a two-step approach proving existence in a small interval and uniqueness in a large one, thus allowing uniqueness to be proved on much wider domains and significantly improving speed, (iv) using an Einzelschrittverfahren (single-step method), (v) computing an inclusion of the difference w.r.t. an approximate solution. Methods for problems with parameter-dependent input intervals are given, yielding inner and outer inclusions. An improvement of the quality of inner inclusions is described. Methods for parametrized sparse nonlinear systems are given for the expansion matrix being (i) an M-matrix, (ii) symmetric positive definite, (iii) symmetric, (iv) general. A fast interval library developed at the author's institute is presented which is significantly faster than existing libraries.


Proc. of the symposium on A new approach to scientific computation | 1983

Solving algebraic problems with high accuracy

Siegfried M. Rump

This chapter presents new methods for solving algebraic problems with high accuracy. Examples of such problems are solving linear systems, eigenvalue/eigenvector determination, computing zeros of polynomials, sparse matrix problems, computing the value of an arbitrary arithmetic expression (in particular, the value of a polynomial at a point), nonlinear systems, and linear, quadratic, and convex programming over the field of real or complex numbers as well as over the corresponding interval spaces. All the algorithms based on the new methods have some key properties in common: (1) every result is automatically verified to be correct by the algorithm; (2) the results are of high accuracy, that is, the error of every component of the result is of the magnitude of the relative rounding error unit; (3) the solution of the given problem is automatically shown to exist and to be unique within the given error bounds; and (4) the computing time is of the same order as that of a comparable floating-point algorithm. The key property of the algorithms is that error control is performed automatically by the computer without any requirement on the part of the user, such as estimating spectral radii. The error bounds for all components of the inverse of the 15 × 15 Hilbert matrix are as small as possible, that is, left and right bounds differ only by one unit in the 12th place of the mantissa of each component. This is called least significant bit accuracy.


Acta Numerica | 2010

Verification methods: Rigorous results using floating-point arithmetic

Siegfried M. Rump

A classical mathematical proof is constructed using pencil and paper. However, there are many ways in which computers may be used in a mathematical proof, and 'proof by computer', or even the use of computers in the course of a proof, is not so readily accepted (the December 2008 issue of the Notices of the American Mathematical Society is devoted to formal proofs by computer). In the following we introduce verification methods and discuss how they can assist in achieving a mathematically rigorous result. In particular we emphasize how floating-point arithmetic is used.


BIT Numerical Mathematics | 1999

Fast and Parallel Interval Arithmetic

Siegfried M. Rump

Infimum-supremum interval arithmetic is widely used because of its ease of implementation and narrow results. In this note we show that the overestimation of midpoint-radius interval arithmetic compared to power set operations is uniformly bounded by a factor of 1.5 in radius. This is true for the four basic operations as well as for vector and matrix operations, over real and over complex numbers. Moreover, we describe an implementation of midpoint-radius interval arithmetic entirely using BLAS. Therefore, in particular, matrix operations are very fast on almost any computer, with minimal implementation effort. With the new definition, full advantage can seemingly be taken for the first time of the speed of vector and parallel architectures. The algorithms have been implemented in the Matlab interval toolbox INTLAB.


SIAM Journal on Scientific Computing | 2008

Accurate Floating-Point Summation Part I: Faithful Rounding

Siegfried M. Rump; Takeshi Ogita; Shin'ichi Oishi

Given a vector of floating-point numbers with exact sum s …


Computing | 1991

On the solution of interval linear systems

Siegfried M. Rump


Numerische Mathematik | 2002

Fast verification of solutions of matrix equations

Shin'ichi Oishi; Siegfried M. Rump
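The accurate summation and dot product results above are built on error-free transformations: the rounding error of a single floating-point addition is itself a floating-point number and can be recovered exactly with a few ordinary operations (Knuth's TwoSum). A minimal Python sketch in that spirit follows; `sum2` is a simplified illustration of compensated summation, not the authors' reference implementation:

```python
def two_sum(a, b):
    """Error-free transformation (Knuth's TwoSum): returns (s, e)
    with s = fl(a + b) and a + b = s + e exactly."""
    s = a + b
    t = s - a
    e = (a - (s - t)) + (b - t)
    return s, e

def sum2(xs):
    """Compensated summation: result is as accurate as if computed
    in twice the working precision, using only additions and
    subtractions in the working precision itself."""
    s = 0.0
    err = 0.0
    for x in xs:
        s, e = two_sum(s, x)
        err += e   # accumulate the exact rounding errors
    return s + err
```

Note the loop is a straight sequence of additions and subtractions, with no branches and no access to mantissa or exponent, which is exactly what makes such algorithms fast in practice.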


Computing | 1987

FORTRAN-SC: A study of a FORTRAN extension for engineering/scientific computation with access to ACRITH

J. H. Bleher; Siegfried M. Rump; Ulrich W. Kulisch; M. Metzger
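The midpoint-radius representation described in the "Fast and Parallel Interval Arithmetic" abstract is what allows interval matrix products to be reduced to a few ordinary (BLAS) matrix multiplications. A scalar sketch of a midpoint-radius product in Python follows; the name `mr_mul` and the fixed four-ulp widening of the radius are illustrative assumptions, not the paper's precise rounding-error bound:

```python
import math

def mr_mul(m1, r1, m2, r2):
    """Midpoint-radius interval product <m1, r1> * <m2, r2>.

    Midpoint and radius are computed with ordinary floating-point
    operations (this is what makes the BLAS-based matrix version
    fast); a few ulps are added to the radius as a crude stand-in
    for a rigorous bound on the rounding errors incurred.
    """
    m = m1 * m2
    r = abs(m1) * r2 + r1 * abs(m2) + r1 * r2
    r = r + 4.0 * math.ulp(abs(m) + r)  # absorb rounding errors
    return m, r

# product of [0.9, 1.1] * [1.9, 2.1] in midpoint-radius form:
m, r = mr_mul(1.0, 0.1, 2.0, 0.1)
```

The exact product range here is [1.71, 2.31], i.e. midpoint 2.01 and radius 0.30; the sketch returns midpoint 2.0 and radius just above 0.31, an enclosure whose overestimation stays well within the uniform factor 1.5 in radius shown in the paper.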

Collaboration

Top co-authors of Siegfried M. Rump:

Takeshi Ogita, Tokyo Woman's Christian University
K. Ozaki, Shibaura Institute of Technology
Edgar W. Kaucher, Karlsruhe Institute of Technology
Ulrich W. Kulisch, Karlsruhe Institute of Technology
Ch. Ullrich, Karlsruhe Institute of Technology
Gerd Bohlender, Karlsruhe Institute of Technology
Rudi Klatte, Karlsruhe Institute of Technology