Christian Jansson
University of Hamburg
Publications
Featured research published by Christian Jansson.
Computing | 1991
Christian Jansson
The methods of interval arithmetic permit the calculation of guaranteed a posteriori bounds for the solution set of problems with interval input data. At present, these methods assume that all input data vary independently between their given lower and upper bounds. This paper shows, for special interval linear systems, how to handle the case where dependencies among the input data occur.
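The dependency problem the abstract refers to can be seen in a few lines. The sketch below is illustrative only (the `Interval` class is an assumed helper, not from the paper): standard interval subtraction treats its two operands as independent, so even `x - x` loses the dependency and overestimates.

```python
# Minimal interval arithmetic sketch (not the paper's method): each
# operation assumes its operands vary independently, which is exactly
# the assumption the paper relaxes for special linear systems.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Standard interval subtraction: worst case over independent
        # operands -- even when both operands are the same variable.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(1.0, 2.0)
print(x - x)   # [-1.0, 1.0], not [0, 0]: the dependency is lost
```

Dependency-aware methods exploit the fact that the same variable appears on both sides, which can shrink such enclosures dramatically.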
Siam Journal on Optimization | 2003
Christian Jansson
We consider the computation of rigorous lower and upper error bounds for the optimal value of linear programming problems. The input data of the LP problem may be given exactly or may vary between given lower and upper bounds. The results are then verified for the family of LP problems with input data inside these bounds. In many cases only a small computational effort is required. For problems with finite simple bounds, the rigorous lower bound of the optimal value can be computed with O(n²) operations. The error bounds can also be used to perform a sensitivity analysis, provided the width of the uncertainties is not too large. Some numerical examples are presented.
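The core idea behind such rigorous lower bounds is weak LP duality: any dual-feasible point certifies a lower bound on the optimal value. The sketch below shows only this certification step under exact arithmetic; it is not Jansson's algorithm, which additionally controls rounding errors (e.g. via directed rounding) so the bound remains valid in floating point.

```python
# Hedged sketch: for  min c^T x  s.t.  A x >= b, x >= 0,  any y >= 0
# with  A^T y <= c  certifies  b^T y <= optimal value  (weak duality).
import numpy as np

def dual_lower_bound(A, b, c, y, tol=0.0):
    """Return b @ y if y certifies a lower bound, else None."""
    y = np.asarray(y, dtype=float)
    if np.any(y < -tol):                       # dual sign condition
        return None
    if np.any(A.T @ y > np.asarray(c, dtype=float) + tol):
        return None                            # dual feasibility fails
    return float(np.asarray(b, dtype=float) @ y)

# min 2*x1 + 3*x2  s.t.  x1 + x2 >= 4, x >= 0   (optimum 8 at x = (4, 0))
A = np.array([[1.0, 1.0]])
b = np.array([4.0])
c = np.array([2.0, 3.0])

print(dual_lower_bound(A, b, c, [2.0]))  # 8.0: y = 2 is dual feasible
```

An approximate dual solution from any solver can be fed through such a check; a rigorous version must also bound the rounding errors in forming A^T y and b @ y.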
Linear Algebra and its Applications | 1997
Christian Jansson
This paper presents some topological and graph-theoretical properties of the solution set of linear algebraic systems with interval coefficients. Based on these properties, we describe a method which, in a finite number of steps, either calculates exact bounds for each component of the solution set, or finds a singular matrix within the interval coefficients. The calculation of exact bounds of the solution set is known to be NP-hard. Our method needs p calls of a polynomial-time algorithm, where p is the number of nonempty intersections of the solution set with the orthants. Frequently, due to physical or economic requirements, many variables do not change sign. In those cases p is small, and our method works efficiently.
Computing | 1988
Christian Jansson
A Self-Validating Method for Solving Linear Programming Problems with Interval Input Data. Linear programming problems are very important in many practical applications. They are usually solved by the simplex method. The computational results are, in general, good approximations to the solution of the problem. However, in some cases the computed approximation may be wrong due to round-off and cancellation errors. In practice, the input data of a linear programming problem are frequently not known exactly but are subject to tolerances. In this case it has to be precisely defined what a “solution” to such a problem is. A sensitivity or postoptimality analysis is necessary.
SIAM Journal on Matrix Analysis and Applications | 1999
Christian Jansson; Jiri Rohn
Checking regularity (or singularity) of interval matrices is a known NP-hard problem. In this paper a general algorithm for checking regularity/singularity is presented which is not a priori exponential. The algorithm is based on a theoretical result according to which regularity may be judged from any single component of the solution set of an associated system of linear interval equations. Numerical experiments (with interval matrices up to the size n = 50) confirm that this approach brings a substantial decrease in the number of operations needed.
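For context, a classical sufficient condition often checked before resorting to such algorithms: the interval matrix [Ac − D, Ac + D] is regular if the spectral radius of |Ac⁻¹| D is below 1. The sketch below implements only this cheap test, not the Jansson–Rohn algorithm, and a `False` result is inconclusive rather than a proof of singularity.

```python
# Sufficient (not necessary) regularity test for the interval matrix
# [Ac - D, Ac + D]: rho(|Ac^{-1}| D) < 1 proves every contained matrix
# nonsingular.  This is a standard preprocessing check, not the paper's
# not-a-priori-exponential algorithm.
import numpy as np

def sufficient_regularity(Ac, D):
    """True if the spectral-radius test proves regularity;
    False means the test is inconclusive."""
    M = np.abs(np.linalg.inv(Ac)) @ D
    return np.max(np.abs(np.linalg.eigvals(M))) < 1.0

Ac = np.array([[4.0, 1.0], [1.0, 3.0]])
D = np.ones((2, 2)) * 0.1          # small radii around the center matrix
print(sufficient_regularity(Ac, D))  # True: well-conditioned center
```

When the test fails, exact methods such as the one described in the paper are needed to decide regularity.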
Mathematical Methods of Operations Research | 1991
Christian Jansson; Siegfried M. Rump
This note gives a synopsis of new methods for solving linear systems and linear programming problems with uncertain data. All input data can vary between given lower and upper bounds. The methods calculate very sharp and guaranteed error bounds for the solution set of those problems and permit a rigorous sensitivity analysis. For problems with exact input data, the calculated bounds in general differ only in the last bit of the mantissa, i.e. they are of maximum accuracy.
SIAM Journal on Numerical Analysis | 2007
Christian Jansson; Denis Chaykin; Christian Keil
A wide variety of problems in global optimization, combinatorial optimization, as well as systems and control theory can be solved by using linear and semidefinite programming. Sometimes, due to the use of floating point arithmetic in combination with ill-conditioning and degeneracy, erroneous results may be produced. The purpose of this article is to show how rigorous error bounds for the optimal value can be computed by carefully postprocessing the output of a linear or semidefinite programming solver. It turns out that in many cases the computational costs for postprocessing are small compared to the effort required by the solver. Numerical results are presented including problems from the SDPLIB and the NETLIB LP library; these libraries contain many ill-conditioned and real-life problems.
Journal of Computational and Applied Mathematics | 2003
Jürgen Garloff; Christian Jansson; Andrew P. Smith
Relaxation techniques for solving nonlinear systems and global optimisation problems require bounding from below the nonconvexities that occur in the constraints or in the objective function by affine or convex functions. In this paper we consider such lower bound functions in the case of problems involving multivariate polynomials. They are constructed by using Bernstein expansion. An error bound exhibiting quadratic convergence in the univariate case and some numerical examples are given.
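The range-enclosure property of Bernstein coefficients, which underlies such constructions, is easy to demonstrate in the univariate case. The sketch below only computes Bernstein coefficients on [0, 1] and the resulting range bounds; it is not the paper's affine/convex lower-bound construction.

```python
# Bernstein range bounding (univariate, on [0, 1]): the Bernstein
# coefficients of a polynomial enclose its range, i.e.
#   min_j b_j <= p(x) <= max_j b_j   for all x in [0, 1].
from math import comb

def bernstein_coeffs(a):
    """Bernstein coefficients of p(x) = sum a[i] * x**i on [0, 1]."""
    n = len(a) - 1
    return [sum(comb(j, i) / comb(n, i) * a[i] for i in range(j + 1))
            for j in range(n + 1)]

# p(x) = x**2 - x, whose true range on [0, 1] is [-0.25, 0]
b = bernstein_coeffs([0.0, -1.0, 1.0])
print(b)                             # [0.0, -0.5, 0.0]
print(min(b), "<= p(x) <=", max(b))  # valid, if not tight, enclosure
```

Subdividing the interval or raising the Bernstein degree tightens the enclosure; the quadratic convergence result in the paper quantifies this for the lower bound functions it constructs.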
Journal of Global Optimization | 1995
Christian Jansson; Olaf Knüppel
In this paper, we give a new branch and bound algorithm for the global optimization problem with bound constraints. The algorithm is based on the use of inclusion functions. The bounds calculated for the global minimum value are proved to be correct; all rounding errors are rigorously estimated. Our scheme attempts to exclude most “uninteresting” parts of the search domain and concentrates on its “promising” subsets. This is done as quickly as possible (by invoking local descent methods), using as little information as possible (no derivatives are required). Numerical results for many well-known problems as well as some comparisons with other methods are given.
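The basic interaction between an inclusion function and box pruning can be shown in a toy one-dimensional version. The sketch below is not the Jansson–Knüppel algorithm (no local descent, no rigorous rounding-error control): it just bisects boxes and discards those whose inclusion-function lower bound exceeds the best proven upper bound.

```python
# Toy interval branch and bound for f(x) = (x - 1)**2 on [lo, hi].
def F(lo, hi):
    """Inclusion function: bounds on {f(x) : x in [lo, hi]}."""
    a, b = lo - 1.0, hi - 1.0              # interval for x - 1
    if a <= 0.0 <= b:
        return 0.0, max(a * a, b * b)      # square of an interval with 0
    lo2, hi2 = sorted((a * a, b * b))
    return lo2, hi2

def minimize(lo, hi, tol=1e-3):
    """Upper bound on the global minimum of f over [lo, hi]."""
    best = float("inf")                    # best proven upper bound
    work = [(lo, hi)]
    while work:
        a, b = work.pop()
        flo, fhi = F(a, b)
        if flo > best:                     # box cannot contain the minimum
            continue
        best = min(best, fhi)              # every box's fhi bounds the min
        if b - a > tol:                    # bisect the surviving box
            m = 0.5 * (a + b)
            work += [(a, m), (m, b)]
    return best

print(minimize(-2.0, 2.0))  # tiny positive value near the true minimum 0
```

The real algorithm sharpens this scheme with local descent to lower `best` quickly and with rigorous rounding-error estimates so the discarding step remains provably correct.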
Journal of Global Optimization | 2004
Christian Jansson
In this paper, we consider the computation of a rigorous lower error bound for the optimal value of convex optimization problems. A discussion of large-scale problems, degenerate problems, and quadratic programming problems is included. The parameters which define the convex constraints and the convex objective function may be uncertain and may vary between given lower and upper bounds. The error bound is verified for the family of convex optimization problems which correspond to these uncertainties. It can be used to perform a rigorous sensitivity analysis in convex programming, provided the width of the uncertainties is not too large. Branch and bound algorithms can be made reliable by using such rigorous lower bounds.