Ekkehard W. Sachs
University of Trier
Publications
Featured research published by Ekkehard W. Sachs.
SIAM Journal on Matrix Analysis and Applications | 2010
Roland Herzog; Ekkehard W. Sachs
Optimality systems and their linearizations arising in optimal control of partial differential equations with pointwise control and (regularized) state constraints are considered. The preconditioned conjugate gradient (PCG) method in a nonstandard inner product is employed for their efficient solution. Preconditioned condition numbers are estimated for problems with pointwise control constraints, mixed control-state constraints, and of Moreau-Yosida penalty type. Numerical results for elliptic problems demonstrate the performance of the PCG iteration. Regularized state-constrained problems in three dimensions with more than 750,000 variables are solved.
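The workhorse here is a preconditioned conjugate gradient iteration; the paper's contribution lies in the nonstandard inner product and the preconditioners, which are not reproduced here. Below is a minimal generic PCG sketch in Python, where the operator A and the preconditioner application apply_prec are illustrative placeholders rather than the operators of the paper.

    import numpy as np

    def pcg(A, b, apply_prec, x0=None, tol=1e-8, maxit=500):
        """Generic preconditioned conjugate gradient sketch.

        A          : callable returning A @ x for a symmetric positive definite operator
        apply_prec : callable applying the preconditioner inverse to a vector
        """
        x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float).copy()
        r = b - A(x)
        z = apply_prec(r)
        p = z.copy()
        rz = r @ z
        for k in range(maxit):
            Ap = A(p)
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                break
            z = apply_prec(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x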
SIAM Journal on Numerical Analysis | 1992
Karl Kunisch; Ekkehard W. Sachs
Parameter estimation problems are formulated as constrained, regularized optimization problems. Reduced SQP methods with BFGS update are analyzed to solve these infinite-dimensional optimization problems. Rate of convergence results are given and numerical feasibility of the resulting algorithms is demonstrated.
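As a generic illustration of the constrained, regularized formulation (not the paper's specific infinite-dimensional setting), a Tikhonov-regularized output least-squares misfit for a parameter q can be sketched as follows; the names forward, data, and beta are assumptions for illustration only.

    import numpy as np

    def regularized_misfit(q, forward, data, beta):
        """Tikhonov-regularized output least squares:
        0.5 * ||S(q) - z||^2 + 0.5 * beta * ||q||^2,
        where 'forward' maps the parameter q to the discretized observation S(q)."""
        r = forward(q) - data
        return 0.5 * (r @ r) + 0.5 * beta * (q @ q)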
SIAM Journal on Control and Optimization | 1987
C. T. Kelley; Ekkehard W. Sachs
Quasi-Newton methods play an important role in the numerical solution of unconstrained optimization problems. Optimal control problems in their discretized form can be viewed as optimization problems and can therefore be solved by quasi-Newton methods. Since a discretized problem does not solve the original infinite-dimensional control problem but only approximates it up to a certain accuracy, various approximations of the control problem need to be considered. It is known that an increase in the dimension of an optimization problem can have a negative effect on the convergence rate of the quasi-Newton method used to solve it. We investigate this behavior and explain how this drawback can be avoided for a class of optimal control problems. We show how to use the original infinite-dimensional problem to predict the speed of convergence of the BFGS method [1, 7, 10, 22] for the finite-dimensional approximations. In several papers [6, 14, 24, 27] the DFP method [4, 8] and its application to optimal control problems were considered, but rates of convergence were given at best for quadratic problems. In [25, 26] a linear rate of convergence was proved in Hilbert spaces and applied to optimal control. All of these applications to optimal control problems were carried out for finite-dimensional approximations. This fact is important because in [23] it was shown that, contrary to the finite-dimensional case [2], the BFGS method can converge very slowly when applied to an infinite-dimensional problem. Hence it is desirable to know whether this convergence behavior can also occur for fine discretizations of control problems. Sufficient [19] and characterizing [12] conditions for the superlinear rate were given in other analyses. As in the linear case for Broyden's method [28] and the conjugate gradient method [3], [9], an additional assumption on the initial approximation of the Hessian, namely that it approximates the true Hessian up to a compact operator, is needed to guarantee superlinear convergence; see [11]. In [9] a connection to quadratic control problems is shown. Here we consider nonlinear control problems and their discretization.
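For reference, a minimal finite-dimensional BFGS iteration in inverse-Hessian form is sketched below (unit step instead of a line search); it illustrates the standard update only and is not the infinite-dimensional analysis of the paper.

    import numpy as np

    def bfgs(grad, x0, H0, tol=1e-8, maxit=200):
        """Minimal BFGS sketch; H0 approximates the inverse Hessian."""
        x, H = x0.copy(), H0.copy()
        g = grad(x)
        for _ in range(maxit):
            if np.linalg.norm(g) <= tol:
                break
            p = -H @ g                  # quasi-Newton direction
            x_new = x + p               # unit step; a line search would go here
            g_new = grad(x_new)
            s, y = x_new - x, g_new - g
            rho = 1.0 / (y @ s)
            I = np.eye(len(x))
            # inverse BFGS update
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
            x, g = x_new, g_new
        return x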
SIAM Journal on Optimization | 1997
Tankred Rautert; Ekkehard W. Sachs
We consider the problem of designing feedback control laws when a complete set of state variables is not available. For linear autonomous systems with a quadratic performance criterion, the design problem consists of choosing an appropriate matrix of feedback gains according to a certain objective function. In the literature, the performance of quasi-Newton methods for this problem has been reported to be substandard. We try to explain some of these observations and propose structured quasi-Newton updates. These methods, which take the special structure of the problem into account, show considerable improvement in convergence. Using test examples from optimal output feedback design, we also verify these results numerically.
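To make the setting concrete: for static output feedback u = -K y with a quadratic cost, the objective is commonly evaluated through a Lyapunov equation of the closed loop. The sketch below shows such an evaluation under standard assumptions (stable closed loop, X0 the second moment of the initial state); it is illustrative only and is not the structured quasi-Newton update proposed in the paper.

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    def output_feedback_cost(K, A, B, C, Q, R, X0):
        """J(K) = trace(P(K) X0) for static output feedback u = -K y,
        assuming the closed loop A - B K C is stable."""
        A_cl = A - B @ K @ C
        W = Q + C.T @ K.T @ R @ K @ C                # closed-loop state weighting
        P = solve_continuous_lyapunov(A_cl.T, -W)    # A_cl^T P + P A_cl = -W
        return np.trace(P @ X0)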
SIAM Journal on Control and Optimization | 1999
Friedemann Leibfritz; Ekkehard W. Sachs
Optimal control problems with partial differential equations lead to large-scale nonlinear optimization problems with constraints. An efficient solver that takes into account both the structure and the size of the problem is an inexact sequential quadratic programming (SQP) method in which the quadratic subproblems are solved iteratively. Based on a reformulation as a mixed nonlinear complementarity problem, we give a measure of when to terminate the iterative quadratic program solver, for which we use an interior point algorithm. Under standard assumptions, local linear, superlinear, and quadratic convergence can be proved. The numerical application is an optimal control problem from nonlinear heat conduction.
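The paper's termination criterion is tied to its mixed nonlinear complementarity reformulation and is not reproduced here. As a generic illustration of how complementarity can be measured (an assumption, not the paper's measure), the Fischer-Burmeister residual vanishes exactly at points satisfying the complementarity conditions:

    import numpy as np

    def fischer_burmeister_residual(a, b):
        """phi(a, b) = sqrt(a^2 + b^2) - a - b is zero componentwise
        iff a >= 0, b >= 0 and a * b = 0."""
        return np.linalg.norm(np.sqrt(a**2 + b**2) - a - b)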
SIAM Journal on Matrix Analysis and Applications | 2009
F. Feitzinger; T. Hylla; Ekkehard W. Sachs
In this paper we consider the numerical solution of the algebraic Riccati equation using Newton's method. We propose an inexact variant which allows one to control the number of inner iterations of the iterative solver used in each Newton step. Conditions are given under which the monotonicity and global convergence results of Kleinman also hold for the inexact Newton iterates. Numerical results illustrate the efficiency of this method.
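For orientation, the exact Kleinman-Newton iteration for the continuous-time algebraic Riccati equation is sketched below; in the inexact variant analyzed in the paper, the Lyapunov solve in each step is carried out only approximately by an iterative solver. The sketch assumes a stabilizing initial gain K0 and uses dense SciPy solvers purely for illustration.

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    def kleinman_newton(A, B, Q, R, K0, tol=1e-10, maxit=50):
        """Kleinman's Newton iteration for
        A^T P + P A - P B R^{-1} B^T P + Q = 0,
        started from a stabilizing gain K0.  Each step is one Lyapunov solve."""
        K, P_old = K0.copy(), None
        for _ in range(maxit):
            A_cl = A - B @ K
            W = Q + K.T @ R @ K
            P = solve_continuous_lyapunov(A_cl.T, -W)   # A_cl^T P + P A_cl = -W
            K = np.linalg.solve(R, B.T @ P)             # next gain R^{-1} B^T P
            if P_old is not None and np.linalg.norm(P - P_old) <= tol:
                break
            P_old = P
        return P, K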
Computational Optimization and Applications | 1992
F.-S. Kupfer; Ekkehard W. Sachs
We consider a control problem for a nonlinear diffusion equation with boundary input that occurs when heating ceramic products in a kiln. We interpret this control problem as a constrained optimization problem, and we develop a reduced SQP method that provides a new and efficient approach to its numerical solution. As opposed to Newton's method for the unconstrained problem, where at each iteration the state must be computed from a set of nonlinear equations, in the proposed algorithm only the linearized state equations need to be solved. Furthermore, by use of a secant update formula, the calculation of exact second derivatives is avoided. In this way the algorithm achieves a substantial decrease in the total cost compared to the implementation of Newton's method in [2]. Our method is practicable with regard to storage requirements, and by choosing an appropriate representation for the null space of the Jacobian of the constraints we are able to exploit the sparsity pattern of the Jacobian in the course of the iteration. We conclude with a presentation of numerical examples that demonstrate the fast two-step superlinear convergence behavior of the method.
SIAM Journal on Scientific Computing | 1994
C. T. Kelley; Ekkehard W. Sachs
In this paper the authors extend the multilevel algorithm of Atkinson and Brakhage for compact fixed point problems and the projected Newton method of Bertsekas to create a fast multilevel algorithm for parabolic boundary control problems with bound constraints on the control. Results on constraint identification are extended from the finite-dimensional setting. This approach permits both adaptive integration in time and inexact evaluation of the cost functional.
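The bound constraints on the control enter through pointwise projections. A minimal sketch of the basic projected-step building block is given below; it is not the multilevel Atkinson-Brakhage/projected Newton algorithm of the paper, only the elementary operation it builds on.

    import numpy as np

    def project_box(u, lo, hi):
        """Pointwise projection onto the bound constraints lo <= u <= hi."""
        return np.minimum(np.maximum(u, lo), hi)

    def projected_step(u, grad_u, lo, hi, step):
        """One projected update u_+ = P_[lo,hi](u - step * grad_u), the basic
        building block of gradient-projection and projected Newton methods."""
        return project_box(u - step * grad_u, lo, hi)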
SIAM Journal on Control and Optimization | 2008
Ekkehard W. Sachs; Lizette Zietsman
In this paper we consider the convergence of the infinite-dimensional version of the Kleinman-Newton algorithm for solving the algebraic Riccati operator equation associated with the linear quadratic regulator problem in a Hilbert space. We establish mesh independence for this algorithm and apply the result to systems governed by delay equations. Numerical examples are presented to illustrate the results.
Mathematical Programming | 1986
Ekkehard W. Sachs
Broyden's method is formulated for the solution of nonlinear operator equations in Hilbert spaces. The algorithm is proven to be well defined, and a linear rate of convergence is shown. Under an additional assumption on the initial approximation of the derivative, we prove a superlinear rate of convergence.
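A minimal finite-dimensional sketch of Broyden's ("good") update is given below; the quality of the initial derivative approximation J0 is precisely what drives the convergence rate discussed above.

    import numpy as np

    def broyden(F, x0, J0, tol=1e-10, maxit=100):
        """Broyden's method for F(x) = 0 with initial Jacobian approximation J0."""
        x, J = x0.copy(), J0.copy()
        Fx = F(x)
        for _ in range(maxit):
            if np.linalg.norm(Fx) <= tol:
                break
            s = np.linalg.solve(J, -Fx)           # quasi-Newton step
            x_new = x + s
            F_new = F(x_new)
            y = F_new - Fx
            # rank-one update enforcing the secant condition J_+ s = y
            J += np.outer(y - J @ s, s) / (s @ s)
            x, Fx = x_new, F_new
        return x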