Maxim Dolgov
Karlsruhe Institute of Technology
Publications
Featured research published by Maxim Dolgov.
American Control Conference | 2013
Jörg Fischer; Achim Hekler; Maxim Dolgov; Uwe D. Hanebeck
This paper addresses the problem of sequence-based controller design for Networked Control Systems (NCS), where control inputs and measurements are transmitted over TCP-like network connections that are subject to random transmission delays and packet losses. To cope with the network effects, the controller not only sends the current control input to the actuator, but also a sequence of predicted control inputs at every time step. In this setup, we derive an optimal solution to the Linear Quadratic Gaussian (LQG) control problem and prove that the separation principle holds. Simulations demonstrate the improved performance of this optimal controller compared to other sequence-based approaches.
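As a rough illustration of the sequence-based mechanism described above (not the authors' optimal controller), the following sketch packs N predicted inputs into every packet and lets the actuator fall back to its buffered predictions when a packet is lost; the plant, the gain K, the sequence length, and the loss model are toy assumptions.

```python
# Toy sketch of sequence-based control over a lossy link (own illustration).
import numpy as np

rng = np.random.default_rng(0)

N = 5                                    # length of the transmitted control sequence (assumption)
loss_prob = 0.3                          # Bernoulli packet loss in the controller-actuator link
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy double integrator
B = np.array([[0.0], [0.1]])
K = np.array([[3.0, 2.5]])               # hypothetical stabilizing feedback gain

x = np.array([[1.0], [0.0]])
buffer = np.zeros((N, 1, 1))             # input sequence buffered at the actuator
age = N - 1                              # how old the buffered sequence is

for k in range(50):
    # Controller: simulate the nominal closed loop N steps ahead and pack the inputs.
    seq, x_pred = [], x.copy()
    for i in range(N):
        u_i = -K @ x_pred
        seq.append(u_i)
        x_pred = A @ x_pred + B @ u_i
    packet = np.stack(seq)

    # Network: the packet is lost with probability loss_prob.
    if rng.random() > loss_prob:
        buffer, age = packet, 0          # fresh sequence arrived
    else:
        age = min(age + 1, N - 1)        # keep using older predictions

    u = buffer[age]                      # actuator picks the entry matching the delay
    x = A @ x + B @ u                    # plant update (noise omitted for brevity)
```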
Conference on Decision and Control | 2014
Gerhard Kurz; Igor Gilitschenski; Maxim Dolgov; Uwe D. Hanebeck
Estimation of angular quantities is a widespread issue, but standard approaches neglect the true topology of the problem and approximate directional uncertainties with linear ones. In recent years, novel approaches based on directional statistics have been proposed. However, these approaches have so far been unable to consider arbitrary circular correlations between multiple angles. For this reason, we propose a novel recursive filtering scheme that is capable of estimating multiple angles even if they are dependent, while correctly describing their circular correlation. The proposed approach is based on toroidal probability distributions and a circular correlation coefficient. In simulations, we demonstrate its superiority over a standard approach based on the Kalman filter.
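For reference, one common definition of a circular correlation coefficient (the Jammalamadaka-SenGupta form; the paper's definition may differ) can be computed from samples as in the following sketch.

```python
# Circular correlation coefficient between two dependent angles (own illustration).
import numpy as np

def circular_mean(theta):
    return np.angle(np.mean(np.exp(1j * theta)))

def circular_correlation(alpha, beta):
    a = np.sin(alpha - circular_mean(alpha))
    b = np.sin(beta - circular_mean(beta))
    return np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2))

# Synthetic example: beta is a noisy copy of alpha, wrapped to the circle.
rng = np.random.default_rng(1)
alpha = np.mod(1.0 + 0.8 * rng.standard_normal(1000), 2.0 * np.pi)
beta = np.mod(alpha + 0.2 * rng.standard_normal(1000), 2.0 * np.pi)
print(circular_correlation(alpha, beta))   # close to 1 for strongly dependent angles
```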
Conference on Decision and Control | 2013
Jörg Fischer; Maxim Dolgov; Uwe D. Hanebeck
Sequence-based control is a well-established method applied in Networked Control Systems (NCS) to mitigate the effect of time-varying transmission delays and stochastic packet losses. The idea of this method is that the controller sends sequences of predicted control inputs to the actuator that can be applied in case a future transmission fails. In this paper, the stability properties of sequence-based LQG controllers are analyzed in terms of the boundedness of the long-run average costs. On the one hand, we derive sufficient conditions for the boundedness and for the unboundedness of the costs. On the other hand, we give bounds on the minimal length of the control input sequence needed to stabilize a system.
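As a back-of-the-envelope illustration only, not the bounds derived in the paper: with i.i.d. packet loss probability p, the actuator runs in open loop only after N consecutive losses, so a rough requirement on the sequence length for a scalar plant with open-loop pole a is p^N a^2 < 1.

```python
# Heuristic sketch of the minimal sequence length (own illustration, scalar plant).
import math

def rough_min_sequence_length(a, p):
    """Smallest integer N with p**N * a**2 < 1 (heuristic, not the paper's bound)."""
    if abs(a) <= 1.0:
        return 1                      # no unstable open-loop growth to cover
    return math.floor(2.0 * math.log(abs(a)) / math.log(1.0 / p)) + 1

print(rough_min_sequence_length(a=1.5, p=0.5))   # 2 for this toy case
```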
IFAC Proceedings Volumes | 2013
Maxim Dolgov; Jörg Fischer; Uwe D. Hanebeck
In Networked Control Systems (NCS), data networks not only limit the amount of information exchanged by system components but are also subject to stochastic packet delays and losses. In this paper, we present a controller that simultaneously addresses these problems by combining event-based and sequence-based control methods. At every time step, the proposed controller calculates a sequence of predicted control inputs and, based on the expected future LQG costs, decides whether to transmit the control sequence to the actuator. The proposed controller is evaluated in simulations.
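A minimal sketch of such an event-based transmission rule (own illustration, with toy matrices and a hypothetical threshold): transmit a new sequence only if its expected quadratic cost undercuts the cost of keeping the buffered sequence by more than the threshold.

```python
# Event-based decision whether to transmit a new control sequence (own illustration).
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[0.1]])
threshold = 0.5   # trade-off between communication and control cost (assumption)

def expected_cost(x, inputs, noise_cov=0.01 * np.eye(2)):
    """Predicted quadratic cost of applying the given open-loop input sequence."""
    cost, P = 0.0, np.zeros((2, 2))              # P: predicted state covariance
    for u in inputs:
        cost += float(x.T @ Q @ x + np.trace(Q @ P) + u.T @ R @ u)
        x = A @ x + B @ u                         # mean prediction
        P = A @ P @ A.T + noise_cov               # covariance prediction
    return cost

def should_transmit(x, new_seq, buffered_seq):
    return expected_cost(x, buffered_seq) - expected_cost(x, new_seq) > threshold

# Usage idea: at each step, compute the new predicted sequence and transmit it
# only if should_transmit(x_estimate, new_sequence, buffered_sequence) is True.
```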
Archive | 2011
Maxim Dolgov; Nikolas Lentz; Stefan Fernsner; A. Bolz
The quality of cardiopulmonary resuscitation is important for the return of spontaneous circulation. Reaching the recommended rate and depth of chest compressions is difficult without a feedback device. In this work, we present a method to measure compression rate and depth during cardiopulmonary resuscitation with ultrasound. The method is based on cross-correlation of the received ultrasound signal. It works dynamically and requires no additional information about the patient's size, gender, or age.
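The cross-correlation idea can be sketched as follows (own illustration, synthetic data): the lag at which the cross-correlation between two successive echo windows peaks is taken as the displacement between the frames; the samples-per-millimeter conversion is an assumption that depends on sampling rate and sound speed.

```python
# Displacement estimation from two ultrasound echo windows via cross-correlation.
import numpy as np

def estimate_shift(signal_prev, signal_curr):
    """Lag (in samples) at which the cross-correlation of the two windows peaks."""
    corr = np.correlate(signal_curr - signal_curr.mean(),
                        signal_prev - signal_prev.mean(), mode="full")
    return np.argmax(corr) - (len(signal_prev) - 1)

# Synthetic example: the second window is the first one delayed by 7 samples.
rng = np.random.default_rng(2)
echo = rng.standard_normal(256)
delayed = np.roll(echo, 7) + 0.05 * rng.standard_normal(256)
samples_per_mm = 10.0          # assumption; depends on sampling rate and sound speed
print(estimate_shift(echo, delayed) / samples_per_mm, "mm")
```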
Conference on Decision and Control | 2015
Maxim Dolgov; Gerhard Kurz; Uwe D. Hanebeck
In this paper, we consider finite-horizon predictive control of linear stochastic systems with chance constraints where the admissible region is a convex polytope. For this problem, we present a novel solution approach based on box approximations. Our approach consists of two steps. First, we apply a linear transformation to the joint state probability density function such that its covariance becomes an identity matrix. This operation also defines the transformation of the state space and, therefore, of the admissible polytope. Second, we approximate the admissible region from the inside using axis-aligned boxes. By doing so, we obtain a conservative approximation of the constraint violation probability virtually in closed form (the expression contains Gaussian error functions). The presented control approach is demonstrated in a numerical example.
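A minimal sketch of the box-approximation idea for a single Gaussian state (own illustration; the helper names, the toy polytope, and the candidate box are assumptions, and finding a good inscribed box is a separate step not shown here).

```python
# Conservative bound on the chance-constraint violation probability via an
# inscribed axis-aligned box in whitened coordinates (own illustration).
import numpy as np
from math import erf, sqrt

def normal_cdf(t):
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

def violation_bound(mu, cov, H, h, box_lo, box_hi):
    """Conservative bound on P(H x > h) for x ~ N(mu, cov)."""
    L = np.linalg.cholesky(cov)
    G, g = H @ L, h - H @ mu           # polytope in whitened coordinates z ~ N(0, I)

    # The box lies inside {z : G z <= g} iff every constraint holds at its
    # worst-case corner of the box.
    worst = np.where(G > 0, G * box_hi, G * box_lo).sum(axis=1)
    if np.any(worst > g):
        raise ValueError("box is not contained in the admissible polytope")

    p_in_box = np.prod([normal_cdf(b) - normal_cdf(a) for a, b in zip(box_lo, box_hi)])
    return 1.0 - p_in_box              # box inside polytope => P(violation) <= 1 - P(in box)

# Toy example: admissible region |x_i| <= 3 for a zero-mean unit-variance state.
H = np.vstack([np.eye(2), -np.eye(2)])
h = 3.0 * np.ones(4)
print(violation_bound(np.zeros(2), np.eye(2), H, h,
                      box_lo=np.array([-2.5, -2.5]), box_hi=np.array([2.5, 2.5])))
```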
Advances in Computing and Communications | 2015
Gerhard Kurz; Maxim Dolgov; Uwe D. Hanebeck
In this paper, we present an open-loop Stochastic Model Predictive Control (SMPC) method for discrete-time nonlinear systems whose state is defined on the unit circle. This modeling approach captures periodicity more naturally than standard approaches based on linear spaces. The main idea of this work is twofold: (i) we model the quantities of the system, i.e., the state, the measurements, and the noises, directly as circular quantities described by circular probability densities, and (ii) we apply a deterministic sampling scheme given in closed form to represent the occurring densities. The latter makes the prediction required for the solution of the SMPC problem tractable. We evaluate the proposed control scheme by means of simulations.
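One simple closed-form deterministic sampling scheme for circular densities places three equally weighted samples so that their first trigonometric moment matches that of the density; the sketch below shows this for a wrapped normal and is an illustration only, the paper may use a refined variant.

```python
# Deterministic three-sample approximation of a wrapped normal density and its
# propagation through a toy nonlinear circular dynamics function (own illustration).
import numpy as np

def deterministic_samples_wrapped_normal(mu, sigma):
    m1 = np.exp(-sigma**2 / 2.0)        # magnitude of the first trigonometric moment
    alpha = np.arccos(np.clip((3.0 * m1 - 1.0) / 2.0, -1.0, 1.0))
    samples = np.mod([mu - alpha, mu, mu + alpha], 2.0 * np.pi)
    weights = np.full(3, 1.0 / 3.0)
    return samples, weights

samples, weights = deterministic_samples_wrapped_normal(mu=0.5, sigma=0.4)
propagated = np.mod(samples + 0.3 * np.sin(samples), 2.0 * np.pi)   # toy dynamics
m1 = np.sum(weights * np.exp(1j * propagated))
print(np.angle(m1) % (2.0 * np.pi), np.abs(m1))   # propagated circular mean and moment magnitude
```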
Advances in Computing and Communications | 2015
Maxim Dolgov; Jörg Fischer; Uwe D. Hanebeck
In this paper, we consider infinite-horizon networked LQG control over multipurpose networks that do not provide acknowledgments (UDP-like networks). The information communicated over the network experiences transmission delays and losses that are modeled as stochastic processes. In order to mitigate the delays and losses in the controller-actuator channel, the controller transmits sequences of predicted control inputs in addition to the current control input. To reduce the impact of delays and losses in the feedback channel, the estimator computes the estimate using the M last measurements. In this scenario, the separation principle does not hold and the optimal control law is in general nonlinear. However, we show that by restricting the controller and the estimator to linear systems with constant gains, we can find the optimal solution within this class. The presented control law is demonstrated in a numerical example.
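A minimal sketch of the idea of recomputing the estimate from the M last, possibly missing, measurements with a constant gain (own illustration; the matrices, the gain, and the buffering details are assumptions, not the paper's design).

```python
# Fixed-gain re-estimation over a buffer of the M most recent measurement slots.
import numpy as np
from collections import deque

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
L_gain = np.array([[0.5], [0.8]])    # constant estimator gain (assumption)
M = 4                                # number of buffered measurement slots

def reestimate(x_anchor, slots):
    """Roll the estimate forward through the buffered slots.

    slots: deque of (u, y) pairs, oldest first; y is None if the measurement
    for that step was lost or has not arrived yet.
    """
    x = x_anchor
    for u, y in slots:
        x = A @ x + B @ u                    # time update with the applied input
        if y is not None:
            x = x + L_gain @ (y - C @ x)     # measurement update with constant gain
    return x

slots = deque(maxlen=M)
slots.append((np.zeros((1, 1)), np.array([[1.02]])))
slots.append((np.zeros((1, 1)), None))       # this measurement was lost
slots.append((np.zeros((1, 1)), np.array([[1.05]])))
print(reestimate(np.array([[1.0], [0.0]]), slots))
```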
IEEE Transactions on Automatic Control | 2017
Maxim Dolgov; Uwe D. Hanebeck
In this paper, we address infinite-horizon optimal control of Markov Jump Linear Systems (MJLS) via static output feedback. Because the jump parameter is assumed not to be observed, the optimal control law is nonlinear and intractable. Therefore, we assume the regulator to be linear. Under this assumption, we first present sufficient feasibility conditions, in terms of linear matrix inequalities (LMIs), for static output-feedback stabilization of MJLS with nonobserved mode in the mean-square sense. However, these conditions depend on the particular state-space representation, i.e., a coordinate transformation can make the LMIs feasible even though they are infeasible for the original representation. To avoid this ambiguity of the state-space representation, we therefore present an iterative algorithm for the computation of the regulator gain. The algorithm is shown to converge if the MJLS is stabilizable via mode-independent static output feedback. However, convergence of the algorithm is not sufficient for stability of the closed loop, which requires an additional stability check after the regulator gains have been computed. A numerical example demonstrates the application of the presented results.
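The additional stability check can, for instance, be the standard mean-square stability test for discrete-time MJLS (spectral radius of the second-moment operator, in the style of Costa, Fragoso, and Marques), applied to the closed loop A_i + B_i F C_i; the sketch below is an illustration with toy matrices and a hypothetical gain F.

```python
# Mean-square stability check for an MJLS closed loop (own illustration).
import numpy as np

def is_mean_square_stable(A_modes, P):
    """MJLS x_{k+1} = A_{theta_k} x_k, with P[i, j] = Prob(theta_{k+1}=j | theta_k=i)."""
    n = A_modes[0].shape[0]
    m = n * n
    D = np.zeros((len(A_modes) * m, len(A_modes) * m))
    for i, Ai in enumerate(A_modes):
        D[i*m:(i+1)*m, i*m:(i+1)*m] = np.kron(Ai, Ai)
    # Block (j, i) of Lam is p_ij * (A_i kron A_i); MS stability <=> rho(Lam) < 1.
    Lam = np.kron(P.T, np.eye(m)) @ D
    return np.max(np.abs(np.linalg.eigvals(Lam))) < 1.0

# Two-mode toy system with a hypothetical mode-independent output-feedback gain F.
A = [np.array([[1.3, 0.2], [0.0, 0.6]]), np.array([[1.15, 0.0], [0.3, 0.7]])]
B = [np.array([[1.0], [0.0]])] * 2
C = [np.array([[1.0, 0.0]])] * 2
F = np.array([[-0.7]])
P = np.array([[0.8, 0.2], [0.3, 0.7]])
closed_loop = [A[i] + B[i] @ F @ C[i] for i in range(2)]
print(is_mean_square_stable(closed_loop, P))
```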
European Control Conference | 2016
Maxim Dolgov; Christof Chlebek; Uwe D. Hanebeck
In this paper, we address control of Markov Jump Linear Systems without mode observation via dynamic output feedback. Because the optimal nonlinear control law for this problem is intractable, we assume a linear controller. Under this assumption, the control law computation can be expressed as an optimization problem that involves Bilinear Matrix Inequalities. Alternatively, it is possible to cast the problem as a Linear Matrix Inequality by introducing additional linearity constraints and requiring that some system parameters be constant. However, this latter approach is very restrictive and introduces additional conservatism that can yield poor performance. Thus, we propose an alternative iterative algorithm that does not impose any non-standard restrictions and demonstrate it in a numerical example.
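For illustration, a fixed linear dynamic output-feedback controller can be absorbed into an augmented closed-loop matrix per mode, which can then be checked with the same mean-square stability test as in the previous sketch; all values below are hypothetical.

```python
# Augmented closed loop of plant mode i and a linear dynamic output-feedback
# controller x^c_{k+1} = Ac x^c + Bc y, u = Cc x^c + Dc y (own illustration).
import numpy as np

def augmented_closed_loop(Ai, Bi, Ci, Ac, Bc, Cc, Dc):
    """Closed-loop matrix for the stacked state [x; x^c] in mode i."""
    top = np.hstack([Ai + Bi @ Dc @ Ci, Bi @ Cc])
    bottom = np.hstack([Bc @ Ci, Ac])
    return np.vstack([top, bottom])

# Example for one mode with a first-order controller (toy values).
Ai = np.array([[1.1, 0.1], [0.0, 0.9]])
Bi = np.array([[0.0], [1.0]])
Ci = np.array([[1.0, 0.0]])
Ac, Bc, Cc, Dc = np.array([[0.5]]), np.array([[0.2]]), np.array([[-0.8]]), np.array([[-0.3]])
print(augmented_closed_loop(Ai, Bi, Ci, Ac, Bc, Cc, Dc))
```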