Vasile Drăgan
Romanian Academy
Publications
Featured research published by Vasile Drăgan.
Archive | 2013
Vasile Drăgan; Toader Morozan; Adrian-Mihail Stoica
Contents: Preliminaries to Probability Theory and Stochastic Differential Equations; Exponential Stability and Lyapunov-Type Linear Equations; Structural Properties of Linear Stochastic Systems; The Riccati Equations of Stochastic Control; Linear Quadratic Control Problem for Linear Stochastic Systems; Stochastic Version of the Bounded Real Lemma and Applications; Robust Stabilization of Linear Stochastic Systems.
Archive | 1999
Vasile Drăgan; Aristide Halanay
The general problem of stabilization of linear systems consists of designing a controller that uses information on a measurable output in order to influence the behavior of the state considered as a deviation from a desired equilibrium.
Systems & Control Letters | 1995
Vasile Drăgan; Aristide Halanay; Adrian Stoica
It is shown that under a generic assumption the suboptimally robust controller constructed according to Glover and McFarlane (1989) leads to a singularly perturbed system, which may be reduced in the usual way, thus avoiding the difficulties mentioned by Habets (1991). The reduced controller obtained by this procedure is in fact optimal.
Archive | 2010
Vasile Drăgan; Toader Morozan; Adrian-Mihail Stoica
The problem of mean square exponential stability for a class of discrete-time linear stochastic systems subject to independent random perturbations and Markovian switching is investigated. Four different definitions of the concept of exponential stability in the mean square are introduced, and it is shown that they are not always equivalent. One definition is given in terms of the exponential stability of the evolution defined by a sequence of linear positive operators on an ordered Hilbert space; the other three are given in terms of different types of exponential behavior of the trajectories of the considered system. In our approach the Markov chain is not prefixed: the only available information about it is the sequence of probability transition matrices and the set of its states. One thus obtains that, when the system is affected by Markovian jumping, the property of exponential stability is independent of the initial distribution of the Markov chain.
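As a minimal numerical sketch of the operator-based characterization (all matrices below are illustrative assumptions, not taken from the book): for a time-invariant system x_{k+1} = (A_{η_k} + w_k B_{η_k}) x_k with transition matrix P, exponential stability in the mean square can be checked via the spectral radius of the matrix representation of one standard form of the associated positive Lyapunov-type operator.

```python
import numpy as np

def ms_operator_matrix(A, B, P):
    """Matrix representation of one standard form of the Lyapunov-type
    positive operator, (L X)(i) = sum_j P[j, i] (A_j X_j A_j^T + B_j X_j B_j^T),
    built with the identity vec(M X M^T) = (M kron M) vec(X)."""
    N, n, _ = A.shape
    M = np.zeros((N * n * n, N * n * n))
    for i in range(N):
        for j in range(N):
            blk = np.kron(A[j], A[j]) + np.kron(B[j], B[j])
            M[i * n * n:(i + 1) * n * n, j * n * n:(j + 1) * n * n] = P[j, i] * blk
    return M

# Two-mode illustration (values are assumptions):
A = np.array([[[0.5, 0.1], [0.0, 0.6]],
              [[0.7, 0.0], [0.2, 0.4]]])
B = 0.2 * np.ones((2, 2, 2))              # multiplicative-noise gains
P = np.array([[0.9, 0.1], [0.3, 0.7]])    # transition probability matrix

rho = max(abs(np.linalg.eigvals(ms_operator_matrix(A, B, P))))
print(rho < 1)   # spectral radius < 1 corresponds to mean square exponential stability
```

Note that no sample path of the Markov chain is simulated: only the transition matrix enters the test, consistent with the abstract's remark that the chain itself is not prefixed.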
Archive | 2010
Vasile Drăgan; Toader Morozan; Adrian-Mihail Stoica
The main goal of this chapter is to investigate several aspects of the problem of robust stability and robust stabilization for a class of discrete-time linear stochastic systems subject to sequences of independent random perturbations and Markov jump perturbations. As a measure of the robustness of the stability of an equilibrium of a nominal system, a concept of stability radius is introduced. A crucial role in determining the lower bound of the stability radius is played by the norm of a linear bounded operator associated with the given plant. This operator is called the input-output operator and is introduced in Section 8.2. In Section 8.3 a stochastic version of the so-called bounded real lemma is proved. This result provides an estimate of the norm of the input-output operator in terms of the feasibility of some linear matrix inequalities (LMIs) or in terms of the existence of stabilizing solutions of a discrete-time generalized algebraic Riccati-type equation. In Section 8.4 the stochastic version of the so-called small gain theorem is proved; this result is then used to derive a lower bound of robustness with respect to linear structured uncertainties. In the second part of this chapter we consider the robust stabilization problem for systems subject to both multiplicative white noise and Markovian jumps, with respect to some classes of parametric uncertainty. Based on the bounded real lemma, we obtain a set of necessary and sufficient conditions for the existence of a stabilizing feedback gain that ensures a prescribed level of attenuation of the exogenous disturbance. We also show that in the case of full state measurement, if the disturbance attenuation problem has a solution in dynamic controller form, then the same problem is solvable via a control in state feedback form. Finally, a problem of H∞ filtering is solved.
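The small-gain mechanism is easiest to see in the simplest deterministic special case (one mode, no noise), where the input-output operator reduces to the transfer function G(z) = C(zI − A)^{-1}B and 1 / sup_{|z|=1} ||G(z)|| gives a lower bound on the stability radius. A grid-based sketch, with an example system that is an assumption for illustration:

```python
import numpy as np

def hinf_norm_grid(A, B, C, n_pts=2000):
    """Grid estimate of sup over the unit circle of ||C (zI - A)^{-1} B||
    for a stable discrete-time system; its reciprocal lower-bounds the
    stability radius via the small gain theorem."""
    n = A.shape[0]
    best = 0.0
    for th in np.linspace(0.0, 2 * np.pi, n_pts, endpoint=False):
        z = np.exp(1j * th)
        G = C @ np.linalg.solve(z * np.eye(n) - A, B)
        best = max(best, np.linalg.svd(G, compute_uv=False)[0])
    return best

# Scalar illustration: G(z) = 1/(z - 0.5) peaks at z = 1 with value 2,
# so perturbations of size below 1/2 cannot destabilize A = 0.5.
A = np.array([[0.5]]); B = np.array([[1.0]]); C = np.array([[1.0]])
gamma = hinf_norm_grid(A, B, C)
print(1.0 / gamma)   # stability radius lower bound
```

The scalar case confirms the bound exactly: A + δ leaves the unit disc precisely when |δ| ≥ 0.5.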
Archive | 2010
Vasile Drăgan; Toader Morozan; Adrian-Mihail Stoica
In this introductory chapter we collect several definitions and basic results from probability theory which are used in the developments in the next chapters of the book. Our goal is to present in a unified way some concepts that are presented in different ways in other bibliographic sources, and to establish the basic terminology used in this book. The known results in the field are presented without proofs, indicating only the bibliographic source; the less-known results, or those available only in less accessible sources, are presented with their proofs. In the last part of the chapter we describe the classes of stochastic systems under consideration in the book.
Archive | 2010
Vasile Drăgan; Toader Morozan; Adrian-Mihail Stoica
In this chapter we study a class of discrete-time deterministic linear equations, namely discrete-time equations defined by sequences of positive linear operators acting on ordered Hilbert spaces. As we show in Chapter 3 such equations play a crucial role in the derivation of some useful criteria for exponential stability in the mean square of the stochastic systems considered in this book.
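A tiny numerical illustration of such an evolution (the operator and matrices below are assumptions chosen for the demo): a map of the form X ↦ A X A^T + B X B^T sends the cone of positive semidefinite matrices into itself, so iterating it preserves positivity while, for a stable operator, the iterates decay.

```python
import numpy as np

# Illustrative positive linear operator on symmetric matrices:
A = np.array([[0.4, 0.1], [0.0, 0.5]])
B = np.array([[0.2, 0.0], [0.1, 0.2]])
L = lambda X: A @ X @ A.T + B @ X @ B.T    # maps the PSD cone into itself

X = np.eye(2)
for _ in range(10):                        # evolution X_{k+1} = L(X_k)
    X = L(X)

eigs = np.linalg.eigvalsh(X)
print(bool(eigs.min() > 0))                # positivity is preserved at every step
```

The monotone, order-preserving character of such evolutions is what makes them a convenient vehicle for the mean square stability criteria developed later.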
Archive | 2010
Vasile Drăgan; Toader Morozan; Adrian-Mihail Stoica
In this chapter we present the stochastic version of some basic concepts in control theory, namely stabilizability, detectability, and observability. All these concepts are defined both in Lyapunov operator terms and in stochastic system terms. The definitions given in this chapter extend the corresponding definitions for deterministic time-varying systems. Some examples show that stochastic observability does not always imply stochastic detectability, and stochastic controllability does not necessarily imply stochastic stabilizability. As in the deterministic case, the concepts of stochastic detectability and observability are used in some criteria of exponential stability in the mean square.
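The deterministic time-invariant counterparts that these stochastic notions extend can be checked with standard rank tests; a sketch, with example matrices that are assumptions:

```python
import numpy as np

def observable(A, C):
    """Kalman rank test for the deterministic pair (A, C)."""
    n = A.shape[0]
    O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    return np.linalg.matrix_rank(O) == n

def stabilizable(A, B, tol=1e-9):
    """PBH test: (A, B) is stabilizable iff rank [A - lam*I, B] = n for
    every eigenvalue lam of A with |lam| >= 1 (discrete time)."""
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        if abs(lam) >= 1 - tol:
            M = np.hstack([A - lam * np.eye(n), B])
            if np.linalg.matrix_rank(M) < n:
                return False
    return True

A = np.array([[1.1, 0.0], [0.0, 0.5]])
B = np.array([[1.0], [0.0]])
print(stabilizable(A, B))                      # the unstable mode 1.1 is reachable
print(observable(A, np.array([[1.0, 0.0]])))   # the stable mode 0.5 never appears in y
```

The second call returns False while the pair is still detectable (the unseen mode is stable), a deterministic hint at why the stochastic implications between these concepts require care.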
Archive | 2010
Vasile Drăgan; Toader Morozan; Adrian-Mihail Stoica
In this chapter the problem of H2 control of a discrete-time linear system subject to Markovian jumping and independent random perturbations is considered. Several kinds of H2-type performance criteria (often called H2 norms) are introduced and characterized via solutions of some suitable linear equations on the spaces of symmetric matrices. The purpose of such performance criteria is to provide a measure of the effect of additive white noise perturbation over an output of the controlled system. Different aspects specific to the discrete-time framework are emphasized. Firstly, the problem of optimization of H2 norms is solved under the assumption that the full state vector is available for measurement. It is shown that among all stabilizing controllers of arbitrary order, the best performance is achieved by a zero-order controller. The corresponding feedback gain of the optimal controller is constructed based on the stabilizing solution of a system of discrete-time generalized Riccati equations. The case of discrete-time linear stochastic systems with coefficients depending upon the states both at time t and at time t-1 of the Markov chain is also considered. Secondly, the H2 optimization problem is solved under the assumption that only an output is available for measurement. The state space realization of the H2 optimal controller coincides with the stochastic version of the well-known Kalman-Bucy filter. In the construction of the optimal controller the stabilizing solutions of two systems of discrete-time coupled Riccati equations are involved. Because in the case of systems affected by multiplicative white noise the optimal controller is hard to implement, a procedure for designing a suboptimal controller with a state space realization in state estimator form is provided. Finally, a problem of H2 filtering in the case of stochastic systems affected by multiplicative and additive white noise and Markovian switching is solved.
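In the simplest deterministic one-mode case, the H2-type performance measure reduces to the familiar trace formula with the controllability Gramian; a sketch, with system values that are assumptions:

```python
import numpy as np

def h2_norm(A, B, C):
    """H2 norm of the stable one-mode discrete-time system
    x_{k+1} = A x_k + B w_k, y_k = C x_k: solve the Gramian equation
    P = A P A^T + B B^T via vec(), then take sqrt(trace(C P C^T))."""
    n = A.shape[0]
    vecP = np.linalg.solve(np.eye(n * n) - np.kron(A, A), (B @ B.T).reshape(-1))
    P = vecP.reshape(n, n)
    return float(np.sqrt(np.trace(C @ P @ C.T)))

# Scalar check: impulse response 0.5^(k-1), so the squared norm is
# the geometric sum 1/(1 - 0.25) = 4/3.
val = h2_norm(np.array([[0.5]]), np.array([[1.0]]), np.array([[1.0]]))
print(round(val, 4))   # -> 1.1547
```

In the Markov-jump setting this single Gramian equation is replaced by the coupled linear equations on spaces of symmetric matrices described in the abstract.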
Archive | 2010
Vasile Drăgan; Toader Morozan; Adrian-Mihail Stoica
In this chapter several problems of the optimization of a quadratic cost functional along the trajectories of a discrete-time linear stochastic system affected by jumping Markov perturbations and independent random perturbations are investigated. In Section 6.2 we deal with the classical linear quadratic optimal regulator problem, that is, the minimization of a quadratic cost functional with definite sign along the trajectories of a controlled linear system. In Section 6.3 the general case of a linear quadratic optimization problem with a sign-indefinite cost functional is treated. It is shown that in the case of the linear quadratic optimal regulator, the optimal control is constructed via the minimal solution of a system of discrete-time Riccati-type equations, whereas in the general sign-indefinite case the optimal control, if it exists, is constructed based on the stabilizing solution of a system of discrete-time Riccati-type equations. In Section 6.4 we deal with the optimization of a quadratic cost functional for a discrete-time affine stochastic system affected by jumping Markov perturbations and independent random perturbations. Both the finite and the infinite time horizon cases are considered, and the optimal control is constructed using the stabilizing solution of a system of discrete-time Riccati-type equations. A set of necessary and sufficient conditions ensuring the existence of the desired solutions of the discrete-time Riccati equations involved in this chapter was given in Chapter 5. A tracking problem is also solved.
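In the deterministic one-mode special case, the system of coupled Riccati-type equations collapses to a single discrete-time algebraic Riccati equation, whose stabilizing solution can be approximated by value iteration; a sketch under that assumption (matrices and weights are illustrative):

```python
import numpy as np

def dare_iterate(A, B, Q, R, iters=500):
    """Approximate the stabilizing solution of the discrete-time algebraic
    Riccati equation by iterating the Riccati difference equation
    X <- Q + A^T X A - A^T X B (R + B^T X B)^{-1} B^T X A,
    and return X with the optimal feedback gain
    F = -(R + B^T X B)^{-1} B^T X A."""
    X = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ X @ B, B.T @ X @ A)
        X = Q + A.T @ X @ (A - B @ K)
    F = -np.linalg.solve(R + B.T @ X @ B, B.T @ X @ A)
    return X, F

# Double-integrator illustration:
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
X, F = dare_iterate(A, B, Q, R)
print(max(abs(np.linalg.eigvals(A + B @ F))) < 1)   # closed loop is stable
```

The optimal control is then u_k = F x_k; in the stochastic Markov-jump setting of the chapter one such gain is computed per mode from the stabilizing solution of the coupled system.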