
Publication


Featured research published by J.B.R. do Val.


IEEE Transactions on Automatic Control | 2000

Output feedback control of Markov jump linear systems in continuous-time

Daniela Pucci de Farias; José Claudio Geromel; J.B.R. do Val; Oswaldo Luiz V. Costa

This paper addresses the dynamic output feedback control problem for continuous-time Markovian jump linear systems. The fundamental point in the analysis is an LMI characterization comprising all dynamical compensators that stabilize the closed-loop system in the mean square sense. The H2- and H∞-norm control problems are studied, and the H2 and H∞ filtering problems are solved as a by-product.


International Journal of Control | 1997

A convex programming approach to H2 control of discrete-time Markovian jump linear systems

Oswaldo Luiz V. Costa; J.B.R. do Val; José Claudio Geromel

In this paper we consider the H2-control problem for the class of discrete-time linear systems with parameters subject to Markovian jumps using a convex programming approach. We generalize the definition of the H2 norm from the deterministic case to the Markovian jump case and set a link between this norm and the observability and controllability Gramians. Conditions for the existence and derivation of a mean square stabilizing controller for a Markovian jump linear system using convex analysis are established. The main contribution of the paper is to provide a convex programming formulation of the H2-control problem, so that several important cases, to our knowledge not analysed in previous work, can be addressed. Regarding the transition matrix P = [p_ij] of the Markov chain, two situations are considered: the case in which it is exactly known, and the case in which it is not exactly known but belongs to an appropriate convex set. Regarding the state variable and the jump variable, the cases in which t...
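The link between the H2 norm and the Gramians has a familiar single-mode deterministic counterpart, sketched below for x⁺ = Ax + Bw, y = Cx (the classical Lyapunov-based computation, not the paper's Markovian generalization): the squared H2 norm equals tr(C P Cᵀ), where P solves the discrete Lyapunov equation P = A P Aᵀ + B Bᵀ.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def h2_norm_sq(A, B, C):
    """Squared H2 norm of x+ = A x + B w, y = C x, via the controllability Gramian."""
    # Controllability Gramian: P = A P A' + B B'
    P = solve_discrete_lyapunov(A, B @ B.T)
    return np.trace(C @ P @ C.T)

# Scalar example: a = 0.5, b = c = 1 gives P = 1/(1 - a^2) and squared norm 4/3.
A = np.array([[0.5]]); B = np.array([[1.0]]); C = np.array([[1.0]])
print(h2_norm_sq(A, B, C))  # ≈ 1.3333 (= 4/3)
```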


American Control Conference | 1999

Solutions for the linear quadratic control problem of Markov jump linear systems

J.B.R. do Val; José Claudio Geromel; Oswaldo Luiz V. Costa

The paper is concerned with recursive methods for obtaining the stabilizing solution of coupled algebraic Riccati equations arising in the linear-quadratic control of Markovian jump linear systems by solving at each iteration uncoupled algebraic Riccati equations. It is shown that the new updates carried out at each iteration represent approximations of the original control problem by control problems with receding horizon, for which some sequences of stopping times define the terminal time. Under this approach, unlike previous results, no initialization conditions are required to guarantee the convergence of the algorithms. The methods can be ordered in terms of number of iterations to reach convergence, and comparisons with existing methods in the current literature are also presented. Also, we extend and generalize current results in the literature for the existence of the mean-square stabilizing solution of coupled algebraic Riccati equations.


International Journal of Control | 2009

Robust stability, ℋ2 analysis and stabilisation of discrete-time Markov jump linear systems with uncertain probability matrix

Ricardo C. L. F. Oliveira; Alessandro N. Vargas; J.B.R. do Val; Pedro L. D. Peres

The stability and the problem of ℋ2 guaranteed cost computation for discrete-time Markov jump linear systems (MJLS) are investigated, assuming that the transition probability matrix is not precisely known. It is generally difficult to estimate the exact transition matrix of the underlying Markov chain, and this setting is of special interest for applications of MJLS. The exact matrix is assumed to belong to a polytopic domain made up of known probability matrices, and a sequence of linear matrix inequalities (LMIs) is proposed to verify stability and to solve the ℋ2 guaranteed cost problem with increasing precision. These LMI problems are connected to homogeneous polynomially parameter-dependent Lyapunov matrices of increasing degree g. Mean square stability (MSS) can be established by the method since the conditions, which are sufficient, eventually turn out to also be necessary provided that the degree g is large enough. The ℋ2 guaranteed cost under MSS is also studied here, and an extension to cope with the problem of control design is introduced. These conditions are only sufficient, but as the degree g increases, the conservativeness of the ℋ2 guaranteed costs is reduced. Both mode-dependent and mode-independent control laws are addressed, and numerical examples illustrate the results.
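For a precisely known transition matrix, mean square stability of a discrete-time MJLS can also be checked by a standard spectral-radius test from the MJLS literature: x_{k+1} = A_{θ_k} x_k is MSS if and only if the spectral radius of (Pᵀ ⊗ I) · diag(A_1 ⊗ A_1, …, A_N ⊗ A_N) is less than one. A minimal sketch, with mode matrices chosen here purely for illustration:

```python
import numpy as np

def is_mss(A_modes, P):
    """MSS test for x_{k+1} = A_{theta_k} x_k:
    MSS iff spectral radius of (P^T kron I) @ blockdiag(A_i kron A_i) < 1."""
    n, N = A_modes[0].shape[0], len(A_modes)
    m = n * n
    D = np.zeros((N * m, N * m))
    for i, Ai in enumerate(A_modes):
        D[i * m:(i + 1) * m, i * m:(i + 1) * m] = np.kron(Ai, Ai)
    M = np.kron(P.T, np.eye(m)) @ D
    return max(abs(np.linalg.eigvals(M))) < 1.0

P = np.array([[0.9, 0.1], [0.1, 0.9]])
# An unstable mode occupied most of the time destroys MSS ...
print(is_mss([np.array([[1.2]]), np.array([[0.3]])], P))  # False
# ... while two contracting modes preserve it.
print(is_mss([np.array([[0.5]]), np.array([[0.3]])], P))  # True
```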


IEEE Transactions on Automatic Control | 1998

Uncoupled Riccati iterations for the linear quadratic control problem of discrete-time Markov jump linear systems

J.B.R. do Val; José Claudio Geromel; Oswaldo Luiz V. Costa

This paper deals with recursive methods for solving coupled Riccati equations arising in the linear quadratic control for Markovian jump linear systems. Two algorithms, based on solving uncoupled Riccati equations at each iteration, are presented. The standard method for this problem relies on finite stage approximations with receding horizon, whereas the methods presented here are based on sequences of stopping times to define the terminal time of the approximating control problems. The methods can be ordered in terms of rate of convergence. Comparisons with other methods in the current literature are also presented.
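As a rough illustration of the coupled Riccati equations involved, the sketch below runs a plain fixed-point sweep on the coupled discrete-time equations (not the stopping-time recursions proposed in the paper, and with made-up scalar data): X_i ← Q_i + A_iᵀE_iA_i − A_iᵀE_iB_i(R_i + B_iᵀE_iB_i)⁻¹B_iᵀE_iA_i, where E_i = Σ_j p_ij X_j couples the modes.

```python
import numpy as np

def coupled_dare_fixed_point(A, B, Q, R, P, iters=500):
    """Naive fixed-point sweep on the coupled discrete-time Riccati equations
    of jump-linear LQ control, coupled through E_i = sum_j P[i, j] X_j."""
    N, n = len(A), A[0].shape[0]
    X = [np.zeros((n, n)) for _ in range(N)]
    for _ in range(iters):
        E = [sum(P[i, j] * X[j] for j in range(N)) for i in range(N)]
        X = [Q[i] + A[i].T @ E[i] @ A[i]
             - A[i].T @ E[i] @ B[i] @ np.linalg.solve(
                 R[i] + B[i].T @ E[i] @ B[i], B[i].T @ E[i] @ A[i])
             for i in range(N)]
    return X

# Two scalar modes, one mildly unstable, coupled by the jump chain.
A = [np.array([[0.9]]), np.array([[1.1]])]
B = [np.eye(1), np.eye(1)]
Q = [np.eye(1), np.eye(1)]
R = [np.eye(1), np.eye(1)]
P = np.array([[0.7, 0.3], [0.4, 0.6]])
X = coupled_dare_fixed_point(A, B, Q, R, P)
```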


American Control Conference | 2006

Constrained model predictive control of jump linear systems with noise and non-observed Markov state

Alessandro N. Vargas; Walter Furloni; J.B.R. do Val

This paper presents a variational method for the solution of the model predictive control (MPC) problem for discrete-time Markov jump linear systems (MJLS) subject to noisy inputs and a quadratic performance index. Constraints appear on the system state and control input variables in terms of the first two moments of the processes. The information available to the controller does not involve observations of the Markov chain state, and to solve the problem a sequence of linear feedback gains that is independent of the Markov state is adopted. The necessary conditions of optimality are provided by an equivalent deterministic form of the stochastic MPC problem subject to the constraints. A numerical solution that attains the necessary conditions for optimality and provides the feedback gain sequence is proposed. The solution is sought by an iterative method that performs a variational search using an LMI formulation taking the state and input constraints into account.


IEEE Transactions on Automatic Control | 2010

Average Cost and Stability of Time-Varying Linear Systems

Alessandro N. Vargas; J.B.R. do Val

In this technical note, the stability of time-varying discrete-time stochastic linear systems, with possibly unbounded trajectories, is associated with the existence of the long-run average cost criterion. Under controllability and observability, the stochastic system is stable in the sense that its state tends asymptotically to the origin in the mean. Further conditions, related to a lower bound on the long-run average cost or to periodic time-varying systems, provide uniform second moment stability.


European Journal of Operational Research | 2008

Stability and optimality of a multi-product production and storage system under demand uncertainty

Edilson F. Arruda; J.B.R. do Val

This work develops a discrete event model for a multi-product, multi-stage production and storage (P&S) problem subject to random demand. The intervention problem consists of three types of possible decisions made at the end of one stage, which depend on the observed demand (or lack of it) for each item: (i) to proceed further with the production of the same product, (ii) to proceed with the production of another product, or (iii) to halt production. The intervention problem is formulated in terms of dynamic programming (DP) operators, and each possible solution induces a homogeneous Markov chain that characterizes the dynamics. However, solving the DP problem directly is not viable in situations involving a moderately large number of products with many production stages, and the idea of the paper is to detach from strict optimality, with monitored precision, and rely on stability. The notion of stochastic stability brought to bear requires a finite set of positive recurrent states, and the paper derives necessary and sufficient conditions for a policy to induce such a set in the studied P&S problem. An approximate value iteration algorithm is proposed, which applies to the broader class of control problems described by homogeneous Markov chains that satisfy a structural condition pointed out in the paper. This procedure iterates on a finite subset of the state space, circumventing the computational burden of standard dynamic programming. To benchmark the approach, the proposed algorithm is applied to a simple two-product P&S system.
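The approximate scheme above builds on standard value iteration. A minimal sketch on a made-up three-state, two-action finite MDP (the transition kernels, costs, and discount factor are illustrative assumptions, not the paper's P&S model) looks as follows:

```python
import numpy as np

# Toy finite MDP: transition kernels T[a][s, s'] and costs c[s, a], made up
# for illustration; value iteration is the baseline that the paper's
# approximate scheme restricts to a finite recurrent subset.
T = [np.array([[0.8, 0.2, 0.0],
               [0.1, 0.8, 0.1],
               [0.0, 0.3, 0.7]]),
     np.array([[0.5, 0.5, 0.0],
               [0.0, 0.5, 0.5],
               [0.2, 0.2, 0.6]])]
c = np.array([[1.0, 2.0],
              [0.5, 0.3],
              [2.0, 1.0]])
gamma = 0.9  # discount factor (assumed)

V = np.zeros(3)
for _ in range(1000):
    # Bellman update: minimize expected cost-to-go over the two actions.
    V = np.min(c + gamma * np.stack([T[a] @ V for a in range(2)], axis=1), axis=1)
```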


Conference on Decision and Control | 2004

Receding horizon control of Markov jump linear systems subject to noise and unobserved state chain

Alessandro N. Vargas; J.B.R. do Val; Eduardo F. Costa

We study the solution of the receding horizon control problem for discrete-time Markov jump linear systems subject to exogenous inputs (noise). The performance index is quadratic, and the information available to the controller does not involve observations of the Markov chain state. To solve this problem, a sequence of linear feedback gains that is independent of the Markov state is adopted. We propose an iterative method based on a variational procedure that attains the solution to the problem, and an illustrative example is presented.


American Control Conference | 2006

Weak controllability and weak stabilizability concepts for linear systems with Markov jump parameters

Eduardo F. Costa; A.L.P. Manfrim; J.B.R. do Val

The paper introduces weak controllability and weak stabilizability concepts for discrete-time Markov jump linear systems with a finite Markov state space. We introduce a collection of matrices 𝒞 that resembles the controllability matrices of deterministic linear systems. The collection 𝒞 allows us to define a weak controllability concept by requiring that the matrices be full rank, as well as to introduce a weak stabilizability concept that is a dual of the weak detectability concept found in the literature on Markov jump systems. An important feature of the introduced concept is that it generalizes the concept of mean square stabilizability. The role that this concept plays in the filtering problem is investigated through case studies, which suggest that weak stabilizability together with mean square detectability ensures that the state estimator is mean square stable. Illustrative examples are included.

Collaboration


Top co-authors of J.B.R. do Val.

Alessandro N. Vargas | Basque Center for Applied Mathematics

Marcelo D. Fragoso | National Council for Scientific and Technological Development

E.F. Arruda | Pontifícia Universidade Católica do Rio Grande do Sul

Edilson F. Arruda | Federal University of Rio de Janeiro

Pedro L. D. Peres | State University of Campinas