João Bosco Ribeiro do Val
State University of Campinas
Publication
Featured research published by João Bosco Ribeiro do Val.
Journal of Economic Dynamics and Control | 1999
João Bosco Ribeiro do Val; Tamer Basar
We study a time-varying macroeconomic model in which some of the parameters are allowed to fluctuate exogenously, according to a Markov chain. This feature allows us to model abrupt changes, for improvement or degradation, in the intrinsic relations among the economic variables, and to account for changes in the policy-makers' preferences. Receding horizon control is well suited to systems with modelled parameter fluctuations in the short and medium terms but unmodelled uncertainties in the long run. The problem features a partial information structure, since the changes in the economy may not be accessible, and to seek a computable solution we restrict attention to the class of linear feedback controls.
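The receding-horizon idea in the abstract can be sketched numerically. The snippet below is a minimal illustration, not the paper's model: a hypothetical two-mode scalar economy whose parameters switch according to a Markov chain, controlled by re-solving a short-horizon LQ problem at every step using the current mode's (frozen) parameters. All numbers are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-mode scalar system: x(k+1) = a[m]*x(k) + b[m]*u(k),
# with mode m switching according to a Markov chain (illustrative numbers).
a = np.array([1.05, 0.90])   # "degraded" vs "improved" dynamics
b = np.array([0.50, 0.40])
P = np.array([[0.9, 0.1],    # transition probabilities
              [0.2, 0.8]])
q_w, r_w = 1.0, 0.1          # state and control weights
T = 5                        # short receding horizon

def rh_gain(m):
    """First-step LQ gain over a short horizon, freezing the current mode's
    parameters (a crude certainty-equivalent receding-horizon scheme)."""
    s = q_w
    for _ in range(T):       # backward Riccati recursion, scalar case
        k = a[m] * s * b[m] / (r_w + b[m] ** 2 * s)
        s = q_w + a[m] ** 2 * s - a[m] * s * b[m] * k
    return a[m] * s * b[m] / (r_w + b[m] ** 2 * s)

x, m = 5.0, 0
for k in range(40):          # closed loop: re-solve at every step
    u = -rh_gain(m) * x
    x = a[m] * x + b[m] * u
    m = rng.choice(2, p=P[m])  # exogenous Markov switching
print(abs(x))
```

Even though mode 0 is open-loop unstable (a = 1.05), re-solving the short-horizon problem at each step drives the state to zero; the Markov switching only changes which frozen model the controller uses.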
Systems & Control Letters | 2001
Eduardo F. Costa; João Bosco Ribeiro do Val
This paper presents a new detectability concept for discrete-time Markov jump linear systems with finite Markov state, which generalizes the MS-detectability concept found in the literature. The new sense of detectability can similarly assure that the solution of the coupled algebraic Riccati equation associated with the quadratic control problem is a stabilizing solution. In addition, the paper introduces a related observability concept that also generalizes previous concepts. A test for detectability based on a coupled matrix equation is derived from the definition, and a test for observability is presented, which can be performed in a finite number of steps. The results are illustrated by examples, including one that shows that a system may be detectable in the new sense but not in the MS sense.
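The detectability and observability tests in the paper rest on the mean-square (MS) analysis of Markov jump linear systems. As background, the classical computable MS-stability test is sketched below; it is not the paper's detectability test, and the mode matrices and transition probabilities are illustrative assumptions. The second moments Q_i(k) = E[x x' 1{θ=i}] evolve linearly, so MS stability reduces to a spectral-radius check on one augmented matrix.

```python
import numpy as np

def ms_stable(A_modes, P):
    """Classical MS-stability test for x(k+1) = A_{theta(k)} x(k):
    the stacked second moments vec(Q_1),...,vec(Q_N) evolve linearly,
    and the system is MS-stable iff the spectral radius of the
    augmented matrix M is < 1."""
    N = len(A_modes)
    n = A_modes[0].shape[0]
    M = np.zeros((N * n * n, N * n * n))
    for i in range(N):
        for j in range(N):
            # block (j, i): p_ij * (A_i ⊗ A_i) maps vec(Q_i) into vec(Q_j)
            M[j*n*n:(j+1)*n*n, i*n*n:(i+1)*n*n] = \
                P[i, j] * np.kron(A_modes[i], A_modes[i])
    return bool(max(abs(np.linalg.eigvals(M))) < 1.0)

# Two modes: one individually unstable, one strongly contracting.
A = [np.array([[1.2, 0.0], [0.0, 0.5]]),
     np.array([[0.3, 0.0], [0.0, 0.3]])]
P = np.array([[0.5, 0.5], [0.5, 0.5]])
print(ms_stable(A, P))
```

Note that the example is MS-stable even though one mode is unstable on its own: the chain spends enough time in the contracting mode for the second moments to decay.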
Siam Journal on Control and Optimization | 2005
Eduardo F. Costa; João Bosco Ribeiro do Val; Marcelo D. Fragoso
This paper deals with detectability for the class of discrete-time Markov jump linear systems (MJLS) with the underlying Markov chain having countably infinite state space. The formulation here relates the convergence of the output with that of the state variables. Our approach introduces invariant subspaces for the autonomous system and exhibits the role that they play. This allows us to show that detectability can be written equivalently in terms of two conditions: stability of the autonomous system in a certain invariant space and convergence of general state trajectories to this invariant space under convergence of input and output variables. This, in turn, provides the tools to show that detectability here generalizes uniform observability ideas as well as previous detectability notions for MJLS with finite state Markov chain, and allows us to solve the jump-linear-quadratic control problem. In addition, it is shown for the MJLS with finite Markov state that the second condition is redundant and that detectability retrieves previously well-known concepts in their respective scenarios.
IEEE Transactions on Control Systems and Technology | 2014
Ricardo C. L. F. Oliveira; Alessandro N. Vargas; João Bosco Ribeiro do Val; Pedro L. D. Peres
This brief presents a control strategy for Markov jump linear systems (MJLS) with no access to the Markov state (or mode). The controller is assumed to be in the linear state-feedback format, and the aim of the control problem is to design a static mode-independent gain that minimizes a bound on the corresponding H2-cost. This approach has practical appeal, since it is often difficult to measure or estimate the actual operating mode. The result of the proposed method is compared with that of a previous design, and its usefulness is illustrated by an application considering the velocity control of a DC motor subject to abrupt failures, modeled as an MJLS.
Numerical Linear Algebra With Applications | 2013
Alessandro N. Vargas; Walter Furloni; João Bosco Ribeiro do Val
This paper addresses the optimal solution for the regulator control problem of Markov jump linear systems subject to second moment constraints. We can characterize and obtain the solution explicitly using linear matrix inequality techniques. The constraints are imposed on the second moment of both the system state and the control vector, and the optimal solution is obtained in a computable form. To illustrate the usefulness of the approach, especially for systems subject to abrupt variations and physical limitations, we present an application for one joint of the European Robotic Arm.
Mathematics of Control, Signals, and Systems | 2011
Eduardo F. Costa; Alessandro N. Vargas; João Bosco Ribeiro do Val
This paper presents an analytic, systematic approach to handle quadratic functionals associated with Markov jump linear systems with general jumping state. The Markov chain is finite state, but otherwise general, possibly reducible and periodic. We study how the second moment dynamics are affected by the additive noise and by the asymptotic behaviour, either oscillatory or invariant, of the Markov chain. The paper comprises a series of evaluations that lead to a tight two-sided bound for quadratic cost functionals. A tight two-sided bound for the norm of the second moment of the system is also obtained. These bounds allow us to show that the long-run average cost is well defined for systems that are stable in the mean square sense, in spite of the possibly periodic behaviour of the chain, and taking into account that the cost may not be unique, as it may depend on the initial distribution. We also address the important question of approximating the long-run average cost via the adherence of finite horizon costs.
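The finite-horizon approximation of the long-run average cost can be illustrated with a toy computation. The sketch below is not the paper's construction: it assumes a hypothetical scalar MJLS with additive noise, propagates the second moments q_i(k) = E[x(k)² 1{θ(k)=i}] exactly, and shows the running averages of the per-step cost E[x(k)²] settling toward a limit.

```python
import numpy as np

# Hypothetical scalar MJLS with additive noise: x(k+1) = a_i x(k) + w(k).
a2 = np.array([0.64, 0.25])           # a_i^2 for two stable modes
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])            # Markov transition matrix
W = 1.0                               # noise variance E[w^2]

# Second-moment recursion: q_j(k+1) = sum_i p_ij * (a_i^2 q_i(k) + pi_i(k) W)
q = np.zeros(2)                       # q_i = E[x^2 1{theta=i}], x(0) = 0
pi = np.array([1.0, 0.0])             # initial mode distribution
avgs = []
total = 0.0
for k in range(1, 2001):
    q = P.T @ (a2 * q + pi * W)
    pi = P.T @ pi                     # mode distribution at time k
    total += q.sum()                  # per-step cost E[x(k)^2]
    avgs.append(total / k)            # finite-horizon average cost
print(round(avgs[-1], 4))
```

Because both modes are mean-square contracting here, the finite-horizon averages converge; with a periodic chain, as the paper discusses, the per-step cost can keep oscillating while the averages still settle.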
Journal of Mathematical Analysis and Applications | 2003
João Bosco Ribeiro do Val; Cristiane Nespoli; Yusef Caceres
This paper deals with a stochastic stability concept for discrete-time Markovian jump linear systems. The random jump parameter is associated with changes between the system operation modes due to failures or repairs, which can be well described by an underlying finite-state Markov chain. In the model studied, a fixed number of failures or repairs is allowed, after which the system is brought to a halt for maintenance or replacement. The usual concepts of stochastic stability are related to pure infinite horizon problems and are not appropriate in this scenario. A new stability concept is introduced, named stochastic τ-stability, which is tailored to the present setting. Necessary and sufficient conditions to ensure stochastic τ-stability are provided, and the almost sure stability concept associated with this class of processes is also addressed. The paper also develops equivalences among second order concepts that parallel the results for infinite horizon problems.
Stochastic Analysis and Applications | 2005
Eduardo F. Costa; João Bosco Ribeiro do Val; Marcelo D. Fragoso
This paper introduces a concept of detectability for discrete-time infinite Markov jump linear systems that relates the stochastic convergence of the output with the stochastic convergence of the state. It is shown that the new concept generalizes a known stochastic detectability concept and, in the finite-dimensional scenario, reduces to the weak detectability concept. It is also shown that the detectability concept proposed here retrieves the well-known property of linear deterministic systems that observability is stricter than detectability.
European Journal of Control | 2004
Eduardo F. Costa; João Bosco Ribeiro do Val
The paper presents an algorithm for solving a perturbed algebraic Riccati equation, which involves a monotone operator and comprises the usual Riccati equation and coupled algebraic Riccati equations as particular cases. The method relies on iterations of Riccati equations whose solutions are ensured to exist and to be unique via an adequate choice of certain parameters. We show that the method generates a monotonically increasing sequence that converges to the minimal solution of the original equation whenever it exists. In this case, convergence is unconditional; no stability or any other side condition is required. Illustrative numerical examples are included.
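To make the object of study concrete, the sketch below solves a small instance of the coupled algebraic Riccati equations by plain fixed-point iteration. This is not the paper's algorithm (which iterates perturbed Riccati equations with carefully chosen parameters); it is a naive scheme on a hypothetical two-mode scalar system, with all numerical values made up for illustration.

```python
import numpy as np

# Coupled AREs for a two-mode scalar MJLS (illustrative data):
#   X_i = q + a_i^2 E_i - (a_i E_i b_i)^2 / (r + b_i^2 E_i),
#   with coupling term E_i = sum_j p_ij X_j.
a = np.array([1.1, 0.8])
b = np.array([1.0, 0.5])
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])
q, r = 1.0, 1.0

X = np.zeros(2)
for _ in range(500):                  # naive fixed-point iteration
    E = P @ X                         # coupling: E_i = sum_j p_ij X_j
    X_new = q + a**2 * E - (a * E * b)**2 / (r + b**2 * E)
    if np.max(np.abs(X_new - X)) < 1e-12:
        X = X_new
        break
    X = X_new

# Residual of the coupled equations at the computed X
E = P @ X
res = X - (q + a**2 * E - (a * E * b)**2 / (r + b**2 * E))
print(X, np.max(np.abs(res)))
```

Starting from X = 0, the iterates increase monotonically toward the fixed point, echoing (in a much simpler setting) the monotone-sequence behaviour the paper establishes for its method.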
IEEE Transactions on Control Systems and Technology | 2016
Alessandro N. Vargas; Leonardo P. Sampaio; Leonardo Acho; Lixian Zhang; João Bosco Ribeiro do Val
The note presents an algorithm for the average cost control problem of continuous-time Markov jump linear systems. The controller assumes a linear state-feedback form, and the corresponding control gain does not depend on the Markov chain. In this scenario, the control problem is that of minimizing the long-run average cost. As an attempt to solve the problem, we derive a globally convergent algorithm that generates a gain satisfying necessary optimality conditions. Our algorithm has practical implications, as illustrated by experiments carried out to control an electronic dc-dc buck converter. The buck converter supplied a load that suffered abrupt changes driven by a homogeneous Markov chain; in addition, the source of the buck converter also suffered abrupt Markov-driven changes. The experimental results support the usefulness of our algorithm.