Ioannis Tzortzis
University of Cyprus
Publications
Featured research published by Ioannis Tzortzis.
IEEE Transactions on Automatic Control | 2014
Charalambos D. Charalambous; Ioannis Tzortzis; Sergey Loyka; Themistoklis Charalambous
The aim of this paper is to investigate extremum problems with pay-off being the total variation distance metric defined on the space of probability measures, subject to linear functional constraints on the space of probability measures, and vice versa; that is, with the roles of the total variation metric and the linear functional interchanged. Utilizing concepts from signed measures, the extremum probability measures of such problems are obtained in closed form, by identifying the partition of the support set and the mass of these extremum measures on the partition. The results are derived for abstract spaces, specifically complete separable metric spaces known as Polish spaces, while the high-level ideas are also discussed for denumerable spaces endowed with the discrete topology. These extremum problems arise in many areas, such as approximating a family of probability distributions by a given probability distribution, maximizing or minimizing entropy subject to total variation distance metric constraints, quantifying uncertainty of probability distributions by the total variation distance metric, stochastic minimax control, and many problems of information theory, decision theory, and minimax theory.
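The finite-alphabet case of this extremum problem has a simple closed-form flavor: to maximize a linear functional over distributions within total variation distance R of a nominal distribution, move R/2 of probability mass from the lowest-payoff states to the highest-payoff state. The sketch below is a minimal illustration of that idea under the assumption of a single, distinct payoff maximizer; the function name and the toy numbers are illustrative, not taken from the paper.

```python
# Sketch: maximize E_Q[f] over distributions Q with sum(|q - p|) <= R
# around a nominal p, by moving R/2 mass from low-payoff to high-payoff
# states (assumes a unique payoff-maximizing state for simplicity).

def tv_extremum(p, f, R):
    """Greedy closed-form maximizer of sum(q[i]*f[i]) over the TV ball."""
    n = len(p)
    q = list(p)
    order = sorted(range(n), key=lambda i: f[i])  # states by ascending payoff
    top = order[-1]
    q[top] += R / 2                               # add mass to the best state
    to_remove = R / 2
    for i in order:                               # drain mass from worst states,
        if i == top:                              # respecting q >= 0
            continue
        take = min(q[i], to_remove)
        q[i] -= take
        to_remove -= take
        if to_remove <= 0:
            break
    return q

p = [0.4, 0.3, 0.2, 0.1]   # nominal distribution
f = [1.0, 2.0, 3.0, 4.0]   # payoff defining the linear functional
q = tv_extremum(p, f, R=0.2)
value = sum(qi * fi for qi, fi in zip(q, f))
print([round(x, 10) for x in q], round(value, 10))
```

With these numbers the optimizer shifts 0.1 of mass from the worst state to the best one, improving the pay-off from 2.0 to 2.3 while staying inside the TV ball of radius 0.2.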
Siam Journal on Control and Optimization | 2015
Ioannis Tzortzis; Charalambos D. Charalambous; Themistoklis Charalambous
The aim of this paper is to address optimality of stochastic control strategies via dynamic programming subject to total variation distance ambiguity on the conditional distribution of the controlled process. We formulate the stochastic control problem using minimax theory, in which the control minimizes the payoff while the conditional distribution, from the total variation distance set, maximizes it. First, we investigate the maximization of a linear functional on the space of probability measures on abstract spaces, among those probability measures which are within a total variation distance from a nominal probability measure, and then we give the maximizing probability measure in closed form. Second, we utilize the solution of the maximization to solve minimax stochastic control with deterministic control strategies, under a Markovian and a non-Markovian assumption, on the conditional distributions of the controlled process. The results of this part include (1) minimax optimization subject to total va...
conference on decision and control | 2011
Charalambos D. Charalambous; Ioannis Tzortzis; Farzad Rezaei
The aim of this paper is to address optimality of control strategies for stochastic discrete-time control systems subject to conditional distribution uncertainty. This type of uncertainty is motivated by the fact that the value function involves an expectation with respect to the conditional distribution. The issues discussed are the following: 1) optimal stochastic control systems subject to conditional distribution uncertainty, and 2) optimality criteria for stochastic control systems with conditional distribution uncertainty, including the principle of optimality and dynamic programming.
IEEE Transactions on Automatic Control | 2017
Charalambos D. Charalambous; Christos K. Kourtellaris; Ioannis Tzortzis
The control-coding capacity of stochastic control systems is introduced, and its operational meaning is established using randomized control strategies, which simultaneously control output processes, encode information, and communicate information from control processes to output processes. The control-coding capacity is the analog of Shannon's coding capacity of noisy channels. Furthermore, duality relations to stochastic optimal control problems with deterministic and randomized control strategies are identified, including the following. First, extremum problems of stochastic optimal control with a directed information pay-off are equivalent to feedback capacity problems of information theory, in which the control system acts as a communication channel. Second, for Gaussian linear decision models with average quadratic constraints, it is shown that optimal randomized strategies are Gaussian and decompose into a deterministic part and a random part. The deterministic part is precisely the optimal strategy of the linear quadratic Gaussian stochastic optimal control problem, whereas the random part is the solution of a water-filling information transmission problem that encodes information, which is estimated by a decoder.
conference on decision and control | 2012
Charalambos D. Charalambous; Ioannis Tzortzis; Themistoklis Charalambous
The aim of this paper is to address optimality of stochastic control strategies via dynamic programming subject to total variational distance uncertainty on the conditional distribution of the controlled process. Utilizing concepts from signed measures, the maximization of a linear functional on the space of probability measures on abstract spaces is investigated, among those probability measures which are within a total variational distance from a nominal probability measure. The maximizing probability measure is found in closed form. These results are then applied to solve minimax stochastic control with deterministic control strategies, under a Markovian assumption on the conditional distributions of the controlled process. The results include: 1) Optimization subject to total variational distance constraints, 2) new dynamic programming recursions, which involve the oscillator seminorm of the value function.
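The dynamic programming recursion with the oscillator seminorm admits a compact finite-state sketch: for a small enough TV radius R, the worst-case conditional expectation of a value function V over the TV ball equals the nominal expectation plus (R/2) times osc(V) = max V − min V. The states, controls, costs, and kernels below are illustrative; the closed form assumes R is small enough that the nonnegativity constraints of the maximizing measure stay inactive.

```python
# Minimal finite-state sketch of the minimax dynamic program: the
# worst-case expectation of V over a TV ball of radius R around the
# nominal kernel P(z|x,u) is E_P[V] + (R/2) * osc(V), where
# osc(V) = max(V) - min(V) is the oscillator seminorm (valid for small R).

R = 0.1                                   # total variation radius
cost = {                                  # stage cost c(x, u) -- illustrative
    (0, "a"): 1.0, (0, "b"): 2.0,
    (1, "a"): 0.5, (1, "b"): 0.0,
}
P = {                                     # nominal kernel P(z | x, u)
    (0, "a"): [0.9, 0.1], (0, "b"): [0.5, 0.5],
    (1, "a"): [0.2, 0.8], (1, "b"): [0.4, 0.6],
}

def robust_bellman(V):
    """One minimax step: min over u of cost + worst-case expectation of V."""
    osc = max(V) - min(V)                 # oscillator seminorm of V
    return [
        min(cost[x, u]
            + sum(p * v for p, v in zip(P[x, u], V))
            + (R / 2) * osc
            for u in ("a", "b"))
        for x in (0, 1)
    ]

V = [0.0, 0.0]                            # terminal value function
for _ in range(3):                        # three-stage horizon
    V = robust_bellman(V)
print(V)
```

Note that the maximizing adversary inflates every action's continuation cost by the same (R/2)·osc(V) term, so the recursion stays a standard Bellman minimization with a penalized value function.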
IEEE Transactions on Automatic Control | 2017
Ioannis Tzortzis; Charalambos D. Charalambous; Themistoklis Charalambous; Christoforos N. Hadjicostis; Mikael Johansson
The aim of this paper is to approximate a Finite-State Markov (FSM) process by another process defined on a lower dimensional state space, called the approximating process, with respect to a total variation distance fidelity criterion. The approximation problem is formulated as an optimization problem using two different approaches. The first approach is based on approximating the transition probability matrix of the FSM process by a lower-dimensional transition probability matrix, resulting in an approximating process which is a Finite-State Hidden Markov (FSHM) process. The second approach is based on approximating the invariant probability vector of the original FSM process by another invariant probability vector defined on a lower-dimensional state space. Going a step further, a method is proposed based on optimizing a Kullback-Leibler divergence to approximate the FSHM processes by FSM processes. The solutions of these optimization problems are described by optimal partition functions which aggregate the states of the FSM process via a corresponding water-filling solution, resulting in lower-dimensional approximating processes which are FSHM or FSM processes. Throughout the paper, the theoretical results are justified by illustrative examples that demonstrate our proposed methodology.
conference on decision and control | 2014
Ioannis Tzortzis; Charalambos D. Charalambous; Themistoklis Charalambous; Christoforos N. Hadjicostis; Mikael Johansson
In this paper, we investigate the problem of aggregating a given finite-state Markov process by another process with fewer states. The aggregation utilizes total variation distance as a measure of discriminating the Markov process by the aggregate process, and aims to maximize the entropy of the aggregate process invariant probability, subject to a fidelity described by the total variation distance ball. An iterative algorithm is presented to compute the invariant distribution of the aggregate process, as a function of the invariant distribution of the Markov process. It turns out that the approximation method via aggregation leads to an optimal aggregate process which is a hidden Markov process, and the optimal solution exhibits a water-filling behavior. Finally, the algorithm is applied to specific examples to illustrate the methodology and properties of the approximations.
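A small companion sketch for this aggregation setup: compute the invariant distribution of a finite-state Markov chain by power iteration, push it through a partition function onto a smaller state space, and evaluate the total variation distance used as the fidelity criterion. The chain, the partition, and the uniform comparison point are all illustrative, not taken from the paper.

```python
# Sketch: invariant distribution of a Markov chain, state aggregation via
# a partition function, and the TV distance on the aggregate space.

def stationary(P, iters=200):
    """Invariant distribution of transition matrix P via power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def aggregate(pi, phi, m):
    """Push distribution pi through partition function phi onto m states."""
    nu = [0.0] * m
    for i, mass in enumerate(pi):
        nu[phi[i]] += mass
    return nu

def total_variation(p, q):
    """TV distance in its L1 form (sum of absolute differences)."""
    return sum(abs(a - b) for a, b in zip(p, q))

P = [[0.50, 0.50, 0.00],
     [0.25, 0.50, 0.25],
     [0.00, 0.50, 0.50]]                  # illustrative 3-state chain
pi = stationary(P)                        # ~ [0.25, 0.5, 0.25]
nu = aggregate(pi, phi=[0, 1, 1], m=2)    # lump states 1 and 2 together
tv = total_variation(nu, [0.5, 0.5])      # aggregate's distance from uniform
print([round(x, 6) for x in pi], round(tv, 6))
```

The entropy-maximization and water-filling structure of the paper's optimal aggregate is not reproduced here; this only shows the objects the optimization is defined over.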
conference on decision and control | 2013
Charalambos D. Charalambous; Ioannis Tzortzis; Sergey Loyka; Themistoklis Charalambous
The aim of this paper is to investigate extremum problems with pay-off being the total variational distance metric, subject to linear functional constraints, both defined on the space of probability measures, as well as related problems. Utilizing concepts from signed measures, the extremum probability measures of such problems are obtained in closed form, by identifying the partition of the support set and the mass of these extremum measures on the partition. The results are derived for abstract spaces, specifically complete separable metric spaces, while the high-level ideas are also discussed for denumerable spaces endowed with the discrete topology.
international symposium on information theory | 2017
Charalambos D. Charalambous; Christos K. Kourtellaris; Sergey Loyka; Ioannis Tzortzis
Feedback capacity is extended beyond classical communication channels, to stochastic dynamical systems, which may correspond to unstable control systems or unstable communication channels, subject to average cost constraints of total power κ ∈ [0, ∞). It is shown that optimal conditional distributions, or randomized strategies, have a dual role: to simultaneously control the output process and to encode information. The dual role is due to the interaction of control and information transmission; it states that encoders in communication channels operate as encoders-controllers, while controllers in control systems operate as controllers-encoders. The concepts are illustrated through the analysis of Gaussian control systems with randomized strategies, which are equivalent to additive Gaussian noise channels, stable or unstable, with arbitrary memory on past outputs, with an average constraint of quadratic form. It is shown that such unstable dynamical systems have a control-coding capacity which is operational, precisely as in Shannon's operational definition. However, the control-coding capacity is zero unless the power κ allocated to the system exceeds a threshold κmin, where κmin is the minimum cost of ensuring asymptotic stability and ergodicity. The excess power κ − κmin is turned into an achievable rate of information transmission over the dynamical system.
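The "excess power becomes rate" statement can be made concrete in the simplest special case: a memoryless additive Gaussian noise channel, where Shannon capacity is ½·log₂(1 + SNR). Treating κmin as power spent on stabilization, so only the excess κ − κmin contributes signal power, is an assumption of this toy picture; the paper treats channels and systems with memory, where the threshold arises from stability and ergodicity.

```python
import math

# Toy special case of "excess power becomes rate": a memoryless AWGN
# channel with Shannon capacity 0.5 * log2(1 + SNR).  Assumption for this
# sketch: the stabilization budget kappa_min consumes power first, and
# only the excess kappa - kappa_min is available as signal power.

def awgn_capacity(signal_power, noise_power):
    """Shannon capacity of a memoryless AWGN channel, in bits per use."""
    return 0.5 * math.log2(1.0 + signal_power / noise_power)

kappa, kappa_min, noise = 4.0, 1.0, 1.0   # illustrative numbers
rate = awgn_capacity(kappa - kappa_min, noise)
print(rate)                               # 0.5 * log2(4) = 1.0 bit per use
```

Below the threshold (κ ≤ κmin) the available signal power is zero, and so is the rate, mirroring the zero-capacity regime described in the abstract.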
conference on decision and control | 2016
Charalambos D. Charalambous; Christos K. Kourtellaris; Ioannis Tzortzis
We show that stochastic dynamical control systems are capable of information transfer from control processes to output processes, with operational meaning as defined by Shannon. Moreover, we show that optimal control strategies have a dual role, specifically, (i) to transfer information from the control process to the output process, and (ii) to stabilize the output process. We illustrate that information transfer is feasible by considering general Gaussian Linear Decision Models, and relate it to the well-known Linear-Quadratic-Gaussian (LQG) control theory.