Charalambos D. Charalambous
McGill University
Publications
Featured research published by Charalambos D. Charalambous.
Stochastics and Stochastics Reports | 1996
Charalambos D. Charalambous; Joseph L. Hibey
This paper is concerned with the derivation of a minimum principle for partially observed controlled diffusions with correlation between the signal and observation noises, when the sample cost criterion is an exponential of integral. Instead of considering the information state, which satisfies a version of Zakai's equation, measure-valued decompositions are employed to convert this equation into a P-a.s. deterministic equation, thus avoiding the usual finite-dimensional approximation approach. The minimum principle is derived using weak control variations. It consists of a modified version of Zakai's equation, an adjoint process satisfying a stochastic partial differential equation with a terminal condition, and a Hamiltonian functional. This minimum principle is then applied to solve the linear-exponential-quadratic-Gaussian tracking problem. The results of this paper are important in relating H∞ (robust) control problems and risk-sensitive control problems.
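An exponential-of-integral (risk-sensitive) sample cost of the kind referred to above typically takes the following standard form; the symbols here (risk parameter θ, running cost ℓ, terminal cost φ) are illustrative, not taken from the paper:

```latex
J(u) \;=\; \mathbb{E}\!\left[\,\exp\!\left(\theta \int_0^T \ell(x_t, u_t)\,dt \;+\; \theta\,\varphi(x_T)\right)\right]
```

For θ → 0 this recovers (after normalization) the usual integral cost, while θ > 0 penalizes large deviations of the cost, which is the source of the connection to H∞/robust control.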
IEEE Transactions on Automatic Control | 1997
Charalambos D. Charalambous
This paper employs logarithmic transformations to establish relations between continuous-time nonlinear partially observable risk-sensitive control problems and analogous output feedback dynamic games. The first logarithmic transformation is introduced to relate the stochastic information state to a deterministic information state. The second logarithmic transformation is applied to the risk-sensitive cost function using the Laplace-Varadhan lemma. In the small noise limit, this cost function is shown to be logarithmically equivalent to the cost function of an analogous dynamic game.
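The small-noise logarithmic equivalence invoked here is of the Laplace–Varadhan type; schematically, with ε the noise intensity, F a bounded continuous functional, and I a large-deviations rate function (notation assumed for illustration):

```latex
\lim_{\varepsilon \to 0} \varepsilon \log \mathbb{E}\!\left[\exp\!\left(\tfrac{1}{\varepsilon} F(X^{\varepsilon})\right)\right] \;=\; \sup_{x}\,\bigl\{\,F(x) - I(x)\,\bigr\}
```

The supremum on the right is what turns a stochastic (risk-sensitive) expectation into the deterministic worst-case criterion of an output feedback dynamic game.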
Siam Journal on Control and Optimization | 1998
Charalambos D. Charalambous; Robert J. Elliott
This paper introduces certain nonlinear partially observable stochastic optimal control problems which are equivalent to completely observable control problems with finite-dimensional state space. In some cases the optimal control laws are analogous to linear-exponential-quadratic-Gaussian and linear-quadratic-Gaussian tracking problems. The problems discussed allow nonlinearities to enter the unobservable dynamics as gradients of potential functions. The methodology is based on explicit solutions of a modified Duncan--Mortensen--Zakai equation.
IEEE Transactions on Automatic Control | 1999
Joseph L. Hibey; Charalambos D. Charalambous
Continuous-time nonlinear stochastic differential state and measurement equations, all of which have coefficients capable of abrupt changes at a random time, are considered; finite-state jump Markov chains are used to model the changes. Conditional probability densities, which are essential in obtaining filtered estimates for these hybrid systems, are then derived. They are governed by a coupled system of stochastic partial differential equations. When the Q matrix of the Markov chain is either lower or upper diagonal, it is shown that the system of conditional density equations is finite-dimensionally computable. These findings are then applied to a fault detection problem to compute state estimates that include the failure time.
IEEE Transactions on Automatic Control | 1997
Charalambos D. Charalambous; Robert J. Elliott
This paper is concerned with partially observed stochastic optimal control problems when nonlinearities enter the dynamics of the unobservable state and the observations as gradients of potential functions. Explicit representations for the information state are derived in terms of a finite number of sufficient statistics. Consequently, the partially observed problem is recast as one of complete information with a new state generated by a modified version of the Kalman filter. When the terminal cost is quadratic in the unobservable state and includes the integral of the nonlinearities, the optimal control laws are explicitly computed, similar to linear-exponential-quadratic-Gaussian (LEQG) and linear-quadratic-Gaussian (LQG) tracking problems. The results are applicable to filtering and control of Hamiltonian systems.
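A representative model class for this setting, sketched here with assumed symbols (F, G, H, Wiener processes W and V, and potentials φ, ψ are illustrative, not taken from the paper), has gradient-of-potential nonlinearities in both the state and the observations:

```latex
dx_t \;=\; \bigl(F x_t + \nabla\varphi(x_t)\bigr)\,dt \;+\; G\,dW_t, \qquad
dy_t \;=\; \bigl(H x_t + \nabla\psi(x_t)\bigr)\,dt \;+\; dV_t
```

It is this gradient structure that allows the information state to be summarized by finitely many sufficient statistics propagated by a Kalman-filter-like recursion.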
IEEE Transactions on Automatic Control | 1997
Charalambos D. Charalambous
In this paper, we consider continuous-time partially observable optimal control problems with exponential-of-integral cost criteria. We derive a rigorous verification theorem when the state and control enter the dynamics nonlinearly. In addition, we show that the quadratic sensor problem is estimation-solvable with respect to a certain cost criterion. The framework relies on dynamic programming and Hamilton-Jacobi theory.
conference on decision and control | 1999
Charalambos D. Charalambous; Nickie Menemenlis
This paper discusses the use of stochastic differential equations and point processes to model the long-term fading effects during transmission of electromagnetic waves over large areas, which are subject to multipaths and power loss due to long-distance transmission and reflections. When measured in dB, the power loss follows a mean-reverting Ornstein-Uhlenbeck process, which implies that the power loss is log-normally distributed. The arrival times of the different paths are modeled using non-homogeneous Poisson counting processes, and the statistical properties of the multipath power loss are examined. The moment generating function of the received signal is calculated and subsequently exploited to derive a central limit theorem and the second-order statistics of the channel.
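A minimal simulation sketch of the shadowing model described above: an Euler-Maruyama discretization of a mean-reverting Ornstein-Uhlenbeck process for the path loss in dB, whose linear-scale value is then log-normal. All parameter values and names here are illustrative assumptions, not the paper's:

```python
import math
import random

def simulate_ou_db(x0, mean_db, rate, sigma, dt, steps, rng):
    """Euler-Maruyama simulation of the mean-reverting OU process
        dX_t = rate * (mean_db - X_t) dt + sigma dW_t,
    modeling long-term power loss measured in dB."""
    x = x0
    path = [x]
    for _ in range(steps):
        x += rate * (mean_db - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

rng = random.Random(0)
# Start 10 dB above the long-run mean and watch the reversion.
db_path = simulate_ou_db(x0=90.0, mean_db=80.0, rate=1.0,
                         sigma=2.0, dt=0.01, steps=1000, rng=rng)

# Since X_t (in dB) is Gaussian, the linear-scale loss 10**(X_t/10)
# is log-normally distributed, as the abstract notes.
linear_loss = [10 ** (x / 10) for x in db_path]
```

The stationary standard deviation of this process is sigma / sqrt(2 * rate), so the simulated dB path settles into a narrow band around the long-run mean.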
conference on decision and control | 2001
Minyi Huang; Peter E. Caines; Charalambos D. Charalambous
This paper considers power control for log-normal fading channels. A rate based power set point control model and an associated performance measure are introduced. Within this framework, a stochastic optimal power control law exists and the associated value function satisfies a degenerate HJB equation in a viscosity solution sense. The HJB equation is approximated by a uniformly parabolic second order equation which has a classical solution and a suboptimal control is derived. The suboptimal control is more realistic for practical implementation.
american control conference | 2001
Minyi Huang; Peter E. Caines; Charalambos D. Charalambous; Roland P. Malhamé
This paper considers power control for log-normal fading channels. A rate based power set point control model and an associated performance measure are introduced. Within this framework, a unique stochastic optimal power control law exists, and the value function of the optimal control is a viscosity solution to the corresponding HJB equation. An interpretation of the nature of the resulting control laws is presented.
Siam Journal on Control and Optimization | 2013
N.U. Ahmed; Charalambos D. Charalambous
In this paper, we consider nonconvex control problems for stochastic differential equations driven by relaxed controls adapted, in the weak-star sense, to a current of sigma-algebras generated by observable processes. We cover continuous diffusions and jump processes in a unified way. We establish the existence of optimal controls before constructing the necessary conditions of optimality (unlike some papers in this area), using only functional analysis. We develop a stochastic Hamiltonian system of equations on a rigorous basis using semimartingale representation theory and the Riesz representation theorem, leading naturally to the existence of the adjoint process, which satisfies a backward stochastic differential equation. In other words, our approach predicts the existence of the adjoint process as a natural consequence of the Riesz representation theorem, ensuring at the same time weak-star measurability. This is unlike other papers, where the adjoint process is introduced before its existence is proved.