
Publication


Featured research published by Aristotle Arapostathis.


SIAM Journal on Control and Optimization | 1993

Discrete-time controlled Markov processes with average cost criterion: a survey

Aristotle Arapostathis; Vivek S. Borkar; Mrinal K. Ghosh; Steven I. Marcus

This work is a survey of the average cost control problem for discrete-time Markov processes. The authors have attempted to put together a comprehensive account of the considerable research on this problem over the past three decades. The exposition ranges from finite to Borel state and action spaces and includes a variety of methodologies to find and characterize optimal policies. The authors have included a brief historical perspective of the research efforts in this area and have compiled a substantial yet not exhaustive bibliography. The authors have also identified several important questions that are still open to investigation.
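
For orientation, in the finite state and action case surveyed here the average cost optimality equation is usually written in the following standard form (the notation below is a common convention, not necessarily the paper's):

\[
\rho + h(x) = \min_{a \in A(x)} \Big[ c(x,a) + \sum_{y \in S} p(y \mid x,a)\, h(y) \Big], \qquad x \in S,
\]

where \(\rho\) is the optimal average cost, \(h\) a relative value function, \(c\) the one-stage cost, and \(p(\cdot \mid x,a)\) the transition kernel; a measurable minimizing selector then defines an optimal stationary policy.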


SIAM Journal on Control and Optimization | 1993

Optimal control of switching diffusions with application to flexible manufacturing systems

Mrinal K. Ghosh; Aristotle Arapostathis; Steven I. Marcus

A controlled switching diffusion model is developed to study the hierarchical control of flexible manufacturing systems. The existence of a homogeneous Markov nonrandomized optimal policy is established by a convex analytic method. Using the existence of such a policy, the existence of a unique solution in a certain class to the associated Hamilton-Jacobi-Bellman equations is established and the optimal policy is characterized as a minimizing selector of an appropriate Hamiltonian.
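
A controlled switching diffusion of the kind treated here is commonly written, in generic notation introduced only for illustration, as a diffusion coupled with a finite-state switching process:

\[
dX_t = b\big(X_t,\theta_t,u_t\big)\,dt + \sigma\big(X_t,\theta_t\big)\,dW_t,
\]

where \(\theta_t\) takes values in a finite set and switches with rates that may depend on \(X_t\) and the control \(u_t\); in the manufacturing interpretation, \(X_t\) is typically an inventory or backlog level and \(\theta_t\) the machine state.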


SIAM Journal on Control and Optimization | 1997

Ergodic Control of Switching Diffusions

Mrinal K. Ghosh; Aristotle Arapostathis; Steven I. Marcus

We study the ergodic control problem of switching diffusions representing a typical hybrid system that arises in numerous applications such as fault-tolerant control systems, flexible manufacturing systems, etc. Under fairly general conditions, we establish the existence of a stable, nonrandomized Markov policy which almost surely minimizes the pathwise long-run average cost. We then study the corresponding Hamilton-Jacobi-Bellman (HJB) equation and establish the existence of a unique solution in a certain class. Using this, we characterize the optimal policy as a minimizing selector of the Hamiltonian associated with the HJB equations. As an example we apply the results to a failure-prone manufacturing system and obtain closed-form solutions for the optimal policy.
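
The pathwise long-run average cost minimized here is, in generic notation, a functional of the form

\[
\limsup_{T \to \infty} \frac{1}{T} \int_0^T c\big(X_t,\theta_t,u_t\big)\,dt \quad \text{a.s.},
\]

and the associated HJB equation is in fact a system of coupled equations, one for each value of the discrete component, linked through the switching rates.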


International Journal of Control | 1987

Simple sliding mode control scheme applied to robot manipulators

Eric Bailey; Aristotle Arapostathis

We present a simple sliding mode control scheme for robot manipulators that does not rely upon the construction of individually stable discontinuity surfaces, thus greatly reducing the complexity of design. We utilize the structure of the manipulator dynamics and Lyapunov's second method in order to establish a sliding surface on the intersection of the switching surfaces in a direct manner. A simple numerical example accompanies the theoretical development.
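
As a rough illustration of the flavor of such schemes (our notation, not the paper's specific construction), a manipulator tracking error \(e = q - q_d\) can be driven to a single sliding surface

\[
s = \dot e + \Lambda e = 0, \qquad \Lambda \succ 0,
\]

by a switching torque such as \(\tau = -K\,\mathrm{sgn}(s)\) with \(K\) sufficiently large; a Lyapunov function of the form \(V = \tfrac{1}{2}\, s^\top M(q)\, s\), with \(M(q)\) the inertia matrix, is a typical vehicle for showing that trajectories reach and remain on \(s = 0\).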


Annals of Operations Research | 1991

On the average cost optimality equation and the structure of optimal policies for partially observable Markov decision processes

Emmanuel Fernández-Gaucherand; Aristotle Arapostathis; Steven I. Marcus

We consider partially observable Markov decision processes with finite or countably infinite (core) state and observation spaces and finite action set. Following a standard approach, an equivalent completely observed problem is formulated, with the same finite action set but with an uncountable state space, namely the space of probability distributions on the original core state space. By developing a suitable theoretical framework, it is shown that some characteristics induced in the original problem due to the countability of the spaces involved are reflected onto the equivalent problem. Sufficient conditions are then derived for solutions to the average cost optimality equation to exist. We illustrate these results in the context of machine replacement problems. Structural properties for average cost optimal policies are obtained for a two-state replacement problem; these are similar to results available for discount optimal policies. The set of assumptions used compares favorably to others currently available.
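
The equivalent completely observed problem evolves on the space of beliefs over the core states. A minimal sketch of the standard belief update, assuming finite core state and observation spaces (the arrays T and O and the function name below are hypothetical, introduced only for illustration):

import numpy as np

def belief_update(b, a, o, T, O):
    # b: current belief over core states, shape (n_states,)
    # a: action index; o: observation index
    # T[a, s, s2] = P(s2 | s, a), O[a, s2, o] = P(o | s2, a)
    predicted = b @ T[a]                 # predicted distribution over the next core state
    unnormalized = predicted * O[a, :, o]
    return unnormalized / unnormalized.sum()

Each observation maps the current belief to a new point of the probability simplex, which is why the equivalent problem has an uncountable state space even when the core spaces are countable.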


Mathematics of Control, Signals, and Systems | 1990

Analysis of an identification algorithm arising in the adaptive estimation of Markov chains

Aristotle Arapostathis; Steven I. Marcus

We investigate an algorithm applied to the adaptive estimation of partially observed finite-state Markov chains. The algorithm utilizes the recursive equation characterizing the conditional distribution of the state of the Markov chain, given the past observations. We show that the process “driving” the algorithm has a unique invariant measure for each fixed value of the parameter, and following the ordinary differential equation method for stochastic approximations, establish almost sure convergence of the parameter estimates to the solutions of an associated differential equation. The performance of the adaptive estimation scheme is analyzed by examining the induced controlled Markov process with respect to a long-run average cost criterion.
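
The ordinary differential equation (ODE) method invoked here relates a stochastic approximation recursion of the generic form

\[
\hat\theta_{n+1} = \hat\theta_n + \gamma_n\, H\big(\hat\theta_n, Y_n\big), \qquad \sum_n \gamma_n = \infty, \quad \sum_n \gamma_n^2 < \infty,
\]

to the limiting equation \(\dot\theta = \bar H(\theta)\), where \(\bar H\) is obtained by averaging \(H\) over the invariant measure of the driving process at the frozen parameter value; the unique invariant measure established in the paper is what makes this averaging well defined.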


IEEE Transactions on Automatic Control | 2003

Controlled Markov chains with safety upper bound

Aristotle Arapostathis; Ratnesh Kumar; Sekhar Tangirala

In this note, we introduce and study the notion of safety control of stochastic discrete-event systems (DESs), modeled as controlled Markov chains. For nonstochastic DESs modeled by state machines or automata, safety is specified as a set of forbidden states, or equivalently by a binary-valued vector that imposes an upper bound on the set of states permitted to be visited. We generalize this notion of safety to the setting of stochastic DESs by specifying it as a unit-interval-valued vector that imposes an upper bound on the state probability distribution vector. Under the assumption of complete state observation, we identify: 1) the set of all state feedback controllers that satisfy the safety requirement for any given safe initial state probability distribution, and 2) the set of all safe initial state probability distributions for a given state feedback controller.
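
As a toy illustration of the safety notion, the sketch below checks componentwise whether the state distribution of the closed-loop chain stays below a given bound over a finite horizon; the matrix, bound, and function name are hypothetical, and the infinite-horizon characterization given in the note requires more than this finite check.

import numpy as np

def is_safe(pi0, P, bound, horizon=50, tol=1e-12):
    # pi0: initial state distribution; P: closed-loop transition matrix
    # (the fixed state feedback policy is already folded into P)
    # bound: unit-interval valued upper bound on the distribution
    pi = np.asarray(pi0, dtype=float)
    for _ in range(horizon + 1):
        if np.any(pi > bound + tol):
            return False
        pi = pi @ P           # one step of the chain: pi_{n+1} = pi_n P
    return True

# Hypothetical two-state example: cap the probability of state 1 at 0.3.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
print(is_safe([1.0, 0.0], P, bound=np.array([1.0, 0.3])))   # True for this chain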


IEEE Transactions on Automatic Control | 1992

A sufficient condition for local controllability of nonlinear systems along closed orbits

Kwanghee Nam; Aristotle Arapostathis

A computable sufficient condition for determining local controllability of a differential nonlinear control system along a reference closed orbit is presented. Comparisons to other results on local controllability along a reference trajectory are made.
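
For orientation, sufficient conditions of this kind are typically phrased in terms of the time-varying linearization along the orbit; one standard condition (not necessarily the one derived in the paper) asks that the controllability Gramian of the linearized system over one period \(T\) be positive definite:

\[
W(t_0, t_0 + T) = \int_{t_0}^{t_0 + T} \Phi(t_0, t)\, B(t)\, B(t)^\top \Phi(t_0, t)^\top\, dt \succ 0,
\]

where \(A(t)\) and \(B(t)\) are the Jacobians of the dynamics evaluated along the closed orbit and \(\Phi\) is the associated state transition matrix.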


Systems & Control Letters | 2006

On the existence of stationary optimal policies for partially observed MDPs under the long-run average cost criterion

Shun-Pin Hsu; Dong-Ming Chuang; Aristotle Arapostathis

This paper studies the problem of the existence of stationary optimal policies for finite state controlled Markov chains, with compact action space and imperfect observations, under the long-run average cost criterion. It presents sufficient conditions for existence of solutions to the associated dynamic programming equation, that strengthen past results. There is a detailed discussion comparing the different assumptions commonly found in the literature.
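
The dynamic programming equation in question is the average cost optimality equation posed on the belief simplex; in generic notation (ours, not necessarily the paper's) it reads

\[
\rho + h(b) = \min_{a \in A} \Big[ c(b,a) + \sum_{o} P(o \mid b,a)\, h\big(\Phi(b,a,o)\big) \Big],
\]

where \(\Phi(b,a,o)\) is the belief update sketched earlier and \(c(b,a) = \sum_x b(x)\, c(x,a)\); the sufficient conditions of the paper concern the existence of a solution pair \((\rho, h)\) to an equation of this type.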


Communications in Partial Differential Equations | 1999

Harnack's inequality for cooperative weakly coupled elliptic systems

Aristotle Arapostathis; Mrinal K. Ghosh; Steven I. Marcus

The authors consider cooperative, uniformly elliptic systems, with bounded coefficients and coupling in the zeroth-order terms. They establish two analogs of Harnack's inequality for this class of systems. A weak version is obtained under fairly general conditions, while imposing an irreducibility condition on the coupling coefficients results in a stronger version of the inequality. This irreducibility condition is also necessary for the existence of a Harnack constant for this class of systems. A Harnack inequality is also obtained for a class of superharmonic functions.
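
For comparison, the classical scalar Harnack inequality states that nonnegative solutions \(u\) of a uniformly elliptic equation satisfy, on any compact subset \(K\) of the domain,

\[
\sup_{K} u \;\le\; C_H\, \inf_{K} u,
\]

with a constant \(C_H\) independent of the particular solution; the weak and strong versions obtained in the paper extend estimates of this type to the components of a cooperative system coupled through its zeroth-order terms.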

Collaboration


Dive into Aristotle Arapostathis's collaborations.

Top Co-Authors

Mrinal K. Ghosh
Indian Institute of Science

Pravin Varaiya
University of California

Shun-Pin Hsu
National Chung Hsing University

Hong-Gi Lee
Louisiana State University

Kyun K. Lee
University of Texas at Austin

Edward J. Powers
University of Texas at Austin

Enrique Sernik
University of Texas at Austin

Sekhar Tangirala
Pennsylvania State University