D. Castanon
Massachusetts Institute of Technology
Publications
Featured research published by D. Castanon.
International Journal of Control | 1986
Howard Jay Chizeck; Alan S. Willsky; D. Castanon
This paper is concerned with the optimal control of discrete-time linear systems that possess randomly jumping parameters described by finite-state Markov processes. For problems having quadratic costs and perfect observations, the optimal control laws and expected costs-to-go can be precomputed from a set of coupled Riccati-like matrix difference equations. Necessary and sufficient conditions are derived for the existence of optimal constant control laws which stabilize the controlled system as the time horizon becomes infinite, with finite optimal expected cost.
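The following is a minimal numerical sketch of a coupled Riccati-like backward recursion of the kind this abstract refers to, for a two-mode jump-linear-quadratic problem. The system matrices, transition probabilities, and horizon are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Sketch of the coupled Riccati-like backward recursion for a jump-linear-quadratic
# problem (illustrative data, not from the paper). In mode i the dynamics are
# x+ = A[i] x + B[i] u with stage cost x'Q[i]x + u'R[i]u, and the mode jumps
# according to the Markov transition matrix P.
A = [np.array([[1.0, 0.1], [0.0, 1.0]]), np.array([[1.0, 0.1], [0.0, 0.5]])]
B = [np.array([[0.0], [1.0]]), np.array([[0.0], [0.2]])]
Q = [np.eye(2), np.eye(2)]
R = [np.array([[1.0]]), np.array([[1.0]])]
P = np.array([[0.95, 0.05], [0.10, 0.90]])   # p_ij = Pr(next mode j | current mode i)

N = 50                                        # horizon length
K = [Q[i].copy() for i in range(2)]           # terminal costs-to-go
for _ in range(N):
    K_new = []
    for i in range(2):
        M = sum(P[i, j] * K[j] for j in range(2))   # expected next-step cost-to-go
        G = np.linalg.solve(R[i] + B[i].T @ M @ B[i], B[i].T @ M @ A[i])  # gain F_i
        K_new.append(Q[i] + A[i].T @ M @ (A[i] - B[i] @ G))
    K = K_new
# If the K_i converge as the horizon grows, the constant gains F_i stabilize the
# jump-linear system in the sense the paper discusses.
print([np.round(Ki, 3) for Ki in K])
```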
IEEE Transactions on Automatic Control | 1984
N. Lehtomaki; D. Castanon; Bernard C. Levy; Gunter Stein; Nils R. Sandell; Michael Athans
The results on robustness theory presented here are extensions of those given in [1]. The basic innovation in these new results is that they utilize minimal additional information about the structure of the modeling error as well as its magnitude to assess the robustness of feedback systems for which robustness tests based on the magnitude of modeling error alone are inconclusive.
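For context, the sketch below evaluates the magnitude-only singular-value robustness test that results of this kind refine; the loop transfer matrix and frequency grid are illustrative assumptions, not from the paper.

```python
import numpy as np

# Magnitude-only robustness test (the baseline that structural information sharpens):
# for an additive loop perturbation dL(jw), closed-loop stability is retained if
# sigma_max(dL(jw)) < sigma_min(I + L(jw)) at every frequency w, assuming the
# perturbation does not change the number of unstable open-loop poles.
def loop_tf(w):
    s = 1j * w
    return np.array([[1.0 / (s + 1.0), 0.1 / (s + 2.0)],
                     [0.0,             2.0 / (s + 1.0)]])

freqs = np.logspace(-2, 2, 200)
margin = min(np.linalg.svd(np.eye(2) + loop_tf(w), compute_uv=False)[-1] for w in freqs)
print(f"guaranteed additive modeling-error magnitude bound: {margin:.3f}")
```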
Stochastics An International Journal of Probability and Stochastic Processes | 1983
M. Coderch; Alan S. Willsky; Shankar Sastry; D. Castanon
In this paper we study the asymptotic behavior of Finite State Markov Processes with rare transitions. We show how to construct a sequence of increasingly simplified models of a singularly perturbed FSMP and how to combine these aggregated models to produce an asymptotic approximation of the original process uniformly valid over [0, ∞).
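As an illustration of the first level of the aggregation hierarchy described here, the sketch below builds the slow (aggregated) generator of a singularly perturbed finite-state Markov process; the generators and class structure are illustrative assumptions, not from the paper.

```python
import numpy as np

# First-level aggregation for a singularly perturbed FSMP with generator
# Q(eps) = Q0 + eps*Q1 (illustrative data). States {0,1} and {2,3} are the
# ergodic classes of the fast dynamics Q0; Q1 holds the rare transitions.
Q0 = np.array([[-1.0,  1.0,  0.0,  0.0],
               [ 2.0, -2.0,  0.0,  0.0],
               [ 0.0,  0.0, -3.0,  3.0],
               [ 0.0,  0.0,  1.0, -1.0]])
Q1 = np.array([[-0.5,  0.0,  0.5,  0.0],
               [ 0.0,  0.0,  0.0,  0.0],
               [ 0.2,  0.0, -0.2,  0.0],
               [ 0.0,  0.0,  0.0,  0.0]])
classes = [[0, 1], [2, 3]]

def stationary(Qc):
    # Stationary distribution of the fast chain restricted to one ergodic class.
    n = Qc.shape[0]
    A = np.vstack([Qc.T, np.ones(n)])
    return np.linalg.lstsq(A, np.r_[np.zeros(n), 1.0], rcond=None)[0]

W = np.zeros((2, 4))   # rows: within-class stationary distributions
V = np.zeros((4, 2))   # columns: class membership indicators
for I, members in enumerate(classes):
    W[I, members] = stationary(Q0[np.ix_(members, members)])
    V[members, I] = 1.0

Q_bar = W @ Q1 @ V     # generator of the slow, aggregated process
print(np.round(Q_bar, 3))
```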
conference on decision and control | 1978
J.D. Birdwell; D. Castanon; Michael Athans
This paper contains an overview of a theoretical framework for the design of reliable multivariable control systems, with special emphasis on actuator failures and necessary actuator redundancy levels. Using a linear model of the system with Markovian failure probabilities and a quadratic performance index, an optimal stochastic control problem is posed and solved. The solution requires the iteration of a set of highly coupled Riccati-like matrix difference equations; if these converge, the design is reliable, and if they diverge, the design is unreliable and the system cannot be stabilized. In addition, it is shown that the existence of a stabilizing constant feedback gain, and the reliability of its implementation, is equivalent to the convergence properties of a set of coupled Riccati-like matrix difference equations. In summary, these results can be used for off-line studies relating the open-loop dynamics, required performance, actuator mean time to failure, and functional or identical actuator redundancy, with and without feedback gain reconfiguration strategies.
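The sketch below applies a convergence test in the spirit of this convergence-equals-reliability criterion, with the Markov modes interpreted as actuator configurations; the matrices, failure probabilities, and tolerance are illustrative assumptions, not from the paper.

```python
import numpy as np

# Hypothetical reliability check: mode 0 = both actuators working, mode 1 =
# actuator 2 failed (absorbing), failures occur with Markovian probabilities.
# The design is called "reliable" if the coupled Riccati-like recursion converges.
A = np.array([[1.0, 0.2], [0.0, 1.1]])                      # unstable open loop
B = [np.array([[1.0, 0.0], [0.0, 1.0]]),                    # nominal actuators
     np.array([[1.0, 0.0], [0.0, 0.0]])]                    # actuator 2 failed
Q, R = np.eye(2), 0.1 * np.eye(2)
P = np.array([[0.98, 0.02], [0.00, 1.00]])                  # failure is absorbing

K = [Q.copy(), Q.copy()]
for it in range(500):
    M = [sum(P[i, j] * K[j] for j in range(2)) for i in range(2)]
    K_new = [Q + A.T @ M[i] @ A
             - A.T @ M[i] @ B[i] @ np.linalg.solve(R + B[i].T @ M[i] @ B[i],
                                                   B[i].T @ M[i] @ A)
             for i in range(2)]
    if max(np.max(np.abs(K_new[i] - K[i])) for i in range(2)) < 1e-9:
        print(f"converged after {it} iterations: design is reliable")
        break
    K = K_new
else:
    # Here the failed configuration cannot stabilize the unstable mode, so the
    # costs-to-go grow without bound and the design is flagged as unreliable.
    print("recursion did not converge: design is unreliable")
```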
conference on decision and control | 1981
N. Lehtomaki; D. Castanon; Bernard C. Levy; Gunter Stein; Nils R. Sandell; Michael Athans
The results on robustness theory presented here are extensions of those given in [1]. The basic innovation in these new results is that they utilize minimal additional information about the structure of the modelling error as well as its magnitude to assess the robustness of feedback systems for which robustness tests based on the magnitude of modelling error alone are inconclusive.
conference on decision and control | 1978
D. Castanon; Nils R. Sandell
This paper studies the well-known Witsenhausen counterexample when the initial uncertainty is small. Using an asymptotic approach, it is established that linear strategies are asymptotically optimal over a large class of nonlinear strategies. This serves as a guideline for optimal solutions of non-classical problems with very noisy communication channels.
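A Monte Carlo sketch of the Witsenhausen counterexample restricted to linear strategies, in the small-initial-uncertainty regime the paper studies, is given below; the cost weight, noise levels, and gain grid are illustrative assumptions.

```python
import numpy as np

# Witsenhausen counterexample with linear strategies: the first controller sees
# x0 and applies u1 = (lam - 1) x0, the second sees y = x1 + v and applies the
# LMMSE estimate of x1. Parameters are illustrative, not from the paper.
rng = np.random.default_rng(0)
k2, sigma = 0.04, 0.5               # cost weight k^2 and small initial std dev
x0 = rng.normal(0.0, sigma, 200_000)
v = rng.normal(0.0, 1.0, 200_000)   # unit-variance channel noise

def cost(lam):
    x1 = lam * x0                                              # state after stage one
    y = x1 + v                                                 # noisy observation
    u2 = (lam**2 * sigma**2) / (lam**2 * sigma**2 + 1.0) * y   # LMMSE estimate of x1
    return k2 * np.mean(((lam - 1.0) * x0) ** 2) + np.mean((x1 - u2) ** 2)

lams = np.linspace(0.0, 2.0, 201)
best = min(lams, key=cost)
print(f"best linear gain {best:.2f}, expected cost {cost(best):.4f}")
```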
conference on decision and control | 1979
D. Castanon; Nils R. Sandell
This paper studies the problem of designing optimal search trajectories for a single search platform. The admissible trajectories are constrained by physical restrictions in terms of dynamical models. Necessary conditions for optimality are obtained in terms of Gateaux differentials; these conditions can be used in iterative algorithms to find stationary points of the minimization.
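The sketch below runs a gradient-type iteration of the general kind such necessary conditions support: a single platform with bounded speed shapes its trajectory to maximize detection of a target with a discrete location prior. The detection model, dynamics, and all numbers are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

# Projected-gradient search-trajectory optimization (illustrative). The platform
# path is driven by velocity controls u, detection rate decays with squared
# distance to each candidate target location, and the objective is the expected
# probability of missing the target.
rng = np.random.default_rng(1)
targets = np.array([[3.0, 0.0], [0.0, 3.0]])     # possible target locations
prior = np.array([0.6, 0.4])                     # prior over those locations
T, vmax, step = 20, 0.5, 0.2                     # horizon, speed bound, step size

def miss_probability(u):
    pos = np.cumsum(u, axis=0)                                        # path from the origin
    rate = np.exp(-np.sum((pos[:, None, :] - targets) ** 2, axis=2))  # detection rate per step
    return float(prior @ np.exp(-rate.sum(axis=0)))                   # expected miss probability

u = rng.normal(0.0, 0.1, (T, 2))                 # initial velocity sequence
for _ in range(200):
    grad = np.zeros_like(u)                      # finite-difference gradient of the miss prob.
    for idx in np.ndindex(u.shape):
        du = np.zeros_like(u); du[idx] = 1e-5
        grad[idx] = (miss_probability(u + du) - miss_probability(u - du)) / 2e-5
    u -= step * grad                             # descend on the miss probability
    speeds = np.linalg.norm(u, axis=1, keepdims=True)
    u *= np.minimum(1.0, vmax / np.maximum(speeds, 1e-12))            # project onto speed bound
print(f"final miss probability: {miss_probability(u):.3f}")
```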
conference on decision and control | 1977
D. Castanon; Nils R. Sandell
This paper discusses properties of solutions to a class of hierarchical optimization problems with different objectives for each level. The solution of these problems is known as the closed-loop Stackelberg solution. Two simple examples are solved to illustrate a fundamental non-convexity in the formulation of the optimization problem, and to highlight the dominant properties of the solutions.
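To fix the solution concept, the sketch below solves a toy static Stackelberg game, far simpler than the closed-loop dynamic setting of the paper; the quadratic costs are illustrative assumptions.

```python
import numpy as np

# Toy static Stackelberg game: the leader commits to u, the follower best-responds
# with v(u), and the leader optimizes while anticipating that reaction.
def follower_best_response(u):
    # Follower minimizes J_f(u, v) = (v - u)^2 + v^2, so v*(u) = u / 2.
    return u / 2.0

def leader_cost(u):
    v = follower_best_response(u)
    return (u - 1.0) ** 2 + 2.0 * v ** 2        # leader's objective J_l(u, v*(u))

grid = np.linspace(-2.0, 2.0, 4001)
u_star = grid[np.argmin([leader_cost(u) for u in grid])]
print(f"leader decision {u_star:.3f}, follower reaction {follower_best_response(u_star):.3f}")
```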
IEEE Transactions on Automatic Control | 1977
Michael Athans; D. Castanon; K.-P. Dunn; C. S. Greene; W. H. Lee; Nils R. Sandell; Alan S. Willsky
IEEE Transactions on Automatic Control | 1983
M. Coderch; Alan S. Willsky; Shankar Sastry; D. Castanon