Raymond Rishel
University of Kentucky
Publication
Featured research published by Raymond Rishel.
Journal of the Royal Statistical Society. Series A (General) | 1975
Wendell H. Fleming; Raymond Rishel
Contents:
I. The Simplest Problem in Calculus of Variations: 1. Introduction. 2. Minimum Problems on an Abstract Space - Elementary Theory. 3. The Euler Equation; Extremals. 4. Examples. 5. The Jacobi Necessary Condition. 6. The Simplest Problem in n Dimensions.
II. The Optimal Control Problem: 1. Introduction. 2. Examples. 3. Statement of the Optimal Control Problem. 4. Equivalent Problems. 5. Statement of Pontryagin's Principle. 6. Extremals for the Moon Landing Problem. 7. Extremals for the Linear Regulator Problem. 8. Extremals for the Simplest Problem in Calculus of Variations. 9. General Features of the Moon Landing Problem. 10. Summary of Preliminary Results. 11. The Free Terminal Point Problem. 12. Preliminary Discussion of the Proof of Pontryagin's Principle. 13. A Multiplier Rule for an Abstract Nonlinear Programming Problem. 14. A Cone of Variations for the Problem of Optimal Control. 15. Verification of Pontryagin's Principle.
III. Existence and Continuity Properties of Optimal Controls: 1. The Existence Problem. 2. An Existence Theorem (Mayer Problem, U Compact). 3. Proof of Theorem 2.1. 4. More Existence Theorems. 5. Proof of Theorem 4.1. 6. Continuity Properties of Optimal Controls.
IV. Dynamic Programming: 1. Introduction. 2. The Problem. 3. The Value Function. 4. The Partial Differential Equation of Dynamic Programming. 5. The Linear Regulator Problem. 6. Equations of Motion with Discontinuous Feedback Controls. 7. Sufficient Conditions for Optimality. 8. The Relationship between the Equation of Dynamic Programming and Pontryagin's Principle.
V. Stochastic Differential Equations and Markov Diffusion Processes: 1. Introduction. 2. Continuous Stochastic Processes; Brownian Motion Processes. 3. Ito's Stochastic Integral. 4. Stochastic Differential Equations. 5. Markov Diffusion Processes. 6. Backward Equations. 7. Boundary Value Problems. 8. Forward Equations. 9. Linear System Equations; the Kalman-Bucy Filter. 10. Absolutely Continuous Substitution of Probability Measures. 11. An Extension of Theorems 5.1, 5.2.
VI. Optimal Control of Markov Diffusion Processes: 1. Introduction. 2. The Dynamic Programming Equation for Controlled Markov Processes. 3. Controlled Diffusion Processes. 4. The Dynamic Programming Equation for Controlled Diffusions; a Verification Theorem. 5. The Linear Regulator Problem (Complete Observations of System States). 6. Existence Theorems. 7. Dependence of Optimal Performance on y and ?. 8. Generalized Solutions of the Dynamic Programming Equation. 9. Stochastic Approximation to the Deterministic Control Problem. 10. Problems with Partial Observations. 11. The Separation Principle.
Appendices: A. Gronwall-Bellman Inequality. B. Selecting a Measurable Function. C. Convex Sets and Convex Functions. D. Review of Basic Probability. E. Results about Parabolic Equations. F. A General Position Lemma.
IEEE Transactions on Automatic Control | 1975
Raymond Rishel
Control of stochastic differential equations of the form $\dot{x} = f^{r(t)}(t, x, u)$, in which r(t) is a finite-state Markov process, is discussed. Dynamic programming optimality conditions are shown to be necessary and sufficient for optimality. A stochastic minimum principle whose adjoints satisfy deterministic integral equations is defined and shown to be necessary and sufficient for optimality.
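To make the class of systems concrete, the following Python sketch simulates one controlled trajectory of $\dot{x} = f^{r(t)}(t, x, u)$ with a two-state Markov chain r(t). The generator, the per-regime drifts, and the feedback law are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Minimal sketch (not the paper's method): Euler simulation of
#   x'(t) = f^{r(t)}(t, x, u),
# where r(t) is a finite-state Markov chain with generator Q.
# Regimes, rates, and the feedback control below are illustrative only.

rng = np.random.default_rng(0)

Q = np.array([[-1.0,  1.0],      # generator: regime 0 jumps at rate 1
              [ 2.0, -2.0]])     # regime 1 jumps at rate 2
A = [-0.5, 1.0]                  # per-regime drifts: f^i(t, x, u) = A[i]*x + u

def feedback(x):
    return -0.8 * x              # an arbitrary stabilizing feedback u = -0.8 x

dt, T = 1e-3, 5.0
x, r = 1.0, 0
for _ in range(int(T / dt)):
    u = feedback(x)
    x += (A[r] * x + u) * dt             # Euler step in the current regime
    if rng.random() < -Q[r, r] * dt:     # jump with probability ~ rate * dt
        r = 1 - r                        # two states: switch regime
print(f"x(T) = {x:.4f}, final regime = {r}")
```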
Archive | 1975
Raymond Rishel
In queueing theory and many other fields, problems of control arise for stochastic processes with piecewise constant paths. In this paper the validity of optimality conditions analogous to the Pontryagin Maximum Principle for deterministic control problems is investigated for this type of stochastic process. A minimum principle is obtained which involves the conditional jump rate, the conditional state jump distribution, the system performance rate, and the conditional expectation of the remaining performance. The conditional expectation of the remaining performance plays the role of the adjoint variables; it satisfies a type of integral equation and an infinite system of ordinary differential equations.
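As a concrete illustration of the processes involved (not of the minimum principle itself), the sketch below simulates a piecewise-constant Markov process by alternating exponential holding times drawn from a conditional jump rate with draws from a conditional state jump distribution. The specific rate and jump kernel are invented.

```python
import numpy as np

# Minimal sketch: simulate a piecewise-constant Markov jump process by
# drawing an exponential holding time from the conditional jump rate,
# then a new state from the conditional jump distribution.
# The rate lambda(x) and jump kernel below are illustrative only.

rng = np.random.default_rng(1)

def jump_rate(x):
    return 1.0 + 0.5 * x          # hypothetical state-dependent jump rate

def jump_to(x):
    # hypothetical jump distribution: birth with prob 0.6, death otherwise
    return x + 1 if rng.random() < 0.6 else max(x - 1, 0)

t, x, T = 0.0, 0, 10.0
path = [(t, x)]
while t < T:                      # the final jump may land just past T
    t += rng.exponential(1.0 / jump_rate(x))   # holding time ~ Exp(lambda(x))
    x = jump_to(x)
    path.append((t, x))
print(f"{len(path) - 1} jumps by t = {T}, final state {x}")
```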
IEEE Transactions on Automatic Control | 1991
Raymond Rishel
Models in which wear is a continuous, increasing stochastic process are set up. Optimal control problems for these models are posed and explicitly solved in one case.
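The wear models are described only in general terms here; as one standard stand-in for a continuous increasing stochastic process, the sketch below simulates a gamma wear process run until a replacement threshold is crossed. All parameters are invented.

```python
import numpy as np

# Minimal sketch: a gamma process is one standard model of monotone wear
# (independent, stationary, nonnegative increments). Parameters invented.

rng = np.random.default_rng(2)

shape_rate, scale = 2.0, 0.1   # increment over dt ~ Gamma(shape_rate*dt, scale)
dt, threshold = 0.01, 1.0

t, wear = 0.0, 0.0
while wear < threshold:        # e.g. replace the component at a wear threshold
    wear += rng.gamma(shape_rate * dt, scale)
    t += dt
print(f"threshold {threshold} crossed at t = {t:.2f}")
```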
Stochastic Processes and their Applications | 1994
Robert J. Elliott; Raymond Rishel
Hidden Markov Models provide an adaptive method of estimating random quantities, that is, they not only consider the quantity under investigation but also revise the parameters of the model. Results of a recent paper are used to determine the implicit interest rate of an asset whose value is given by an equation in log-normal form.
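The following sketch is a naive stand-in for the adaptive hidden-Markov estimator, assuming the simplest constant-parameter case: if the asset value is log-normal, $V_t = V_0 \exp((r - \sigma^2/2)t + \sigma W_t)$, then $\log V_t$ is linear in t plus noise, so the implicit rate r can be backed out from a regression slope together with an increment-based variance estimate.

```python
import numpy as np

# Minimal sketch (not the paper's HMM estimator): for a log-normal asset,
#   V_t = V_0 * exp((r - sigma^2/2) t + sigma W_t),
# a regression of log V on t recovers r - sigma^2/2, and sigma^2 is
# estimated from the increments. All parameters below are invented.

rng = np.random.default_rng(3)
r_true, sigma, dt, n = 0.05, 0.2, 1 / 252, 5000

dW = rng.normal(0.0, np.sqrt(dt), n)
logV = np.cumsum((r_true - 0.5 * sigma**2) * dt + sigma * dW)
t = dt * np.arange(1, n + 1)

slope = np.polyfit(t, logV, 1)[0]                   # estimates r - sigma^2/2
sigma2_hat = np.var(np.diff(logV, prepend=0.0)) / dt
r_hat = slope + 0.5 * sigma2_hat
print(f"implied rate estimate: {r_hat:.4f} (true {r_true})")
```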
IEEE Transactions on Automatic Control | 1986
Raymond Rishel; L. Harris
A variational approach is taken to derive optimality conditions for a discrete-time linear quadratic adaptive stochastic optimal control problem. These conditions lead to an algorithm for computing optimal control laws that differs from the dynamic programming algorithm.
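For contrast, here is the dynamic programming algorithm that the variational approach is set against: the standard backward Riccati recursion for a non-adaptive discrete-time LQ problem, with invented system and cost matrices. It is background, not the authors' algorithm.

```python
import numpy as np

# Minimal sketch: the standard dynamic-programming (Riccati) solution of a
# discrete-time LQ problem. System and cost matrices are invented.

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # state transition
B = np.array([[0.0], [0.1]])             # control input
Q = np.eye(2)                            # state cost
R = np.array([[0.5]])                    # control cost
N = 50                                   # horizon

P = Q.copy()                             # terminal cost weight
gains = []
for _ in range(N):                       # backward Riccati recursion
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)
gains.reverse()                          # gains[k] applies at stage k: u_k = -K_k x_k
print("stage-0 feedback gain:", gains[0])
```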
conference on decision and control | 1999
Raymond Rishel
External events which affect stock prices are modeled in terms of finite-state Markov processes, which may jump at scheduled or unscheduled times. Models for stock prices depending on these external events are set up; these stock price models may have price jumps at the occurrence times of the external events. For these types of stock price models, the problem of choosing a portfolio policy to maximize the expected utility of the portfolio's value at a fixed terminal time is considered. Optimal portfolios for the cases of logarithmic and power utility functions are discussed.
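As background for the logarithmic-utility case, the classical Merton rule is myopic, so within a single regime the optimal stock fraction is $(\mu - r)/\sigma^2$; the sketch below evaluates it per regime under invented parameters. It is a fixed-parameter benchmark, not the paper's solution with event-driven price jumps.

```python
# Minimal sketch: for logarithmic utility the classical Merton rule is myopic,
# so within each external-event regime the optimal stock fraction is
#   pi_i = (mu_i - r_i) / sigma_i^2.
# Regime parameters below are invented; the paper's models additionally
# allow price jumps when the external Markov process jumps.

regimes = {
    "calm":   {"mu": 0.08, "r": 0.03, "sigma": 0.15},
    "crisis": {"mu": 0.02, "r": 0.01, "sigma": 0.40},
}

for name, p in regimes.items():
    pi = (p["mu"] - p["r"]) / p["sigma"] ** 2
    print(f"{name}: optimal stock fraction = {pi:.2f}")
```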
IEEE Transactions on Automatic Control | 1992
Kurt Helmes; Raymond Rishel
The explicit solution of a partially observed LQ problem driven by a combination of a Wiener process and an unobserved finite-state jump Markov process is given. Applications of the model include guidance problems, where the jump Markov process models evasive maneuvers (acceleration values) of the target, or systems subject to a sequence of failures that can be modeled by a jump Markov process.
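The explicit solution itself is not reproduced here, but the filtering half of the problem can be illustrated: the sketch below runs a discretized Wonham-type Bayes filter that tracks the conditional distribution of an unobserved two-state jump Markov process from Wiener-noise observations. All rates, levels, and noise parameters are invented.

```python
import numpy as np

# Minimal sketch (a discretized Bayes/Wonham-type filter, not the paper's
# explicit solution): track the conditional distribution of an unobserved
# two-state jump Markov chain r_t from noisy observations
#   dy = h[r_t] dt + sigma dW.
# All rates, levels, and noise values are invented.

rng = np.random.default_rng(4)

Q = np.array([[-0.5, 0.5], [1.0, -1.0]])   # generator of the hidden chain
h = np.array([0.0, 2.0])                   # observation drift per state
sigma, dt, n = 0.5, 0.01, 2000

r, p = 0, np.array([0.5, 0.5])             # true state; filtered distribution
hits = 0
for _ in range(n):
    # simulate the hidden chain and one observation increment
    if rng.random() < -Q[r, r] * dt:
        r = 1 - r
    dy = h[r] * dt + sigma * np.sqrt(dt) * rng.normal()

    # predict: p <- p (I + Q dt); correct: Gaussian likelihood of dy
    p = p + p @ Q * dt
    p = p * np.exp(-(dy - h * dt) ** 2 / (2 * sigma**2 * dt))
    p /= p.sum()
    hits += int(np.argmax(p) == r)
print(f"filter matched the hidden state {hits / n:.1%} of the time")
```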
Handbooks in Operations Research and Management Science | 1990
Raymond Rishel
This chapter discusses the optimal control of continuous-time Markov processes, focusing on the typical techniques needed to calculate optimal controls for operations research problems modeled by controlled continuous-time Markov processes. Four prototype examples are provided, each illustrating both an operations research situation and a control problem for a different type of Markov process. The chapter presents dynamic programming sufficiency conditions for optimality for a fairly general class of controlled continuous-time Markov processes and then applies these conditions to the four examples. For the first two examples, optimal controls are computed in a straightforward manner; for the third, the control computation is more complicated; for the last, the form of the optimal control is still unknown. These examples indicate the current status of the feasibility of determining optimal controls for continuous-time Markov control problems.
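As a minimal illustration of the kind of computation the chapter surveys, the sketch below solves the dynamic programming equation for a small controlled continuous-time Markov chain, a birth-death (queueing) model in which the control chooses a service rate, using uniformization and value iteration. All data are invented.

```python
import numpy as np

# Minimal sketch: dynamic programming for a controlled continuous-time
# Markov chain. Uniformization turns the discounted-cost problem into an
# equivalent discrete-time one, solved here by value iteration. Data invented.

states = range(6)                      # queue lengths 0..5 (truncated buffer)
arrival = 1.0
services = [0.5, 2.0]                  # available service rates (the control)
cost = lambda x, a: x + 0.4 * a        # holding cost plus effort cost
beta = 0.1                             # discount rate
Lam = arrival + max(services)          # uniformization constant
gamma = Lam / (Lam + beta)             # effective discount factor

V = np.zeros(len(states))
for _ in range(2000):                  # value iteration to convergence
    Vn = np.empty_like(V)
    for x in states:
        best = np.inf
        for a in services:
            up = min(x + 1, 5)         # arrival
            down = max(x - 1, 0)       # service completion
            mu = a if x > 0 else 0.0   # no service in an empty queue
            stay = Lam - arrival - mu  # fictitious self-transition
            ev = (arrival * V[up] + mu * V[down] + stay * V[x]) / Lam
            best = min(best, cost(x, a) / (Lam + beta) + gamma * ev)
        Vn[x] = best
    V = Vn
print("discounted values:", np.round(V, 2))
```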
Archive | 1975
Wendell H. Fleming; Raymond Rishel
In this chapter we shall discuss an optimization problem that we will call “the optimal control problem.” In the 1950s, motivated especially by aerospace problems, engineers became interested in the problem of controlling a system governed by a set of differential equations. In many of these problems it was natural to control the system so that a given performance index would be minimized. In some aerospace problems, large savings in cost could be obtained with a small improvement in performance, so optimal operation became very important. As practical techniques for computing and implementing optimal controls were developed, the use of this theory became common in a large number of fields. References which illustrate typical work applying optimal control to economic problems are Burmeister-Dobell [1], Pindyck [1], and Shell [1].