Wendell H. Fleming
Brown University
Publication
Featured research published by Wendell H. Fleming.
Journal of the Royal Statistical Society. Series A (General) | 1975
Wendell H. Fleming; Raymond Rishel
I. The Simplest Problem in Calculus of Variations: 1. Introduction. 2. Minimum Problems on an Abstract Space: Elementary Theory. 3. The Euler Equation; Extremals. 4. Examples. 5. The Jacobi Necessary Condition. 6. The Simplest Problem in n Dimensions.
II. The Optimal Control Problem: 1. Introduction. 2. Examples. 3. Statement of the Optimal Control Problem. 4. Equivalent Problems. 5. Statement of Pontryagin's Principle. 6. Extremals for the Moon Landing Problem. 7. Extremals for the Linear Regulator Problem. 8. Extremals for the Simplest Problem in Calculus of Variations. 9. General Features of the Moon Landing Problem. 10. Summary of Preliminary Results. 11. The Free Terminal Point Problem. 12. Preliminary Discussion of the Proof of Pontryagin's Principle. 13. A Multiplier Rule for an Abstract Nonlinear Programming Problem. 14. A Cone of Variations for the Problem of Optimal Control. 15. Verification of Pontryagin's Principle.
III. Existence and Continuity Properties of Optimal Controls: 1. The Existence Problem. 2. An Existence Theorem (Mayer Problem, U Compact). 3. Proof of Theorem 2.1. 4. More Existence Theorems. 5. Proof of Theorem 4.1. 6. Continuity Properties of Optimal Controls.
IV. Dynamic Programming: 1. Introduction. 2. The Problem. 3. The Value Function. 4. The Partial Differential Equation of Dynamic Programming. 5. The Linear Regulator Problem. 6. Equations of Motion with Discontinuous Feedback Controls. 7. Sufficient Conditions for Optimality. 8. The Relationship between the Equation of Dynamic Programming and Pontryagin's Principle.
V. Stochastic Differential Equations and Markov Diffusion Processes: 1. Introduction. 2. Continuous Stochastic Processes; Brownian Motion Processes. 3. Ito's Stochastic Integral. 4. Stochastic Differential Equations. 5. Markov Diffusion Processes. 6. Backward Equations. 7. Boundary Value Problems. 8. Forward Equations. 9. Linear System Equations; the Kalman-Bucy Filter. 10. Absolutely Continuous Substitution of Probability Measures. 11. An Extension of Theorems 5.1, 5.2.
VI. Optimal Control of Markov Diffusion Processes: 1. Introduction. 2. The Dynamic Programming Equation for Controlled Markov Processes. 3. Controlled Diffusion Processes. 4. The Dynamic Programming Equation for Controlled Diffusions; a Verification Theorem. 5. The Linear Regulator Problem (Complete Observations of System States). 6. Existence Theorems. 7. Dependence of Optimal Performance on y and σ. 8. Generalized Solutions of the Dynamic Programming Equation. 9. Stochastic Approximation to the Deterministic Control Problem. 10. Problems with Partial Observations. 11. The Separation Principle.
Appendices: A. Gronwall-Bellman Inequality. B. Selecting a Measurable Function. C. Convex Sets and Convex Functions. D. Review of Basic Probability. E. Results about Parabolic Equations. F. A General Position Lemma.
Siam Journal on Control and Optimization | 1995
Wendell H. Fleming; William M. McEneaney
Stochastic control problems on an infinite time horizon with exponential cost criteria are considered. The Donsker-Varadhan large deviation rate is used as the criterion to be optimized. The optimal rate is characterized as the value of an associated stochastic differential game with an ergodic (expected average cost per unit time) criterion. In the small-noise limit, a deterministic differential game with an average-cost-per-unit-time criterion is obtained; this differential game is related to robust control of nonlinear systems.
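In a common formulation of this setup (sketched here as an assumption, not the paper's exact notation), the exponential cost criterion and its game characterization read:

```latex
% Risk-sensitive (exponential-of-integral) cost on an infinite horizon:
J(u) \;=\; \limsup_{T \to \infty} \frac{1}{T}
      \log \mathbb{E} \exp\!\Big( \int_0^T \ell(x_t, u_t)\, dt \Big),
% which, via Donsker-Varadhan large deviations, is characterized as the
% value of a stochastic differential game with ergodic cost
\lim_{T \to \infty} \frac{1}{T}\,
      \mathbb{E} \int_0^T \Big( \ell(x_t, u_t) - \tfrac{1}{2}\, |w_t|^2 \Big)\, dt,
% where w is the opposing (disturbance) player's control.
```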
Journal of Economic Dynamics and Control | 1997
Darrell Duffie; Wendell H. Fleming; H. Mete Soner; Thaleia Zariphopoulou
In the context of Merton’s original problem of optimal consumption and portfolio choice in continuous time, this paper solves an extension in which the investor is endowed with a stochastic income that cannot be replicated by trading the available securities. The problem is treated by demonstrating, using analytic and, in particular, viscosity-solution techniques, that the value function of the stochastic control problem is a smooth solution of the associated Hamilton-Jacobi-Bellman (HJB) equation. The optimal policy is shown to exist and is given in feedback form from the optimality conditions in the HJB equation. At zero wealth, a fixed fraction of income is consumed. For large wealth, the original Merton policy is approached. We also give a sufficient condition for wealth, under the optimal policy, to remain strictly positive.
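For reference, the stationary HJB equation of the classical Merton consumption-investment problem has the schematic form below; the paper's actual equation carries an additional stochastic-income state, so this sketch is an assumption rather than the paper's equation:

```latex
% V(w): value function of wealth w; c: consumption; \pi: amount in the
% risky asset; r: riskless rate; \mu, \sigma: risky-asset drift and
% volatility; \beta: discount rate; U: utility function.
0 \;=\; \max_{c \ge 0,\ \pi} \Big[\, U(c) \;-\; \beta V(w)
   \;+\; \big( r w + \pi(\mu - r) - c \big)\, V'(w)
   \;+\; \tfrac{1}{2}\, \pi^2 \sigma^2\, V''(w) \Big]
```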
Journal of Mathematical Biology | 1975
Wendell H. Fleming
Summary: We consider a model with two types of genes (alleles), A1 and A2. The population lives in a bounded habitat R, contained in r-dimensional space (r = 1, 2, 3). Let u(t, x) denote the frequency of A1 at time t and place x ∈ R. Then u(t, x) is assumed to obey a nonlinear parabolic partial differential equation describing the effects of population dispersal within R and of selective advantages among the three possible genotypes A1A1, A1A2, A2A2. The selection coefficients are assumed to vary over R, so that a selective advantage at some points x becomes a disadvantage at others. The results concern existence, stability properties, and bifurcation phenomena for equilibrium solutions.
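A minimal numerical sketch of an equation of this type, assuming the often-used model form u_t = Δu + s(x) u (1 − u) with a sign-changing selection coefficient s (an illustrative choice, not the paper's exact equation):

```python
import numpy as np

# 1-D finite-difference sketch of a gene-frequency equation:
#   u_t = u_xx + s(x) u (1 - u)  on a bounded habitat [0, 2]
# with no-flux boundaries. The selection coefficient s(x) changes sign,
# so allele A1 is favored in one part of the habitat and disfavored in
# the other. (Illustrative model form, not the paper's exact equation.)

n, dx, dt, steps = 101, 0.02, 1e-4, 20000
x = np.linspace(0.0, 2.0, n)
s = np.where(x < 1.0, 4.0, -4.0)      # sign-changing selection
u = np.full(n, 0.5)                    # initial frequency of allele A1

for _ in range(steps):
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    lap[0] = 2 * (u[1] - u[0]) / dx**2        # no-flux (Neumann) boundary
    lap[-1] = 2 * (u[-2] - u[-1]) / dx**2
    u = u + dt * (lap + s * u * (1.0 - u))
    u = np.clip(u, 0.0, 1.0)                  # frequencies stay in [0, 1]

# The profile trends high where s > 0 and low where s < 0,
# forming a cline across the interface at x = 1.
print(round(float(u[10]), 3), round(float(u[-10]), 3))
```

The equilibrium reached here is the kind of spatially non-uniform steady state whose existence and stability the paper analyzes.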
Applied Mathematics and Optimization | 1977
Wendell H. Fleming
This paper is concerned with Markov diffusion processes which obey stochastic differential equations depending on a small parameter ε. The parameter enters as a coefficient in the noise term of the stochastic differential equation. The Ventcel-Freidlin estimates give asymptotic formulas (as ε → 0) for such quantities as the probability of exit from a region D through a given portion N of the boundary ∂D, the mean exit time, and the probability of exit by a given time T. A new method to obtain such estimates is given, using ideas from stochastic control theory.
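A Monte Carlo sketch of the small-noise exit problem, using an illustrative drift and domain chosen here (not the paper's example): Ventcel-Freidlin theory predicts that the exit probability decays exponentially as ε → 0, which the two estimates below exhibit qualitatively.

```python
import numpy as np

# Small-noise exit problem:  dX_t = -X_t dt + sqrt(eps) dW_t,  X_0 = 0,
# with D = (-1, 1). We estimate P(exit from D by time T) for a larger
# and a smaller eps; the probability should drop sharply as eps shrinks.

rng = np.random.default_rng(0)
dt, T, n_paths = 0.01, 10.0, 2000
n_steps = int(T / dt)

def exit_probability(eps):
    x = np.zeros(n_paths)
    exited = np.zeros(n_paths, dtype=bool)
    for _ in range(n_steps):
        x = x - x * dt + np.sqrt(eps * dt) * rng.standard_normal(n_paths)
        exited |= np.abs(x) >= 1.0          # record first exit
        x = np.clip(x, -1.0, 1.0)           # keep values bounded (exit already recorded)
    return exited.mean()

p_large = exit_probability(1.0)   # relatively strong noise: frequent exits
p_small = exit_probability(0.2)   # small noise: exits become rare events
print(p_large, p_small)
```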
Mathematical Finance | 2000
Wendell H. Fleming; Shuenn-Jyi Sheu
We consider an optimal investment model in which the goal is to maximize the long-term growth rate of expected utility of wealth. In the model, the mean returns of the securities are explicitly affected by the underlying economic factors. The utility function is HARA. The problem is reformulated as an infinite-time-horizon risk-sensitive control problem. We study the dynamic programming equation associated with this control problem and derive some consequences for the investment problem.
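The long-term growth-rate criterion described above is commonly written as follows (schematic form assumed here, not the paper's exact notation; W_T^π is wealth under strategy π):

```latex
\sup_{\pi}\; \liminf_{T \to \infty} \frac{1}{T}
   \log \mathbb{E}\big[\, U(W_T^{\pi}) \,\big],
\qquad U(w) = \frac{w^{\gamma}}{\gamma}, \quad \gamma < 1,\ \gamma \neq 0,
% which is an infinite-horizon risk-sensitive stochastic control problem
% with risk parameter \gamma.
```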
Siam Journal on Control and Optimization | 1982
Wendell H. Fleming; Etienne Pardoux
Stochastic control problems are considered in which a state process X_t and an observation process Y_t
Siam Journal on Control and Optimization | 1987
Wendell H. Fleming; Suresh P. Sethi; Halil Mete Soner
Finance and Stochastics | 2003
Wendell H. Fleming; Daniel Hernández-Hernández
Siam Journal on Control | 1968
Wendell H. Fleming