Publication


Featured research published by Richard H. Stockbridge.


SIAM Journal on Control and Optimization | 1998

Existence of Markov Controls and Characterization of Optimal Markov Controls

Thomas G. Kurtz; Richard H. Stockbridge

Given a solution of a controlled martingale problem, it is shown under general conditions that there exists a solution having Markov controls which has the same cost as the original solution. This result is then used to show that the original stochastic control problem is equivalent to a linear program over a space of measures under a variety of optimality criteria. Existence and characterization of optimal Markov controls then follow. An extension of Echeverria's theorem characterizing stationary distributions for (uncontrolled) Markov processes is obtained as a corollary. In particular, this extension covers diffusion processes with discontinuous drift and diffusion coefficients.


Operations Research | 2001

Computing Moments of the Exit Time Distribution for Markov Processes by Linear Programming

Kurt Helmes; Stefan Röhl; Richard H. Stockbridge

We provide a new approach to the numerical computation of moments of the exit time distribution of Markov processes. The method relies on a linear programming formulation of a process exiting from a bounded domain. The LP formulation characterizes the evolution of the process through the moments of the induced occupation measure and naturally provides upper and lower bounds for the exact values of the moments. The conditions the moments have to satisfy are derived directly from the generator of the Markov process and are not based on some approximation of the process. Excellent software is readily available because the computations involve finite dimensional linear programs.
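A minimal numerical sketch of this idea (an illustration in the spirit of the method, not the authors' code): for standard Brownian motion on (0, 1) started at x0, applying the identity E[f(X_tau)] - f(x0) = E[ integral_0^tau (1/2) f''(X_s) ds ] to the monomials f(x) = x^n gives one linear equality per degree relating the exit-location probabilities and the occupation-measure moments, so bounding E[tau] becomes a small linear program. The start point x0, the truncation degree K and the use of scipy.optimize.linprog are choices made here for the example.

```python
# Minimal sketch (not the authors' code): bound E[tau], the expected exit time of
# standard Brownian motion from (0, 1) started at x0, via a small LP in the exit
# probabilities p0, p1 and the occupation-measure moments m_k = E[int_0^tau X_s^k ds].
# For f(x) = x^n the identity E[f(X_tau)] - f(x0) = E[int_0^tau (1/2) f''(X_s) ds]
# gives one linear equality per degree n.
import numpy as np
from scipy.optimize import linprog

x0, K = 0.3, 8                        # start point and highest monomial degree (choices)
n_var = 2 + (K - 1)                   # variables: [p0, p1, m_0, ..., m_{K-2}]

rows, rhs = [], []
for n in range(K + 1):
    row = np.zeros(n_var)
    row[0] = 1.0 if n == 0 else 0.0   # contribution of exit at 0: f(0) = 0**n
    row[1] = 1.0                      # contribution of exit at 1: f(1) = 1**n
    if n >= 2:
        row[2 + n - 2] = -0.5 * n * (n - 1)
    rows.append(row)
    rhs.append(x0 ** n)
A_eq, b_eq = np.array(rows), np.array(rhs)

c = np.zeros(n_var)
c[2] = 1.0                            # m_0 = E[tau]
lower = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n_var)
upper = linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n_var)
print("bounds on E[tau]:", lower.fun, -upper.fun)   # both equal x0 * (1 - x0) = 0.21
```

For this toy problem the constraints determine E[tau] exactly, so the lower and upper bounds coincide at x0(1 - x0); in less tractable problems the two linear programs bracket the moment of interest.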


SIAM Journal on Control and Optimization | 2011

On Optimal Harvesting Problems in Random Environments

Qingshuo Song; Richard H. Stockbridge; Chao Zhu

This paper investigates the optimal harvesting strategy for a single species living in random environments whose population growth is given by a regime-switching diffusion. Harvesting acts as a (stochastic) control on the size of the population. The objective is to find a harvesting strategy which maximizes the expected total discounted income from harvesting up to the time of extinction of the species; the income rate is allowed to be state- and environment-dependent. This is a singular stochastic control problem, with both the extinction time and the optimal harvesting policy depending on the initial condition. One aspect of receiving payments up to the random time of extinction is that small changes in the initial population size may significantly alter the extinction time when using the same harvesting policy. Consequently, one no longer obtains continuity of the value function using standard arguments for either regular or singular control problems having a fixed time horizon. This paper introduces a new sufficient condition under which the continuity of the value function for the regime-switching model is established. Further, it is shown that the value function is a viscosity solution of a coupled system of quasi-variational inequalities. The paper also establishes a verification theorem and, based on this theorem, an ε-optimal harvesting strategy is constructed under certain conditions on the model. Two examples are analyzed in detail.
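For intuition only (this is not the paper's construction; every model parameter, the barrier level and the extinction threshold below are invented), a Monte Carlo sketch of the objective being maximized: a two-regime logistic diffusion is simulated under a simple barrier harvesting rule, and the income collected up to numerical extinction or a finite horizon is discounted and averaged over paths.

```python
# Illustrative Monte Carlo sketch (all parameters invented): a species follows the
# regime-switching logistic diffusion  dX = b_r X (1 - X / K_r) dt + sigma_r X dW,
# with the regime r flipping between 0 and 1 at rate q.  A barrier rule harvests
# any excess above `barrier`, and the discounted income is accumulated until
# (numerical) extinction or the time horizon T.
import numpy as np

rng = np.random.default_rng(0)
b, Kc, sigma = np.array([1.0, 0.5]), np.array([1.0, 0.6]), np.array([0.3, 0.5])
q, rho = 0.5, 0.05                    # switching rate and discount rate
barrier, eps = 0.7, 1e-3              # harvesting barrier and extinction threshold
dt, T, n_paths = 0.01, 20.0, 200

def discounted_income(x0=0.8):
    total = 0.0
    for _ in range(n_paths):
        x, r, t, income = x0, 0, 0.0, 0.0
        while t < T and x > eps:
            if rng.random() < q * dt:                  # regime switch
                r = 1 - r
            dw = rng.normal(0.0, np.sqrt(dt))
            x += b[r] * x * (1 - x / Kc[r]) * dt + sigma[r] * x * dw
            if x > barrier:                            # harvest the excess (singular control)
                income += np.exp(-rho * t) * (x - barrier)
                x = barrier
            t += dt
        total += income
    return total / n_paths

print("estimated expected discounted harvest income:", discounted_income())
```

Barrier rules of this kind are natural candidates for singular harvesting controls; the paper's ε-optimal strategies are constructed analytically rather than by simulation.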


SIAM Journal on Control and Optimization | 2001

Linear Programming Formulation for Optimal Stopping Problems

Moon Jung Cho; Richard H. Stockbridge

Optimal stopping problems for continuous time Markov processes are shown to be equivalent to infinite-dimensional linear programs over a space of pairs of measures under very general conditions. The measures involved represent the joint distribution of the stopping time and stopping location and the occupation measure of the process until it is stopped. These measures satisfy an identity for each function in the domain of the generator which is sufficient to characterize the stochastic process. Finite-dimensional linear programs obtained using Markov chain approximations are solved in two examples to illustrate the numerical accuracy of the linear programming formulation.
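To give a feel for the finite-dimensional problems that a Markov chain approximation produces (a simplified sketch, not the paper's pair-of-measures formulation): for a finite-state chain the value of optimal stopping is the least superharmonic majorant of the reward, which the classical linear program below computes; the random walk, the reward g and the grid size are invented for the example.

```python
# Toy finite-chain analogue (a simplification, not the paper's pair-of-measures LP):
# the value function V(x) = sup_tau E_x[g(X_tau)] of stopping a symmetric random walk
# on {0, ..., N} with absorbing endpoints is the least superharmonic majorant of g,
# computed here by the classical LP  minimize sum(v)  s.t.  v >= g  and  v >= P v.
import numpy as np
from scipy.optimize import linprog

N = 40
x = np.linspace(0.0, 1.0, N + 1)
g = np.sin(3 * np.pi * x) * (1 - x)            # an arbitrary reward, for illustration

P = np.zeros((N + 1, N + 1))
P[0, 0] = P[N, N] = 1.0                        # absorbing endpoints
for i in range(1, N):
    P[i, i - 1] = P[i, i + 1] = 0.5

# v >= g and (I - P) v >= 0, rewritten as A_ub @ v <= b_ub for linprog.
A_ub = np.vstack([-np.eye(N + 1), P - np.eye(N + 1)])
b_ub = np.concatenate([-g, np.zeros(N + 1)])
res = linprog(np.ones(N + 1), A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (N + 1))
V_lp = res.x

V = g.copy()                                   # cross-check with value iteration
for _ in range(20000):
    V = np.maximum(g, P @ V)
print("max |LP - value iteration|:", np.max(np.abs(V_lp - V)))
```

Loosely speaking, the pair-of-measures linear program studied in the paper is the dual picture of this value-function program.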


Journal of Optimization Theory and Applications | 2000

Numerical comparison of controls and verification of optimality for stochastic control problems

Kurt Helmes; Richard H. Stockbridge

We provide two approaches to the numerical analysis of stochastic control problems. The analyses rely on linear programming formulations of the control problem and allow numerical comparison between controls and numerical verification of optimality. The formulations characterize the processes through the moments of the induced occupation measures. We deal directly with the processes rather than with some approximation to the processes. Excellent software is readily available, since the computations involve finite-dimensional linear programs.


SIAM Journal on Control and Optimization | 1998

Approximation of Infinite-Dimensional Linear Programming Problems which Arise in Stochastic Control

Marta S. Mendiondo; Richard H. Stockbridge

We study a general approximation scheme for infinite-dimensional linear programming (LP) problems which arise naturally in stochastic control. We prove that the optimal value of the approximating problems converges to the value of the original LP problem. For the controls, we show that if the approximating optimal controls converge, the limiting control is an optimal control for the original LP problem. As an application of this theory, we present numerical approximations to the LP formulation of stochastic control problems in continuous time. We study long-term average and discounted control problems. For the example for which the theoretical solution is known, our approximation results are very accurate.
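As a small example of the finite-dimensional LPs that such discretizations produce for a long-term average criterion (the state space, dynamics and costs below are invented, and this is the standard occupation-measure LP for a controlled finite chain rather than the paper's approximation scheme): the decision variables are stationary state–action frequencies, the equalities enforce stationarity and normalization, and the objective is the long-run average cost.

```python
# Finite-dimensional occupation-measure LP for a small long-run-average control
# problem (dynamics and costs invented): choose the drift direction of a walk on
# {0, ..., 4} to keep it near the middle at a small action cost.
# Variables mu[x, a] >= 0 are stationary state-action frequencies; the LP is
#   minimize  sum c(x, a) mu(x, a)
#   s.t.      sum_a mu(x, a) = sum_{y, a} P(y, a, x) mu(y, a)   for every state x,
#             sum mu = 1.
import numpy as np
from scipy.optimize import linprog

nS, nA = 5, 2
P = np.zeros((nS, nA, nS))
for x in range(nS):
    for a in range(nA):
        p_right = 0.8 if a == 1 else 0.2
        right, left = min(x + 1, nS - 1), max(x - 1, 0)
        P[x, a, right] += p_right
        P[x, a, left] += 1 - p_right

c = np.array([[(x - 2) ** 2 + 0.1 * a for a in range(nA)] for x in range(nS)])

# Flatten mu(x, a) into a vector of length nS * nA (index y * nA + a).
A_eq = np.zeros((nS + 1, nS * nA))
for x in range(nS):
    for y in range(nS):
        for a in range(nA):
            A_eq[x, y * nA + a] = (1.0 if y == x else 0.0) - P[y, a, x]
A_eq[nS, :] = 1.0
b_eq = np.concatenate([np.zeros(nS), [1.0]])

res = linprog(c.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (nS * nA))
mu = res.x.reshape(nS, nA)
print("optimal long-run average cost:", res.fun)
print("action carrying the mass in each state:", mu.argmax(axis=1))
```

The optimal stationary frequencies mu(x, a) recover a stationary policy by reading off which action carries the mass in each state.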


SIAM Journal on Control and Optimization | 1991

Optimal control of the running max

Arthur C. Heinricher; Richard H. Stockbridge

A class of stochastic control problems where the payoff depends on the running maximum of a diffusion process is described. Such processes are appealing models for physical processes that evolve in a continuous and increasing manner. Dynamic programming conditions of optimality for these nonstandard problems are investigated and applied to particular examples.


Advances in Applied Probability | 2010

Construction of the value function and optimal rules in optimal stopping of one-dimensional diffusions

Kurt Helmes; Richard H. Stockbridge

A new approach to the solution of optimal stopping problems for one-dimensional diffusions is developed. It arises by imbedding the stochastic problem in a linear programming problem over a space of measures. Optimizing over a smaller class of stopping rules provides a lower bound on the value of the original problem. Then the weak duality of a restricted form of the dual linear program provides an upper bound on the value. An explicit formula for the reward earned using a two-point hitting time stopping rule allows us to prove strong duality between these problems and, therefore, allows us to either optimize over these simpler stopping rules or to solve the restricted dual program. Each optimization problem is parameterized by the initial value of the diffusion and, thus, we are able to construct the value function by solving the family of optimization problems. This methodology requires little regularity of the terminal reward function. When the reward function is smooth, the optimal stopping locations are shown to satisfy the smooth pasting principle. The procedure is illustrated using two examples.
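For standard Brownian motion with no discounting, the reward of a two-point hitting rule has the explicit form used below, and optimizing that formula over the two hitting points for each starting value reconstructs the value function (the least concave majorant of the reward). The sketch uses an invented reward on [0, 1] and illustrates only the idea, not the paper's general treatment of diffusions.

```python
# Illustration of the two-point hitting rule for standard Brownian motion on [0, 1]
# with no discounting: started at x, the rule "stop at the first hit of a or b"
# (a <= x <= b) earns  g(a) * (b - x) / (b - a) + g(b) * (x - a) / (b - a).
# Optimizing this explicit reward over (a, b) for each x recovers the value
# function, i.e. the least concave majorant of g.  The reward g is made up.
import numpy as np

grid = np.linspace(0.0, 1.0, 201)
g = np.sin(3 * np.pi * grid) ** 2 * (1 - grid)     # arbitrary reward for the example

def two_point_value(x_idx):
    x = grid[x_idx]
    best = g[x_idx]                                 # a = b = x means "stop immediately"
    for ia in range(x_idx + 1):                     # candidate lower hitting point a <= x
        for ib in range(x_idx, len(grid)):          # candidate upper hitting point b >= x
            a, b = grid[ia], grid[ib]
            if b > a:
                reward = g[ia] * (b - x) / (b - a) + g[ib] * (x - a) / (b - a)
                best = max(best, reward)
    return best

V = np.array([two_point_value(i) for i in range(len(grid))])
print("value at x = 0.5:", V[100], " immediate reward:", g[100])
```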


Stochastics: An International Journal of Probability and Stochastic Processes | 2012

On the existence of strict optimal controls for constrained, controlled Markov processes in continuous time

François Dufour; Richard H. Stockbridge



Stochastics: An International Journal of Probability and Stochastic Processes | 2007

Linear programming approach to the optimal stopping of singular stochastic processes

Kurt Helmes; Richard H. Stockbridge


Collaboration


Dive into Richard H. Stockbridge's collaborations.

Top Co-Authors

Kurt Helmes, Humboldt University of Berlin
Chao Zhu, University of Wisconsin–Milwaukee
Thomas G. Kurtz, University of Wisconsin–Madison
Martin G. Vieten, University of Wisconsin–Milwaukee
Bruce A. Wade, University of Wisconsin–Milwaukee
George A. Rus, University of Wisconsin–Milwaukee
Hans Volkmer, University of Wisconsin–Milwaukee
Piotr Kaczmarek, University of Wisconsin–Milwaukee