
Publication


Featured research published by Kurt Helmes.


Operations Research | 2001

Computing Moments of the Exit Time Distribution for Markov Processes by Linear Programming

Kurt Helmes; Stefan Röhl; Richard H. Stockbridge

We provide a new approach to the numerical computation of moments of the exit time distribution of Markov processes. The method relies on a linear programming formulation of a process exiting from a bounded domain. The LP formulation characterizes the evolution of the process through the moments of the induced occupation measure and naturally provides upper and lower bounds for the exact values of the moments. The conditions the moments have to satisfy are derived directly from the generator of the Markov process and are not based on some approximation of the process. Excellent software is readily available because the computations involve finite dimensional linear programs.
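As a toy illustration of the moment approach (a sketch, not the authors' code), take standard Brownian motion on [0, 1] started at x. The adjoint relation E[f(X_T)] - f(x) = E[integral of (1/2) f''(X_s) ds up to the exit time T], applied to the test functions f(y) = 1, y, y^2, gives linear constraints on the exit-location probabilities and on m0 = E[T], and a finite linear program then bounds m0. In this tiny example the bounds coincide with the known value x(1 - x). Assumes scipy is available:

```python
import numpy as np
from scipy.optimize import linprog

x = 0.3  # starting point of Brownian motion on [0, 1]

# Decision variables: [p0, p1, m0], where p0, p1 are the exit-location
# probabilities at 0 and 1, and m0 = E[T] is the expected exit time.
# Constraints come from E[f(X_T)] - f(x) = E[ int_0^T (1/2) f''(X_s) ds ]
# applied to f(y) = 1, y, y^2:
A_eq = np.array([
    [1.0, 1.0,  0.0],   # f = 1:   p0 + p1 = 1
    [0.0, 1.0,  0.0],   # f = y:   p1 = x          (f'' = 0)
    [0.0, 1.0, -1.0],   # f = y^2: p1 - m0 = x^2   (f'' = 2)
])
b_eq = np.array([1.0, x, x**2])
bounds = [(0, None)] * 3

lower = linprog(c=[0, 0, 1], A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun
upper = -linprog(c=[0, 0, -1], A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun
print(lower, upper)  # both equal x * (1 - x) = 0.21, the known E[T]
```

In general only finitely many moment constraints are imposed, so minimizing and maximizing the same objective yields genuine lower and upper bounds rather than an exact value, as in the paper.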


Journal of Optimization Theory and Applications | 2000

Numerical comparison of controls and verification of optimality for stochastic control problems

Kurt Helmes; Richard H. Stockbridge

We provide two approaches to the numerical analysis of stochastic control problems. The analyses rely on linear programming formulations of the control problem and allow numerical comparison between controls and numerical verification of optimality. The formulations characterize the processes through the moments of the induced occupation measures. We deal directly with the processes rather than with some approximation to the processes. Excellent software is readily available, since the computations involve finite-dimensional linear programs.


European Journal of Operational Research | 2013

Optimal advertising and pricing in a class of general new-product adoption models

Kurt Helmes; Rainer Schlosser; Martin Weber

In [21], Sethi et al. introduced a particular new-product adoption model. They determine optimal advertising and pricing policies of an associated deterministic infinite horizon discounted control problem. Their analysis is based on the fact that the corresponding Hamilton–Jacobi–Bellman (HJB) equation is an ordinary non-linear differential equation which has an analytical solution. In this paper, generalizations of their model are considered. We take arbitrary adoption and saturation effects into account, and solve finite and infinite horizon discounted variations of associated control problems. If the horizon is finite, the HJB equation is a first-order non-linear partial differential equation with specific boundary conditions. For a fairly general class of models we show that these partial differential equations have analytical solutions. Explicit formulas for the value function and the optimal policies are derived. The controlled Bass model with isoelastic demand is a special example of the class of controlled adoption models to be examined and will be analyzed in some detail.
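For orientation only (illustrative parameters, not taken from the paper), the uncontrolled Bass adoption dynamic x'(t) = (p + q x)(1 - x) underlying this model class can be simulated with a few lines of forward Euler; the controlled models studied here make the coefficients depend on price and advertising.

```python
# Forward-Euler simulation of the (uncontrolled) Bass adoption model
# x'(t) = (p + q * x(t)) * (1 - x(t)); parameter values are illustrative.
p, q = 0.03, 0.4      # innovation and imitation coefficients (hypothetical)
dt, T = 0.01, 25.0
x, path = 0.0, []
for _ in range(int(T / dt)):
    x += dt * (p + q * x) * (1.0 - x)
    path.append(x)
print(path[-1])  # adoption fraction approaches the saturation level 1
```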


Advances in Applied Probability | 2010

Construction of the value function and optimal rules in optimal stopping of one-dimensional diffusions

Kurt Helmes; Richard H. Stockbridge

A new approach to the solution of optimal stopping problems for one-dimensional diffusions is developed. It arises by imbedding the stochastic problem in a linear programming problem over a space of measures. Optimizing over a smaller class of stopping rules provides a lower bound on the value of the original problem. Then the weak duality of a restricted form of the dual linear program provides an upper bound on the value. An explicit formula for the reward earned using a two-point hitting time stopping rule allows us to prove strong duality between these problems and, therefore, allows us to either optimize over these simpler stopping rules or to solve the restricted dual program. Each optimization problem is parameterized by the initial value of the diffusion and, thus, we are able to construct the value function by solving the family of optimization problems. This methodology requires little regularity of the terminal reward function. When the reward function is smooth, the optimal stopping locations are shown to satisfy the smooth pasting principle. The procedure is illustrated using two examples.
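For standard Brownian motion the expected reward of the two-point rule "stop on first hitting a or b" is explicit, since P(hit b before a | X_0 = x) = (x - a)/(b - a). A rough grid-search sketch (with the illustrative reward g(y) = y^2, not an example from the paper) optimizes over these simple rules; for Brownian motion the resulting value function is the smallest concave majorant of g:

```python
import numpy as np

def two_point_value(g, x, grid):
    """Best expected reward over two-point hitting rules for standard
    Brownian motion started at x, using the exit probabilities
    P(hit b before a) = (x - a) / (b - a)."""
    best = g(x)  # stopping immediately is always available
    for a in grid[grid <= x]:
        for b in grid[grid >= x]:
            if b - a > 1e-12:
                v = g(a) * (b - x) / (b - a) + g(b) * (x - a) / (b - a)
                best = max(best, v)
    return best

g = lambda y: y ** 2
grid = np.linspace(0.0, 1.0, 101)
v = two_point_value(g, 0.5, grid)
print(v)  # 0.5: the concave majorant of y^2 on [0, 1] is the line y
```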


Stochastics An International Journal of Probability and Stochastic Processes | 1982

Optimal control for a class of partially observable systems

N. Christopeit; Kurt Helmes

A continuous-time stochastic system with linear dynamics and a linear observation equation has to be steered in such a way that the current predicted miss distance of the state to a given hyperplane, evaluated by a cost functional over a finite time interval, is minimized. It is shown that, in the class of all controls taking values in the unit cube and depending only on the past of the observation process, the optimal control is bang-bang and that the separation and certainty equivalence principles hold.
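The flavor of a predicted-miss bang-bang law can be seen in a deterministic, fully observed toy version (a simplification of the paper's partially observed stochastic setting): for a double integrator x'' = u with |u| <= 1, the predicted terminal miss at horizon T is m(t) = x(t) + v(t)(T - t), and the feedback u = -sign(m) drives the miss to zero.

```python
import math

# Bang-bang steering of a double integrator x'' = u, |u| <= 1, toward the
# hyperplane {x = 0} at horizon T. Deterministic, fully observed toy case.
T, dt = 1.0, 1e-3
x, v, t = 0.2, 0.0, 0.0
while t < T:
    miss = x + v * (T - t)           # predicted terminal position
    u = -math.copysign(1.0, miss)    # bang-bang control
    x += v * dt
    v += u * dt
    t += dt
print(abs(x))  # final miss is near zero (chattering-level error only)
```

Once the predicted miss first reaches zero, the control chatters around the switching surface, which is the discrete-time shadow of the bang-bang structure.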


Stochastics An International Journal of Probability and Stochastic Processes | 2007

Linear programming approach to the optimal stopping of singular stochastic processes

Kurt Helmes; Richard H. Stockbridge

Optimal stopping of stochastic processes having both absolutely continuous and singular behavior (with respect to time) can be equivalently formulated as an infinite-dimensional linear program over a collection of measures. These measures represent the occupation measures of the process (up to a stopping time) with respect to “regular time” and “singular time” and the distribution of the process when it is stopped. Such measures corresponding to the process and stopping time are characterized by an adjoint equation involving the absolutely continuous and singular generators of the process. This general linear programming formulation is shown to be numerically tractable through three examples, each of which seeks to determine the stopping rule for a perpetual lookback put option using different dynamics for the asset price. Exact solutions are determined in the cases that the asset prices are given by a drifted Brownian motion and a geometric Brownian motion. Numerical results for the more realistic model of a regime switching geometric Brownian motion are also presented, demonstrating that the linear programming methodology is numerically tractable for models whose theoretical solutions are very difficult to obtain.


Archive | 2008

Determining the Optimal Control of Singular Stochastic Processes Using Linear Programming

Kurt Helmes; Richard H. Stockbridge

Humboldt-Universität zu Berlin and University of Wisconsin-Milwaukee

This paper examines the numerical implementation of a linear programming (LP) formulation of stochastic control problems involving singular stochastic processes. The decision maker has the ability to influence a diffusion process through the selection of its drift rate (a control that acts absolutely continuously in time) and may also decide to instantaneously move the process to some other level (a singular control). The first goal of the paper is to show that linear programming provides a viable approach to solving singular control problems. A second goal is the determination of the absolutely continuous control from the LP results, which is intimately tied to the particular numerical implementation. The original stochastic control problem is equivalent to an infinite-dimensional linear program in which the variables are measures on appropriate bounded regions. The implementation method replaces the LP formulation involving measures by one involving the moments of the measures. This moment approach does not directly provide the optimal control in feedback form of the current state. The second goal of this paper is to show that the feedback form of the optimal control can be obtained using sensitivity analysis.


IEEE Transactions on Automatic Control | 1992

The solution of a partially observed stochastic optimal control problem in terms of predicted miss

Kurt Helmes; Raymond Rishel

The explicit solution of a partially observed LQ problem driven by a combination of a Wiener process and an unobserved finite-state jump Markov process is given. Applications of the model include guidance problems, where the jump Markov process models evasive maneuvers (acceleration values) of the target, or systems subject to a sequence of failures that can be modeled by a jump Markov process.


Dynamic Games and Applications | 2015

Oligopoly Pricing and Advertising in Isoelastic Adoption Models

Kurt Helmes; Rainer Schlosser

This paper deals with deterministic dynamic pricing and advertising differential games which are stylized models of special durable-good oligopoly markets. We analyze infinite horizon models with constant price and advertising elasticities of demand in the cases of symmetric and asymmetric firms. In particular, we consider general saturation/adoption effects. These effects are modeled as transformations of the sum of the cumulative sales of all competing firms. We specify a necessary and sufficient condition such that a unique Markovian Nash equilibrium for such games exists. For two classes of models we derive solution formulas for the optimal policies and the value functions, and we show how to compute the evolution of the cumulative sales of each firm. The analysis of these games reveals that the existence of the Nash equilibrium relies on being able to separate a component which is specific to each firm from a market component which is the same for all firms. The common factor is a function of the decreasing untapped market size. The individual factor of each firm reflects its individual market power and has an impact on equilibrium prices; each such coefficient depends on the price elasticities, unit costs, arrival rates, and discount factors of all competing companies. Formulas for these coefficients reveal how equilibrium prices depend on the number of competing firms, and how the entry or exit of a firm affects the price structure of the oligopoly.
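As a static point of reference for the isoelastic setting (a textbook benchmark, not a formula from the paper): a monopolist facing demand D(p) = p^(-eps) with unit cost c and elasticity eps > 1 charges the inverse-elasticity markup p* = eps * c / (eps - 1); the equilibrium coefficients derived in the paper generalize this kind of markup to the dynamic oligopoly.

```python
def isoelastic_monopoly_price(c, eps):
    """Static profit-maximizing price for demand D(p) = p**(-eps), eps > 1:
    maximize (p - c) * p**(-eps)  =>  p* = eps * c / (eps - 1)."""
    if eps <= 1:
        raise ValueError("profit is unbounded for elasticity <= 1")
    return eps * c / (eps - 1)

print(isoelastic_monopoly_price(c=1.0, eps=2.0))  # 2.0
```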


Stochastic Models | 2003

Extension of Dale's Moment Conditions with Application to the Wright–Fisher Model

Kurt Helmes; Richard H. Stockbridge

Dale's necessary and sufficient conditions for an array to contain the joint moments of some probability distribution on the unit simplex in R^2 are extended to the unit simplex in R^d. These conditions are then used in a computational method, based on linear programming, to evaluate the stationary distribution for the diffusion approximation of the Wright–Fisher model in population genetics. The computational method uses a characterization of the diffusion through an adjoint relation between the diffusion operator and its stationary distribution. Application of this adjoint relation to a set of functions in the domain of the generator leads to one set of constraints for the linear program involving the moments of the stationary distribution. The extension of Dale's conditions on the moments adds another set of linear conditions, and the linear program is solved to obtain bounds on numerical quantities of interest. Numerical illustrations demonstrate the accuracy of the method. This research was partially supported by NSF grant DMS-9803490.
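The adjoint relation can be made explicit for the standard Wright–Fisher diffusion with mutation, whose generator is Af = (1/2) x(1-x) f'' + (1/2)(t1 (1-x) - t2 x) f'. Setting E[A x^k] = 0 for the stationary law yields the recursion m_k = m_{k-1} (k - 1 + t1) / (k - 1 + t1 + t2), which reproduces the moments of the Beta(t1, t2) distribution. A small sketch with illustrative parameter values:

```python
# Moments of the stationary distribution of the Wright-Fisher diffusion
# with generator A f = 1/2 x(1-x) f'' + 1/2 (t1 (1-x) - t2 x) f'.
# Setting E[A x^k] = 0 gives a closed moment recursion; the stationary
# law is Beta(t1, t2), so the recursion must reproduce Beta moments.
t1, t2 = 1.5, 2.5   # illustrative mutation parameters

def adjoint_moments(n):
    m = [1.0]
    for k in range(1, n + 1):
        m.append(m[-1] * (k - 1 + t1) / (k - 1 + t1 + t2))
    return m

def beta_moments(n):
    m = [1.0]
    for j in range(n):   # E[X^k] = prod_{j<k} (t1 + j) / (t1 + t2 + j)
        m.append(m[-1] * (t1 + j) / (t1 + t2 + j))
    return m

print(adjoint_moments(5))
print(beta_moments(5))   # the two lists agree term by term
```

In the paper this adjoint constraint set is combined with the extended Dale conditions inside a linear program rather than solved recursively.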

Collaboration


Dive into Kurt Helmes's collaboration.

Top Co-Authors

Richard H. Stockbridge (University of Wisconsin–Milwaukee)
Chao Zhu (University of Wisconsin–Milwaukee)
Torsten Templin (Humboldt University of Berlin)
V.E. Benes (University of Kentucky)
Stefan Röhl (Vorarlberg University of Applied Sciences)