
Publication


Featured research published by Ulrich Rieder.


IEEE Transactions on Automatic Control | 2004

Portfolio optimization with Markov-modulated stock prices and interest rates

Nicole Bäuerle; Ulrich Rieder

A financial market with one bond and one stock is considered, where the risk-free interest rate, the appreciation rate of the stock and the volatility of the stock depend on an external finite-state Markov chain. We investigate the problem of maximizing the expected utility from terminal wealth and solve it by stochastic control methods for different utility functions. Because explicit solutions are available, it is possible to compare the value function of the problem to one with constant (average) market data. The case of benchmark optimization is also considered.
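As a rough illustration of the model ingredients, the following Monte Carlo sketch simulates a two-regime Markov-modulated market and estimates the expected log-utility of a constant-proportion strategy, once with regime switching and once with naively averaged coefficients. All parameters, the unweighted averaging, and the fixed investment fraction are illustrative assumptions, not the paper's explicit solution.

```python
# Hedged Monte Carlo sketch of a two-regime Markov-modulated market.
# All numbers (rates, volatilities, transition matrix) are illustrative.
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.99, 0.01],              # per-step regime transition matrix
              [0.02, 0.98]])
r = np.array([0.01, 0.03])               # risk-free rate in each regime
mu = np.array([0.10, 0.04])              # stock appreciation rate in each regime
sigma = np.array([0.15, 0.30])           # stock volatility in each regime
dt, n_steps, n_paths, frac = 1 / 252, 252, 20_000, 0.5   # frac = stock fraction

def expected_log_wealth(mu_v, r_v, sig_v, switching=True):
    """Monte Carlo estimate of E[log W_T] for a fixed stock fraction `frac`."""
    log_w = np.zeros(n_paths)
    state = np.zeros(n_paths, dtype=int)
    for _ in range(n_steps):
        m, rf, s = mu_v[state], r_v[state], sig_v[state]
        z = rng.standard_normal(n_paths)
        # log-wealth increment of the constant-proportion portfolio
        log_w += (rf + frac * (m - rf) - 0.5 * (frac * s) ** 2) * dt \
                 + frac * s * np.sqrt(dt) * z
        if switching:                    # evolve the two-state Markov chain
            state = np.where(rng.random(n_paths) < P[state, 0], 0, 1)
    return log_w.mean()

avg = lambda v: np.full(2, v.mean())     # naive "constant (average)" market data
print("regime-switching :", expected_log_wealth(mu, r, sigma))
print("averaged market  :", expected_log_wealth(avg(mu), avg(r), avg(sigma),
                                                switching=False))
```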


Mathematics of Operations Research | 2014

More Risk-Sensitive Markov Decision Processes

Nicole Bäuerle; Ulrich Rieder

We investigate the problem of minimizing a certainty equivalent of the total or discounted cost over a finite and an infinite horizon that is generated by a Markov decision process (MDP). In contrast to a risk-neutral decision maker, this optimization criterion takes the variability of the cost into account. It contains as a special case the classical risk-sensitive optimization criterion with an exponential utility. We show that this optimization problem can be solved by an ordinary MDP with extended state space and give conditions under which an optimal policy exists. In the case of an infinite time horizon we show that the minimal discounted cost can be obtained by value iteration and can be characterized as the unique solution of a fixed-point equation using a “sandwich” argument. Interestingly, it turns out that in the case of a power utility the problem simplifies and is of similar complexity to the exponential utility case; however, it has not been treated in the literature so far. We also establish the validity and convergence of the policy improvement method. A simple numerical example, namely the classical repeated casino game, is considered to illustrate the influence of the certainty equivalent and its parameters. Finally, the average cost problem is also investigated. Surprisingly, it turns out that under suitable recurrence conditions on the MDP, for convex power utility the minimal average cost does not depend on the parameter of the utility function and is equal to the risk-neutral average cost. This is in contrast to the classical risk-sensitive criterion with exponential utility.
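For concreteness, here is a minimal sketch of the criterion itself: the certainty equivalent U^{-1}(E[U(X)]) of a random cost X under an exponential and a convex power utility. The sample distribution and parameters are illustrative; the paper minimizes this quantity over the policies of an MDP.

```python
# Hedged sketch of the certainty equivalent C(X) = U^{-1}(E[U(X)]).
import numpy as np

def certainty_equivalent_exp(costs, gamma):
    """Exponential utility U(x) = exp(gamma * x): the classical
    risk-sensitive criterion (1/gamma) * log E[exp(gamma * X)]."""
    return np.log(np.mean(np.exp(gamma * costs))) / gamma

def certainty_equivalent_power(costs, p):
    """Convex power utility U(x) = x**p on nonnegative costs, p >= 1."""
    return np.mean(costs ** p) ** (1.0 / p)

rng = np.random.default_rng(1)
X = rng.exponential(scale=1.0, size=100_000)   # illustrative cost distribution
print("risk-neutral   :", X.mean())
print("exp, gamma=0.5 :", certainty_equivalent_exp(X, 0.5))   # > mean: risk-averse
print("power, p=2     :", certainty_equivalent_power(X, 2.0))
```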


Mathematical Methods of Operations Research | 1991

Structural results for partially observed control models

Ulrich Rieder

A general partially observed control model with discrete time parameter is investigated. Our main interest concerns monotonicity results and bounds for the value functions and for optimal policies. In particular, we show how the value functions depend on the observation kernels and we present conditions for a lower bound of an optimal policy. Our approach is based on two multivariate stochastic orderings: the TP2 ordering and the Blackwell ordering.
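The TP2 property can be checked numerically. The sketch below tests whether a nonnegative kernel has all 2x2 minors nonnegative, which is the defining condition of total positivity of order 2; the example kernel is an illustrative assumption, not one from the paper.

```python
# Hedged numeric check of the TP2 property of a nonnegative kernel K(x, y).
# For strictly positive matrices, checking adjacent minors would suffice;
# we check all pairs to stay safe.
import numpy as np
from itertools import combinations

def is_tp2(K, tol=1e-12):
    """True if K[x1,y1]*K[x2,y2] >= K[x1,y2]*K[x2,y1] for all x1<x2, y1<y2."""
    m, n = K.shape
    for i1, i2 in combinations(range(m), 2):
        for j1, j2 in combinations(range(n), 2):
            if K[i1, j1] * K[i2, j2] < K[i1, j2] * K[i2, j1] - tol:
                return False
    return True

# An illustrative observation kernel; higher states favor higher observations.
K = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])
print(is_tp2(K))   # True for this kernel
```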


Finance and Stochastics | 2009

MDP algorithms for portfolio optimization problems in pure jump markets

Nicole Bäuerle; Ulrich Rieder

We consider the problem of maximizing the expected utility of the terminal wealth of a portfolio in a continuous-time pure jump market with general utility function. This leads to an optimal control problem for piecewise deterministic Markov processes. Using an embedding procedure we solve the problem by looking at a discrete-time contracting Markov decision process. Our aim is to show that this point of view has a number of advantages, in particular as far as computational aspects are concerned. We characterize the value function as the unique fixed point of the dynamic programming operator and prove the existence of optimal portfolios. Moreover, we show that value iteration as well as Howard’s policy improvement algorithm works. Finally, we give error bounds when the utility function is approximated and when we discretize the state space. A numerical example is presented and our approach is compared to the approximating Markov chain method.
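Both algorithms named in the abstract are standard for contracting discrete-time MDPs. Below is a generic finite-state sketch, with illustrative transition and reward data standing in for the embedded jump-market model.

```python
# Hedged sketch: value iteration and Howard's policy improvement on a toy
# discounted finite MDP. The data are illustrative, not the paper's model.
import numpy as np

# P[a, s, t] transition probabilities, R[s, a] one-step rewards, beta discount
P = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.5],
              [0.2, 2.0]])
beta = 0.9
n_s, n_a = R.shape

def value_iteration(tol=1e-10):
    v = np.zeros(n_s)
    while True:
        q = R + beta * np.einsum('ast,t->sa', P, v)   # Q(s, a)
        v_new = q.max(axis=1)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new, q.argmax(axis=1)
        v = v_new

def policy_improvement():
    policy = np.zeros(n_s, dtype=int)
    while True:
        # policy evaluation: solve (I - beta * P_pi) v = r_pi exactly
        P_pi = P[policy, np.arange(n_s)]
        r_pi = R[np.arange(n_s), policy]
        v = np.linalg.solve(np.eye(n_s) - beta * P_pi, r_pi)
        q = R + beta * np.einsum('ast,t->sa', P, v)
        new_policy = q.argmax(axis=1)
        if np.array_equal(new_policy, policy):
            return v, policy
        policy = new_policy

print(value_iteration()[0], policy_improvement()[0])   # both converge to v*
```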


Queueing Systems | 2000

Optimal control of single-server fluid networks

Nicole Bäuerle; Ulrich Rieder

We consider a stochastic single-server fluid network with both a discounted reward and a cost structure. It can be shown that the optimal policy is a priority index policy. The indices coincide with the optimal indices in a semi-Markovian Klimov problem. Several special cases like single-server reentrant fluid lines are considered. The approach we use is based on sample path arguments and Pontryagin's maximum principle.
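In the classical single-server special case with linear holding costs, a priority index of this type reduces to the well-known c-mu rule. A minimal sketch with illustrative cost and service rates follows; the paper's Klimov-type indices for fluid networks are more general.

```python
# Hedged sketch of the c-mu priority rule: serve the class with the largest
# holding-cost-times-service-rate product first. Numbers are illustrative.
holding_cost = {"class_1": 3.0, "class_2": 1.0, "class_3": 2.0}  # c_k
service_rate = {"class_1": 0.5, "class_2": 2.0, "class_3": 1.5}  # mu_k

index = {k: holding_cost[k] * service_rate[k] for k in holding_cost}
priority_order = sorted(index, key=index.get, reverse=True)
print(priority_order)   # ['class_3', 'class_2', 'class_1'] by c_k * mu_k
```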


Archive | 1991

Non-Cooperative Dynamic Games with General Utility Functions

Ulrich Rieder

The present paper is concerned with a general non-cooperative two-person dynamic game with Borel state and action spaces, non-Markovian transition law and with utility functions depending on the whole sequence of states and actions. The motivation for a general utility function is that in several problems in economic theory, additivity or separability of the utility function is a restrictive assumption and hard to justify, e.g. in problems of consumption and production choices over time and in the closely related problems of optimal economic growth. Dynamic games with additive utility functions have been introduced by Shapley [22] and have then been investigated by many authors (see the survey paper of Parthasarathy and Stern [16] or Küenle [9]). In recent years several authors have considered dynamic games with more general utility functions, e.g. Sengupta [21], Iwamoto [7], Schäl [19].


Mathematical Methods of Operations Research | 1997

Markov Games with Incomplete Information

Alexander Krausz; Ulrich Rieder

We consider zero-sum Markov games with incomplete information. Here, the second player is never informed about the current state of the underlying Markov chain. The existence of a value and of optimal strategies for both players is shown. In particular, we present finite algorithms for computing optimal strategies for the informed and uninformed player. The algorithms are based on linear programming results.
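The LP building block behind such algorithms is the classical linear program for the value of a zero-sum matrix game. Below is a hedged sketch using scipy.optimize.linprog with an illustrative payoff matrix; the paper's finite algorithms for the informed and uninformed player rest on LPs of this kind rather than on this exact formulation.

```python
# Hedged sketch: value and optimal mixed strategy of a zero-sum matrix game.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, -1.0],       # row player's payoff matrix (illustrative)
              [-2.0, 3.0]])

def solve_matrix_game(A):
    """Row player's LP: max v s.t. (A^T x)_j >= v for all j, sum x = 1, x >= 0."""
    m, n = A.shape
    # variables: (x_1, ..., x_m, v); linprog minimizes, so minimize -v
    c = np.r_[np.zeros(m), -1.0]
    A_ub = np.c_[-A.T, np.ones(n)]          # v - (A^T x)_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.c_[np.ones((1, m)), 0.0]      # mixed strategy sums to one
    b_eq = [1.0]
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds=bounds)
    return res.x[:m], res.x[m]              # optimal strategy, game value

x, v = solve_matrix_game(A)
print("strategy:", x, "value:", v)          # value 1/7 for this matrix
```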


Mathematical Methods of Operations Research | 1994

Monotonicity and bounds for convex stochastic control models

Ulrich Rieder; Rudi Zagst

We consider a general convex stochastic control model. Our main interest concerns monotonicity results and bounds for the value functions and for optimal policies. In particular, we show how the value functions depend on the transition kernels and we present conditions for a lower bound of an optimal policy. Our approach is based on convex stochastic orderings of probability measures. We derive several sufficient conditions for these ordering concepts, where we also make use of the Blackwell ordering. The structural results are illustrated by partially observed control models and Bayesian information models.
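The convex ordering X <=_cx Y can be verified numerically via the equivalent stop-loss characterization: equal means and E[(X - t)_+] <= E[(Y - t)_+] for all t. A small sketch on an illustrative mean-preserving spread (not a distribution from the paper):

```python
# Hedged numeric check of the convex order via stop-loss transforms.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(0.0, 1.0, 200_000)    # N(0, 1)
Y = rng.normal(0.0, 2.0, 200_000)    # N(0, 4): same mean, more spread

def stop_loss(sample, t):
    """Monte Carlo estimate of E[(Z - t)_+]."""
    return np.maximum(sample - t, 0.0).mean()

ts = np.linspace(-4, 4, 41)
means_equal = abs(X.mean() - Y.mean()) < 0.02
dominated = all(stop_loss(X, t) <= stop_loss(Y, t) + 1e-3 for t in ts)
print("X <=_cx Y (numerically):", means_equal and dominated)   # True
```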


Annals of Operations Research | 1991

Structured policies in the sequential design of experiments

Ulrich Rieder; Hartmut Wagner

A general control model under uncertainty is considered. Using a Bayesian approach and dynamic programming, we investigate structural properties of optimal decision rules. In particular, we show the monotonicity of the total expected reward and of the so-called Gittins index. We extend the stopping rule and the stay-on-a-winner rule, which are well-known in bandit problems. Our approach is based on the multivariate likelihood ratio order and TP2 functions.
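A minimal Beta-Bernoulli illustration of the stay-on-a-winner idea, assuming a single Bernoulli arm with a conjugate Beta prior (an illustrative special case, not the paper's general Bayesian control model): a success raises the arm's posterior mean, so an arm worth playing before a success remains worth playing after it.

```python
# Hedged sketch: Bayesian updating behind the stay-on-a-winner rule.
def posterior_mean(a, b):
    """Mean of a Beta(a, b) posterior for a Bernoulli success probability."""
    return a / (a + b)

a, b = 1.0, 1.0                                # uniform prior on the success rate
print("before:", posterior_mean(a, b))         # 0.5
a += 1                                         # observe a success: Beta(a+1, b)
print("after success:", posterior_mean(a, b))  # 2/3 > 1/2: stay on the winner
```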


Mathematical Methods of Operations Research | 2009

Optimal control of Markovian jump processes with partial information and applications to a parallel queueing model

Ulrich Rieder; Jens Winter

We consider a stochastic control problem over an infinite horizon where the state process is influenced by an unobservable environment process. In particular, the hidden Markov model and the Bayesian model are included. This model under partial information is transformed into an equivalent one with complete information by using the well-known filter technique. In particular, the optimal controls and the value functions of the original and the transformed problem are the same. An explicit representation of the filter process, which is a piecewise deterministic process, is also given. Then we propose two solution techniques for the transformed model. First, a generalized verification technique (with a generalized Hamilton–Jacobi–Bellman equation) is formulated where the strict differentiability of the value function is weakened to local Lipschitz continuity. Second, we present a discrete-time Markovian decision model by which we are able to compute an optimal control of our given problem. In this context we are also able to state a general existence result for optimal controls. The power of both solution techniques is finally demonstrated for a parallel queueing model with unknown service rates. In particular, the filter process is discussed in detail, the value function is explicitly computed and the optimal control is completely characterized in the symmetric case.
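In discrete time, the filter technique amounts to a predict-and-correct Bayesian update of the belief over the hidden environment. A minimal sketch with an illustrative transition matrix and observation likelihood, not the paper's queueing model:

```python
# Hedged sketch of one discrete-time filter step for a hidden environment.
import numpy as np

P = np.array([[0.95, 0.05],          # hidden-environment transition matrix
              [0.10, 0.90]])
L = np.array([[0.7, 0.3],            # L[e, o] = P(observation o | environment e)
              [0.2, 0.8]])

def filter_step(belief, obs):
    """One predict-and-correct step: belief' is proportional to
    L[:, obs] * (P^T belief)."""
    predicted = P.T @ belief                 # prior for the next time step
    unnormalized = L[:, obs] * predicted     # multiply by the likelihood
    return unnormalized / unnormalized.sum()

belief = np.array([0.5, 0.5])
for obs in [0, 0, 1, 1, 1]:                  # illustrative observation stream
    belief = filter_step(belief, obs)
print("posterior over environment states:", belief)
```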

Collaboration


Dive into Ulrich Rieder's collaborations.

Top Co-Authors

Karl Hinderer (Karlsruhe Institute of Technology)
Michael Stieglitz (Karlsruhe Institute of Technology)
Nicole Bäuerle (Karlsruhe Institute of Technology)