Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where O. J. Vrieze is active.

Publication


Featured research published by O. J. Vrieze.


Mathematical Programming | 1991

Nonlinear programming and stationary equilibria in stochastic games

Jerzy A. Filar; Todd A. Schultz; Frank Thuijsman; O. J. Vrieze

Stationary equilibria in discounted and limiting average finite state/action space stochastic games are shown to be equivalent to global optima of certain nonlinear programs. For zero-sum limiting average games, this formulation reduces to a program with a linear objective and nonlinear constraints, which finds the “best” stationary strategies even when ε-optimal stationary strategies do not exist, for arbitrarily small ε.
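For the discounted zero-sum case, the objects involved can be made concrete with Shapley's classical value iteration, which computes the discounted value by repeatedly solving auxiliary matrix games. This is a standard textbook method, not the nonlinear-programming formulation of the paper, and the two-state example below is purely hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(A):
    """Value of the zero-sum matrix game A (row player maximizes), via LP:
    maximize v subject to x^T A >= v per column, sum(x) = 1, x >= 0."""
    m, n = A.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                  # linprog minimizes, so use -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])     # v - (x^T A)_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=np.array([1.0]),
                  bounds=[(0, None)] * m + [(None, None)])
    return res.x[-1]

def shapley_value_iteration(r, p, beta, iters=100):
    """Discounted value of a finite zero-sum stochastic game.
    r[s]: payoff matrix in state s; p[s][a][b]: distribution over next states."""
    v = np.zeros(len(r))
    for _ in range(iters):
        v = np.array([matrix_game_value(r[s] + beta * p[s] @ v)
                      for s in range(len(r))])
    return v

# Hypothetical example: state 0 plays matching pennies and always stays put;
# state 1 is absorbing with zero payoff.
r = [np.array([[1.0, 0.0], [0.0, 1.0]]), np.array([[0.0]])]
p = [np.zeros((2, 2, 2)), np.zeros((1, 1, 2))]
p[0][:, :, 0] = 1.0   # state 0 -> state 0
p[1][:, :, 1] = 1.0   # state 1 -> state 1
v = shapley_value_iteration(r, p, beta=0.5)
```

With β = 0.5 the fixed point satisfies v₀ = 0.5 + 0.5·v₀, so v₀ = 1, while the absorbing zero state has value 0.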


Mathematics of Operations Research | 1996

Recursive repeated games with absorbing states

János Flesch; Frank Thuijsman; O. J. Vrieze

We show the existence of stationary limiting average ε-equilibria, ε > 0, for two-person recursive repeated games with absorbing states. These are stochastic games in which all states but one are absorbing, and in the nonabsorbing state all payoffs are equal to zero. A state is called absorbing if the probability of a transition to any other state is zero for all available pairs of actions. For the purpose of our proof, we introduce properness for stationary strategy pairs. Our result is sharp, since it extends neither to the case with more nonabsorbing states nor to the n-person case with n > 2. Moreover, it is well known that the result cannot be strengthened to the existence of 0-equilibria, and that repeated games with absorbing states generally do not admit stationary ε-equilibria.


Stochastic Games and Related Topics | 1991

Easy initial states in stochastic games

Frank Thuijsman; O. J. Vrieze

In this paper we deal with limiting average stochastic games with finite state and action spaces. For any nonzero-sum stochastic game of this type, there exists a subset of initial states for which an almost stationary ε-equilibrium exists. For any zero-sum stochastic game, there exists for each player a subset of initial states for which this player has an optimal stationary strategy.


Journal of Optimization Theory and Applications | 1998

Total reward stochastic games and sensitive average reward strategies

Frank Thuijsman; O. J. Vrieze

In this paper, total reward stochastic games are surveyed. Total reward games are motivated as a refinement of average reward games. The total reward is defined as the limiting average of the partial sums of the stream of payoffs. It is shown that total reward games with finite state space are strategically equivalent to a class of average reward games with a countably infinite state space. The role of stationary strategies in total reward games is investigated in detail. Further, it is shown that the total reward value exists for total reward games with average reward value 0 in which both players possess average reward optimal stationary strategies.
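The definition above, the total reward as the limiting average of the partial sums, can be sketched for a finite payoff stream (a toy illustration, not an example from the paper):

```python
def total_reward(payoffs):
    """Cesàro average of the partial sums of a (finite) payoff stream:
    (1/T) * sum_{t=1..T} (r_1 + ... + r_t)."""
    partial, acc = 0.0, 0.0
    for r in payoffs:
        partial += r          # S_t = r_1 + ... + r_t
        acc += partial        # accumulate the partial sums
    return acc / len(payoffs)

# Alternating stream 1, -1, 1, -1, ...: partial sums are 1, 0, 1, 0, ...,
# whose average tends to 0.5, even though the average reward is 0.
print(total_reward([1.0, -1.0] * 1000))   # -> 0.5
```

The example also hints at why average reward value 0 matters: a stream with positive average reward has partial sums that grow linearly, so their Cesàro average diverges.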


Stochastic and Differential Games; Theory and Numerical Methods | 1999

The Power of Threats in Stochastic Games

Frank Thuijsman; O. J. Vrieze

In the theory of limiting average reward infinitely repeated games the Folk theorem tells us that any feasible and individually rational reward can be achieved as an equilibrium reward. The standard proof of this theorem involves pure strategies that yield this reward and threats to prevent the opponent from deviating from his pure strategy. In stochastic games it is not always possible to apply threats in a similar fashion, since a deviation may take play to a different state at which punishment is ineffective. Nevertheless, threats allow us to formulate sufficient, and quite general, conditions for the existence of limiting average ε-equilibria.
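The threat mechanism behind the Folk theorem can be sketched with a grim-trigger strategy in the repeated prisoner's dilemma, a standard textbook illustration with hypothetical payoffs, not an example from the paper:

```python
# Player 1's payoffs in a prisoner's dilemma (actions: 0 = cooperate, 1 = defect).
PAYOFF = [[3, 0],
          [5, 1]]

def average_vs_grim_trigger(deviate_at=None, T=10_000):
    """Player 1's limiting average payoff against a grim-trigger opponent:
    the opponent cooperates until it observes a defection, then defects forever."""
    punished = False
    total = 0
    for t in range(T):
        a1 = 1 if deviate_at is not None and t >= deviate_at else 0
        a2 = 1 if punished else 0        # the threat: permanent punishment
        total += PAYOFF[a1][a2]
        if a1 == 1:
            punished = True              # deviation observed -> punish from now on
    return total / T

print(average_vs_grim_trigger())              # cooperation forever -> 3.0
print(average_vs_grim_trigger(deviate_at=0))  # one-shot gain, then punished
```

A deviation earns 5 once but drives the limiting average down to roughly 1, below the cooperative payoff of 3. This is exactly the mechanism the abstract says can fail in stochastic games, where a deviation may move play to a state in which such punishment is ineffective.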


International Game Theory Review | 1999

MARKOV STRATEGIES ARE BETTER THAN STATIONARY STRATEGIES

János Flesch; Frank Thuijsman; O. J. Vrieze

We examine the use of stationary and Markov strategies in zero-sum stochastic games with finite state and action spaces. It is natural to evaluate a strategy for the maximising player, player 1, by the highest reward guaranteed to him against any strategy of the opponent. The highest rewards guaranteed by stationary strategies or by Markov strategies are called the stationary utility or the Markov utility, respectively. Since all stationary strategies are Markov strategies, the Markov utility is always at least as large as the stationary utility. However, in all presently known subclasses of stochastic games, these utilities turn out to be equal. In this paper, we provide a colourful example in which the Markov utility is strictly larger than the stationary utility, and we present several conditions under which the utilities are equal. We also show that each stochastic game has at least one initial state for which the two utilities are equal. Several examples clarify these issues.


Siam Journal on Control and Optimization | 1997

On the Puiseux Series Expansion of the Limit Discount Equation of Stochastic Games

Witold W. Szczechla; S. A. Connell; Jerzy A. Filar; O. J. Vrieze

In this paper we give a new proof of the existence of a Puiseux series expansion for the limit discount equation of finite state stochastic games. Unlike the original proof, due to Bewley and Kohlberg [Math. Oper. Res., 3 (1976), pp. 197--208], our proof is not algebraic and does not invoke Tarski's principle. Instead, we use only the theory of functions of complex variables and complex analytic varieties.
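Schematically, the Bewley–Kohlberg result states that near the limit β → 1 the discounted value admits a fractional power series in 1 − β; in the usual notation (the exact normalization varies across papers):

```latex
% Puiseux expansion of the discounted value v_beta(s) near beta = 1:
% for some integer M >= 1 and real coefficients c_k(s),
v_\beta(s) \;=\; \sum_{k=0}^{\infty} c_k(s)\,(1-\beta)^{k/M},
\qquad \beta \in (\beta_0, 1).
```

The fractional exponents k/M are what distinguishes a Puiseux series from an ordinary power series in 1 − β.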


Journal of Optimization Theory and Applications | 2000

Almost stationary ε-equilibria in zero-sum stochastic games

János Flesch; Frank Thuijsman; O. J. Vrieze

We show the existence of almost stationary ε-equilibria, for all ε > 0, in zero-sum stochastic games with finite state and action spaces. These are ε-equilibria with the property that, if neither player deviates, then stationary strategies are played forever with probability almost 1. The proof is based on the construction of specific stationary strategy pairs, with corresponding rewards equal to the value, which can be supplemented with history-dependent δ-optimal strategies, with small δ > 0, in order to obtain almost stationary ε-equilibria.


Mathematical Methods of Operations Research | 2003

Stochastic games with non-observable actions

János Flesch; Frank Thuijsman; O. J. Vrieze

We examine n-player stochastic games. These are dynamic games in which play evolves in stages over a finite set of states; at each stage the players independently choose actions in the present state, and these choices determine a stage payoff to each player as well as a transition to a new state, where actions have to be chosen at the next stage. For each player, the infinite sequence of his stage payoffs is evaluated by taking the limiting average. Normally stochastic games are examined under the condition of full monitoring, i.e. at any stage each player observes the present state and the actions chosen by all players. This paper is a first attempt towards understanding under what circumstances equilibria can exist in n-player stochastic games without full monitoring. We demonstrate the non-existence of ε-equilibria in n-player stochastic games, with respect to the average reward, when at each stage each player is able to observe the present state, his own action, his own payoff, and the payoffs of the other players, but is unable to observe their actions. For this purpose, we present and examine a counterexample with 3 players. If we further drop the assumption that the players can observe the payoffs of the others, then counterexamples already exist in games with only 2 players.


Mathematical Methods of Operations Research | 2002

Optimality in different strategy classes in zero-sum stochastic games

János Flesch; Frank Thuijsman; O. J. Vrieze

We present a complete picture of the relationship between the existence of 0-optimal strategies and ε-optimal strategies, ε > 0, in the classes of stationary, Markov and history dependent strategies.

Collaboration


Dive into O. J. Vrieze's collaborations.

Top Co-Authors

S. A. Connell

University of South Australia


T. E. S. Raghavan

University of Illinois at Chicago
