Łukasz Stettner
Polish Academy of Sciences
Publications
Featured research published by Łukasz Stettner.
Applied Mathematics and Optimization | 1982
Łukasz Stettner
Three kinds of zero-sum Markov games with stopping and impulsive strategies are considered. For these games we find the saddle point strategies and prove that the value of the game depends continuously on the initial state.
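For orientation, a schematic value of a zero-sum stopping game of Dynkin type (the payoff functions f, g, the process X, and the ordering f ≤ g below are illustrative assumptions, not taken from the paper; the impulsive variants add intervention costs to the payoff) is
\[ V(x) \;=\; \sup_{\tau}\,\inf_{\sigma}\, E_x\!\left[ f(X_\tau)\,\mathbf{1}_{\{\tau<\sigma\}} + g(X_\sigma)\,\mathbf{1}_{\{\sigma\le\tau\}} \right] \;=\; \inf_{\sigma}\,\sup_{\tau}\, E_x\!\left[ f(X_\tau)\,\mathbf{1}_{\{\tau<\sigma\}} + g(X_\sigma)\,\mathbf{1}_{\{\sigma\le\tau\}} \right], \]
where τ and σ are the stopping times chosen by the two players; a saddle point is a pair of strategies attaining both the sup-inf and the inf-sup, and the continuity statement concerns the map x ↦ V(x).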
Systems & Control Letters | 2000
G. B. Di Masi; Łukasz Stettner
A control problem with a risk-sensitive ergodic performance criterion is considered for a discrete-time Feller process. Under assumptions of uniform ergodicity and a small risk factor, the existence and uniqueness of the solution to the Bellman equation are proved. Uniform approximations to this solution in terms of discounted cost and discounted game problems are also given.
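As a rough sketch (the notation is assumed for illustration and conventions differ across papers), a risk-sensitive ergodic criterion with risk factor γ > 0 and one-step cost c reads
\[ J_\gamma(x,\Pi) \;=\; \limsup_{n\to\infty} \frac{1}{\gamma n}\,\log E_x^{\Pi}\exp\Big( \gamma \sum_{t=0}^{n-1} c(x_t,a_t) \Big), \]
and the corresponding Bellman equation is of multiplicative type,
\[ e^{\gamma\,(w(x)+\lambda)} \;=\; \inf_{a}\int e^{\gamma\,(c(x,a)+w(y))}\,P(x,a,dy), \]
with the constant λ playing the role of the optimal value; uniform ergodicity and a small γ are what keep this equation solvable.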
Applied Mathematics and Optimization | 1993
Łukasz Stettner
We control a discrete-time uniformly ergodic system that depends on an unknown parameter α0 ∈ A, where A is a compact set. Our purpose is to minimize the long-run average cost functional. We estimate the unknown parameter using the biased maximum likelihood estimator and apply the control that is almost optimal for the estimated value. In this way we construct strategies for which the value of the cost functional can be made arbitrarily close to the optimal value obtained for α0.
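Schematically (the notation is assumed here, not quoted from the paper), the long-run average cost under a strategy Π for the true parameter α0 is
\[ J_{\alpha_0}(x,\Pi) \;=\; \limsup_{n\to\infty}\frac{1}{n}\,E_x^{\Pi}\sum_{t=0}^{n-1} c(x_t,a_t); \]
at each time the biased maximum likelihood estimate of α0 is computed from the observed trajectory and a control that is almost optimal for the estimated parameter is applied, which keeps the resulting cost within a prescribed ε > 0 of the optimal value.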
Mathematics of Control, Signals, and Systems | 2005
G. B. Di Masi; Łukasz Stettner
In this paper we study ergodic properties of hidden Markov models with a generalized observation structure. In particular, sufficient conditions for the existence of a unique invariant measure for the filter-observation pair are given. Furthermore, necessary and sufficient conditions for the existence of a unique invariant measure of the state-observation-filter triple are provided in terms of asymptotic stability in probability of incorrectly initialized filters. We also study the asymptotic properties of the filter and of the state estimator based on the observations as well as on the knowledge of the initial state, and their connection with minimal and maximal invariant measures.
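For context, a generic discrete-time filter recursion of the kind underlying such results (with an assumed observation density g(x, y); the paper's generalized observation structure is more flexible than this) is
\[ \pi_{n+1}(dx') \;=\; \frac{g(x',Y_{n+1})\,\int_E P(x,dx')\,\pi_n(dx)}{\int_E\!\int_E g(z,Y_{n+1})\,P(x,dz)\,\pi_n(dx)}, \]
and asymptotic stability of incorrectly initialized filters means that two copies of this recursion started from different priors merge in probability as n → ∞.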
Applied Mathematics and Optimization | 1989
Tomasz R. Bielecki; Łukasz Stettner
In this paper ergodic control problems (optimal stopping, impulsive control, and stochastic control) for singularly perturbed Feller Markov processes are studied. As the main result, the so-called limit control principle is shown to hold in each case. The results obtained depend on the averaging properties of the perturbed system, which follow from the fact that the perturbing process depends on neither the perturbed process nor the control.
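Read schematically (the notation is an assumption of this sketch), the limit control principle says that the optimal ergodic values λ^ε of the perturbed problems converge, as the perturbation parameter ε → 0, to the value of an averaged problem whose cost is integrated against the invariant measure ρ of the perturbing process:
\[ \bar c(x,a) \;=\; \int c(x,z,a)\,\rho(dz), \qquad \lambda^{\varepsilon} \;\longrightarrow\; \bar\lambda \quad \text{as } \varepsilon\to 0. \]
The independence of the perturbing process from the perturbed process and from the control is precisely what makes this averaging possible.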
Stochastics An International Journal of Probability and Stochastic Processes | 1986
Łukasz Stettner
Impulsive control with a long-run average cost criterion is studied for Feller-Markov processes with quasicompact semigroups.
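A typical long-run average impulse-control functional of this kind (the running cost f, intervention cost c, and strategy notation are assumptions made for illustration) is
\[ J(x,V) \;=\; \limsup_{T\to\infty}\frac{1}{T}\,E_x^{V}\Big[ \int_0^T f(X_s)\,ds \;+\; \sum_{i\ge 1} c(\xi_i)\,\mathbf{1}_{\{\tau_i\le T\}} \Big], \]
where the impulse strategy V = (τ_i, ξ_i)_{i≥1} prescribes the intervention times τ_i and the impulses ξ_i; quasicompactness of the semigroup provides the ergodic behaviour needed to control the time average.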
Siam Journal on Control and Optimization | 2010
Jan Palczewski; Łukasz Stettner
We study finite horizon optimal stopping problems for continuous-time Feller-Markov processes. The functional depends on time, state, and external parameters and may exhibit discontinuities with respect to the time variable. Both left- and right-hand discontinuities are considered. We investigate the dependence of the value function on the parameters, on the initial state of the process, and on the stopping horizon. We construct ε-optimal stopping times and provide conditions under which an optimal stopping time exists. We demonstrate how to approximate this optimal stopping time by solutions to discrete-time problems. Our results are applied to the study of impulse control problems with finite time horizon, decision lag, and execution delay.
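For orientation, the value function of such a finite horizon stopping problem can be written (in assumed notation, with a gain function G depending on time, state, and the external parameters) as
\[ w(t,x) \;=\; \sup_{\tau\le T-t} E_{t,x}\big[\, G(t+\tau,\,X_{t+\tau}) \,\big], \]
and a stopping time is ε-optimal when it attains this supremum up to ε; the possible discontinuities of G in the time variable are what make the construction of such stopping times delicate.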
Systems & Control Letters | 1998
G. B. Di Masi; Łukasz Stettner
Mathematical Methods of Operations Research | 2016
Marcin Pitera; Łukasz Stettner
Systems & Control Letters | 2008
G. B. Di Masi; Łukasz Stettner
A simple adaptive control strategy for discrete-time Markov processes with compact state, action, and parameter spaces that guarantees near self-optimality is proposed. The approach is based on randomization and on the study of the invariant measure of the joint process of the state and the Bayesian parameter estimator.
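Schematically (the notation is assumed here), near self-optimality means that for every ε > 0 the randomized strategy can be tuned so that
\[ \limsup_{n\to\infty}\frac{1}{n}\sum_{t=0}^{n-1} c(x_t,a_t) \;\le\; \lambda(\alpha_0) + \varepsilon \]
in a suitable almost sure or expected sense, where λ(α_0) is the optimal long-run average cost for the true parameter value; the randomization keeps the joint process of the state and the Bayesian posterior ergodic, so that its invariant measure can be analysed.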