Publications


Featured research published by Dimitri Scheftelowitsch.


European Workshop on Performance Engineering | 2017

Analysis of Markov Decision Processes Under Parameter Uncertainty

Peter Buchholz; Iryna Dohndorf; Dimitri Scheftelowitsch

Markov Decision Processes (MDPs) are a popular decision model for stochastic systems. Introducing uncertainty in the transition probability distribution by giving upper and lower bounds for the transition probabilities yields the model of Bounded-Parameter MDPs (BMDPs), which captures many practical situations with limited knowledge about a system or its environment. In this paper, the class of BMDPs is extended to Bounded-Parameter Semi-Markov Decision Processes (BSMDPs). The main focus of the paper is the introduction and numerical comparison of different algorithms to compute optimal policies for BMDPs and BSMDPs; specifically, we introduce and compare variants of value and policy iteration.
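
As a concrete illustration of the kind of algorithm compared in the paper, here is a minimal sketch of worst-case (robust) value iteration for a discounted BMDP. The order-based selection of an extremal transition distribution is the standard construction for bounded-parameter MDPs; the function names and array layout are illustrative assumptions, not code from the paper.

```python
import numpy as np

def worst_case_distribution(lo, hi, values):
    """Choose the distribution inside the box [lo, hi] that minimizes
    the expected continuation value: start from the lower bounds and
    assign the remaining mass to successors in order of increasing value."""
    p = lo.copy()
    slack = 1.0 - p.sum()
    for j in np.argsort(values):          # lowest-valued successors first
        add = min(hi[j] - lo[j], slack)
        p[j] += add
        slack -= add
        if slack <= 0.0:
            break
    return p

def robust_value_iteration(P_lo, P_hi, R, gamma=0.9, tol=1e-8):
    """Worst-case value iteration for a discounted BMDP.
    P_lo, P_hi: (S, A, S) probability bounds; R: (S, A) rewards."""
    S, A, _ = P_lo.shape
    V = np.zeros(S)
    while True:
        Q = np.empty((S, A))
        for s in range(S):
            for a in range(A):
                p = worst_case_distribution(P_lo[s, a], P_hi[s, a], V)
                Q[s, a] = R[s, a] + gamma * (p @ V)
        V_new = Q.max(axis=1)             # optimize against the worst case
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new
```

Because the worst-case Bellman operator is still a contraction in the discounted setting, this iteration converges geometrically, just like standard value iteration.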


Performance Evaluation Methodologies and Tools | 2017

Multi-Objective Approaches to Markov Decision Processes with Uncertain Transition Parameters

Dimitri Scheftelowitsch; Peter Buchholz; Vahid Hashemi; Holger Hermanns

Markov decision processes (MDPs) are a popular model for performance analysis and optimization of stochastic systems. The parameters describing the stochastic behavior of an MDP are estimated from empirical observations of a system; their values are not known precisely. Different types of MDPs with uncertain, imprecise or bounded transition rates or probabilities and rewards exist in the literature. Commonly, the analysis of models with uncertainties amounts to searching for the most robust policy, meaning that the goal is to generate a policy with the greatest lower bound on performance (or, symmetrically, the lowest upper bound on costs). However, hedging against an unlikely worst case may lead to losses in other situations. In general, one is interested in policies that behave well in all situations, which results in a multi-objective view on decision making. In this paper, we consider policies for the expected discounted reward measure of MDPs with uncertain parameters. In particular, the approach is defined for bounded-parameter MDPs (BMDPs) [8]. In this setting the worst, best and average case performances of a policy are analyzed simultaneously, which yields a multi-scenario multi-objective optimization problem. The paper presents and evaluates approaches to compute the pure Pareto optimal policies in the value vector space.
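
The multi-scenario view can be made concrete by evaluating one fixed policy under the worst, best, and average scenario, which yields a single point in the objective space in which Pareto optimal policies are sought. The sketch below assumes discounted rewards and uses the renormalized midpoint of the bounds as the "average" scenario; these modeling choices and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def extremal_distribution(lo, hi, values, worst=True):
    """Order-based choice of a distribution inside [lo, hi] that
    minimizes (worst=True) or maximizes the expected continuation value."""
    order = np.argsort(values)
    if not worst:
        order = order[::-1]
    p, slack = lo.copy(), 1.0 - lo.sum()
    for j in order:
        add = min(hi[j] - lo[j], slack)
        p[j] += add
        slack -= add
    return p

def scenario_values(P_lo, P_hi, R, policy, gamma=0.9, tol=1e-8):
    """Worst-, best- and average-case discounted values of one fixed
    stationary policy on a BMDP -- one point in the multi-scenario
    objective space."""
    S = P_lo.shape[0]
    vals = {}
    for name, worst in (("worst", True), ("best", False)):
        V = np.zeros(S)
        while True:
            V_new = np.array([
                R[s, policy[s]] + gamma * extremal_distribution(
                    P_lo[s, policy[s]], P_hi[s, policy[s]], V, worst) @ V
                for s in range(S)])
            if np.max(np.abs(V_new - V)) < tol:
                break
            V = V_new
        vals[name] = V
    # "Average" scenario: midpoint of the bounds, renormalized.
    P_mid = (P_lo + P_hi) / 2.0
    P_mid /= P_mid.sum(axis=2, keepdims=True)
    P_pi = P_mid[np.arange(S), policy]
    R_pi = R[np.arange(S), policy]
    vals["average"] = np.linalg.solve(np.eye(S) - gamma * P_pi, R_pi)
    return vals
```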


Computers & Operations Research | 2017

Optimal decisions for continuous time Markov decision processes over finite planning horizons

Peter Buchholz; Iryna Dohndorf; Dimitri Scheftelowitsch

The computation of ε-optimal policies for continuous time Markov decision processes (CTMDPs) over finite time intervals is a sophisticated problem because the optimal policy may change at arbitrary times. Numerical algorithms based on time discretization or uniformization have been proposed for the computation of optimal policies. The uniformization-based algorithm has been shown to be more reliable and often also more efficient, but is currently only available for processes where the gain or reward does not depend on the decision taken in a state. In this paper, we present two new uniformization-based algorithms for computing ε-optimal policies for CTMDPs with decision-dependent rewards over a finite time horizon. Due to a new and tighter upper bound, the newly proposed algorithms can not only be applied to decision-dependent rewards, they also outperform the available approach for rewards that do not depend on the decision. In particular for models where the policy only rarely changes, optimal policies can be computed much faster.

Highlights:
- A new algorithm to compute accumulated rewards for continuous time Markov decision processes with action-dependent rewards over finite horizons.
- A proof that the algorithm guarantees a global error in O(ε) for time step ε.
- An experimental comparison of available algorithms to analyze accumulated rewards for continuous time Markov decision processes with action-dependent rewards over finite horizons.
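
For orientation, the following sketch shows the simple time-discretization baseline for finite-horizon CTMDPs that uniformization-based methods are compared against: a first-order explicit Euler scheme for the Bellman differential equation. It is not the paper's new algorithm, and all names are illustrative assumptions.

```python
import numpy as np

def ctmdp_finite_horizon(Q, r, T, dt=1e-3):
    """First-order time-discretization backward induction for a CTMDP
    over the horizon [0, T].  Q: (A, S, S) generator matrices,
    r: (A, S) decision-dependent reward rates.  dt must satisfy
    dt < 1 / max exit rate so that I + Q[a] * dt is a stochastic matrix."""
    A, S, _ = Q.shape
    steps = int(round(T / dt))
    V = np.zeros(S)                           # terminal reward assumed zero
    policy = np.empty((steps, S), dtype=int)  # time-dependent policy on the grid
    eye = np.eye(S)
    for k in range(steps - 1, -1, -1):
        # One Euler step of the Bellman ODE  -dV/dt = max_a (r_a + Q_a V).
        cand = np.stack([r[a] * dt + (eye + Q[a] * dt) @ V for a in range(A)])
        policy[k] = cand.argmax(axis=0)
        V = cand.max(axis=0)
    return V, policy
```

The small step size needed for accuracy is exactly why such discretization schemes become expensive, which motivates the uniformization-based algorithms studied in the paper.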


Performance Evaluation Methodologies and Tools | 2016

Equivalence and Minimization for Model Checking Labeled Markov Chains

Peter Buchholz; Jan Kriege; Dimitri Scheftelowitsch

Model checking of Markov chains using logics like CSL or asCSL proves whether a logical formula holds for a state of the Markov chain. Over the last decade it has developed into a widely used approach to express performance and dependability quantities for models from a wide range of application areas. In this paper, model checking is extended to prove formulas for distributions rather than single states. This is a very natural way to express certain performance or dependability measures that depend on the distribution over the state space rather than on a specific state of the Markov chain. It is shown that the mentioned logics can be easily extended from states to distributions and that model checking algorithms can be easily adapted as well. Furthermore, new equivalences are introduced that are weaker than bisimulation but still characterize the extended logics.
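
One natural way to lift an atomic check from states to distributions is to evaluate the probability mass a distribution puts on satisfying states, for example on the transient distribution of a CTMC. A minimal sketch, assuming NumPy/SciPy; the threshold semantics shown is only one possible choice and not necessarily the one used in the paper:

```python
import numpy as np
from scipy.linalg import expm

def transient_distribution(Q, pi0, t):
    """Transient distribution of a CTMC with generator Q at time t."""
    return pi0 @ expm(Q * t)

def holds_for_distribution(pi, sat, threshold=0.9):
    """Distribution-level atomic check: the formula (boolean vector 'sat'
    marking satisfying states) holds for distribution pi iff the mass on
    satisfying states reaches the threshold."""
    return pi[sat].sum() >= threshold

# Tiny two-state example: state 1 satisfies the atomic proposition.
Q = np.array([[-1.0, 1.0], [0.5, -0.5]])
pi0 = np.array([1.0, 0.0])
sat = np.array([False, True])
print(holds_for_distribution(transient_distribution(Q, pi0, 2.0), sat))
```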


Lecture Notes in Computer Science | 2015

Markov Decision Petri Nets with Uncertainty

Marco Beccuti; Elvio Gilberto Amparore; Susanna Donatelli; Dimitri Scheftelowitsch; Peter Buchholz; Giuliana Franceschinis

Markov Decision Processes (MDPs) are a well known mathematical formalism that combines probabilities with decisions and allows one to compute optimal sequences of decisions, denoted as policies, for fairly large models in many situations. However, the practical application of MDPs often faces two problems: the specification of large models in an efficient and understandable way, which has to be combined with algorithms to generate the underlying MDP, and the inherent uncertainty about the transition probabilities and rewards of the resulting MDP. This paper introduces a new graphical formalism, called Markov Decision Petri Net with Uncertainty (MDPNU), that extends the Markov Decision Petri Net (MDPN) formalism, which has been introduced to define MDPs. MDPNUs allow one to specify MDPs where transition probabilities and rewards are defined by intervals rather than constant values. The resulting process is a Bounded Parameter MDP (BMDP). The paper shows how BMDPs are generated from MDPNUs, how analysis methods can be applied, and which results can be derived from the models.
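
A small sanity check one would want on any interval specification, such as one generated from an MDPNU: the bounds must actually define a non-empty BMDP. A sketch under the assumption of dense NumPy arrays; the representation is an illustrative choice, not the tool's data structure:

```python
import numpy as np

def valid_bmdp_bounds(P_lo, P_hi, atol=1e-12):
    """Check that interval transition bounds define a non-empty BMDP:
    elementwise lo <= hi, and for every (state, action) pair the box
    [lo, hi] must intersect the probability simplex, i.e.
    sum(lo) <= 1 <= sum(hi)."""
    if np.any(P_lo > P_hi + atol):
        return False
    return bool(np.all(P_lo.sum(axis=2) <= 1.0 + atol) and
                np.all(P_hi.sum(axis=2) >= 1.0 - atol))
```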


Dependable Systems and Networks | 2014

Model Checking Stochastic Automata for Dependability and Performance Measures

Peter Buchholz; Jan Kriege; Dimitri Scheftelowitsch

Model checking of Continuous Time Markov Chains (CTMCs) is a widely used approach in performance and dependability analysis that proves for which states of a CTMC a logical formula holds. This viewpoint may be too detailed in several practical situations, especially if the states of the CTMC do not correspond to physical states of the system because they are introduced, for example, to model non-exponential timing. The paper presents a general class of automata with stochastic timing realized by clocks. A state of an automaton is given by a logical state and by clock states. Clocks trigger transitions and are modeled by phase-type distributions or more general state-based stochastic processes. The class of stochastic processes underlying these automata contains CTMCs but also goes beyond Markov processes. The logic CSL is extended for model checking automata with clocks. A formula is then proved for an automaton state and for the clock states that depend on the past behavior of the automaton. Basic algorithms to prove CSL formulas for logical automaton states with complete or partial knowledge of the clock states are introduced. In some cases formulas can be proved efficiently by decomposing the model with respect to concurrently running clocks, which is a way to avoid state space explosion.
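
To make the clock model concrete: a phase-type distributed clock can be simulated by running its underlying absorbing Markov chain until absorption. This sketch is illustrative only, assuming the usual (alpha, T) representation with a sub-generator T; it is not the paper's model checking algorithm.

```python
import numpy as np

def sample_phase_type(alpha, T, rng=None):
    """Draw one expiration time of a clock with a phase-type (alpha, T)
    distribution by simulating the underlying absorbing Markov chain.
    alpha: initial distribution over transient phases; T: sub-generator."""
    rng = rng or np.random.default_rng()
    n = len(alpha)
    exit_rates = -T.sum(axis=1)               # rates into the absorbing state
    phase, time = rng.choice(n, p=alpha), 0.0
    while True:
        total_rate = -T[phase, phase]
        time += rng.exponential(1.0 / total_rate)
        # Jump distribution: other transient phases, then absorption last.
        jump = np.append(np.where(np.arange(n) == phase, 0.0, T[phase]),
                         exit_rates[phase]) / total_rate
        nxt = rng.choice(n + 1, p=jump)
        if nxt == n:                           # absorbed: the clock expires
            return time
        phase = nxt
```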


International Conference on Measurement, Modelling and Evaluation of Computing Systems | 2018

Time-Based Maintenance Models Under Uncertainty

Peter Buchholz; Iryna Dohndorf; Dimitri Scheftelowitsch

Model-based computation of optimal maintenance strategies is one of the classical applications of Markov Decision Processes. Unfortunately, a Markov Decision Process often does not capture the behavior of a component or system of components correctly because the duration of different operational phases is not exponentially distributed and the status of a component is often only partially observable during operational times. The paper presents a general model for components with partially observable states and non-exponential failure, maintenance, and repair times, which are modeled by phase-type distributions. Optimal maintenance strategies are computed using Markov decision theory. However, since the internal state of a component is not completely known, only bounds for the parameters of a Markov decision process can be computed, resulting in a bounded-parameter Markov decision process. For this kind of process, optimal strategies can be computed assuming best-, worst- or average-case behavior.
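
As a toy illustration of time-based maintenance (deliberately much simpler than the paper's BMDP approach), the classic age-replacement rule "replace preventively at age tau, correctively on failure" can be evaluated by its long-run cost rate. The Weibull lifetime and cost values below are hypothetical:

```python
import numpy as np

def age_replacement_cost_rate(survival, tau, c_fail, c_prev, grid=10_000):
    """Long-run cost rate of the age-replacement rule: expected cost per
    renewal cycle divided by the expected cycle length, where
    E[min(lifetime, tau)] is the integral of the survival function R
    over [0, tau]."""
    t = np.linspace(0.0, tau, grid)
    expected_cycle = np.mean(survival(t)) * tau     # Riemann approximation
    r = survival(tau)                               # P(no failure before tau)
    return (c_prev * r + c_fail * (1.0 - r)) / expected_cycle

# Hypothetical aging component: Weibull lifetime with shape 2, scale 1.
survival = lambda t: np.exp(-t ** 2)
taus = np.linspace(0.2, 2.0, 19)
best_tau = min(taus, key=lambda x: age_replacement_cost_rate(survival, x, 10.0, 1.0))
```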


International Conference on Measurement, Modelling and Evaluation of Computing Systems | 2018

Collider – Parallel Experiments in Silico

Dimitri Scheftelowitsch

Large-scale software experiments are a ubiquitous feature of research. For example, performance evaluation of algorithms implies testing said algorithms on a large number of test cases. We provide a software framework which helps perform experiments on large parameter spaces, benefits from multi-core architectures, and saves generated results in a machine-readable format for future post-processing.
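
Collider itself is not reproduced here, but the workflow it supports can be sketched with the Python standard library: enumerate a parameter grid, fan the cases out over all available cores, and dump the results as JSON for post-processing. All names below are illustrative, not Collider's API:

```python
import json
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def run_case(params):
    """Hypothetical experiment; replace with the algorithm under test."""
    n, seed = params
    return {"n": n, "seed": seed, "result": n * seed}  # dummy payload

if __name__ == "__main__":
    # Cartesian parameter grid, fanned out over all available cores.
    grid = list(product(range(1, 101), range(10)))
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_case, grid))
    # Machine-readable output for later post-processing.
    with open("results.json", "w") as fh:
        json.dump(results, fh)
```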


European Workshop on Performance Engineering | 2017

Bounded Aggregation for Continuous Time Markov Decision Processes

Peter Buchholz; Iryna Dohndorf; Alexander Frank; Dimitri Scheftelowitsch

Markov decision processes suffer from two problems, namely the so-called state space explosion, which may lead to long computation times, and the memoryless property of states, which limits the modeling power with respect to real systems. In this paper we combine existing state aggregation and optimization methods into a new aggregation-based optimization method. More specifically, we compute reward bounds on an aggregated model by trading state space size for uncertainty. We propose an approach for continuous time Markov decision models with discounted or average reward measures.
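
The trade of state space size for uncertainty can be illustrated as follows: after grouping concrete states into macro-states, the aggregated transition probability from block B to block C is only known up to the minimum and maximum over the members of B, yielding interval bounds. A sketch assuming a dense discrete-time transition matrix for simplicity; the paper's actual aggregation for CTMDPs may differ:

```python
import numpy as np

def aggregate_with_bounds(P, partition):
    """Aggregate an (S, S) transition matrix into macro-states given a
    partition (list of index arrays).  The probability of moving from
    block B to block C is bounded by the min and max over the concrete
    states in B -- state space size traded for interval uncertainty."""
    K = len(partition)
    lo, hi = np.zeros((K, K)), np.zeros((K, K))
    for i, B in enumerate(partition):
        for j, C in enumerate(partition):
            mass = P[np.ix_(B, C)].sum(axis=1)   # per concrete state in B
            lo[i, j], hi[i, j] = mass.min(), mass.max()
    return lo, hi
```

The resulting interval matrix defines a bounded-parameter model on far fewer states, on which worst- and best-case reward bounds can then be computed.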


Computational Intelligence and Games | 2012

BeatTheBeat: Music-Based Procedural Content Generation in a Mobile Game

Annika Jordan; Dimitri Scheftelowitsch; Jan Lahni; Jannic Hartwecker; Matthias Kuchem; Mirko Walter-Huber; Nils Vortmeier; Tim Delbrügger; Ümit Güler; Igor Vatolkin; Mike Preuss

Collaboration


Dive into Dimitri Scheftelowitsch's collaborations.

Top Co-Authors

Peter Buchholz (Technical University of Dortmund)
Iryna Dohndorf (Technical University of Dortmund)
Annika Jordan (Technical University of Dortmund)
Jan Kriege (Technical University of Dortmund)
Jan Lahni (Technical University of Dortmund)
Jannic Hartwecker (Technical University of Dortmund)
Matthias Kuchem (Technical University of Dortmund)
Mirko Walter-Huber (Technical University of Dortmund)
Nils Vortmeier (Technical University of Dortmund)
Tim Delbrügger (Technical University of Dortmund)