Sven Sandberg
Uppsala University
Publications
Featured research published by Sven Sandberg.
Automated Technology for Verification and Analysis | 2006
Parosh Aziz Abdulla; Noomene Ben Henda; Richard Mayr; Sven Sandberg
We consider infinite-state discrete Markov chains which are eager: the probability of avoiding a defined set of final states for more than n steps is bounded by some exponentially decreasing function f(n). We prove that eager Markov chains include those induced by Probabilistic Lossy Channel Systems, Probabilistic Vector Addition Systems with States, and Noisy Turing Machines, and that the bounding function f(n) can be effectively constructed for them. Furthermore, we study the problem of computing the expected reward (or cost) of runs until reaching the final states, where rewards are assigned to individual runs by computable reward functions. For eager Markov chains, an effective path exploration scheme, based on forward reachability analysis, can be used to approximate the expected reward up to an arbitrarily small error.
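As a rough illustration of the forward path-exploration idea, here is a minimal sketch on a toy finite chain, assuming the reward of a run is its length (so the quantity approximated is the expected hitting time); the chain, the reward choice, and the depth cutoff are illustrative, not the paper's construction.

```python
# Sketch: enumerate all paths from the initial state that reach the final
# states within `depth` steps and sum reward * probability over them.
# Eagerness bounds the neglected probability mass by an exponentially
# decreasing f(depth), which is what makes the truncation error controllable.

def approx_expected_reward(transitions, initial, final, depth):
    total = 0.0
    frontier = [(initial, 1.0, 0)]  # (current state, path probability, steps taken)
    while frontier:
        state, prob, steps = frontier.pop()
        if state in final:
            total += prob * steps          # reward of this run = its length
        elif steps < depth:
            for nxt, p in transitions[state].items():
                frontier.append((nxt, prob * p, steps + 1))
    return total

# Toy chain: from 'a' the final state 'f' is reached with probability 0.5 per step.
chain = {"a": {"a": 0.5, "f": 0.5}, "f": {}}
print(approx_expected_reward(chain, "a", {"f"}, depth=30))  # close to 2.0
```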
Model-Based Testing of Reactive Systems | 2005
Sven Sandberg
This chapter considers two fundamental problems for Mealy machines, i.e., finite-state machines with inputs and outputs. The machines will be used in subsequent chapters as models of a system or program to test. We repeat Definition 21.1 of Chapter 21 here: readers already familiar with Mealy machines can safely skip to Section 1.1.2.
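As a concrete rendering of the definition, here is a minimal sketch of a Mealy machine as a pair of transition and output tables; the dictionary encoding and the example machine are illustrative, not taken from the chapter.

```python
# Sketch of a Mealy machine: a finite-state machine that, on each input
# symbol, emits an output symbol and moves to a next state.

class MealyMachine:
    def __init__(self, transition, output, initial):
        self.transition = transition  # dict: (state, input) -> next state
        self.output = output          # dict: (state, input) -> output symbol
        self.state = initial          # current state

    def step(self, symbol):
        """Consume one input symbol and emit one output symbol."""
        out = self.output[(self.state, symbol)]
        self.state = self.transition[(self.state, symbol)]
        return out

    def run(self, word):
        """Apply an input word and return the produced output word."""
        return [self.step(a) for a in word]

# Example machine: outputs 1 exactly when the input repeats the previous
# symbol (the initial state behaves as if the previous symbol was 0).
M = MealyMachine(
    transition={("s0", 0): "s0", ("s0", 1): "s1",
                ("s1", 0): "s0", ("s1", 1): "s1"},
    output={("s0", 0): 1, ("s0", 1): 0,
            ("s1", 0): 0, ("s1", 1): 1},
    initial="s0",
)
print(M.run([0, 0, 1, 1, 0]))  # [1, 1, 0, 1, 0]
```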
Symposium on Theoretical Aspects of Computer Science | 2003
Henrik Björklund; Sven Sandberg; Sergei Vorobyov
We suggest a new randomized algorithm for solving parity games with worst case time complexity roughly min(O(n^3 · (n/k + 1)^k), 2^(O(√(n log n)))), where n is the number of vertices and k the number of colors of the game. This is comparable with the previously known algorithms when the number of colors is small. However, the subexponential bound is an advantage when the number of colors is large, k = Ω(n^(1/2+ε)).
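To make the two regimes of the bound concrete, here is a clean restatement; the instantiations k = O(1) and k = n are illustrative, not taken from the paper.

```latex
% Reconstructed form of the bound stated in the abstract:
\[
  \min\Bigl(\, O\bigl(n^{3}\,(n/k+1)^{k}\bigr),\; 2^{O(\sqrt{n\log n})} \Bigr)
\]
% For few colors the first term is the smaller one: constant k gives a bound
% polynomial in n. For many colors, e.g. k = n, the first term is O(n^3 2^n),
% so the subexponential second term takes over, matching the stated threshold
% k = \Omega(n^{1/2+\varepsilon}).
```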
Theoretical Computer Science | 2004
Henrik Björklund; Sven Sandberg; Sergei Vorobyov
We give a simple, direct, and constructive proof of memoryless determinacy for parity and mean payoff games. First, we prove by induction that the finite duration versions of these games, played until some vertex is repeated, are determined and both players have memoryless winning strategies. In contrast to the proof of Ehrenfeucht and Mycielski, Internat. J. Game Theory, 8 (1979) 109-113, our proof does not refer to the infinite-duration versions. Second, we show that memoryless determinacy straightforwardly generalizes to infinite duration versions of parity and mean payoff games.
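To make the notion of a memoryless (positional) strategy concrete, here is a minimal sketch: once both players fix positional strategies, every play settles into a cycle, and in a mean payoff game its value is the average weight of that cycle. The graph, weights, and strategy below are illustrative, not from the paper.

```python
# Sketch: evaluate a play of a mean payoff game when both players use
# memoryless strategies. Their choices merge into one successor map, the
# play eventually repeats a vertex, and the payoff is the mean edge weight
# on the repeated cycle.

def mean_payoff_of_play(strategy, weight, start):
    seen, path, v = {}, [], start
    while v not in seen:
        seen[v] = len(path)
        path.append(v)
        v = strategy[v]           # successor chosen by the strategy
    cycle = path[seen[v]:] + [v]  # the repeated cycle, written as a closed walk
    edges = [weight[(cycle[i], cycle[i + 1])] for i in range(len(cycle) - 1)]
    return sum(edges) / len(edges)

# Toy game: the play 0 -> 1 -> 2 -> 1 -> ... repeats the cycle 1 -> 2 -> 1,
# whose mean weight is (3 + (-1)) / 2 = 1.
strategy = {0: 1, 1: 2, 2: 1}
weight = {(0, 1): 5, (1, 2): 3, (2, 1): -1}
print(mean_payoff_of_play(strategy, weight, 0))  # 1.0
```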
International Andrei Ershov Memorial Conference on Perspectives of System Informatics | 2003
Henrik Björklund; Sven Sandberg; Sergei Vorobyov
We present several new algorithms as well as new lower and upper bounds for optimizing functions underlying infinite games pertinent to computer-aided verification.
Foundations of Software Science and Computation Structures | 2008
Parosh Aziz Abdulla; Noomene Ben Henda; Luca de Alfaro; Richard Mayr; Sven Sandberg
We consider turn-based stochastic games on infinite graphs induced by game probabilistic lossy channel systems (GPLCS), the game version of probabilistic lossy channel systems (PLCS). We study games with Büchi (repeated reachability) objectives and almost-sure winning conditions. These games are determined with pure memoryless strategies and, under the assumption that the target set is regular, a symbolic representation of the set of winning states for each player can be effectively constructed. Thus, turn-based stochastic games on GPLCS are decidable. This generalizes the decidability result for PLCS-induced Markov decision processes in [10].
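For readers unfamiliar with the underlying model, here is a minimal sketch of lossy channel steps: a configuration pairs a control state with the channel contents, and lossiness means that any messages may be dropped. The states, messages, and helper functions are illustrative, not the GPLCS construction of the paper.

```python
# Sketch of lossy channel transitions: after a send, the channel may lose
# an arbitrary subset of its messages (the order of kept messages is preserved).

from itertools import combinations

def subwords(word):
    """All words obtained by dropping an arbitrary subset of messages."""
    return {tuple(word[i] for i in keep)
            for r in range(len(word) + 1)
            for keep in combinations(range(len(word)), r)}

def lossy_successors(channel, send, new_state):
    """Configurations reachable by appending `send` to the channel while
    moving to control state `new_state`, followed by arbitrary losses."""
    return {(new_state, w) for w in subwords(channel + (send,))}

# From channel ('a',), send 'b' while moving to control state 'q1':
for config in sorted(lossy_successors(("a",), "b", "q1")):
    print(config)
# Four successors: channel kept as ('a', 'b'), or with 'a', 'b', or both lost.
```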
Quantitative Evaluation of Systems | 2006
Parosh Aziz Abdulla; Noomene Ben Henda; Richard Mayr; Sven Sandberg
We consider discrete infinite-state Markov chains which contain an eager finite attractor. A finite attractor is a finite subset of states that is eventually reached with probability 1 from every other state, and the eagerness condition requires that the probability of avoiding the attractor in n or more steps after leaving it is exponentially bounded in n. Examples of such Markov chains are those induced by probabilistic lossy channel systems and similar systems. We show that the expected residence time (a generalization of the steady state distribution) exists for Markov chains with eager attractors and that it can be effectively approximated to arbitrary precision. Furthermore, arbitrarily close approximations of the limiting average expected reward, with respect to state-based bounded reward functions, are also computable.
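As a very rough illustration, a plain Monte Carlo simulation (not the paper's effective approximation scheme, which comes with error bounds derived from the eager-attractor condition) already conveys what the limiting average reward measures; the chain, attractor, and reward below are made up.

```python
# Sketch: estimate the limiting average of a state-based bounded reward by
# averaging it along one long simulated run of the chain.

import random

def long_run_average_reward(transitions, reward, start, steps, seed=0):
    rng = random.Random(seed)
    state, total = start, 0.0
    for _ in range(steps):
        total += reward[state]
        succs, probs = zip(*transitions[state].items())
        state = rng.choices(succs, probs)[0]
    return total / steps

# Toy chain whose finite attractor is {'s'}: from 't' the chain falls back to
# 's' with probability 0.9 per step, so excursions away from 's' are short.
chain = {"s": {"s": 0.5, "t": 0.5}, "t": {"s": 0.9, "t": 0.1}}
reward = {"s": 1.0, "t": 0.0}
print(long_run_average_reward(chain, reward, "s", steps=100_000))  # about 9/14
```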
Logical Methods in Computer Science | 2015
Parosh Aziz Abdulla; Lorenzo Clemente; Richard Mayr; Sven Sandberg
We give an algorithm for solving stochastic parity games with almost-sure winning conditions on lossy channel systems, under the constraint that both players are restricted to finite-memory strategies.
Archive | 2004
Henrik Björklund; Sven Sandberg; Sergei Vorobyov
Lecture Notes in Computer Science | 2004
Henrik Björklund; Sven Sandberg; Sergei Vorobyov