Luca de Alfaro
University of California, Santa Cruz
Publications
Featured research published by Luca de Alfaro.
Foundations of Software Engineering | 2001
Luca de Alfaro; Thomas A. Henzinger
Conventional type systems specify interfaces in terms of values and domains. We present a lightweight formalism that captures the temporal aspects of software component interfaces. Specifically, we use an automata-based language to capture both input assumptions about the order in which the methods of a component are called, and output guarantees about the order in which the component calls external methods. The formalism supports automatic compatibility checks between interface models, and thus constitutes a type system for component interaction. Unlike traditional uses of automata, our formalism is based on an optimistic approach to composition, and on an alternating approach to design refinement. According to the optimistic approach, two components are compatible if there is some environment that can make them work together. According to the alternating approach, one interface refines another if it has weaker input assumptions and stronger output guarantees. We show that these notions have game-theoretic foundations that lead to efficient algorithms for checking compatibility and refinement.
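The sketch below illustrates the optimistic compatibility check in Python: two interfaces are compatible if some environment can steer their product away from every error state. The encoding (deterministic transition maps, no internal actions) and all names are illustrative simplifications, not the paper's formal definitions.

```python
# A minimal sketch of optimistic compatibility for two interface automata.
# Assumes deterministic transitions and omits internal actions; the class
# and helper names are ours, not the paper's notation.
from itertools import product

class InterfaceAutomaton:
    def __init__(self, states, init, inputs, outputs, trans):
        self.states, self.init = states, init
        self.inputs, self.outputs = set(inputs), set(outputs)
        self.trans = trans  # (state, action) -> next state

    def enabled(self, state, action):
        return (state, action) in self.trans

def compatible(p, q):
    """p and q are compatible iff the environment can keep the product away
    from every error state, i.e. no error state is reachable through moves
    the environment does not control."""
    shared = (p.inputs | p.outputs) & (q.inputs | q.outputs)

    def is_error(s):
        # Error: one side emits a shared output the other cannot receive.
        sp, sq = s
        return any(
            (a in p.outputs and p.enabled(sp, a) and not q.enabled(sq, a)) or
            (a in q.outputs and q.enabled(sq, a) and not p.enabled(sp, a))
            for a in shared
        )

    def autonomous_succs(s):
        # Synchronized shared actions become internal in the product, so the
        # environment cannot block them.
        sp, sq = s
        for a in shared:
            if p.enabled(sp, a) and q.enabled(sq, a):
                yield (p.trans[(sp, a)], q.trans[(sq, a)])

    # Backward-propagate badness: a state is bad if it is an error state or
    # some unblockable move leads to a bad state.
    bad, changed = set(), True
    all_states = set(product(p.states, q.states))
    while changed:
        changed = False
        for s in all_states - bad:
            if is_error(s) or any(t in bad for t in autonomous_succs(s)):
                bad.add(s)
                changed = True
    return (p.init, q.init) not in bad
```

The backward propagation is what gives the check its game flavor: moves the environment cannot block are treated as adversarial, while inputs are left to a hypothetical helpful environment.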
Foundations of Software Technology and Theoretical Computer Science | 1995
Andrea Bianco; Luca de Alfaro
The temporal logics pCTL and pCTL* have been proposed as tools for the formal specification and verification of probabilistic systems: as they can express quantitative bounds on the probability of system evolutions, they can be used to specify system properties such as reliability and performance. In this paper, we present model-checking algorithms for extensions of pCTL and pCTL* to systems in which probabilistic behavior coexists with nondeterminism, and show that these algorithms have polynomial-time complexity in the size of the system. This provides a practical tool for reasoning about the reliability and performance of parallel systems.
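As a concrete illustration of one core subroutine in such model checkers, the sketch below approximates the maximum probability of eventually reaching a target set in a Markov decision process by value iteration. This is a simplification of the paper's polynomial-time (linear-programming-based) algorithms, and the encoding is our own.

```python
# A hedged sketch: value iteration for the maximal reachability probability
# in an MDP mixing probabilistic and nondeterministic choice. Encoding and
# names are illustrative, not the paper's.

def max_reach_prob(states, actions, P, target, eps=1e-9):
    """P[s][a] is a list of (next_state, prob) pairs; actions[s] lists the
    nondeterministic choices at s. Returns dict state -> max probability."""
    v = {s: (1.0 if s in target else 0.0) for s in states}
    while True:
        delta = 0.0
        for s in states:
            if s in target:
                continue
            best = max(
                sum(pr * v[t] for t, pr in P[s][a]) for a in actions[s]
            ) if actions[s] else 0.0
            delta = max(delta, abs(best - v[s]))
            v[s] = best
        if delta < eps:
            return v

# Tiny example: from s0, action 'a' reaches target g with prob 0.5 and
# otherwise loops; the max probability of eventually reaching g is 1.
states = {"s0", "g"}
actions = {"s0": ["a"], "g": []}
P = {"s0": {"a": [("g", 0.5), ("s0", 0.5)]}}
print(max_reach_prob(states, actions, P, target={"g"}))  # s0 -> ~1.0
```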
International World Wide Web Conference | 2007
B. Thomas Adler; Luca de Alfaro
We present a content-driven reputation system for Wikipedia authors. In our system, authors gain reputation when the edits they perform to Wikipedia articles are preserved by subsequent authors, and they lose reputation when their edits are rolled back or undone in short order. Thus, author reputation is computed solely on the basis of content evolution; user-to-user comments or ratings are not used. The author reputation we compute could be used to flag new contributions from low-reputation authors, or it could be used to allow only authors with high reputation to contribute to controversial or critical pages. A reputation system for Wikipedia could also provide an incentive for high-quality contributions. We have implemented the proposed system, and we have used it to analyze the entire Italian and French Wikipedias, consisting of a total of 691,551 pages and 5,587,523 revisions. Our results show that our notion of reputation has good predictive value: changes performed by low-reputation authors have a significantly larger than average probability of having poor quality, as judged by human observers, and of being later undone, as measured by our algorithms.
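A toy version of the content-driven idea might look as follows; the survival measure (a difflib similarity ratio) and the update constants are illustrative placeholders, not the paper's actual text-tracking algorithm.

```python
# A minimal sketch of content-driven reputation: an author gains reputation
# in proportion to how much of their edit survives into later revisions, and
# loses it when the edit is undone. Constants and the survival proxy below
# are illustrative, not the paper's formulas.
import difflib

def text_survival(edit_rev, later_rev):
    """Rough fraction of edit_rev's text still present in later_rev."""
    # Crude proxy; the paper measures survival via edit distances.
    return difflib.SequenceMatcher(None, edit_rev, later_rev).ratio()

def update_reputation(rep, author, edit_rev, later_rev, judge_rep, scale=0.1):
    """rep: dict author -> reputation. Feedback is weighted by the reputation
    of the later (judging) author, so low-reputation vandals have little say."""
    survived = text_survival(edit_rev, later_rev)
    # Map survival in [0, 1] to a signed reward in [-1, 1].
    delta = scale * judge_rep * (2.0 * survived - 1.0)
    rep[author] = max(0.0, rep.get(author, 0.0) + delta)
    return rep
```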
Embedded Software | 2001
Luca de Alfaro; Thomas A. Henzinger
We classify component-based models of computation into component models and interface models. A component model specifies for each component how the component behaves in an arbitrary environment; an interface model specifies for each component what the component expects from the environment. Component models support compositional abstraction, and therefore component-based verification. Interface models support compositional refinement, and therefore component-based design. Many aspects of interface models, such as compatibility and refinement checking between interfaces, are properly viewed in a game-theoretic setting, where the input and output values of an interface are chosen by different players.
Archive | 2005
Luca de Alfaro; Thomas A. Henzinger
Surveying results from [5] and [6], we motivate and introduce the theory behind formalizing rich interfaces for software and hardware components. Rich interfaces specify the protocol aspects of component interaction. Their formalization, called interface automata, permits a compiler to check the compatibility of component interaction protocols. Interface automata support incremental design and independent implementability. Incremental design means that the compatibility checking of interfaces can proceed for partial system descriptions, without knowing the interfaces of all components. Independent implementability means that compatible interfaces can be refined separately, while still maintaining compatibility.
International Colloquium on Automata, Languages and Programming | 2003
Luca de Alfaro; Thomas A. Henzinger; Rupak Majumdar
Discounting the future means that the value, today, of a unit payoff is 1 if the payoff occurs today, a if it occurs tomorrow, a² if it occurs the day after tomorrow, and so on, for some real-valued discount factor 0 ≤ a ≤ 1. Discounting (or inflation) is a key paradigm in economics and has been studied in Markov decision processes as well as game theory. We submit that discounting also has a natural place in systems engineering: for nonterminating systems, a potential bug in the faraway future is less troubling than a potential bug today. We therefore develop a systems theory with discounting. Our theory includes several basic elements: discounted versions of the system properties that correspond to the ω-regular properties, fixpoint-based algorithms for checking discounted properties, and a quantitative notion of bisimilarity for capturing the difference between two states with respect to discounted properties. We present the theory in a general form that applies to probabilistic systems as well as multicomponent systems (games), but it readily specializes to classical transition systems. We show that discounting, besides its natural practical appeal, also has several mathematical benefits. First, the resulting theory is robust, in that small perturbations of a system can cause only small changes in the properties of the system. Second, the theory is computational, in that the values of discounted properties, as well as the discounted bisimilarity distance between states, can be computed to any desired degree of precision.
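The following sketch shows the flavor of a discounted fixpoint computation: the discounted value of reaching a target, where a visit k steps in the future is worth a^k. The encoding is our own illustration; convergence to any desired precision follows because multiplication by a < 1 makes the iteration a contraction.

```python
# A hedged sketch of discounted reachability on a plain transition system,
# iterating the fixpoint  v(s) = 1 if s in target else a * max_t v(t).
# Encoding and names are illustrative.

def discounted_reach(states, succ, target, a=0.9, eps=1e-6):
    """succ[s] lists the successors of s; the max picks the best successor."""
    v = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            new = 1.0 if s in target else a * max(
                (v[t] for t in succ[s]), default=0.0
            )
            delta = max(delta, abs(new - v[s]))
            v[s] = new
        if delta < eps:
            return v
```

Note how the discounted value varies continuously with the transition structure, which is the robustness point made in the abstract: a small perturbation of the system perturbs the value only slightly.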
International Conference on Concurrency Theory | 2003
Luca de Alfaro; M. Faella; Thomas A. Henzinger; Rupak Majumdar; Mariëlle Ida Antoinette Stoelinga
We consider concurrent two-person games played in real time, in which the players decide both which action to play and when to play it. Such timed games differ from untimed games in two essential ways. First, players can take each other by surprise, because actions are played with delays that cannot be anticipated by the opponent. Second, a player should not be able to win the game by preventing time from diverging. We present a model of timed games that preserves the element of surprise and accounts for time divergence in a way that treats both players symmetrically and applies to all ω-regular winning conditions. We prove that the ability to take each other by surprise adds extra power to the players. For the case where the games are specified in the style of timed automata, we provide symbolic algorithms for their solution with respect to all ω-regular winning conditions. We also show that for these timed games, memory strategies are more powerful than memoryless strategies even for reachability objectives.
Journal of Computer and System Sciences | 2004
Luca de Alfaro; Rupak Majumdar
We consider two-player games played for an infinite number of rounds, with ω-regular winning conditions. The games may be concurrent, in that the players choose their moves simultaneously and independently, and probabilistic, in that the moves determine a probability distribution for the successor state. We introduce a quantitative game µ-calculus, and we show that the maximal probability of winning such games can be expressed as a fixpoint formula in this calculus. We develop the arguments both for deterministic and for probabilistic concurrent games; as a special case, we solve probabilistic turn-based games with ω-regular winning conditions, a problem that was also open. We also characterize the optimality, and the memory requirements, of the winning strategies. In particular, we show that while memoryless strategies suffice for winning games with safety and reachability conditions, Büchi conditions require strategies with infinite memory. The existence of optimal strategies, as opposed to ε-optimal strategies, is guaranteed only in games with safety winning conditions.
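For the turn-based special case mentioned above, the winning probability of a reachability condition can be approximated by iterating its least-fixpoint characterization; the encoding below is an illustrative sketch, not the paper's µ-calculus machinery, and it sidesteps the matrix games that concurrent moves would require.

```python
# A hedged sketch: value iteration for the maximal probability with which
# player 1 wins a reachability condition in a turn-based probabilistic game.
# Player 1 maximizes, player 2 minimizes. Encoding is illustrative.

def game_reach_value(states, owner, moves, P, target, eps=1e-9):
    """owner[s] in {1, 2}; moves[s] lists actions at s; P[s][a] is a list of
    (next_state, prob) pairs. Returns dict state -> value of the game."""
    v = {s: (1.0 if s in target else 0.0) for s in states}
    while True:
        delta = 0.0
        for s in states:
            if s in target or not moves[s]:
                continue
            vals = [sum(p * v[t] for t, p in P[s][a]) for a in moves[s]]
            new = max(vals) if owner[s] == 1 else min(vals)
            delta = max(delta, abs(new - v[s]))
            v[s] = new
        if delta < eps:
            return v
```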
International Conference on Computer-Aided Design | 2002
Roberto Passerone; Luca de Alfaro; Thomas A. Henzinger; Alberto L. Sangiovanni-Vincentelli
An essential problem in component-based design is how to compose components designed in isolation. Several approaches have been proposed for specifying component interfaces that capture behavioral aspects such as interaction protocols, and for verifying interface compatibility. Likewise, several approaches have been developed for synthesizing converters between incompatible protocols. In this paper, we introduce the notion of adaptability as the property that two interfaces have when they can be made compatible by communicating through a converter that meets specified requirements. We show that verifying adaptability and synthesizing an appropriate converter are two sides of the same coin: adaptability can be formalized and solved using a game-theoretic framework, and the converter can then be synthesized as a strategy that always wins the game. Finally, we show that this framework can be related to the rectification problem in trace theory.
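The game-theoretic reading suggests a simple safety-game solver: compute the states from which the converter can forever avoid incompatibility, then read the converter off a winning strategy. The sketch below is a loose illustration under that framing; the paper's construction also enforces the specified converter requirements, which are omitted here, and all names are ours.

```python
# A hedged sketch of "converter as winning strategy": solve a safety game on
# the product of two protocols; the winning strategy is the converter.
# States, move relations, and the safety objective are illustrative.

def synthesize_converter(states, c_moves, u_moves, bad):
    """c_moves[s]: converter-controlled successors; u_moves[s]: successors
    the converter cannot prevent; bad: incompatible states to avoid.
    Returns (winning_set, strategy) mapping winning states to a safe move."""
    losing = set(bad)
    changed = True
    while changed:
        changed = False
        for s in states:
            if s in losing:
                continue
            # s is losing if an unpreventable move reaches losing, or the
            # converter must move but every choice leads to losing.
            if any(t in losing for t in u_moves.get(s, [])) or (
                c_moves.get(s) and all(t in losing for t in c_moves[s])
            ):
                losing.add(s)
                changed = True
    winning = set(states) - losing
    strategy = {
        s: next((t for t in c_moves.get(s, []) if t in winning), None)
        for s in winning
    }
    return winning, strategy
```

Under this framing, two interfaces are adaptable exactly when the initial product state lies in the winning set, and the strategy is the converter to synthesize.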
International Conference on Concurrency Theory | 1999
Luca de Alfaro
A Markov decision process is a generalization of a Markov chain in which both probabilistic and nondeterministic choice coexist. Given a Markov decision process with costs associated with the transitions and a set of target states, the stochastic shortest path problem consists in computing the minimum expected cost of a control strategy that guarantees to reach the target. In this paper, we consider the classes of stochastic shortest path problems in which the costs are all non-negative, or all non-positive. Previously, these two classes of problems could be solved only under the assumption that the policies that minimize or maximize the expected cost also lead to the target with probability 1. This assumption does not necessarily hold for Markov decision processes that arise as models of distributed probabilistic systems. We present efficient methods for solving these two classes of problems without relying on additional assumptions. The methods are based on algorithms that transform the original problems into problems that satisfy the required assumptions. These methods lead to the efficient solution of two basic problems in the analysis of the reliability and performance of partially specified systems: the computation of the minimum (or maximum) probability of reaching a target set, and the computation of the minimum (or maximum) expected time to reach the set.
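The basic recursion for the non-negative-cost case is v(s) = 0 on the target and v(s) = min_a [c(s, a) + Σ_t P(t | s, a) · v(t)] elsewhere. The sketch below iterates it under the classical assumption that from every state some policy reaches the target with probability 1; the paper's contribution is precisely how to transform instances where that assumption fails, which this toy code does not attempt. The encoding is illustrative.

```python
# A hedged sketch of stochastic shortest path value iteration on an MDP.
# Assumes non-negative costs and that the target is reachable with
# probability 1 from every state (the iteration cap guards the toy code
# against divergence when that assumption fails).

def min_expected_cost(states, actions, P, cost, target, iters=10_000, eps=1e-9):
    """P[s][a]: list of (next_state, prob) pairs; cost[s][a] >= 0;
    actions[s] lists choices at s. Returns dict state -> min expected cost."""
    v = {s: 0.0 for s in states}
    for _ in range(iters):
        delta = 0.0
        for s in states:
            if s in target or not actions[s]:
                continue
            new = min(
                cost[s][a] + sum(p * v[t] for t, p in P[s][a])
                for a in actions[s]
            )
            delta = max(delta, abs(new - v[s]))
            v[s] = new
        if delta < eps:
            break
    return v
```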