Miguel E. Andrés
Radboud University Nijmegen
Publications
Featured research published by Miguel E. Andrés.
Haifa Verification Conference | 2009
Miguel E. Andrés; Pedro R. D'Argenio; Peter van Rossum
This paper presents a novel technique for counterexample generation in probabilistic model checking of Markov chains and Markov Decision Processes. (Finite) paths in counterexamples are grouped together in witnesses that are likely to provide similar debugging information to the user. We list five properties that witnesses should satisfy in order to be useful as a debugging aid: similarity, accuracy, originality, significance, and finiteness. Our witnesses contain paths that behave similarly outside strongly connected components. Then, we show how to compute these witnesses by reducing the problem of generating counterexamples for general properties over Markov Decision Processes, in several steps, to the easy problem of generating counterexamples for reachability properties over acyclic Markov chains.
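The "easy" target of the reduction above can be sketched concretely. The following is an illustrative fragment, not the paper's algorithm: computing the probability of reaching a target state in an acyclic Markov chain by memoized backward recursion. All state names and probabilities are invented for the example.

```python
# Hypothetical sketch: reachability probability in an acyclic Markov chain.
def reach_probability(transitions, start, target):
    """transitions: state -> list of (successor, probability) pairs."""
    memo = {}

    def prob(state):
        if state == target:
            return 1.0
        if state not in transitions:  # absorbing non-target state
            return 0.0
        if state not in memo:
            memo[state] = sum(p * prob(s) for s, p in transitions[state])
        return memo[state]

    return prob(start)

# Tiny acyclic chain: s0 branches to s1/s2, which lead to target t or failure f.
chain = {
    "s0": [("s1", 0.6), ("s2", 0.4)],
    "s1": [("t", 0.5), ("f", 0.5)],
    "s2": [("t", 1.0)],
}
print(round(reach_probability(chain, "s0", "t"), 6))  # 0.7 = 0.6*0.5 + 0.4*1.0
```

Acyclicity is what makes this a single backward pass; on a general Markov chain the same quantity requires solving a linear system instead.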
Tools and Algorithms for the Construction and Analysis of Systems | 2010
Miguel E. Andrés; Catuscia Palamidessi; Peter van Rossum; Geoffrey Smith
We address the problem of computing the information leakage of a system in an efficient way. We propose two methods: one based on reducing the problem to reachability, and the other based on techniques from quantitative counterexample generation. The second approach can be used either for exact or approximate computation, and provides feedback for debugging. These methods can also be applied in the case in which the input distribution is unknown. We then consider the interactive case and point out that the definition of the associated channel proposed in the literature is not sound. We show, however, that the leakage can still be defined consistently, and that our methods extend smoothly.
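To make the quantity being computed concrete, here is a small hedged sketch (not the paper's reduction): measuring leakage as Shannon mutual information I(S;O) = H(S) − H(S|O), evaluated directly from a prior on secrets and a channel matrix. The two-secret channels below are invented for illustration.

```python
# Illustrative sketch: Shannon leakage of a channel under a given prior.
from math import log2

def shannon_leakage(prior, channel):
    """prior[s]: P(S=s); channel[s][o]: P(O=o | S=s). Returns H(S) - H(S|O) in bits."""
    h_prior = -sum(p * log2(p) for p in prior if p > 0)
    h_post = 0.0
    for o in range(len(channel[0])):
        p_o = sum(prior[s] * channel[s][o] for s in range(len(prior)))
        if p_o == 0:
            continue
        # Entropy of the posterior P(S | O=o), weighted by P(O=o)
        h = -sum(
            (prior[s] * channel[s][o] / p_o) * log2(prior[s] * channel[s][o] / p_o)
            for s in range(len(prior))
            if prior[s] * channel[s][o] > 0
        )
        h_post += p_o * h
    return h_prior - h_post

uniform = [0.5, 0.5]
noiseless = [[1.0, 0.0], [0.0, 1.0]]  # observable fully reveals the secret
useless = [[0.5, 0.5], [0.5, 0.5]]    # observable independent of the secret
print(shannon_leakage(uniform, noiseless))  # 1.0 bit leaked
print(shannon_leakage(uniform, useless))    # 0.0 bits leaked
```

This brute-force evaluation is exponential in the number of observables for a real system; the point of the paper's reachability- and counterexample-based methods is precisely to avoid building the full channel matrix naively.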
Theoretical Computer Science | 2011
Miguel E. Andrés; Catuscia Palamidessi; Peter van Rossum; Ana Sokolova
Information hiding is a general concept which refers to the goal of preventing an adversary from inferring secret information from the observables. Anonymity and Information Flow are examples of this notion. We study the problem of information hiding in systems characterized by the presence of randomization and concurrency. It is well known that the nondeterminism arising from the possible interleavings and interactions of the parallel components can cause unintended information leaks. One way to solve this problem is to fix the strategy of the scheduler beforehand. In this work, we propose a milder restriction on the schedulers, and we define the notion of strong (probabilistic) information hiding under various notions of observables. Furthermore, we propose a method, based on the notion of automorphism, to verify that a system satisfies the property of strong information hiding, namely strong anonymity or non-interference, depending on the context.
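The automorphism idea can be illustrated with a hedged sketch (invented example, not the paper's formal construction): a bijection on states that swaps the secret identities while mapping the transition structure, labels and probabilities included, onto itself. If such an automorphism exists, the swapped secrets are observationally indistinguishable.

```python
# Illustrative sketch: checking that a state bijection is an automorphism
# of a probabilistic transition system.
def is_automorphism(transitions, sigma):
    """transitions: state -> set of (label, prob, successor); sigma: state bijection."""
    for state, moves in transitions.items():
        mapped = {(lbl, p, sigma[s]) for lbl, p, s in moves}
        if mapped != transitions[sigma[state]]:
            return False
    return True

# Invented two-user system: either u1 or u2 is the secret sender, and the
# observable label "announce" is identical in both branches.
system = {
    "init": {("tau", 0.5, "u1"), ("tau", 0.5, "u2")},
    "u1": {("announce", 1.0, "done")},
    "u2": {("announce", 1.0, "done")},
    "done": set(),
}
swap = {"init": "init", "u1": "u2", "u2": "u1", "done": "done"}
print(is_automorphism(system, swap))  # True: the two senders are interchangeable
```

If u2 instead produced a distinguishable label, the same swap would fail the check, flagging a potential leak.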
Logic in Computer Science | 2010
Mário S. Alvim; Miguel E. Andrés; Catuscia Palamidessi
In recent years, there has been a growing interest in considering the probabilistic aspects of Information Flow. In this abstract we review some of the main approaches that have been considered to quantify the notion of information leakage, and we focus on some recent developments.
Tools and Algorithms for the Construction and Analysis of Systems | 2008
Miguel E. Andrés; Peter van Rossum
This paper introduces the logic cpCTL, which extends the probabilistic temporal logic pCTL with conditional probability, allowing one to express that the probability that φ is true given that ψ is true is at least a. We interpret cpCTL over Markov Chains and Markov Decision Processes. While model checking cpCTL over Markov Chains can be done with existing techniques, those techniques do not carry over to Markov Decision Processes. We present a model checking algorithm for Markov Decision Processes. We also study the class of schedulers that suffice to attain the maximum and minimum probability that φ is true given that ψ is true. Finally, we present the notion of counterexamples for cpCTL model checking and provide a method for counterexample generation.
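The conditional probability cpCTL adds is just P(φ | ψ) = P(φ ∧ ψ) / P(ψ) over the path distribution. As a hedged illustration (not the paper's algorithm, which handles infinite-state path spaces and schedulers), the sketch below computes it by enumerating the finitely many maximal paths of a small invented acyclic Markov chain.

```python
# Illustrative sketch: conditional path probability on a tiny acyclic Markov chain.
def path_distribution(transitions, start):
    """Yield (path, probability) for every maximal path from start."""
    stack = [((start,), 1.0)]
    while stack:
        path, p = stack.pop()
        succ = transitions.get(path[-1])
        if not succ:
            yield path, p
        else:
            for s, q in succ:
                stack.append((path + (s,), p * q))

def cond_prob(transitions, start, phi, psi):
    """P(phi | psi), where phi and psi are predicates on maximal paths."""
    p_both = p_psi = 0.0
    for path, p in path_distribution(transitions, start):
        if psi(path):
            p_psi += p
            if phi(path):
                p_both += p
    return p_both / p_psi

chain = {"s0": [("a", 0.5), ("b", 0.5)],
         "a": [("goal", 0.8), ("fail", 0.2)],
         "b": [("goal", 0.1), ("fail", 0.9)]}
reaches_goal = lambda path: "goal" in path   # phi
via_a = lambda path: "a" in path             # psi
print(cond_prob(chain, "s0", reaches_goal, via_a))  # 0.8
```

The unconditional probability of reaching the goal here is 0.45, so conditioning on "passed through a" genuinely changes the answer; over Markov Decision Processes the extra difficulty is that the optimizing scheduler may need to depend on more than the current state.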
International Conference on Concurrency Theory | 2010
Mário S. Alvim; Miguel E. Andrés; Catuscia Palamidessi
We consider the problem of defining the information leakage in interactive systems where secrets and observables can alternate during the computation. We show that the information-theoretic approach which interprets such systems as (simple) noisy channels is no longer valid. However, the principle can be recovered if we consider more complicated types of channels, known in Information Theory as channels with memory and feedback. We show that there is a complete correspondence between interactive systems and such channels. Furthermore, we show that the capacity of the channels associated with such systems is a continuous function of the Kantorovich metric.
Quantitative Evaluation of Systems | 2010
Miguel E. Andrés; Catuscia Palamidessi; Peter van Rossum; Ana Sokolova
Information hiding is a general concept which refers to the goal of preventing an adversary from inferring secret information from the observables. Anonymity and Information Flow are examples of this notion. We study the problem of information hiding in systems characterized by the presence of randomization and concurrency. It is well known that the nondeterminism arising from the possible interleavings and interactions of the parallel components can cause unintended information leaks. One way to solve this problem is to fix the strategy of the scheduler beforehand. In this work, we propose a milder restriction on the schedulers, and we define the notion of strong (probabilistic) information hiding under various notions of observables. Furthermore, we propose a method, based on the notion of automorphism, to verify that a system satisfies the property of strong information hiding, namely strong anonymity or non-interference, depending on the context.
IFIP International Conference on Theoretical Computer Science | 2010
Mário S. Alvim; Miguel E. Andrés; Catuscia Palamidessi; Peter van Rossum
In the field of Security, process equivalences have been used to characterize various information-hiding properties (for instance secrecy, anonymity and non-interference) based on the principle that a protocol P with a variable x satisfies such a property if and only if, for every pair of secrets s₁ and s₂, P[s₁/x] is equivalent to P[s₂/x]. We argue that, in the presence of nondeterminism, the above principle relies on the assumption that the scheduler "works for the benefit of the protocol", and this is usually not a safe assumption. Non-safe equivalences, in this sense, include complete-trace equivalence and bisimulation. We present a formalism in which we can specify admissible schedulers and, correspondingly, safe versions of these equivalences. We prove that safe bisimulation is still a congruence. Finally, we show that safe equivalences can be used to establish information-hiding properties.
IFIP International Conference on Theoretical Computer Science | 2010
Mário S. Alvim; Miguel E. Andrés; Catuscia Palamidessi
In recent years, there has been a growing interest in the quantitative aspects of Information Flow, partly because the a priori knowledge of the secret information can often be represented by a probability distribution, and partly because the mechanisms to protect the information may use randomization to obfuscate the relation between the secrets and the observables.
ARSPA-WITS'10: Proceedings of the 2010 Joint Conference on Automated Reasoning for Security Protocol Analysis and Issues in the Theory of Security | 2010
Catuscia Palamidessi; Mário S. Alvim; Miguel E. Andrés
In recent years, there has been a growing interest in the quantitative aspects of Information Flow, partly because the a priori knowledge of the secret information can often be represented by a probability distribution, and partly because the mechanisms to protect the information may use randomization to obfuscate the relation between the secrets and the observables. We consider the problem of defining a measure of information leakage in interactive systems. We show that the information-theoretic approach which interprets such systems as (simple) noisy channels is no longer valid when the secrets and the observables can alternate during the computation and influence each other. However, the principle can be retrieved if we consider more complicated types of channels, known in Information Theory as channels with memory and feedback. We show that there is a complete correspondence between interactive systems and such channels. Furthermore, the proposed framework has good topological properties which allow one to reason compositionally about the worst-case leakage in these systems.