

Publication


Featured research published by Emmanuelle Anceaume.


International Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing | 2000

Deadline-constrained causal order

Luís E. T. Rodrigues; Roberto Baldoni; Emmanuelle Anceaume; Michel Raynal

A causal ordering protocol ensures that if two messages are causally related and have the same destination, they are delivered to the application in their sending order. Causal order strongly simplifies the development of distributed object-oriented systems. To prevent causal order violations, either messages may be forced to wait for messages in their past, or late messages may have to be discarded. In a real-time setting, the first approach is not suitable: when a message misses a deadline, all the messages that causally depend on it may also be forced to miss their deadlines. We propose a novel causal ordering abstraction that takes message deadlines into consideration. Two implementations are proposed, in the context of multicast and broadcast communication, that deliver as many messages as possible to the application. Examples of distributed soft real-time applications that benefit from a deadline-constrained causal ordering primitive are given.
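
The delivery rule can be made concrete with a small sketch. The class below is an illustrative assumption, not the paper's actual protocols: it uses standard vector-clock causal delivery, and drops a message once its deadline has passed so that causally dependent messages are not blocked behind it.

```python
class DeadlineCausalReceiver:
    """Toy receiver: deliver a message only when every message it causally
    depends on has been delivered or has missed its deadline (illustrative)."""

    def __init__(self, n_procs, pid):
        self.vc = [0] * n_procs          # vector clock of delivered messages
        self.pid = pid
        self.pending = []

    def on_receive(self, msg, now):
        # msg = (sender, send_vc, deadline, payload)
        self.pending.append(msg)
        return self._flush(now)

    def _deliverable(self, msg, now):
        sender, svc, deadline, _ = msg
        if now > deadline:
            return None                   # late: discard instead of blocking successors
        # standard causal-delivery test: all causal predecessors delivered?
        return svc[sender] == self.vc[sender] + 1 and all(
            svc[k] <= self.vc[k] for k in range(len(svc)) if k != sender)

    def _flush(self, now):
        delivered, progress = [], True
        while progress:
            progress = False
            for msg in list(self.pending):
                d = self._deliverable(msg, now)
                if d is None:             # deadline missed: drop, but advance the clock
                    self.pending.remove(msg)
                    self.vc[msg[0]] = max(self.vc[msg[0]], msg[1][msg[0]])
                    progress = True
                elif d:
                    self.pending.remove(msg)
                    self.vc[msg[0]] = msg[1][msg[0]]
                    delivered.append(msg[3])
                    progress = True
        return delivered
```

With this policy, a message that misses its deadline is silently discarded while its causal successors remain deliverable, which is the behavior the abstraction above is designed to permit.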


IEEE Transactions on Parallel and Distributed Systems | 2014

A Distributed Information Divergence Estimation over Data Streams

Emmanuelle Anceaume; Yann Busnel

In this paper, we consider the setting of large-scale distributed systems, in which each node needs to quickly process a huge amount of data received in the form of a stream that may have been tampered with by an adversary. In this situation, a fundamental problem is how to detect and quantify the amount of work performed by the adversary. To address this issue, we propose a novel algorithm, AnKLe, for estimating the Kullback-Leibler divergence of an observed stream compared with the expected one. AnKLe combines sampling techniques and information-theoretic methods. It is very efficient, both in terms of space and time complexity, and requires only a single pass over the data stream. We show that AnKLe is an (ε, δ)-approximation algorithm with a space complexity of Õ(1/ε + 1/ε²) bits in most cases, and Õ(1/ε + n^(1−ε)/ε²) otherwise, where n is the number of distinct data items in the stream. Moreover, we propose a distributed version of AnKLe that requires at most O(rℓ(log n + 1)) bits of communication between the ℓ participating nodes, where r is the number of rounds of the algorithm. Experimental results show that the estimation provided by AnKLe remains accurate even in adversarial settings for which the quality of other methods dramatically decreases.
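
For reference, the quantity AnKLe estimates is the Kullback-Leibler divergence between the observed item frequencies and an expected distribution. The sketch below computes it exactly from one counting pass; it is only an illustration of the target quantity, not of AnKLe itself, which approximates it in sublinear space.

```python
from collections import Counter
from math import log

def empirical_kl(stream, expected):
    """Exact KL divergence D(observed || expected) from one pass of counting.
    Assumes `expected` assigns a nonzero probability to every observed item."""
    counts = Counter(stream)
    total = sum(counts.values())
    kl = 0.0
    for item, c in counts.items():
        p = c / total                 # observed relative frequency
        q = expected[item]            # expected probability of the item
        kl += p * log(p / q)
    return kl

# A uniform stream over {a, b} matches a uniform expectation: divergence 0.
print(empirical_kl("abab", {"a": 0.5, "b": 0.5}))  # 0.0
```

A stream biased by an adversary (e.g. "aaab" against the same uniform expectation) yields a strictly positive divergence, which is what makes the divergence usable as a tampering score.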


High Performance Computing and Communications | 2014

A Secure Two-Phase Data Deduplication Scheme

Pierre Meye; Philippe Raipin; Frédéric Tronel; Emmanuelle Anceaume

Data grows at the impressive rate of 50% per year, and 75% of the digital world is a copy. Although keeping multiple copies of data is necessary to guarantee their availability and long-term durability, in many situations the amount of redundancy is immoderate. By keeping a single copy of repeated data, data deduplication is considered one of the most promising solutions to reduce storage costs and improve user experience by saving network bandwidth and reducing backup time. However, this solution must address a number of security issues to be completely satisfactory. In this paper we target attacks from malicious clients that are based on the manipulation of data identifiers, as well as those based on backup time and network traffic observation. We present a deduplication scheme mixing intra-user and inter-user deduplication in order to build a storage system that is secure against the aforementioned types of attacks, by controlling the correspondence between files and their identifiers, and by making inter-user deduplication unnoticeable to clients through the use of deduplication proxies. Our method provides global storage space savings, per-client network bandwidth savings between clients and deduplication proxies, and global network bandwidth savings between deduplication proxies and the storage server. The evaluation of our solution compared to a classic system shows that the overhead introduced by our scheme is mostly due to data encryption, which is necessary to ensure confidentiality.
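
The two-phase idea can be sketched as follows. The class names and structure are illustrative assumptions, not the paper's actual scheme: a proxy performs intra-user deduplication and derives identifiers itself, while the shared store performs inter-user deduplication invisibly to clients.

```python
import hashlib

class DedupProxy:
    """Toy intra-user deduplication at a proxy, backed by inter-user
    deduplication at a shared server store (illustrative sketch only)."""

    def __init__(self, server_store):
        self.server = server_store    # shared, inter-user store: id -> data
        self.user_index = set()       # per-user index of already-uploaded ids

    def upload(self, data: bytes) -> str:
        # The proxy, not the client, derives the identifier, so a malicious
        # client cannot exploit a forged identifier for data it never owned.
        file_id = hashlib.sha256(data).hexdigest()
        if file_id in self.user_index:
            return file_id            # intra-user duplicate: nothing to send
        self.user_index.add(file_id)
        # The inter-user step happens proxy-side: the client cannot observe
        # whether the server already held the data.
        if file_id not in self.server:
            self.server[file_id] = data
        return file_id
```

Because the client always transfers data to the proxy on a first upload, it cannot infer from timing or traffic whether another user had already stored the same file, which is the observation-attack surface the paper targets.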


Dependable Systems and Networks | 2014

Anomaly Characterization in Large Scale Networks

Emmanuelle Anceaume; Yann Busnel; Erwan Le Merrer; Romaric Ludinard; Jean Louis Marchand; Bruno Sericola

The context of this work is the online characterization of errors in large-scale systems. In particular, we address the following question: given two successive configurations of the system, can we distinguish massive errors from isolated ones, the former impacting a large number of nodes while the latter affect only a small number of them, or even a single one? The rationale of this question is twofold. First, from a theoretical point of view, we characterize errors with respect to their neighbourhood, and we show that there are error scenarios for which isolated and massive errors are indistinguishable even from an omniscient observer's point of view. We then relax the definition of the problem by introducing unresolved configurations, and exhibit necessary and sufficient conditions that allow any node to determine the type of error it has been impacted by. These conditions depend only on the close neighbourhood of each node and are thus locally computable. We present algorithms that implement these conditions, and show their performance through extensive simulations. Second, from a practical point of view, distinguishing isolated errors from massive ones is of the utmost importance for network providers. For instance, for Internet service providers that operate millions of home gateways, it would be very useful to have procedures that allow gateways to determine on their own whether a dysfunction is caused by network-level errors or by their own hardware or software, and to notify the service provider only in the latter case.
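
The flavour of such a locally computable condition can be sketched as follows; the rule and threshold below are illustrative assumptions, not the paper's actual necessary and sufficient conditions.

```python
def classify_error(node_failed, neighbor_failures, threshold=0.5):
    """Toy local rule for a failed node: if a large fraction of its close
    neighbourhood failed at the same time, suspect a massive (network-level)
    error; otherwise treat the failure as isolated (local hardware/software).
    Inputs: a bool for the node itself, a list of bools for its neighbours."""
    if not node_failed:
        return "none"
    if not neighbor_failures:
        return "isolated"
    ratio = sum(neighbor_failures) / len(neighbor_failures)
    return "massive" if ratio >= threshold else "isolated"
```

The point of such a rule is that it reads only the node's immediate neighbourhood, so a home gateway could evaluate it without any global view of the network.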


Transactions on Large-Scale Data- and Knowledge-Centered Systems (TLDKS) | 2013

On the Power of the Adversary to Solve the Node Sampling Problem

Emmanuelle Anceaume; Yann Busnel; Sébastien Gambs

We study the problem of achieving uniform and fresh peer sampling in large-scale dynamic systems under adversarial behavior. Briefly, uniform and fresh peer sampling guarantees that any node in the system is equally likely to appear as a sample at any non-malicious node, and that infinitely often any node has a non-null probability of appearing as a sample of honest nodes. This sample is built locally out of a stream of node identifiers received at each node. An important issue that seriously hampers the feasibility of node sampling in open and large-scale systems is the unavoidable presence of malicious nodes. The objective of malicious nodes mainly consists in continuously and heavily biasing the input data stream out of which samples are obtained, to prevent (honest) nodes from being selected as samples. First, we demonstrate that restricting the number of requests that malicious nodes can issue and providing full knowledge of the composition of the system is a necessary and sufficient condition to guarantee uniform and fresh sampling. We also define and study two types of adversary models: an omniscient adversary that can eavesdrop on all the messages exchanged within the system, and a blind adversary that can only observe messages sent or received by the nodes it controls. The former model allows us to derive lower bounds on the impact that the adversary has on the sampling functionality, while the latter corresponds to a more realistic setting. Given any sampling strategy, we quantify the minimum effort exerted by both types of adversary on any input stream to prevent this strategy from outputting a uniform and fresh sample.
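
To make the sampling primitive concrete, the classic reservoir-sampling sketch below builds a uniform sample from a stream of identifiers in a single pass. It is shown only as a baseline: it is NOT robust to the adversarial stream bias studied in the paper, since an adversary flooding the stream with its own identifiers skews the sample accordingly.

```python
import random

def reservoir_sample(stream, k, rng=None):
    """Classic reservoir sampling: after the whole stream has been processed,
    every item occupies one of the k sample slots with equal probability."""
    rng = rng or random.Random()
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)
        else:
            j = rng.randrange(i + 1)   # item i survives with probability k/(i+1)
            if j < k:
                sample[j] = item
    return sample
```

Uniformity here holds over stream positions, not over nodes: this is exactly the gap a malicious node exploits by issuing many requests, and why the paper bounds the number of requests an adversary may issue.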


ACM Symposium on Parallel Algorithms and Architectures | 2002

Tracking immediate predecessors in distributed computations

Emmanuelle Anceaume; Jean-Michel Hélary; Michel Raynal

A distributed computation is usually modeled as a partially ordered set of relevant events (the relevant events are a subset of the primitive events produced by the computation). An important causality-related distributed computing problem, that we call the Immediate Predecessors Tracking (IPT) problem, consists in associating with each relevant event, on the fly and without using additional control messages, the set of relevant events that are its immediate predecessors in the partial order. So, IPT is the on-the-fly computation of the transitive reduction (i.e., Hasse diagram) of the causality relation defined by a distributed computation. This paper addresses the IPT problem: it presents a family of protocols that provides each relevant event with a timestamp that exactly identifies its immediate predecessors. The family is defined by a general condition that allows application messages to piggyback control information whose size can be smaller than n (the number of processes). In that sense, this family defines message size-efficient IPT protocols. According to the way the general condition is implemented, different IPT protocols can be obtained. Two of them are exhibited.
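
What the protocols track can be stated offline as the transitive reduction of the causality relation. The brute-force sketch below computes each event's immediate predecessors from a full view of the poset; the paper's contribution is obtaining the same information on the fly via timestamps, which this illustration does not attempt.

```python
def immediate_predecessors(events, causally_precedes):
    """Offline computation of each event's immediate predecessors, i.e. the
    transitive reduction (Hasse diagram) of a causality relation given as a
    predicate causally_precedes(x, y) meaning 'x happens before y'."""
    preds = {}
    for e in events:
        below = [x for x in events if causally_precedes(x, e)]
        # keep only predecessors not dominated by another predecessor of e
        preds[e] = {x for x in below
                    if not any(causally_precedes(x, y) for y in below if y != x)}
    return preds
```

For a chain a → b → c, the immediate predecessor of c is b alone, even though a also causally precedes c; that pruning of transitively implied edges is exactly what IPT timestamps encode.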


International Journal of Foundations of Computer Science | 2011

Dependability Evaluation of Cluster-Based Distributed Systems

Emmanuelle Anceaume; Francisco Vilar Brasileiro; Romaric Ludinard; Bruno Sericola; Frédéric Tronel



International Journal of Foundations of Computer Science | 2002

A Note on the Determination of the Immediate Predecessors in a Distributed Computation

Emmanuelle Anceaume; Jean-Michel Hélary; Michel Raynal

A distributed computation can be modeled as a partially ordered set (poset) of relevant events (the relevant events are the subset of the primitive events that are meaningful for an observer). This short note presents a general protocol that, when superimposed on a distributed computation, provides each relevant event with a timestamp that identifies exactly its immediate predecessors in the poset. This determination is done on the fly and without using additional control messages. So, the proposed protocol provides an on-the-fly computation of the Hasse diagram (transitive reduction) of the event graph produced by a distributed computation. The protocol is particularly simple and efficient. It is based on a general condition that allows application messages to piggyback control information whose size can be smaller than n (the number of processes). Interestingly, when one is not interested in tracking the immediate predecessors, the proposed protocol can be simplified to get an efficient vector clock protocol that does not require particular channel assumptions.


Dependable Systems and Networks | 2013

Uniform node sampling service robust against collusions of malicious nodes

Emmanuelle Anceaume; Yann Busnel; Bruno Sericola

Awerbuch and Scheideler have shown that peer-to-peer overlay networks can survive Byzantine attacks only if malicious nodes are not able to predict what the topology of the network will be for a given sequence of join and leave operations. In this paper we investigate adversarial strategies against overlays that follow specific protocols. Our analysis first demonstrates that an adversary can very quickly subvert overlays based on distributed hash tables simply by never triggering leave operations. We then show that when all nodes (honest and malicious ones) are subject to a limited lifetime, the system eventually reaches a stationary regime where the ratio of polluted clusters is bounded, independently of the initial amount of corruption in the system.


Network Computing and Applications | 2015

A Message-Passing and Adaptive Implementation of the Randomized Test-and-Set Object

Emmanuelle Anceaume; François Castella; Achour Mostefaoui; Bruno Sericola


Collaboration


Dive into Emmanuelle Anceaume's collaborations.

Top Co-Authors

Romaric Ludinard

Centre national de la recherche scientifique


Michel Raynal

Institut Universitaire de France
