Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Jérémie Decouchant is active.

Publication


Featured research published by Jérémie Decouchant.


Symposium on Reliable Distributed Systems | 2014

AcTinG: Accurate Freerider Tracking in Gossip

Sonia Ben Mokhtar; Jérémie Decouchant; Vivien Quéma

Gossip-based content dissemination protocols are a scalable and cheap alternative to centralized content sharing systems. However, it is well known that these protocols suffer from rational nodes, i.e., nodes that aim at downloading the content without contributing their fair share to the system. While the problem of rational nodes that act individually has been well addressed in the literature, the problem of colluding rational nodes remains open. Indeed, LiFTinG, the only existing gossip protocol addressing this issue, yields a high ratio of false positive accusations of correct nodes. In this paper, we propose AcTinG, a protocol that prevents rational collusions in gossip-based content dissemination protocols while guaranteeing zero false positive accusations. We assess the performance of AcTinG on a testbed comprising 400 nodes running on 100 physical machines, and compare its behaviour in the presence of colluders against two state-of-the-art protocols: BAR Gossip, the most robust protocol handling non-colluding rational nodes, and LiFTinG, the only existing gossip protocol that handles colluding nodes. The performance evaluation shows that AcTinG is able to deliver all messages despite the presence of colluders, whereas LiFTinG and BAR Gossip both suffer heavy message losses. Finally, using simulations involving up to a million nodes, we show that AcTinG exhibits scalability properties similar to those of standard gossip-based dissemination protocols.
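The accountability idea behind protocols of this kind — nodes keep logs of their exchanges, and audits cross-check the two sides of every exchange — can be sketched with a toy example. All names and the audit logic below are hypothetical illustrations, not AcTinG's actual protocol:

```python
# Toy sketch of log-based freerider detection in gossip (hypothetical,
# not AcTinG's actual mechanism): each node logs what it sent and
# received, and an audit cross-checks both sides of every exchange.

class Node:
    def __init__(self, name):
        self.name = name
        self.log = []            # (direction, peer, message_id) entries

    def send(self, peer, msg_id):
        self.log.append(("sent", peer.name, msg_id))
        peer.receive(self, msg_id)

    def receive(self, sender, msg_id):
        self.log.append(("recv", sender.name, msg_id))

def audit(a, b):
    """Message ids that a claims it sent to b but b does not acknowledge.
    A mismatch between the two logs exposes a node that under-reports,
    e.g. a freerider hiding content it received but never forwarded."""
    sent_by_a = {m for (d, p, m) in a.log if d == "sent" and p == b.name}
    acked_by_b = {m for (d, p, m) in b.log if d == "recv" and p == a.name}
    return sent_by_a - acked_by_b

alice, bob = Node("alice"), Node("bob")
alice.send(bob, 1)
alice.send(bob, 2)
bob.log = [e for e in bob.log if e[2] != 2]   # bob tampers with his log
print(audit(alice, bob))                      # → {2}
```

A real protocol additionally has to make the logs tamper-evident and the audits unpredictable, which is where the paper's contribution lies.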


Proceedings of the 1st Workshop on System Software for Trusted Execution | 2016

Avoiding Leakage and Synchronization Attacks through Enclave-Side Preemption Control

Marcus Völp; Adam Lackorzynski; Jérémie Decouchant; Vincent Rahli; Francisco Rocha; Paulo Esteves-Verissimo

Intel SGX is the latest processor architecture promising secure code execution despite large, complex and hence potentially vulnerable legacy operating systems (OSs). However, two recent works identified vulnerabilities that allow an untrusted management OS to extract secret information from Intel SGX enclaves, and to violate their integrity by exploiting concurrency bugs. In this work, we re-investigate delayed preemption (DP) in the context of Intel SGX. DP is a mechanism originally proposed for L4-family microkernels as a replacement for disabling interrupts. Recapitulating earlier results on language-based information-flow security, we illustrate the construction of leakage-free code for enclaves. However, as long as adversaries have fine-grained control over preemption timing, these solutions are impractical from a performance and complexity perspective. To overcome this, we resort to delayed preemption, and sketch a software implementation for hypervisors providing enclaves as well as a hardware extension for systems like SGX. Finally, we illustrate how static analyses for SGX may be extended to check confidentiality of preemption-delaying programs.
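The core of delayed preemption is simple to state: while a thread signals that it is in a critical region, preemption requests are queued instead of delivered, and fire only when the region is left. A minimal toy model, purely illustrative and not the paper's hypervisor or hardware design:

```python
# Toy model of delayed preemption (illustrative, not the paper's
# design): while the "delaying" flag is set, a preemption request is
# recorded as pending and only delivered when the critical region ends.

class DelayedPreemption:
    def __init__(self):
        self.delaying = False    # thread is inside a critical region
        self.pending = False     # a preemption request was deferred

    def enter_critical(self):
        self.delaying = True

    def leave_critical(self):
        self.delaying = False
        if self.pending:
            self.pending = False
            return "preempted"   # the deferred preemption fires now
        return "continue"

    def preemption_request(self):
        if not self.delaying:
            return "preempted"   # delivered immediately
        self.pending = True      # deferred until leave_critical()
        return "deferred"

cpu = DelayedPreemption()
cpu.enter_critical()
assert cpu.preemption_request() == "deferred"
assert cpu.leave_critical() == "preempted"
```

A real implementation also bounds how long preemption may be postponed, so that a malicious enclave cannot monopolize the CPU.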


Symposium on Operating Systems Principles | 2017

Enclave-Based Privacy-Preserving Alignment of Raw Genomic Information: Information Leakage and Countermeasures

Marcus Völp; Jérémie Decouchant; Christoph Lambert; Maria Fernandes; Paulo Esteves-Verissimo

Recent breakthroughs in genomic sequencing have led to an enormous increase in DNA sampling rates, which in turn favored the use of clouds to efficiently process huge amounts of genomic data. However, while enabling possible achievements in personalized medicine and related areas, cloud-based processing of genomic information also entails significant privacy risks, calling for increased protection. In this paper, we focus on the first, but also most data-intensive, processing step of the genomics information processing pipeline: the alignment of raw genomic data samples (called reads) to a synthetic human reference genome. Even though privacy-preserving alignment solutions (e.g., based on homomorphic encryption) have been proposed, their slow performance encourages alternatives based on trusted execution environments, such as Intel SGX, to speed up secure alignment. Such alternatives have to deal with data structures whose size by far exceeds secure enclave memory, requiring the alignment code to reach out into untrusted memory. We highlight how sensitive genomic information can be leaked when those enclave-external alignment data structures are accessed, and suggest countermeasures to prevent privacy breaches. The overhead of these countermeasures indicates that the competitiveness of privacy-preserving enclave-based alignment has yet to be precisely evaluated.
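One classic countermeasure against the access-pattern leakage described above is to make lookups into untrusted memory oblivious: the enclave touches every bucket of the external structure so an observer cannot tell which entry was actually needed. A sketch of that idea (not the paper's exact mechanism; the index contents are made up):

```python
# Sketch of an oblivious lookup into an enclave-external index
# (illustrative countermeasure, not the paper's exact design): every
# bucket is read on every query, so the memory-access trace observed
# by an untrusted OS is independent of the queried key.

def leaky_lookup(index, key):
    return index[key]            # access pattern reveals `key`

def oblivious_lookup(index, key):
    result = None
    for k in sorted(index):      # fixed, input-independent scan order
        v = index[k]             # every bucket is read every time
        if k == key:             # selection happens on private data
            result = v
    return result

# Hypothetical seed index: k-mer -> candidate reference positions.
index = {"ACGT": [10, 42], "TTGA": [7], "GGCC": [99]}
assert oblivious_lookup(index, "TTGA") == leaky_lookup(index, "TTGA")
```

In production code the selection branch would also need to be constant-time; the full scan alone already illustrates why such countermeasures are costly, which is the trade-off the paper quantifies.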


bioRxiv | 2018

Sensitivity Levels: Optimizing the Performance of Privacy Preserving DNA Alignment

Maria Fernandes; Jérémie Decouchant; Marcus Voelp; Francisco M. Couto; Paulo Esteves-Verissimo

The advent of high throughput next-generation sequencing (NGS) machines made DNA sequencing cheaper, but also put pressure on the genomic life-cycle, which includes aligning millions of short DNA sequences, called reads, to a reference genome. On the performance side, efficient algorithms have been developed and parallelized on public clouds. On the privacy side, since genomic data are utterly sensitive, several cryptographic mechanisms have been proposed to align reads securely, albeit with lower performance than the former, which in turn are not secure. This manuscript proposes a novel contribution to improving the privacy-performance product in current genomic studies. Building on recent works that argue that genomic data needs to be treated according to a threat-risk analysis, we introduce a multi-level sensitivity classification of genomic variations. Our classification prevents the amplification of possible privacy attacks, thanks to promotion and partitioning mechanisms among sensitivity levels. Thanks to this classification, reads can be aligned, stored, and later accessed using different security levels. We then extend a recent filter, which detects the reads that carry sensitive information, to classify reads into sensitivity levels. Finally, based on a review of the existing alignment methods, we show that adapting alignment algorithms to read sensitivity allows high performance gains whilst enforcing high privacy levels. Our results indicate that sensitivity levels make it feasible to optimize the performance of privacy-preserving alignment when one combines the advantages of private and public clouds.
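The multi-level idea can be pictured with a toy classifier: a read inherits the highest sensitivity level of any known sensitive position it overlaps, and that level then decides where it may be processed. Positions, levels, and the mapping below are invented for illustration, not the paper's actual classification:

```python
# Hypothetical illustration of multi-level sensitivity classification:
# a read's level is the maximum level of any known sensitive genomic
# position it covers (positions and levels are made up).

SENSITIVE_POSITIONS = {   # position on the reference -> sensitivity level
    1005: 3,              # e.g., variation linked to a rare disease
    1050: 1,              # common, low-risk variation
}

def classify_read(start, length):
    """Return the read's sensitivity level: 0 = non-sensitive; higher
    levels demand stronger protection (e.g., private-cloud alignment)."""
    covered = range(start, start + length)
    return max((lvl for pos, lvl in SENSITIVE_POSITIONS.items()
                if pos in covered), default=0)

assert classify_read(1000, 20) == 3   # overlaps position 1005
assert classify_read(1040, 20) == 1   # overlaps position 1050
assert classify_read(2000, 20) == 0   # non-sensitive: public cloud OK
```

The performance gain comes from the last case: the bulk of reads fall in low levels and can use fast public-cloud alignment, while only the few high-level reads pay for strong protection.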


Journal of Biomedical Informatics | 2018

Accurate filtering of privacy-sensitive information in raw genomic data

Jérémie Decouchant; Maria Fernandes; Marcus Völp; Francisco M. Couto; Paulo Esteves-Verissimo

Sequencing thousands of human genomes has enabled breakthroughs in many areas, among them precision medicine, the study of rare diseases, and forensics. However, mass collection of such sensitive data entails enormous risks if not protected to the highest standards. In this article, we take the position that post-alignment privacy is not enough and that data should be automatically protected as early as possible in the genomics workflow, ideally immediately after the data is produced. We show that a previous approach for filtering short reads cannot extend to long reads, and present a novel filtering approach that classifies raw genomic data (i.e., whose location and content are not yet determined) into privacy-sensitive (i.e., more affected by a successful privacy attack) and non-privacy-sensitive information. Such a classification allows the fine-grained and automated adjustment of protective measures to mitigate the possible consequences of exposure, in particular when relying on public clouds. We present the first filter that can be indistinctly applied to reads of any length, i.e., making it usable with any recent or future sequencing technologies. The filter is accurate, in the sense that it detects all known sensitive nucleotides except those located in highly variable regions (fewer than 10 nucleotides remain undetected per genome instead of 100,000 in previous works). It has far fewer false positives than previously known methods (10% instead of 60%) and can detect sensitive nucleotides despite sequencing errors (86% detected instead of 56%, with 2% mutations). Finally, practical experiments demonstrate high performance, both in terms of throughput and memory consumption.
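A length-independent filter can be pictured as a sliding window: scan every window of k nucleotides in a read and flag any window matching a dictionary of sensitive patterns, so the same code handles short and long reads alike. This is an assumed design for illustration (the article's filter is more elaborate, e.g. tolerating sequencing errors), with a toy k and invented patterns:

```python
# Sketch of a length-independent read filter (assumed design, not the
# article's exact algorithm): slide a k-nucleotide window over the read
# and flag windows matching a dictionary of sensitive k-mers, so reads
# of any length are handled by the same code.

K = 4                                 # toy k-mer size; real filters use larger k
SENSITIVE_KMERS = {"ACGT", "TTAG"}    # hypothetical sensitive patterns

def sensitive_offsets(read):
    """Offsets in `read` where a sensitive k-mer starts."""
    return [i for i in range(len(read) - K + 1)
            if read[i:i + K] in SENSITIVE_KMERS]

short_read = "GGACGTCC"
long_read = "GG" + "C" * 50 + "TTAG" + "A" * 50
assert sensitive_offsets(short_read) == [2]   # works on a short read
assert sensitive_offsets(long_read) == [52]   # and on a long one
```

Flagged reads would then be routed to stronger protection, while clean reads can take the fast path.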


Pacific Rim International Symposium on Dependable Computing | 2017

Meeting the Challenges of Critical and Extreme Dependability and Security

Paulo Esteves-Verissimo; Marcus Völp; Jérémie Decouchant; Vincent Rahli; Francisco Rocha

The world is becoming an immense critical information infrastructure (CII), with the fast and increasing entanglement of utilities, telecommunications, Internet, cloud, and the emerging IoT tissue. This may create enormous opportunities, but also brings about similarly extreme security and dependability risks. We predict an increase in very sophisticated targeted attacks, or advanced persistent threats (APTs), and claim that this calls for expanding the frontier of security and dependability methods and techniques used in our current CII. Extreme threats require extreme defenses: we propose resilience as a unifying paradigm to endow systems with the capability of dynamically and automatically handling extreme adversary power, and of sustaining perpetual and unattended operation. In this position paper, we present this vision and describe our methodology, as well as the assurance arguments we make for the ultra-resilient components and protocols it enables, illustrated with case studies in progress.


International Workshop on Security | 2017

Permanent Reencryption: How to Survive Generations of Cryptanalysts to Come.

Marcus Völp; Francisco Rocha; Jérémie Decouchant; Jiangshan Yu; Paulo Veríssimo

The protection of long-lived sensitive information puts enormous stress on traditional ciphers, which must survive generations of cryptanalysts. In addition, there is a continued risk of adversaries penetrating and attacking the systems in which these ciphers are implemented. In this paper, we present our work-in-progress on an approach to survive both cryptanalysis and intrusion attacks for extended periods of time. A prime objective of any similar work is to prevent the leakage of plaintexts. However, given the long lifespan of sensitive information, during which cryptanalysts could focus on breaking the cipher, it is equally important to prevent leakage of unduly high amounts of ciphertext. Our approach consists of an enclave-based architectural set-up that brings primary resilience against attacks, complemented by permanently reencrypting portions of the confidential or privacy-sensitive data with fresh keys and combining ciphers in a threshold-based encryption scheme.
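The reencryption cycle itself is mechanical: decrypt under the old key, encrypt under a fresh one, so ciphertext captured under an old key goes stale. A toy sketch of that cycle follows; the XOR keystream "cipher" is deliberately trivial and NOT secure, and the paper's design further involves enclaves and threshold encryption, which are omitted here:

```python
# Toy sketch of permanent reencryption (illustrative only -- this XOR
# keystream construction is NOT a secure cipher): each epoch, data is
# reencrypted under a fresh key, so previously captured ciphertext
# becomes stale.

import hashlib
import secrets

def keystream(key, n):
    """Derive n pseudo-random bytes from `key` (toy construction)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor(data, key):
    """Encrypt/decrypt by XOR with the key-derived stream (symmetric)."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def reencrypt(ciphertext, old_key, new_key):
    # Decrypt-then-encrypt; in the envisioned design this step would run
    # inside an enclave so plaintext never leaves protected memory.
    return xor(xor(ciphertext, old_key), new_key)

secret_data = b"long-lived record"
k1, k2 = secrets.token_bytes(32), secrets.token_bytes(32)
c1 = xor(secret_data, k1)
c2 = reencrypt(c1, k1, k2)
assert xor(c2, k2) == secret_data   # still decryptable with the fresh key
assert c1 != c2                     # the old ciphertext is now stale
```

The point of the scheme is that an adversary who slowly accumulates ciphertext under one key loses that material at every key rotation.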


11th International Conference on Practical Applications of Computational Biology & Bioinformatics, ISBN 978-3-319-60815-0, pp. 220-227 | 2017

Cloud-Assisted Read Alignment and Privacy

Maria Fernandes; Jérémie Decouchant; Francisco M. Couto; Paulo Veríssimo

Thanks to rapid advances in sequencing technologies, genomic data is now being produced at an unprecedented rate. To adapt to this growth, several algorithms and paradigm shifts have been proposed to increase the throughput of the classical DNA workflow, e.g. by relying on the cloud to perform CPU-intensive operations. However, the scientific community has raised alarms about the privacy-related attacks that can be mounted on genomic data. In this paper we review the state of the art in cloud-based alignment algorithms that have been developed for performance. We then present several privacy-preserving mechanisms that have been, or could be, used to align reads at an incremental performance cost. We finally argue for the use of risk analysis throughout the DNA workflow, to strike a balance between performance and protection of data.


USENIX Annual Technical Conference | 2014

Large pages may be harmful on NUMA systems

Fabien Gaud; Baptiste Lepers; Jérémie Decouchant; Justin R. Funston; Alexandra Fedorova; Vivien Quéma


International Conference on Distributed Computing Systems | 2016

PAG: Private and Accountable Gossip

Jérémie Decouchant; Sonia Ben Mokhtar; Albin Petit; Vivien Quéma

Collaboration


Dive into Jérémie Decouchant's collaborations.

Top Co-Authors


Marcus Völp

University of Luxembourg


Vivien Quéma

Centre national de la recherche scientifique


David Kozhaya

University of Luxembourg


Jiangshan Yu

University of Luxembourg


Vincent Rahli

University of Luxembourg
