Claudia Diaz
Katholieke Universiteit Leuven
Publications
Featured research published by Claudia Diaz.
Privacy Enhancing Technologies | 2002
Claudia Diaz; Stefaan Seys; Joris Claessens; Bart Preneel
This paper introduces an information-theoretic model for quantifying the degree of anonymity provided by schemes for anonymous connections. It considers attackers that obtain probabilistic information about users. The degree is based on the probabilities that an attacker, after observing the system, assigns to the different users as possible originators of a message. As a proof of concept, the model is applied to several existing systems. The model proves useful for evaluating the level of privacy a system provides under various attack scenarios, for measuring the amount of information an attacker gains with a particular attack, and for comparing different systems with one another.
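The metric can be illustrated with a short sketch: the attacker's probability distribution over possible senders is summarized by its Shannon entropy and normalized by the entropy of a uniform distribution over all N users. The normalization and edge cases below are illustrative; see the paper for the exact definition.

```python
import math

def degree_of_anonymity(probabilities):
    """Entropy-based degree of anonymity for an attacker's probability
    assignment over N possible senders of a message.

    Returns a value in [0, 1]: 0 when one user is identified with
    certainty, 1 when all users are equally likely.
    """
    n = len(probabilities)
    if n <= 1:
        return 0.0
    # Shannon entropy of the attacker's distribution (in bits).
    entropy = -sum(p * math.log2(p) for p in probabilities if p > 0)
    # Maximum entropy: the uniform distribution over the N users.
    max_entropy = math.log2(n)
    return entropy / max_entropy

# Example: an attacker narrows 10 users down to 3 likely senders.
print(degree_of_anonymity([0.5, 0.3, 0.2] + [0.0] * 7))  # well below 1
print(degree_of_anonymity([0.1] * 10))                    # exactly 1.0
```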
Computer and Communications Security | 2013
Gunes Acar; Marc Juarez; Nick Nikiforakis; Claudia Diaz; Seda F. Gürses; Frank Piessens; Bart Preneel
In the modern web, the browser has emerged as the vehicle of choice that users trust, customize, and use to access a wealth of information and online services. However, recent studies show that the browser can also be used to invisibly fingerprint the user: a practice that may have serious privacy and security implications. In this paper, we report on the design, implementation and deployment of FPDetective, a framework for the detection and analysis of web-based fingerprinters. Instead of relying on information about known fingerprinters or third-party-tracking blacklists, FPDetective focuses on the detection of the fingerprinting itself. By applying our framework with a focus on font detection practices, we were able to conduct a large-scale analysis of the million most popular websites on the Internet, and discovered that the adoption of fingerprinting is much higher than previous studies had estimated. Moreover, we analyze two countermeasures that have been proposed to defend against fingerprinting and find weaknesses in them that might be exploited to bypass their protection. Finally, based on our findings, we discuss the current understanding of fingerprinting and how it relates to Personally Identifiable Information, showing that there needs to be a change in the way users, companies and legislators engage with fingerprinting.
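FPDetective instruments the browser itself; the toy heuristic below only illustrates the underlying idea of flagging scripts that enumerate unusually many fonts. The log format and threshold are hypothetical, not FPDetective's actual interface.

```python
from collections import Counter

# Hypothetical threshold: scripts probing more distinct fonts than this
# are flagged as likely font-enumeration fingerprinters.
FONT_PROBE_THRESHOLD = 30

def flag_fingerprinters(font_probe_log):
    """Return script URLs that probe an unusually large set of fonts.

    `font_probe_log` is an iterable of (script_url, font_name) pairs,
    as might be produced by an instrumented browser.
    """
    fonts_per_script = Counter()
    seen = set()
    for script_url, font_name in font_probe_log:
        if (script_url, font_name) not in seen:
            seen.add((script_url, font_name))
            fonts_per_script[script_url] += 1
    return [url for url, count in fonts_per_script.items()
            if count > FONT_PROBE_THRESHOLD]

# Example with a synthetic log: one script probes 40 distinct fonts.
log = [("https://tracker.example/fp.js", f"font-{i}") for i in range(40)]
print(flag_fingerprinters(log))  # ['https://tracker.example/fp.js']
```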
Working Conference on Privacy and Anonymity in Networked and Distributed Systems | 2004
Claudia Diaz; Bart Preneel
This paper presents an analysis of mixes and dummy traffic policies, which are building blocks of anonymous services. The goal of the paper is to bring together all the issues related to the analysis and design of mix networks. We discuss continuous and pool mixes, topologies for mix networks and dummy traffic policies. We point out the advantages and disadvantages of design decisions for mixes and dummy policies. Finally, we provide a list of research problems that need further work.
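As a rough illustration of one of the building blocks discussed, the sketch below simulates a simple threshold pool mix that flushes a random subset of buffered messages once enough have accumulated. The parameters and flushing rule are illustrative, not a specific deployed design.

```python
import random

class ThresholdPoolMix:
    """Toy pool mix: flush once `threshold` messages have accumulated on
    top of the pool, always retaining `pool_size` randomly chosen
    messages for the next round."""

    def __init__(self, threshold=10, pool_size=5):
        self.threshold = threshold
        self.pool_size = pool_size
        self.pool = []

    def receive(self, message):
        """Buffer a message; return the flushed batch if a flush fires."""
        self.pool.append(message)
        if len(self.pool) >= self.threshold + self.pool_size:
            return self.flush()
        return []

    def flush(self):
        """Send all but `pool_size` randomly retained messages."""
        random.shuffle(self.pool)
        out, self.pool = self.pool[self.pool_size:], self.pool[:self.pool_size]
        return out

mix = ThresholdPoolMix()
for i in range(20):
    sent = mix.receive(f"msg-{i}")
    if sent:
        print("flushed:", sent)
```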
Workshop on Privacy in the Electronic Society | 2008
Benedikt Gierlichs; Carmela Troncoso; Claudia Diaz; Bart Preneel; Ingrid Verbauwhede
Recently, Edman et al. proposed the system's anonymity level [10], a combinatorial approach to measure the amount of additional information needed to reveal the communication pattern in a mix-based anonymous communication system as a whole. The metric is based on the number of possible bijective mappings between the inputs and the outputs of the mix. In this work we show that Edman et al.'s approach fails to capture the anonymity loss caused by subjects sending or receiving more than one message. We generalize the system's anonymity level from scenarios where user relations can be modeled as yes/no relations to cases where subjects send and receive an arbitrary number of messages. Further, we describe an algorithm to compute the redefined metric.
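The original combinatorial metric can be sketched as follows: count the bijective input/output mappings consistent with a 0/1 compatibility matrix and normalize by the n! mappings of a perfect mix. The brute-force version below is only meant to illustrate the quantity being generalized; Edman et al. compute it via the permanent of the matrix, and the algorithm described in this paper extends it to subjects with multiple messages.

```python
import math
from itertools import permutations

def system_anonymity_level(A):
    """Combinatorial anonymity level in the spirit of Edman et al.

    A[i][j] is 1 if input message i could correspond to output message j.
    Counts the consistent bijective mappings and normalizes by log(n!).
    Brute force, suitable only for tiny examples.
    """
    n = len(A)
    consistent = sum(
        1 for perm in permutations(range(n))
        if all(A[i][perm[i]] for i in range(n))
    )
    if n <= 1 or consistent == 0:
        return 0.0
    return math.log(consistent) / math.log(math.factorial(n))

# Perfect 3-message mix: every input could be every output.
print(system_anonymity_level([[1, 1, 1], [1, 1, 1], [1, 1, 1]]))  # 1.0
# One input fully linked to one output.
print(system_anonymity_level([[1, 0, 0], [0, 1, 1], [0, 1, 1]]))  # ~0.39
```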
European Symposium on Research in Computer Security | 2004
Claudia Diaz; Len Sassaman; Evelyne Dewitte
We evaluate the anonymity provided by two popular email mix implementations, Mixmaster and Reliable, and compare their effectiveness through the use of simulations which model the algorithms used by these mixing applications. Our simulations are based on actual traffic data obtained from a public anonymous remailer (mix node). We determine that assumptions made in previous literature about the distribution of mix input traffic are incorrect: in particular, the input traffic does not follow a Poisson distribution. We establish for the first time that a lower bound exists on the anonymity of Mixmaster, and discover that under certain circumstances the algorithm used by Reliable provides no anonymity. We find that the upper bound on anonymity provided by Mixmaster is slightly higher than that provided by Reliable.
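A simple diagnostic in the spirit of the Poisson finding: for Poisson arrivals, the variance of message counts per time bin is close to their mean, so an index of dispersion far above 1 points to bursty, non-Poisson traffic. This is only an illustrative check, not the statistical analysis performed in the paper.

```python
import random
import statistics

def dispersion_index(arrival_times, bin_width=3600.0):
    """Index of dispersion (variance/mean) of message counts per time bin.

    For a Poisson arrival process the ratio is close to 1; values well
    above 1 indicate bursty traffic.
    """
    start, end = min(arrival_times), max(arrival_times)
    n_bins = int((end - start) // bin_width) + 1
    counts = [0] * n_bins
    for t in arrival_times:
        counts[int((t - start) // bin_width)] += 1
    return statistics.variance(counts) / statistics.mean(counts)

# Synthetic timestamps spread over one day: hourly counts behave roughly
# like a Poisson process, so the index is close to 1; bursty remailer
# traffic would give a noticeably larger value.
timestamps = sorted(random.uniform(0, 86400) for _ in range(1000))
print(dispersion_index(timestamps))
```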
Computer and Communications Security | 2014
Marc Juarez; Sadia Afroz; Gunes Acar; Claudia Diaz; Rachel Greenstadt
Recent studies on Website Fingerprinting (WF) claim to have found highly effective attacks on Tor. However, these studies make assumptions about user settings, adversary capabilities, and the nature of the Web that do not necessarily hold in practical scenarios. This study critically evaluates these assumptions by conducting the attack where they do not hold. We show that certain variables, for example users' browsing habits and differences in location and version of the Tor Browser Bundle, that are usually omitted from the current WF model have a significant impact on the efficacy of the attack. We also empirically show how prior work succumbs to the base rate fallacy in the open-world scenario. We address this problem by augmenting our classification method with a verification step. We conclude that even though this approach reduces the number of false positives by over 63%, it does not completely solve the problem, which remains an open issue for WF attacks.
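The base rate fallacy mentioned above can be made concrete with a short calculation: when monitored pages are only a small fraction of the pages a user actually visits, even a classifier with a high true-positive rate and a low false-positive rate produces mostly false alarms. The numbers below are illustrative, not the paper's measurements.

```python
def precision(tpr, fpr, base_rate):
    """Probability that a positive classification is correct, given the
    true-positive rate, the false-positive rate, and the fraction of
    traffic that actually belongs to monitored pages (the base rate)."""
    tp = tpr * base_rate
    fp = fpr * (1 - base_rate)
    return tp / (tp + fp)

# A seemingly accurate classifier is mostly wrong once monitored pages
# are rare among the pages a Tor user actually visits.
print(precision(tpr=0.90, fpr=0.01, base_rate=0.5))    # ~0.99
print(precision(tpr=0.90, fpr=0.01, base_rate=0.001))  # ~0.08
```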
IEEE Symposium on Security and Privacy | 2013
Seda Gürses; Claudia Diaz
Privacy is one of the friction points that emerge when communications are mediated in online social networks (OSNs). Different communities of computer science researchers have framed the OSN privacy problem as one of surveillance, institutional privacy, or social privacy. In tackling these problems, researchers have also treated them as if they were independent. In this article, the authors argue that the different privacy problems are entangled and that OSN privacy research would benefit from a more holistic approach.
Internet Research | 2003
Joris Claessens; Claudia Diaz; Caroline Goemans; Bart Preneel; Joos Vandewalle; Jos Dumortier
With the worldwide growth of open telecommunication networks and in particular the Internet, the privacy and security concerns of people using these networks have increased. On the one hand, users are concerned about their privacy, and desire to anonymously access the network. On the other hand, some organizations are concerned about how this anonymous access might be abused. This paper intends to bridge these conflicting interests, and proposes a solution for revocable anonymous access to the Internet. Moreover, the paper presents some legal background and motivation for such a solution. However, the paper also indicates some difficulties and disadvantages of the proposed solution, and suggests the need for further debate on the issue of online anonymity.
International Journal of Information Security | 2013
David Rebollo-Monedero; Javier Parra-Arnau; Claudia Diaz; Jordi Forné
A wide variety of privacy metrics have been proposed in the literature to evaluate the level of protection offered by privacy-enhancing technologies. Most of these metrics are specific to concrete systems and adversarial models and are difficult to generalize or translate to other contexts. Furthermore, a better understanding of the relationships between the different privacy metrics is needed to enable a more grounded and systematic approach to measuring privacy, as well as to assist system designers in selecting the most appropriate metric for a given application. In this work, we propose a theoretical framework for privacy-preserving systems, endowed with a general definition of privacy in terms of the estimation error incurred by an attacker who aims to disclose the private information that the system is designed to conceal. We show that our framework permits interpreting and comparing a number of well-known metrics under a common perspective. The arguments behind these interpretations are based on fundamental results related to the theories of information, probability, and Bayes decision.
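As a toy instance of the estimation-error view, the sketch below measures privacy as the minimum expected error of a Bayes-optimal attacker under a 0/1 loss. The framework in the paper is far more general (arbitrary loss functions and estimators), so this is only an illustration with a hypothetical joint distribution.

```python
from collections import defaultdict

def attacker_error(joint):
    """Privacy as the attacker's minimum expected error under 0/1 loss.

    `joint[(x, y)]` is the joint probability of private value x and
    observation y. For each y, a Bayes-optimal attacker guesses the most
    likely x; the remaining probability mass is the estimation error.
    """
    by_observation = defaultdict(dict)
    for (x, y), p in joint.items():
        by_observation[y][x] = by_observation[y].get(x, 0.0) + p
    # For each y the attacker is wrong with probability P(y) - max_x P(x, y).
    return sum(sum(px.values()) - max(px.values())
               for px in by_observation.values())

# Example: a noisy report y of a private bit x.
joint = {(0, 0): 0.35, (0, 1): 0.15, (1, 0): 0.10, (1, 1): 0.40}
print(attacker_error(joint))  # 0.25: the attacker misidentifies x 25% of the time
```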
Security, Privacy, and Trust in Modern Data Management | 2007
Claudia Diaz; Bart Preneel
In this chapter we motivate the need for anonymity at the communication layer and describe the potential risks of having traceable communications. We then introduce the legal requirements on data retention and motivate the need for revocability of anonymity upon the request of law enforcement.