Reza Shokri
École Polytechnique Fédérale de Lausanne
Publications
Featured research published by Reza Shokri.
Privacy Enhancing Technologies | 2009
Julien Freudiger; Reza Shokri; Jean-Pierre Hubaux
In mobile wireless networks, third parties can track the location of mobile nodes by monitoring the pseudonyms used for identification. A frequently proposed solution to protect the location privacy of mobile nodes suggests changing pseudonyms in regions called mix zones. In this paper, we propose a novel metric based on the mobility profiles of mobile nodes in order to evaluate the mixing effectiveness of possible mix zone locations. Then, as the location privacy achieved with mix zones depends on their placement in the network, we analyze the optimal placement of mix zones with combinatorial optimization techniques. The proposed algorithm maximizes the achieved location privacy in the system and takes into account the cost induced by mix zones to mobile nodes. By means of simulations, we show that the placement recommended by our algorithm significantly reduces the tracking success of the adversary.
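A minimal sketch of the budgeted-placement flavor of this problem (the paper's exact metric and combinatorial algorithm are not reproduced here; all location names, mixing scores, and costs below are invented):

```python
# Hypothetical sketch: pick mix zone locations under a cost budget.
# Scores and costs are illustrative placeholders, not the paper's metric.

def place_mix_zones(candidates, budget):
    """Greedily select mix zone locations maximizing total mixing score.

    candidates: list of (location_id, mixing_score, cost) tuples.
    budget: total cost the mobile nodes can tolerate.
    """
    # Rank by score per unit cost, a standard greedy heuristic for
    # budgeted maximization (the paper solves the placement exactly).
    ranked = sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)
    chosen, spent = [], 0.0
    for loc, score, cost in ranked:
        if spent + cost <= budget:
            chosen.append(loc)
            spent += cost
    return chosen

candidates = [("intersection_A", 0.9, 2.0),
              ("park_B", 0.6, 1.0),
              ("mall_C", 0.8, 3.0)]
print(place_mix_zones(candidates, budget=3.0))  # ['park_B', 'intersection_A']
```

The greedy ratio heuristic only illustrates the score-versus-cost tradeoff that the paper's exact optimization navigates.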
Financial Cryptography | 2011
Julien Freudiger; Reza Shokri; Jean-Pierre Hubaux
In modern mobile networks, users increasingly share their location with third parties in return for location-based services. Previous work shows that operators of location-based services may identify users based on the shared location information even if users make use of pseudonyms. In this paper, we push the understanding of the privacy risk further. We evaluate the ability of location-based services to identify users and their points of interest based on different sets of location information. We consider real-life scenarios of users sharing location information with location-based services and quantify the privacy risk by experimenting with real-world mobility traces.
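As an illustration of the underlying re-identification idea, one could score each known mobility profile by the likelihood of the observed locations and pick the best match. The profiles, the smoothing, and all values below are toy assumptions, not the paper's method:

```python
# Hypothetical sketch: match an observed set of locations against
# historical mobility profiles via a smoothed likelihood score.
from collections import Counter

def identify(observed_locations, profiles):
    """profiles: dict user -> Counter of historical visit frequencies."""
    def score(profile):
        total = sum(profile.values())
        # Likelihood of the observations under the profile, with
        # add-one smoothing for places the profile has never seen.
        s = 1.0
        for loc in observed_locations:
            s *= (profile.get(loc, 0) + 1) / (total + len(profile))
        return s
    return max(profiles, key=lambda u: score(profiles[u]))

profiles = {
    "alice": Counter({"home_A": 50, "office_X": 40, "gym": 10}),
    "bob":   Counter({"home_B": 60, "office_X": 30, "bar": 10}),
}
print(identify(["office_X", "gym"], profiles))  # alice
```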
Wireless Network Security | 2009
Reza Shokri; Marcin Poturalski; Gael Ravot; Panos Papadimitratos; Jean-Pierre Hubaux
Wireless networking relies on a fundamental building block, neighbor discovery (ND). The nature of wireless communications, however, makes attacks against ND easy: an adversary can simply replay or relay (wormhole) packets across the network and mislead disconnected nodes into believing that they communicate directly. Such attacks can compromise the overlying protocols and applications. Proposed methods in the literature seek to secure ND, allowing nodes to verify that they are neighbors. However, they either rely on specialized hardware or infrastructure, or offer limited security. In this paper, we address these problems by designing a practical and secure neighbor verification protocol for constrained wireless sensor networks (WSNs). Our scheme relies on estimated distances between nodes and simple geometric tests, and it is fully distributed. We prove our protocol is secure against the classic 2-end wormhole attack. Moreover, we provide a proof-of-concept implementation with off-the-shelf WSN equipment: Cricket motes.
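A toy example of a "simple geometric test" in this spirit (not the paper's exact verification procedure): direct ranging measurements among three nodes must satisfy the triangle inequality, whereas a wormhole that relays packets inflates at least one measured distance:

```python
# Hypothetical geometric consistency check on ranging measurements.
# The tolerance models ranging error; all distances are invented.

def triangle_consistent(d_ab, d_bc, d_ac, tolerance=1.0):
    """Return True if three measured distances could come from three
    nodes with direct links (within the ranging-error tolerance)."""
    return (d_ab <= d_bc + d_ac + tolerance and
            d_bc <= d_ab + d_ac + tolerance and
            d_ac <= d_ab + d_bc + tolerance)

# Direct links: consistent.
print(triangle_consistent(3.0, 4.0, 5.0))    # True
# A relayed (wormholed) link reports an inflated distance: inconsistent.
print(triangle_consistent(3.0, 4.0, 20.0))   # False
```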
Mobile Ad Hoc and Sensor Systems | 2011
Reza Shokri; Panagiotis Papadimitratos; Georgios Theodorakopoulos; Jean-Pierre Hubaux
Location-aware smartphones support various location-based services (LBSs): users query the LBS server and learn on the fly about their surroundings. However, such queries give away private information, enabling the LBS to identify and track users. We address this problem by proposing the first, to the best of our knowledge, user-collaborative privacy-preserving approach for LBSs. Our solution, MobiCrowd, is simple to implement, does not require changing the LBS server architecture, and does not assume third-party privacy-protection servers; still, MobiCrowd significantly improves user location privacy. The gain stems from the collaboration of MobiCrowd-ready mobile devices: they keep their context information in a buffer until it expires, and they pass it to other users seeking such information. Essentially, the LBS does not need to be contacted unless all the collaborative peers in the vicinity lack the sought information. Hence, the user can remain hidden from the server unless she absolutely needs to expose herself through a query. Our results show that MobiCrowd hides a high fraction of location-based queries, thus significantly enhancing user location privacy. To study the effects of various parameters, such as the collaboration level and contact rate between mobile users, we develop an epidemic model. Our simulations with real mobility datasets corroborate our model-based findings. Finally, our implementation of MobiCrowd on Nokia platforms indicates that it is lightweight and that the collaboration cost is negligible.
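The lookup order described above, sketched as hypothetical code (the class, method names, and TTL value are invented for illustration):

```python
# Hypothetical sketch of the collaborative lookup: local buffer first,
# then nearby peers, and the LBS server only as a last resort.
import time

class MobiCrowdNode:
    def __init__(self, ttl_seconds=600):
        self.buffer = {}          # region -> (info, expiry_time)
        self.ttl = ttl_seconds

    def _fresh(self, region):
        entry = self.buffer.get(region)
        return entry[0] if entry and entry[1] > time.time() else None

    def store(self, region, info):
        self.buffer[region] = (info, time.time() + self.ttl)

    def query(self, region, peers, lbs_lookup):
        # 1. Local buffer: no exposure at all.
        info = self._fresh(region)
        if info is not None:
            return info, "buffer"
        # 2. Nearby collaborating peers: still hidden from the server.
        for peer in peers:
            info = peer._fresh(region)
            if info is not None:
                self.store(region, info)
                return info, "peer"
        # 3. Fall back to the LBS server: the only privacy-exposing step.
        info = lbs_lookup(region)
        self.store(region, info)
        return info, "server"

alice, bob = MobiCrowdNode(), MobiCrowdNode()
bob.store("old_town", "3 cafes, 1 pharmacy")
print(alice.query("old_town", [bob], lambda r: "server data"))  # from peer
```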
Privacy Enhancing Technologies | 2015
Reza Shokri
Consider users who share their data (e.g., location) with an untrusted service provider to obtain a personalized (e.g., location-based) service. Data obfuscation is a prevalent user-centric approach to protecting users' privacy in such systems: the untrusted entity only receives a noisy version of a user's data. Perturbing data before sharing it, however, comes at the price of the user's utility (service quality), which is an inseparable design factor of obfuscation mechanisms. The entanglement of the utility loss and the privacy guarantee, in addition to the lack of a comprehensive notion of privacy, has led to the design of obfuscation mechanisms that are suboptimal in terms of their utility loss, ignore the user's information leakage in the past, or are limited to very specific notions of privacy that, e.g., do not protect against adaptive inference attacks or adversaries with arbitrary background knowledge. In this paper, we design user-centric obfuscation mechanisms that impose the minimum utility loss for guaranteeing the user's privacy. We optimize utility subject to a joint guarantee of differential privacy (indistinguishability) and distortion privacy (inference error). This double shield of protection limits the information leakage through the obfuscation mechanism as well as the posterior inference. We show that the privacy achieved through joint differential-distortion mechanisms against optimal attacks is as large as the maximum privacy that can be achieved by either of these mechanisms separately. Their utility cost is also no larger than what either the differential or the distortion mechanism imposes. We model the optimization problem as a leader-follower game between the designer of the obfuscation mechanism and the potential adversary, and we design adaptive mechanisms that anticipate and protect against optimal inference algorithms. Thus, the obfuscation mechanism is optimal against any inference algorithm.
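The differential-privacy side of such an optimization can be written as a small linear program: minimize expected utility loss over obfuscation channels p(o|s) subject to indistinguishability constraints. The sketch below uses a toy prior and loss matrix and, for brevity, omits the paper's distortion-privacy (inference error) constraint:

```python
# Minimal sketch of the LP flavor: optimal obfuscation channel under
# epsilon-differential-privacy constraints. Prior, loss, and epsilon
# are toy values; the paper's joint formulation is richer.
import numpy as np
from scipy.optimize import linprog

secrets, outputs = 2, 2
prior = np.array([0.5, 0.5])                 # pi(s)
loss = np.array([[0.0, 1.0], [1.0, 0.0]])    # d(s, o): utility loss
eps = np.log(2.0)                            # privacy budget

# Variables x[s * outputs + o] = p(o | s).
c = (prior[:, None] * loss).ravel()          # expected-loss objective

A_ub, b_ub = [], []
for o in range(outputs):
    for s1 in range(secrets):
        for s2 in range(secrets):
            if s1 == s2:
                continue
            # Constraint: p(o | s1) <= exp(eps) * p(o | s2).
            row = np.zeros(secrets * outputs)
            row[s1 * outputs + o] = 1.0
            row[s2 * outputs + o] = -np.exp(eps)
            A_ub.append(row)
            b_ub.append(0.0)

A_eq = np.zeros((secrets, secrets * outputs))
for s in range(secrets):
    A_eq[s, s * outputs:(s + 1) * outputs] = 1.0   # each row sums to 1
b_eq = np.ones(secrets)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * (secrets * outputs))
print(res.x.reshape(secrets, outputs))       # optimal channel p(o | s)
```

On this symmetric toy instance the solver recovers randomized response with ratio e^eps = 2, i.e., the channel [[2/3, 1/3], [1/3, 2/3]].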
Wireless Network Security | 2010
Maxim Raya; Reza Shokri; Jean-Pierre Hubaux
As privacy moves to the center of attention in networked systems, and trust remains a necessity, an important question arises: how do we reconcile the two seemingly contradicting requirements? In this paper, we show that the notion of data-centric trust can considerably alleviate the tension, although at the cost of pooling contributions from several entities. Hence, assuming an environment of privacy-preserving entities, we provide and analyze a game-theoretic model of the trust-privacy tradeoff. The results prove that the use of incentives allows for building trust while keeping the privacy loss minimal. To illustrate our analysis, we describe how the trust-privacy tradeoff can be optimized for the revocation of misbehaving nodes in an ad hoc network.
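A toy illustration of the incentive idea (the payoff numbers, cost model, and trust function are all invented; the paper's game is richer):

```python
# Hypothetical sketch: nodes contribute a report iff the incentive
# outweighs their privacy cost; data-centric trust in a statement
# grows with the number of independent contributions.

def best_response(privacy_cost, incentive):
    """A node contributes iff the incentive covers its privacy loss."""
    return incentive >= privacy_cost

def data_centric_trust(contributions, weight=0.3):
    """Toy trust function: grows with the number of reports."""
    return 1.0 - (1.0 - weight) ** contributions

privacy_costs = [0.2, 0.5, 0.9, 0.3]   # heterogeneous nodes
for incentive in (0.1, 0.4, 1.0):
    n = sum(best_response(cost, incentive) for cost in privacy_costs)
    print(f"incentive={incentive}: {n} contributors, "
          f"trust={data_centric_trust(n):.2f}")
```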
Workshop on Privacy in the Electronic Society | 2014
George Theodorakopoulos; Reza Shokri; Carmela Troncoso; Jean-Pierre Hubaux; Jean-Yves Le Boudec
Human mobility is highly predictable. Individuals tend to visit only a few locations with high frequency, and to move among them in a certain sequence reflecting their habits and daily routine. This predictability has to be taken into account in the design of location-privacy-preserving mechanisms (LPPMs) in order to effectively protect users when they expose their whereabouts to location-based services (LBSs) continuously. In this paper, we describe a method for creating LPPMs tailored to a user's mobility profile, taking into account her privacy and quality-of-service requirements. By construction, our LPPMs take into account the sequential correlation across the user's exposed locations, providing the maximum possible trajectory privacy, i.e., privacy for the user's past and present locations, and for her expected future locations. Moreover, our LPPMs are optimal against a strategic adversary, i.e., an attacker that implements the strongest inference attack knowing both the LPPM operation and the user's mobility profile. The optimality of the LPPMs in the context of trajectory privacy is a novel contribution, and it is achieved by formulating the LPPM design problem as a Bayesian Stackelberg game between the user and the adversary. An additional benefit of our formal approach is that the design parameters of the LPPM are chosen by the optimization algorithm.
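To see why sequential correlation matters, here is a hypothetical sketch of the strategic adversary's side: given the user's Markov mobility profile and the LPPM's obfuscation channel, a Viterbi-style dynamic program recovers the most likely true trajectory from the exposed one (all probabilities are toy values):

```python
# Hypothetical sketch: optimal trajectory inference against a memoryless
# obfuscation channel, exploiting the Markov structure of mobility.
import numpy as np

transition = np.array([[0.8, 0.2],     # P(next | current), 2 locations
                       [0.3, 0.7]])
initial = np.array([0.6, 0.4])
lppm = np.array([[0.7, 0.3],           # P(exposed | true): the channel
                 [0.3, 0.7]])

def viterbi(exposed):
    """Most likely true location sequence given exposed locations."""
    logp = np.log(initial) + np.log(lppm[:, exposed[0]])
    back = []
    for obs in exposed[1:]:
        # scores[i, j]: best log-probability of reaching j from i.
        scores = logp[:, None] + np.log(transition)
        back.append(scores.argmax(axis=0))
        logp = scores.max(axis=0) + np.log(lppm[:, obs])
    path = [int(logp.argmax())]
    for ptr in reversed(back):
        path.append(int(ptr[path[-1]]))
    return path[::-1]

print(viterbi([0, 0, 1, 1]))   # inferred true trajectory
```

An LPPM that ignores this sequential structure leaves exactly the correlations that such an attack exploits, which is what the Stackelberg formulation guards against.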
IEEE Transactions on Mobile Computing | 2017
Alexandra-Mihaela Olteanu; Kévin Huguenin; Reza Shokri; Mathias Humbert; Jean-Pierre Hubaux
Co-location information about users is increasingly available online. For instance, mobile users more and more frequently report their co-locations with other users in the messages and pictures they post on social networking websites, by tagging the names of the friends they are with. The users' IP addresses also constitute a source of co-location information. Combined with (possibly obfuscated) location information, such co-locations can be used to improve the inference of the users' locations, thus further threatening their location privacy: when co-location information is taken into account, not only can a user's reported locations and mobility patterns be used to localize her, but so can those of her friends (and of the friends of their friends, and so on). In this paper, we study this problem by quantifying the effect of co-location information on location privacy, considering an adversary, such as a social network operator, that has access to such information. We formalize the problem and derive an optimal inference algorithm that incorporates such co-location information, yet at the cost of high complexity. We propose some approximate inference algorithms, including a solution that relies on the belief propagation algorithm executed on a general Bayesian network model, and we extensively evaluate their performance. Our experimental results show that, even in the case where the adversary considers co-locations of the targeted user with a single friend, the median location privacy of the user is decreased by up to 62 percent in a typical setting. We also study the effect of the different parameters (e.g., the settings of the location-privacy protection mechanisms) in different scenarios.
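A toy Bayes computation illustrating the leakage (all numbers invented): if two users declare being together, the adversary can fuse their individual location posteriors, sharpening both:

```python
# Hypothetical sketch: fusing a friend's location evidence through a
# declared co-location. A co-location forces both users onto one
# location, so the joint evidence is the elementwise product, renormalized.
import numpy as np

locations = ["cafe", "office", "park"]
p_target = np.array([0.4, 0.4, 0.2])    # posterior from target's own report
p_friend = np.array([0.8, 0.1, 0.1])    # posterior from the friend's report

fused = p_target * p_friend
fused /= fused.sum()
for loc, before, after in zip(locations, p_target, fused):
    print(f"{loc:7s} {before:.2f} -> {after:.2f}")
# The target's uncertainty collapses toward the friend's likely location.
```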
Very Large Data Bases | 2017
Vincent Bindschaedler; Reza Shokri; Carl A. Gunter
Releasing full data records is one of the most challenging problems in data privacy. On the one hand, many popular techniques such as data de-identification are problematic because of their dependence on the background knowledge of adversaries. On the other hand, rigorous methods such as the exponential mechanism for differential privacy are often computationally impractical for releasing high-dimensional data, or cannot preserve high utility of the original data due to their extensive data perturbation. This paper presents a criterion called plausible deniability that provides a formal privacy guarantee, notably for releasing sensitive datasets: an output record can be released only if a certain number of input records are indistinguishable, up to a privacy parameter. This notion does not depend on the background knowledge of an adversary. Also, it can be checked efficiently by privacy tests. We present mechanisms to generate synthetic datasets with statistical properties similar to those of the input data, and in the same format. We study this technique both theoretically and experimentally. A key theoretical result shows that, with proper randomization, the plausible deniability mechanism generates differentially private synthetic data. We demonstrate the efficiency of this generative technique on a large dataset; it is shown to preserve the utility of the original data with respect to various statistical analysis and machine learning measures.
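A hedged sketch of such a privacy test (the paper's exact criterion compares likelihoods against the seed record's; this simplified version compares against the maximum, and the generative likelihood is an assumed placeholder):

```python
# Hypothetical sketch: release a synthetic record y only if at least k
# input records could have generated it with similar likelihood
# (within a multiplicative factor gamma).
import math

def plausibly_deniable(y, dataset, likelihood, k=3, gamma=2.0):
    """Return True iff >= k input records x give likelihood(y | x)
    within a factor gamma of the best-matching record's likelihood."""
    scores = [likelihood(y, x) for x in dataset]
    top = max(scores)
    indistinguishable = sum(1 for s in scores if s >= top / gamma)
    return indistinguishable >= k

# Toy generative model: records are numbers, likelihood decays with distance.
likelihood = lambda y, x: math.exp(-abs(y - x))
dataset = [1.0, 1.2, 0.9, 5.0, 9.0]
print(plausibly_deniable(1.1, dataset, likelihood))   # True: 3 close seeds
print(plausibly_deniable(9.0, dataset, likelihood))   # False: only one seed
```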
Network and Distributed System Security Symposium | 2015
Igor Bilogrevic; Kévin Huguenin; Stefan Mihaila; Reza Shokri; Jean-Pierre Hubaux
Location check-ins contain both geographical and semantic information about the visited venues, in the form of tags (e.g., “restaurant”). Such data might reveal more personal information about users than they actually want to disclose, thus threatening their privacy. In this paper, we study users’ motivations behind location check-ins, and we quantify the effect of a privacy-preserving technique (i.e., generalization) on the perceived utility of check-ins. By means of a targeted user study on Foursquare (N = 77), we show that the motivation behind Foursquare check-ins is a mediator of the loss of utility caused by generalization. Using these findings, we propose a machine learning method for determining the motivation behind each check-in, and we design a motivation-based predictive model for utility. Our results show that the model accurately predicts the loss of utility caused by semantic and geographical generalization; this model enables the design of utility-aware, privacy-enhancing mechanisms in location-based social networks.
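The motivation-classification step could look roughly like the following sketch (the features, motivation labels, training data, and model choice are all invented; the paper's feature set and classifier are not reproduced here):

```python
# Hypothetical sketch: predict the motivation behind a check-in from
# its text, as a stand-in for the paper's motivation classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

checkins = ["dinner with friends at italian restaurant",
            "morning run in the park",
            "badge earned at new coffee shop",
            "lunch meeting at downtown office"]
motivations = ["social", "self-tracking", "gaming", "social"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(checkins, motivations)
print(model.predict(["coffee with colleagues at restaurant"]))  # e.g. ['social']
```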