
Publication


Featured research published by Javier Parra-Arnau.


IEEE Transactions on Knowledge and Data Engineering | 2014

Privacy-Preserving Enhanced Collaborative Tagging

Javier Parra-Arnau; Andrea Perego; Elena Ferrari; Jordi Forné; David Rebollo-Monedero

Collaborative tagging is one of the most popular services available online; it allows end users to loosely classify either online or offline resources based on their feedback, expressed in the form of free-text labels (i.e., tags). Although tags may not be per se sensitive information, the wide use of collaborative tagging services increases the risk of cross-referencing, thereby seriously compromising user privacy. In this paper, we make a first contribution toward the development of a privacy-preserving collaborative tagging service by showing how a specific privacy-enhancing technology, namely tag suppression, can be used to protect end-user privacy. Moreover, we analyze how our approach can affect the effectiveness of a policy-based collaborative tagging system that supports enhanced web-access functionalities, such as content filtering and discovery, based on preferences specified by end users.


International Journal of Information Security | 2013

On the measurement of privacy as an attacker's estimation error

David Rebollo-Monedero; Javier Parra-Arnau; Claudia Diaz; Jordi Forné

A wide variety of privacy metrics have been proposed in the literature to evaluate the level of protection offered by privacy-enhancing technologies. Most of these metrics are specific to concrete systems and adversarial models and are difficult to generalize or translate to other contexts. Furthermore, a better understanding of the relationships between the different privacy metrics is needed to enable a more grounded and systematic approach to measuring privacy, as well as to assist system designers in selecting the most appropriate metric for a given application. In this work, we propose a theoretical framework for privacy-preserving systems, endowed with a general definition of privacy in terms of the estimation error incurred by an attacker who aims to disclose the private information that the system is designed to conceal. We show that our framework permits interpreting and comparing a number of well-known metrics under a common perspective. The arguments behind these interpretations are based on fundamental results related to the theories of information, probability, and Bayes decision.
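The framework's notion of privacy as an attacker's estimation error can be made concrete with a toy example. Assuming a Bayesian attacker who guesses the most likely secret value, one instantiation of the measure is the attacker's probability of error; the posterior below is entirely hypothetical.

```python
# Hypothetical attacker posterior over a user's secret attribute,
# given everything the attacker has observed.
posterior = {"A": 0.5, "B": 0.3, "C": 0.2}

# A Bayesian attacker minimizes error by guessing the posterior mode.
best_guess = max(posterior, key=posterior.get)

# Privacy as estimation error: here, the probability the attacker
# guesses wrong (other error measures, e.g. MSE, also fit the framework).
privacy = 1 - posterior[best_guess]
```

With this posterior the attacker guesses "A" and still errs half the time, so the residual privacy is 0.5; flatter posteriors yield larger estimation errors and hence more privacy.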


Computer Standards & Interfaces | 2015

On content-based recommendation and user privacy in social-tagging systems

Silvia Puglisi; Javier Parra-Arnau; Jordi Forné; David Rebollo-Monedero

Recommendation systems and content-filtering approaches based on annotations and ratings essentially rely on users expressing their preferences and interests through their actions, in order to provide personalised content. This activity, in which users engage collectively, has been named social tagging, and it is one of the most popular opportunities for users to engage online, and although it has opened new possibilities for application interoperability on the semantic web, it is also posing new privacy threats. In fact, it consists in describing online or offline resources by using free-text labels, i.e., tags, thereby exposing a users profile and activity to privacy attacks. As a result, users may wish to adopt a privacy-enhancing strategy in order not to reveal their interests completely. Tag forgery is a privacy-enhancing technology consisting in generating tags for categories or resources that do not reflect the users actual preferences too accurately. By modifying their profile, tag forgery may have a negative impact on the quality of the recommendation system, thus protecting user privacy to a certain extent but at the expenses of utility loss. The impact of tag forgery on content-based recommendation isconsequently investigated in a real-world application scenario where different forgery strategies are evaluated, and the resulting loss in utility is measured and compared. We investigate the effects of different privacy enhancing technologies in content-based recommendation systems.We study the interplay between the degree of privacy and the potential degradation of the quality of the recommendation.We evaluate three different tag forgery strategies: optimised tag forgery, uniform tag forgery and TrackMeNot.We carry out an experimental evaluation on a real dataset extracted from Delicious.


Information Sciences | 2013

A modification of the Lloyd algorithm for k-anonymous quantization

David Rebollo-Monedero; Jordi Forné; Esteve Pallarès; Javier Parra-Arnau

We address the problem of designing quantizers that cluster data while satisfying a k-anonymity requirement. A general data compression perspective is adopted, which considers both discrete and continuous probability distributions, and corresponding constraints on both cell sizes and quantizer index probabilities. Potential applications of this problem extend well beyond the important case of microdata anonymization, to include also optimized task allocation under workload constraints. Our contribution is twofold. First and most importantly, we present a theoretical analysis showing the optimality conditions which probability-constrained quantizers must satisfy, thereby theoretically characterizing optimal k-anonymous aggregation as a special case. As a second contribution, inspired by our theoretical analysis, we propose an alternating optimization algorithm for the design of this type of quantizers. Our algorithm is conceptually motivated by the popular Lloyd-Max algorithm for quantization design, originally intended for data compression, also known as the k-means method. Experimental results for synthetic and real data, with mean squared error as a distortion measure, confirm that our method outperforms MDAV, a popular fixed-size microaggregation algorithm for statistical disclosure control. This performance improvement is in terms of data utility, for the exact same k-anonymity constraint, but does come at the expense of higher computational sophistication.
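The alternating structure described in the abstract, a nearest-centroid assignment step followed by enforcement of a minimum cell size, can be sketched in one dimension. This is an illustrative simplification, not the paper's algorithm: the merge-based repair step and all parameter names are assumptions for the sketch.

```python
import random

def lloyd_k_anon(points, m, k, iters=10, seed=0):
    """Toy 1-D Lloyd-style clustering where every cell must keep
    at least k points (a k-anonymity-style size constraint)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, m)
    cells = [points]
    for _ in range(iters):
        # Assignment step: each point goes to its nearest centroid.
        cells = [[] for _ in centroids]
        for x in points:
            j = min(range(len(centroids)), key=lambda i: abs(x - centroids[i]))
            cells[j].append(x)
        cells = [c for c in cells if c]
        # Repair step: merge undersized cells into the cell with the
        # nearest mean, so no cell groups fewer than k records.
        while len(cells) > 1 and min(len(c) for c in cells) < k:
            cells.sort(key=len)
            small = cells.pop(0)
            mean_s = sum(small) / len(small)
            nearest = min(cells, key=lambda c: abs(sum(c) / len(c) - mean_s))
            nearest.extend(small)
        # Update step (the Lloyd step): centroid = cell mean.
        centroids = [sum(c) / len(c) for c in cells]
    return cells

cells = lloyd_k_anon([float(i) for i in range(12)], m=4, k=3)
# every returned cell contains at least k = 3 points
```

The point of the sketch is only the alternation: an unconstrained assignment, a constraint-enforcing correction, and a mean update, mirroring the structure of Lloyd-style design under cell-size constraints.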


Data and Knowledge Engineering | 2012

Optimal tag suppression for privacy protection in the semantic Web

Javier Parra-Arnau; David Rebollo-Monedero; Jordi Forné; Jose L. Muñoz; Oscar Esparza

Leveraging the principle of data minimization, we propose tag suppression, a privacy-enhancing technique for the semantic Web. In our approach, users tag resources on the Web, revealing their personal preferences. However, in order to prevent privacy attackers from profiling them based on their interests, users may wish to refrain from tagging certain resources. Consequently, tag suppression protects user privacy to a certain extent, but at the cost of the semantic loss incurred by suppressing tags. In a nutshell, our technique poses a trade-off between privacy and suppression. In this paper, we investigate this trade-off in a mathematically systematic fashion and provide an extensive theoretical analysis. We measure user privacy as the entropy of the user's tag distribution after the suppression of some tags. Equipped with a quantitative measure of both privacy and utility, we find a closed-form solution to the problem of optimal tag suppression. Experimental results on a real-world tagging application show how our approach may contribute to privacy protection.
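The privacy measure used here, the entropy of the user's tag distribution after suppression, is easy to illustrate. The tag counts below are hypothetical; suppressing the dominant category flattens the published profile and raises its entropy.

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def suppressed_profile(tag_counts, suppressed):
    """Drop the suppressed tags and renormalize the remaining counts."""
    kept = {t: c for t, c in tag_counts.items() if t not in suppressed}
    total = sum(kept.values())
    return {t: c / total for t, c in kept.items()}

counts = {"politics": 8, "sports": 1, "music": 1}  # hypothetical tag counts
h_before = entropy(suppressed_profile(counts, set()).values())
h_after = entropy(suppressed_profile(counts, {"politics"}).values())
# Suppressing the revealing tag raises entropy (h_after > h_before),
# i.e., the published profile discloses less about the user.
```

Entropy-maximizing suppression thus tends to remove tags from the categories that dominate, and therefore most identify, the user's profile.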


Computer Standards & Interfaces | 2013

A collaborative protocol for anonymous reporting in vehicular ad hoc networks

Carolina Tripp Barba; Luis Urquiza Aguiar; Mónica Aguilar Igartua; Javier Parra-Arnau; David Rebollo-Monedero; Jordi Forné; Esteve Pallarès

Vehicular ad hoc networks (VANETs) have emerged to leverage the power of modern communication technologies, applied to both vehicles and infrastructure. Allowing drivers to report traffic accidents and violations through the VANET may lead to substantial improvements in road safety. However, the ability to do so anonymously, avoiding personal and professional repercussions, will undoubtedly foster user acceptance. The main goal of this work is to propose a new collaborative protocol for enforcing anonymity in multi-hop VANETs, closely inspired by the well-known Crowds protocol. In a nutshell, our anonymous-reporting protocol depends on a forwarding probability that determines whether the next forwarding step in message routing is random, for better anonymity, or in accordance with the routing protocol on which our approach builds, for better quality of service (QoS). Unlike Crowds, our protocol is specifically conceived for multi-hop lossy wireless networks. Simulations for residential and downtown areas support and quantify the usefulness of our collaborative strategy for better anonymity, when users are willing to pay an eminently reasonable price in QoS.
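The forwarding rule described above, randomize the next hop with some probability or otherwise follow the underlying routing protocol, reduces to a one-line decision per hop. A minimal sketch follows; the function and parameter names (e.g. `p_random`) are illustrative, not taken from the paper.

```python
import random

def next_hop(routing_next, neighbors, p_random, rng=random):
    """Crowds-style forwarding decision: with probability p_random pick a
    random neighbor (better anonymity); otherwise follow the hop chosen by
    the underlying routing protocol (better QoS)."""
    if rng.random() < p_random:
        return rng.choice(neighbors)
    return routing_next
```

Setting `p_random = 0` recovers the plain routing protocol, `p_random = 1` gives a fully random walk, and intermediate values trade QoS for anonymity, which is exactly the knob the protocol exposes.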


DPM'11: Proceedings of the 6th International Conference on Data Privacy Management and 4th International Conference on Autonomous Spontaneous Security | 2011

A privacy-protecting architecture for collaborative filtering via forgery and suppression of ratings

Javier Parra-Arnau; David Rebollo-Monedero; Jordi Forné

Recommendation systems are information-filtering systems that help users deal with information overload. Unfortunately, current recommendation systems prompt serious privacy concerns. In this work, we propose an architecture that protects user privacy in such collaborative-filtering systems, in which users are profiled on the basis of their ratings. Our approach capitalizes on the combination of two perturbative techniques, namely the forgery and the suppression of ratings. In our scenario, users rate those items they have an opinion on. However, in order to avoid privacy risks, they may want to refrain from rating some of those items, and/or rate some items that do not reflect their actual preferences. On the other hand, forgery and suppression may degrade the quality of the recommendation system. Motivated by this, we describe the implementation details of the proposed architecture and present a formulation of the optimal trade-off among privacy, forgery rate and suppression rate. Finally, we provide a numerical example that illustrates our formulation.


Security and Communication Networks | 2014

On collaborative anonymous communications in lossy networks

David Rebollo-Monedero; Jordi Forné; Esteve Pallarès; Javier Parra-Arnau; Carolina Tripp; Luis Urquiza; Mónica Aguilar

Message encryption does not prevent eavesdroppers from unveiling who is communicating with whom, when, or how frequently, a privacy risk to which wireless networks are particularly vulnerable. The Crowds protocol, a well-established anonymous communication system, capitalizes on user collaboration to enforce sender anonymity. This work formulates a mathematical model of a Crowds-like protocol for anonymous communication in a lossy network, establishes quantifiable metrics of anonymity and quality of service (QoS), and theoretically characterizes the trade-off between them. The anonymity metric chosen follows the principle of measuring privacy as an attacker's estimation error. By introducing losses, we extend the applicability of the protocol beyond its original proposal. We quantify the intuition that anonymity comes at the expense of both delay and end-to-end losses. Aside from introducing losses in our model, another main difference with respect to the traditional Crowds is the focus on networks with stringent QoS requirements, for best-effort anonymity, and the consequent elimination of the initial forwarding step. Beyond the mathematical solution, we illustrate a systematic methodology in our analysis of the protocol. This methodology includes a series of formal steps, from the establishment of quantifiable metrics all the way to the theoretical study of the privacy-QoS trade-off.


Entropy | 2014

Optimal Forgery and Suppression of Ratings for Privacy Enhancement in Recommendation Systems

Javier Parra-Arnau; David Rebollo-Monedero; Jordi Forné

Recommendation systems are information-filtering systems that tailor information to users on the basis of knowledge about their preferences. The ability of these systems to profile users is what enables such intelligent functionality, but at the same time, it is the source of serious privacy concerns. In this paper we investigate a privacy-enhancing technology that aims at hindering an attacker in its efforts to accurately profile users based on the items they rate. Our approach capitalizes on the combination of two perturbative mechanisms—the forgery and the suppression of ratings. While this technique enhances user privacy to a certain extent, it inevitably comes at the cost of a loss in data utility, namely a degradation of the recommendation's accuracy. In short, it poses a trade-off between privacy and utility. The theoretical analysis of such trade-off is the object of this work. We measure privacy as the Kullback-Leibler divergence between the user's and the population's item distributions, and quantify utility as the proportion of ratings users consent to forge and eliminate. Equipped with these quantitative measures, we find a closed-form solution to the problem of optimal forgery and suppression of ratings, an optimization problem that includes, as a particular case, the maximization of the entropy of the perturbed profile. We characterize the optimal trade-off surface among privacy, forgery rate and suppression rate, and experimentally evaluate how our approach could contribute to privacy protection in a real-world recommendation system.
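The KL-divergence privacy measure used here can be sketched directly: the closer the perturbed profile is to the population's item distribution, the smaller the divergence and the less the profile singles the user out. The distributions below are made up purely for illustration.

```python
import math

def kl_divergence(p, q):
    """D(p || q) in bits; assumes q[i] > 0 wherever p[i] > 0."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

population = [0.25, 0.25, 0.25, 0.25]  # population's item distribution
apparent   = [0.70, 0.10, 0.10, 0.10]  # user's observed rating profile
perturbed  = [0.40, 0.20, 0.20, 0.20]  # after forging/suppressing ratings

# Forgery and suppression pull the profile toward the population's,
# lowering the divergence and hence the disclosure risk.
risk_before = kl_divergence(apparent, population)
risk_after = kl_divergence(perturbed, population)
```

When the population distribution is uniform, minimizing this divergence is equivalent to maximizing the entropy of the perturbed profile, which is the particular case the abstract mentions.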


Workshop in Information Security Theory and Practice | 2009

PKIX Certificate Status in Hybrid MANETs

Jose L. Muñoz; Oscar Esparza; Carlos Gañán; Javier Parra-Arnau

Certificate status validation is a hard problem in general, but it is particularly complex in mobile ad hoc networks (MANETs), where solutions must cope both with the lack of fixed infrastructure inside the MANET and with the possible absence of connectivity to trusted authorities when certificate validation has to be performed. Certificate acquisition is usually assumed to be an initialization phase, but certificate validation is a critical operation, since a node needs to check the validity of certificates in real time, that is, when a particular certificate is about to be used. In such MANET environments, a node may be placed in a part of the network that is disconnected from the source of status data at the moment status checking is required. Proposals in the literature suggest caching mechanisms, so that the node itself or a neighbour node holds some status-checking material (typically online status responses or lists of revoked certificates). However, to the best of our knowledge, the only criterion used to evaluate the cached (possibly obsolete) material is time. In this paper, we analyse how to deploy a certificate-status-checking PKI service for hybrid MANETs, and we propose a new, risk-based criterion for evaluating cached status data that is much more appropriate than time alone, because it takes the revocation process into account.

Collaboration


Dive into Javier Parra-Arnau's collaborations.

Top Co-Authors


David Rebollo-Monedero

Polytechnic University of Catalonia

Jordi Forné

Polytechnic University of Catalonia

Esteve Pallarès

Polytechnic University of Catalonia

Jose L. Muñoz

Polytechnic University of Catalonia

Oscar Esparza

Polytechnic University of Catalonia

Claudia Diaz

Katholieke Universiteit Leuven

Ana Rodriguez-Hoyos

National Technical University

Carlos Gañán

Polytechnic University of Catalonia
