Alexandre Viejo
Rovira i Virgili University
Publications
Featured research published by Alexandre Viejo.
IEEE Transactions on Vehicular Technology | 2009
Vanesa Daza; Josep Domingo-Ferrer; Francesc Sebé; Alexandre Viejo
Vehicular ad hoc networks (VANETs) allow vehicle-to-vehicle communication and, in particular, vehicle-generated announcements. Provided that the trustworthiness of such announcements can be guaranteed, they can greatly increase the safety of driving. A new system for vehicle-generated announcements is presented that is secure against external and internal attackers attempting to send fake messages. Internal attacks are thwarted by using an endorsement mechanism based on threshold signatures. Our system outperforms previous proposals in message length and computational cost. Three different privacy-preserving variants of the system are also described to ensure that vehicles volunteering to generate and/or endorse trustworthy announcements do not have to sacrifice their privacy.
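The endorsement rule can be illustrated with a minimal sketch (assuming a toy message format and using HMAC as a stand-in for real signatures; the paper's threshold signatures additionally compress the endorsements into a single short signature): an announcement is accepted only if at least t distinct vehicles vouch for it.

    import hmac, hashlib

    def endorse(key: bytes, message: bytes) -> bytes:
        # A vehicle "endorses" a message. HMAC stands in for a real
        # signature scheme purely to keep the sketch self-contained.
        return hmac.new(key, message, hashlib.sha256).digest()

    def accept_announcement(message, endorsements, keys, t):
        # Threshold rule: accept only if at least t distinct vehicles
        # produced a valid endorsement of the same message.
        valid = {vid for vid, tag in endorsements
                 if vid in keys
                 and hmac.compare_digest(tag, endorse(keys[vid], message))}
        return len(valid) >= t

    # Usage: three vehicles endorse a hazard report, threshold t = 2.
    keys = {v: bytes([i]) * 32 for i, v in enumerate(["car1", "car2", "car3"])}
    msg = b"icy road ahead"
    ends = [(v, endorse(k, msg)) for v, k in keys.items()]
    print(accept_announcement(msg, ends, keys, t=2))  # True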
Computer Networks | 2008
Josep Domingo-Ferrer; Alexandre Viejo; Francesc Sebé; Úrsula González-Nicolás
Enabling private relationships in social networks is an important issue recently raised in the literature. In this paper we describe a new protocol that offers private relationships and allows resource access through indirect relationships without requiring a mediating trusted third party (an optimistic trusted third party is used, but it acts only in case of conflict). Thanks to homomorphic encryption, our scheme prevents the resource owner from learning the relationships and trust levels between the users who collaborate in the resource access. In this way, the number of users who might refuse collaboration due to privacy concerns is minimized. This results in increased resource availability, as the chances that certain nodes become isolated at a given period of time are reduced. Empirical evidence is provided that the proposed protocol is scalable and deployable in practical social networks.
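A toy additively homomorphic (Paillier-style) example hints at why the owner learns nothing about individual values: collaborators multiply their ciphertexts, and the owner can decrypt only the sum. The tiny parameters below are insecure and purely illustrative; this is not the paper's actual protocol.

    from math import lcm  # Python 3.9+

    # Toy Paillier cryptosystem: Enc(a) * Enc(b) mod n^2 decrypts to a + b.
    p, q = 11, 13
    n, n2 = p * q, (p * q) ** 2
    lam = lcm(p - 1, q - 1)
    g = n + 1
    mu = pow(lam, -1, n)  # works because L(g^lam mod n^2) = lam mod n

    def enc(m, r):
        # r must be coprime to n; no padding or checks in this toy version.
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def dec(c):
        return ((pow(c, lam, n2) - 1) // n) * mu % n

    # Two collaborators encrypt trust levels 5 and 9; the owner decrypts
    # only their product, i.e. the SUM 14, never the individual values.
    print(dec(enc(5, r=7) * enc(9, r=23) % n2))  # 14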
Computer Networks | 2010
Alexandre Viejo; Jordi Castellà-Roca
The Internet is one of the most important sources of knowledge today. It offers a huge volume of information which grows dramatically every day. Web search engines (e.g., Google, Yahoo) are widely used to find specific data among that information. However, these useful tools also represent a privacy threat for their users: web search engines profile them by storing and analyzing all the searches they have previously submitted. To address this privacy threat, current solutions propose new mechanisms that introduce a high cost in terms of computation and communication. In this paper, we propose a new scheme designed to protect the privacy of users from a web search engine that tries to profile them. Our system uses social networks to provide a distorted user profile to the web search engine. The proposed protocol submits standard queries to the web search engine; thus, it does not require any change on the server side. In addition, this scheme does not require the server to collaborate with the users. Our protocol improves on the existing solutions in terms of query delay. Besides, the distorted profiles still allow the users to get a proper service from the web search engines.
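The distortion idea admits a simple sketch (the exchange policy and example queries below are illustrative assumptions, not the paper's protocol): a user sometimes submits a query received from a social-network neighbor instead of their own, so the engine attributes a mixture of interests to each account.

    import random

    def distorted_submissions(own_queries, neighbor_queries, p_swap=0.5):
        # Interleave the user's own queries with queries received from
        # neighbors; the search engine sees only standard queries but
        # builds a distorted profile for this account.
        out = []
        for q in own_queries:
            if neighbor_queries and random.random() < p_swap:
                out.append(neighbor_queries.pop())  # submit a neighbor's query
            else:
                out.append(q)                       # submit an own query
        return out

    print(distorted_submissions(["flu symptoms", "divorce lawyer"],
                                ["soccer results", "cheap flights"]))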
Computer Communications | 2011
Roberto Di Pietro; Alexandre Viejo
Due to the wireless nature of communication in sensor networks, the communication patterns between sensors can be leaked regardless of the adoption of encryption mechanisms, which only protect the message content. However, communication patterns can provide valuable information to an adversary. For instance, this is the case when sensors reply to a query broadcast by a Base Station (BS): an adversary eavesdropping on the communication traffic can figure out which sensors possibly match the query (that is, the ones that replied). This issue is complicated by the severely resource-constrained environment WSNs are subject to, which calls for efficient and scalable solutions. In this paper, we address the problem of preserving the location privacy of the sensors of a wireless sensor network when they send a reply to a query broadcast by the BS. In particular, we deal with one of the worst scenarios for privacy: when sensors are queried by the BS to provide the MAX of their stored readings. We provide a probabilistic and scalable protocol to compute the MAX that enjoys the following features: (i) it guarantees the location privacy of the sensors replying to the query; (ii) it is resilient to an active adversary willing to alter the readings sent by the sensors; and (iii) it allows the accuracy of the result to be traded off against a (small) increase in overhead. Finally, extensive simulations support our analysis, showing the quality of our proposal.
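One generic way to trade result accuracy for privacy in a MAX query (a sketch of the general idea, not the authors' protocol) is randomized response over threshold questions: each sensor answers "is my reading >= t?" but flips its bit with small probability, so no individual reply pinpoints the sensor holding the MAX.

    import random

    def noisy_count_ge(readings, t, p_flip=0.1):
        # Each sensor answers whether its reading is >= t, flipping the
        # bit with probability p_flip to hide who actually holds the MAX.
        return sum((r >= t) ^ (random.random() < p_flip) for r in readings)

    def estimate_max(readings, lo, hi, p_flip=0.1):
        # Debias the noisy counts (E[c] = k(1-p) + (n-k)p) and return the
        # largest threshold where at least one sensor seems to answer yes.
        n, best = len(readings), lo
        for t in range(lo, hi + 1):
            k_hat = (noisy_count_ge(readings, t, p_flip) - n * p_flip) / (1 - 2 * p_flip)
            if k_hat >= 0.5:
                best = t
        return best

    print(estimate_max([3, 17, 8, 12], lo=0, hi=20))  # typically near the true MAX, 17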
Journal of Systems and Software | 2011
Arnau Erola; Jordi Castellà-Roca; Alexandre Viejo; Josep María Mateo-Sanz
Web search engines (WSEs) have become an essential tool for searching for information on the Internet. In order to provide personalized search results, WSEs store all the queries submitted by their users together with the search results they selected. The AOL scandal in 2006 proved that this information contains personally identifiable information and thus represents a privacy threat for the users who generated it: AOL released a file containing twenty million queries made by 658,000 users, and several of those users were successfully tracked. In this paper, we propose a P2P protocol that exploits social networks in order to protect the privacy of the users from the profiling mechanisms of the WSEs. The proposed scheme has been designed considering the presence of users who do not follow the protocol (i.e., adversaries). In order to evaluate the privacy of the users, we have designed a new measure, the profile exposure level (PEL). Finally, we have used the AOL file to simulate the behavior of our scheme with real queries generated by real users. Our tests show that our scheme is usable in practice and that it preserves the privacy of the users even in the presence of adversaries.
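As a toy illustration of an exposure measure in this spirit (not the paper's actual PEL definition), one can measure what fraction of a user's real queries remains visible in the query stream the WSE attributes to that user after the exchange:

    def toy_exposure(own_queries, observed_queries):
        # Toy stand-in for a profile-exposure measure: the fraction of a
        # user's distinct real queries that still appear in the stream
        # the search engine attributes to that user.
        own, seen = set(own_queries), set(observed_queries)
        return len(own & seen) / len(own) if own else 0.0

    # After exchanging queries with peers, only 1 of 3 real interests leaks.
    print(toy_exposure(["flu symptoms", "madrid flights", "python jobs"],
                       ["madrid flights", "cat videos", "paris weather"]))  # 0.33...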
IEEE Transactions on Information Forensics and Security | 2013
David Sánchez; Montserrat Batet; Alexandre Viejo
The advent of new information sharing technologies has led society to a scenario where thousands of textual documents are publicly published every day. The existence of confidential information in many of these documents motivates the use of measures to hide sensitive data before they are published, which is precisely the goal of document sanitization. Even though methods to assist the sanitization process have been proposed, most of them are focused on detecting specific types of sensitive entities for concrete domains, lacking generality and requiring user supervision. Moreover, to hide sensitive terms, most approaches opt to remove them, a measure that hampers the utility of the sanitized document. This paper presents a general-purpose sanitization method that, based on information theory and exploiting knowledge bases, detects and hides sensitive textual information while preserving its meaning. Our proposal works in an automatic and unsupervised way and can be applied to heterogeneous documents, which makes it especially suitable for environments with massive and heterogeneous information-sharing needs. Evaluation results show that our method outperforms strategies based on trained classifiers in terms of detection recall, while it better retains the documents' utility compared to term-suppression methods.
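The information-theoretic criterion can be sketched as follows: a term's information content is IC(t) = -log2 p(t), with p(t) estimated from occurrence counts in a large corpus (the Web, in the paper), and terms whose IC exceeds a threshold are proposed as sensitive. The frequency table and threshold below are made-up stand-ins for real Web-scale counts.

    import math

    CORPUS_TOTAL = 1_000_000
    COUNTS = {"the": 60_000, "hospital": 900, "chemotherapy": 40, "HIV": 25}

    def information_content(term):
        # IC(t) = -log2 p(t): rarer terms carry more information and are
        # therefore more likely to reveal something specific and sensitive.
        p = COUNTS.get(term, 1) / CORPUS_TOTAL  # unseen terms: count of 1
        return -math.log2(p)

    def flag_sensitive(terms, threshold=12.0):
        return [t for t in terms if information_content(t) > threshold]

    print(flag_sensitive(["the", "hospital", "chemotherapy", "HIV"]))
    # ['chemotherapy', 'HIV']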
Information Sciences | 2013
David Sánchez; Montserrat Batet; Alexandre Viejo
Text sanitization is crucial to enable privacy-preserving declassification of confidential documents. Moreover, considering the advent of new information sharing technologies that enable the daily publication of thousands of textual documents, automatic and semi-automatic sanitization methods are needed. Even though several of these methods have been proposed, most of them detect and sanitize sensitive terms (e.g., people's names, addresses, diseases) independently, neglecting the importance of semantic correlations. From the attacker's perspective, semantic correlations can be exploited to disclose a sanitized term from the presence of one or several non-sanitized words. To tackle this problem, this paper presents a general-purpose method that, taking the output of a standard sanitization mechanism, analyzes, detects and proposes for sanitization those semantically correlated terms that represent a plausible disclosure risk for the already sanitized ones. Our method relies on an information-theoretic formulation of disclosure risk that is able to adapt its behavior to the criterion of the initial sanitizer. The evaluation, carried out over a collection of real documents, shows that semantic correlations represent a real privacy threat in previously sanitized documents, and that our method is able to detect them effectively. As a result, the disclosure risk of the sanitized output is significantly reduced with respect to standard sanitization mechanisms.
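A crude proxy for such correlation risk (an illustrative sketch, not the paper's formulation) is the pointwise mutual information between a remaining term and a sanitized one: high PMI means the remaining term strongly hints at the hidden term and should itself be proposed for sanitization. All counts below are invented.

    import math

    DOCS_TOTAL = 100_000
    SANITIZED_COUNT = 400  # documents mentioning the sanitized term "AIDS"
    TERM_COUNT = {"antiretroviral": 150, "treatment": 9_000, "weather": 5_000}
    COOC = {"antiretroviral": 120, "treatment": 300, "weather": 2}

    def pmi(term):
        # PMI(term, sanitized) = log2 [ p(term, sanitized) / (p(term) p(sanitized)) ]
        p_joint = COOC[term] / DOCS_TOTAL
        p_term = TERM_COUNT[term] / DOCS_TOTAL
        p_sens = SANITIZED_COUNT / DOCS_TOTAL
        return math.log2(p_joint / (p_term * p_sens))

    risky = [t for t in COOC if pmi(t) > 5.0]
    print(risky)  # ['antiretroviral'] with this toy data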
Modeling Decisions for Artificial Intelligence | 2012
David Sánchez; Montserrat Batet; Alexandre Viejo
Whenever a document containing sensitive information needs to be made public, privacy-preserving measures should be implemented. Document sanitization aims at detecting sensitive pieces of information in text, which are removed or hidden prior to publication. Even though methods have been developed to detect sensitive structured information such as e-mails, dates or social security numbers, or domain-specific data such as disease names, the sanitization of raw textual data has been scarcely addressed. In this paper, we present a general-purpose method to automatically detect sensitive information in textual documents in a domain-independent way. Relying on information theory and a corpus as large as the Web, it assesses the degree of sensitiveness of terms according to the amount of information they provide. Preliminary results show that our method significantly improves detection recall in comparison with approaches based on trained classifiers.
International Conference on Security and Privacy in Communication Systems | 2011
Cristina Romero-Tris; Jordi Castellà-Roca; Alexandre Viejo
Web search engines are tools employed to find specific information on the Internet. However, they also represent a threat to the privacy of their users, because web search engines store and analyze the personal information that users reveal in their queries. In order to avoid this privacy threat, it is necessary to provide mechanisms that protect the users of these tools.
Information Sciences | 2014
David Sánchez; Montserrat Batet; Alexandre Viejo
Traditionally, redaction has been the method chosen to mitigate the privacy issues related to the declassification of textual documents containing sensitive data. This process is based on removing sensitive words from the documents prior to their release and has the undesired side effect of severely reducing the utility of the content. Document sanitization is a recent alternative to redaction which avoids utility issues by generalizing the sensitive terms instead of eliminating them. Some (semi-)automatic redaction/sanitization schemes can be found in the literature; however, they usually neglect the importance of semantic correlations between the terms of a document, even though these may disclose sanitized/redacted sensitive terms. To tackle this issue, this paper proposes a theoretical framework grounded in information theory, which offers a general model capable of measuring the disclosure risk caused by semantically correlated terms, regardless of whether they are proposed for removal or generalization. The new method specifically focuses on generating sanitized documents that retain as much utility (i.e., semantics) as possible while fulfilling the privacy requirements. The implementation of the method has been evaluated in a practical setting, showing that the new approach improves the output's utility in comparison to previous work, while retaining a similar level of accuracy.
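The generalization step can be sketched with a toy hypernym taxonomy standing in for a real knowledge base (the terms and taxonomy below are invented for illustration): instead of deleting a sensitive term, it is replaced by an ancestor that is less disclosive but still meaningful.

    # Toy taxonomy: each term maps to a more general hypernym.
    HYPERNYM = {
        "AIDS": "infectious disease",
        "infectious disease": "disease",
        "disease": "condition",
    }

    def generalize(term, steps=1):
        # Climb the taxonomy: the result is less informative (lower
        # disclosure risk) but, unlike removal, preserves some meaning.
        for _ in range(steps):
            term = HYPERNYM.get(term, term)
        return term

    print(generalize("AIDS"))           # 'infectious disease'
    print(generalize("AIDS", steps=2))  # 'disease'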