Gergely Acs
Budapest University of Technology and Economics
Publication
Featured research published by Gergely Acs.
Security of Ad Hoc and Sensor Networks | 2005
Gergely Acs; Levente Buttyán; István Vajda
In this paper, we propose a framework for the security analysis of on-demand, distance vector routing protocols for ad hoc networks, such as AODV, SAODV, and ARAN. The proposed approach is an adaptation of the simulation paradigm that is used extensively for the analysis of cryptographic algorithms and protocols, and it provides a rigorous method for proving that a given routing protocol is secure. We demonstrate the approach by representing known and new attacks on SAODV in our framework, and by proving that ARAN is secure in our model.
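In the simulation paradigm referenced here, a real-world model of the protocol running alongside the adversary is compared to an ideal-world model in which successful attacks cannot occur by construction; the protocol is considered secure if the outputs of the two models cannot be told apart. A sketch of this style of definition in standard notation (the symbols are illustrative and not taken from the paper):

```latex
% Simulation-based security, informally: for every real-world adversary A
% there exists an ideal-world adversary A' such that the output
% distributions of the two models are indistinguishable.
\forall \mathcal{A}\;\, \exists \mathcal{A}' :\quad
\mathrm{Out}_{\mathrm{real}}(\mathcal{A}) \;\approx\; \mathrm{Out}_{\mathrm{ideal}}(\mathcal{A}')
```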
Mobile Ad Hoc and Sensor Systems | 2007
Gergely Acs; Levente Buttyán; István Vajda
In this paper, we present a flexible and mathematically rigorous modeling framework for analyzing the security of sensor network routing protocols. We then demonstrate the use of this framework by formally proving that INSENS (Intrusion-Tolerant Routing in Wireless Sensor Networks), a secure sensor network routing protocol proposed in the literature independently of our work, is secure in our model.
World of Wireless, Mobile and Multimedia Networks | 2010
Gergely Acs; Levente Buttyán; László Dóra
In this paper, we address the problem of detecting misbehaving routers in wireless mesh networks and avoiding them when selecting routes. We assume that link-state routing is used, and we essentially propose a reputation system, where trusted gateway nodes compute Node Trust Values for the routers, which are fed back into the system and used in the route selection procedure. The computation of the Node Trust Values is based on packet counters maintained in association with each route and reported to the gateways by the routers in a regular manner. The feedback mechanism is based on limited scope flooding. The received Node Trust Values concerning a given router are aggregated, and the aggregate trust value of the router determines the probability with which that router is kept in the topology graph used for route computation. Hence, less trusted routers are excluded from the topology graph with higher probability, while the route selection still runs on a weighted graph (where the weights are determined by the announced link qualities), and it does not need to be changed. We evaluated the performance of our solution by means of simulations. The results show that our proposed mechanism can detect misbehaving routers reliably, and thanks to the feedback and the exclusion of the accused nodes from the route selection, we can decrease the number of packets dropped due to router misbehavior considerably. At the same time, our mechanism only slightly increases the average route length.
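A minimal sketch of the exclusion step described above: aggregate the Node Trust Values reported for each router and keep the router in the topology graph with probability proportional to its trust, so that route selection runs unchanged on the remaining weighted graph. The identifiers and the mean aggregation are illustrative assumptions, not taken from the paper.

```python
import random

def build_topology(routers, links, trust_reports):
    """Probabilistically exclude less-trusted routers from the topology graph.

    routers        -- iterable of router identifiers
    links          -- list of (u, v, link_quality) tuples announced by routers
    trust_reports  -- dict mapping a router id to the list of Node Trust Values
                      received for it from the gateways (values assumed in [0, 1])
    """
    kept = set()
    for r in routers:
        reports = trust_reports.get(r, [])
        # Aggregate the received trust values; a simple mean is used here
        # purely for illustration.
        trust = sum(reports) / len(reports) if reports else 1.0
        # Keep the router with probability equal to its aggregate trust,
        # so less-trusted routers are excluded more often.
        if random.random() < trust:
            kept.add(r)
    # Route selection then runs unchanged on the weighted graph restricted to
    # the kept routers (weights come from the announced link qualities).
    return [(u, v, w) for (u, v, w) in links if u in kept and v in kept]
```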
Privacy Enhancing Technologies | 2016
Gábor György Gulyás; Gergely Acs; Claude Castelluccia
Several recent studies have demonstrated that people exhibit a high degree of behavioural uniqueness. This has serious privacy implications, as most individuals become increasingly re-identifiable in large datasets, or can be tracked while browsing the web, using only a few of their attributes, called their fingerprints. Often, the success of these attacks depends on explicit constraints on the number of attributes learnable about individuals, i.e., the size of their fingerprints. These constraints can be budgetary as well as technical constraints imposed by the data holder. For instance, Apple restricts the number of applications that can be called by another application on iOS in order to mitigate the potential privacy threat of leaking the list of installed applications on a device. In this work, we address the problem of identifying the attributes (e.g., smartphone applications) that can serve as a fingerprint of users, given constraints on the size of the fingerprint. We give the best fingerprinting algorithms for the general case and evaluate their effectiveness on several real-world datasets. Our results show that current privacy guards limiting the number of attributes that can be queried about individuals are insufficient to mitigate privacy risks in many practical cases.
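The core task is to pick, under a size budget, the attributes that best separate individuals. A minimal greedy sketch of this idea (not the paper's algorithm; the function names and the uniqueness criterion are assumptions made for illustration):

```python
from collections import Counter

def greedy_fingerprint(records, candidate_attrs, budget):
    """Greedily pick up to `budget` attributes that maximize the number of
    users made unique by their values on the chosen attributes.

    records         -- list of dicts mapping attribute -> value, one per user
    candidate_attrs -- attributes that may be learned about users
    budget          -- maximum fingerprint size (number of attributes)
    """
    chosen = []
    for _ in range(budget):
        best_attr, best_unique = None, -1
        for attr in candidate_attrs:
            if attr in chosen:
                continue
            # Project every record onto the candidate fingerprint.
            proj = [tuple(r.get(a) for a in chosen + [attr]) for r in records]
            counts = Counter(proj)
            # Count users whose projection is unique in the dataset.
            unique = sum(1 for p in proj if counts[p] == 1)
            if unique > best_unique:
                best_attr, best_unique = attr, unique
        if best_attr is None:
            break
        chosen.append(best_attr)
    return chosen
```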
International Conference on Big Data | 2015
Gergely Acs; Jagdish Prasad Achara; Claude Castelluccia
Set-valued datasets contain multiple items/values per individual, for example, visited locations, purchased goods, watched movies, or search queries. As it is relatively easy to re-identify individuals in such datasets, their release poses significant privacy threats. Hence, organizations aiming to share such datasets must adhere to personal data regulations. To be exempt from these regulations and also to benefit from sharing, such datasets should be anonymized before release. In this paper, we revisit the problem of anonymizing set-valued data. We argue that anonymization techniques targeting the traditional k^m-anonymity model, which limits the adversarial background knowledge to at most m items per individual, are impractical for large real-world datasets. Hence, we propose a probabilistic relaxation of k^m-anonymity and present an anonymization technique to achieve it. This relaxation also improves the utility of the anonymized data. We also demonstrate the effectiveness of our scalable anonymization technique on a real-world location dataset consisting of more than 4 million subscribers of a large European telecom operator. We believe that our technique can be very appealing for practitioners willing to share such large datasets.
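For reference, the strict k^m-anonymity model requires every combination of at most m items to be shared by at least k individuals; the paper's probabilistic relaxation weakens this all-combinations requirement. A minimal sketch of the strict check (the relaxation itself is not reproduced here), which also illustrates why the exhaustive model scales poorly:

```python
from itertools import combinations
from collections import Counter

def km_anonymity_violations(records, k, m):
    """Return the item combinations of size <= m supported by fewer than k
    individuals; an empty result means the dataset is k^m-anonymous.

    records -- list of sets, one set of items per individual
    """
    support = Counter()
    for items in records:
        for size in range(1, m + 1):
            for combo in combinations(sorted(items), size):
                support[combo] += 1
    # Exhaustive enumeration of all combinations: practical only for small m
    # and small item sets, which is exactly the scalability problem the paper
    # argues against.
    return [combo for combo, cnt in support.items() if cnt < k]
```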
Archive | 2018
Gergely Acs; Gergely Biczók; Claude Castelluccia
In today’s digital society, increasing amounts of contextually rich spatio-temporal information are collected and used, e.g., for knowledge-based decision making, research purposes, optimizing operational phases of city management, planning infrastructure networks, or developing timetables for public transportation with an increasingly autonomous vehicle fleet. At the same time, however, publishing or sharing spatio-temporal data, even in aggregated form, is not always viable owing to the danger of violating individuals’ privacy, along with the related legal and ethical repercussions. In this chapter, we review some fundamental approaches for anonymizing and releasing spatio-temporal density, i.e., the number of individuals visiting a given set of locations as a function of time. These approaches follow different privacy models providing different privacy guarantees as well as accuracy of the released anonymized data. We demonstrate some sanitization (anonymization) techniques with provable privacy guarantees by releasing the spatio-temporal density of Paris, France. We conclude that, in order to achieve meaningful accuracy, the sanitization process has to be carefully customized to the application and public characteristics of the spatio-temporal data.
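One standard family of sanitization techniques with provable guarantees is differential privacy, where calibrated noise is added to each location-time count before release. A minimal sketch of the Laplace mechanism, assuming each individual contributes to at most one count (otherwise the sensitivity must be scaled accordingly); it is an illustration, not the chapter's specific method:

```python
import numpy as np

def release_density(counts, epsilon, sensitivity=1.0):
    """Release a differentially private version of a spatio-temporal density.

    counts      -- 2D array of shape (locations, time_slots) with true visit counts
    epsilon     -- privacy budget of the Laplace mechanism
    sensitivity -- maximum change in the counts caused by one individual
                   (1 only if each individual affects a single cell)
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon, size=counts.shape)
    # Noisy counts may be negative; clipping is a common post-processing step
    # and does not weaken the differential privacy guarantee.
    return np.clip(counts + noise, 0, None)
```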
IEEE Transactions on Mobile Computing | 2006
Gergely Acs; Levente Buttyán; István Vajda
Archive | 2007
Gergely Acs; Levente Buttyán
Security of Ad Hoc and Sensor Networks | 2006
Gergely Acs; Levente Buttyán; István Vajda
arXiv: Cryptography and Security | 2012
Gergely Acs; Claude Castelluccia