Meilof Veeningen
Eindhoven University of Technology
Publication
Featured research published by Meilof Veeningen.
International Conference on Information Systems Security | 2011
Meilof Veeningen; Benne de Weger; Nicola Zannone
Over the years, formal methods have been developed for the analysis of security and privacy aspects of communication in IT systems. However, existing methods are insufficient to deal with privacy, especially in identity management (IdM), as they fail to take into account whether personal information can be linked to its data subject. In this paper, we propose a general formal method to analyze privacy of communication protocols for IdM. To express privacy, we represent knowledge of personal information in a three-layer model. We show how to deduce knowledge from observed messages and how to verify a range of privacy properties. We validate the approach by applying it to an IdM case study.
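To give a concrete flavour of the kind of reasoning involved, the sketch below shows, in plain Python, how an observer's knowledge of personal information might be recorded and how linkability could be deduced from observed messages. It is a minimal, hypothetical illustration: the class, the item naming scheme and the linking rule are assumptions for exposition and do not reproduce the paper's formal three-layer model.

```python
# Minimal, hypothetical sketch of recording an observer's knowledge of
# personal information and deriving linkability from observed messages.
# Names and structure are illustrative, not the paper's formal model.

from itertools import combinations


class Knowledge:
    def __init__(self):
        self.items = set()   # detected pieces of personal information
        self.links = set()   # pairs known to belong to the same data subject

    def observe(self, message):
        """Record the items in one observed message; if the message contains
        an identifying item, link all items occurring together in it."""
        self.items.update(message)
        identifiers = {i for i in message if i.startswith("id:")}
        if identifiers:
            for a, b in combinations(sorted(message), 2):
                self.links.add((a, b))

    def linked(self, a, b):
        """Two items are linkable if connected by a chain of known links."""
        frontier, seen = {a}, set()
        while frontier:
            x = frontier.pop()
            seen.add(x)
            for p, q in self.links:
                if x in (p, q):
                    other = q if x == p else p
                    if other == b:
                        return True
                    if other not in seen:
                        frontier.add(other)
        return False


k = Knowledge()
k.observe({"id:alice@example.org", "attr:birthdate=1980-01-01"})
k.observe({"id:alice@example.org", "attr:diagnosis=flu"})
print(k.linked("attr:birthdate=1980-01-01", "attr:diagnosis=flu"))  # True
```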
Formal Aspects in Security and Trust | 2010
Meilof Veeningen; Benne de Weger; Nicola Zannone
In recent years, several attempts have been made to define identity-related properties such as identifiability, pseudonymity and anonymity in order to analyze the privacy offered by information systems and protocols. However, these definitions are generally incomparable, making it difficult to generalize the results of their analysis. In this paper, we propose a novel framework for formalizing and comparing identity-related properties. The framework employs the notions of detectability, associability and provability to assess the knowledge of an adversary. We show how these notions can be used to specify well-known identity-related properties and classify them with respect to their logical relations and privacy strength. We also demonstrate that the proposed framework is able to capture and compare several existing definitions of identity-related properties.
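As a rough illustration of the three notions (not the paper's formalism), one can think of an adversary's knowledge as a set of detected items together with the item pairs it can associate and, among those, the pairs it can prove to a third party; the representation below is a hypothetical sketch.

```python
# Hypothetical sketch of detectability, associability and provability as
# predicates over an adversary's knowledge; illustrative only.

adversary = {
    "detected":   {"email", "birthdate", "nickname"},      # items observed
    "associated": {frozenset({"email", "birthdate"})},     # items linked to one entity
    "provable":   {frozenset({"email", "birthdate"})},     # links provable to others
}

def detectable(item):       # the adversary knows the item exists
    return item in adversary["detected"]

def associable(a, b):       # the adversary can link a and b to one entity
    return frozenset({a, b}) in adversary["associated"]

def provable(a, b):         # the adversary can convince a third party of the link
    return frozenset({a, b}) in adversary["provable"]

# Anonymity of the nickname with respect to the email would, for instance,
# require that the pair is not associable (and hence not provable).
print(detectable("nickname"), associable("nickname", "email"))  # True False
```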
Revised Selected Papers of the 8th International Workshop on Data Privacy Management and Autonomous Spontaneous Security - Volume 8247 | 2013
Meilof Veeningen; Antonio Piepoli; Nicola Zannone
More and more personal information is available digitally, both collected by organisations and published by individuals. People may attempt to protect their privacy by not providing uniquely identifying information and by providing different information in different places; however, in many cases, such profiles can still be de-anonymised. Techniques from the record linkage literature can be used for pairwise linking of databases, and for cross-correlation based on these pairwise results. However, the privacy implications of these techniques in the on-line setting are not clear: existing experiments depend on quasi-identifiers and do not focus on cross-correlation. This paper studies the problem of de-anonymisation and, in particular, cross-correlation of multiple databases using only non-identifying information in an on-line setting.
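A toy example of the two steps involved, pairwise linkage on non-identifying attributes followed by cross-correlation of the pairwise matches, is sketched below; the similarity measure, threshold and profile data are illustrative assumptions, not the techniques evaluated in the paper.

```python
# Toy illustration (not the paper's algorithm) of pairwise record linkage on
# non-identifying attributes, then cross-correlation of the pairwise results
# across three profile collections.

def similarity(a, b):
    """Fraction of shared attribute values; a stand-in for a real
    record-linkage similarity measure."""
    keys = set(a) & set(b)
    if not keys:
        return 0.0
    return sum(a[k] == b[k] for k in keys) / len(keys)


def link(profiles_x, profiles_y, threshold=0.8):
    """Pairwise linkage: match profiles whose similarity exceeds a threshold."""
    return {(i, j)
            for i, p in profiles_x.items()
            for j, q in profiles_y.items()
            if similarity(p, q) >= threshold}


# Three collections containing only non-identifying attributes.
site_a = {"a1": {"city": "Eindhoven", "year": 1980, "hobby": "chess"}}
site_b = {"b1": {"city": "Eindhoven", "year": 1980, "job": "teacher"}}
site_c = {"c1": {"job": "teacher", "hobby": "chess", "year": 1980}}

ab = link(site_a, site_b)
bc = link(site_b, site_c)

# Cross-correlation: chain the pairwise matches into a combined profile.
combined = {(i, j, k) for (i, j) in ab for (j2, k) in bc if j == j2}
print(combined)  # {('a1', 'b1', 'c1')}
```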
Applied Cryptography and Network Security | 2016
Berry Schoenmakers; Meilof Veeningen; Niels de Vreede
Verifiable computation allows a client to outsource computations to a worker with a cryptographic proof of correctness of the result that can be verified faster than performing the computation. Recently, the highly efficient Pinocchio system was introduced as a major leap towards practical verifiable computation. Unfortunately, Pinocchio and other efficient verifiable computation systems require the client to disclose the inputs to the worker, which is undesirable for sensitive inputs. To solve this problem, we propose Trinocchio: a system that distributes Pinocchio to three (or more) workers, none of which individually learns which inputs it is computing on. We fully exploit the almost linear structure of Pinocchio proofs, letting each worker essentially perform the work for a single Pinocchio proof; verification by the client remains the same. Moreover, we extend Trinocchio to enable joint computation with multiple mutually distrusting inputters and outputters, while retaining very fast verification. We show the feasibility of our approach by analysing the performance of an implementation in a case study.
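The privacy idea behind distributing the prover can be illustrated with a much simpler ingredient: secret-sharing the client's inputs among three workers and letting each worker evaluate a public linear map on its shares alone, so that the results recombine at the client. The sketch below shows only this share-wise linearity idea; the sharing scheme, modulus and function are assumptions, and none of the actual Pinocchio proof machinery is reproduced.

```python
# Minimal sketch (not the Trinocchio protocol) of secret-sharing a client's
# inputs among three workers and computing a public *linear* function
# share-wise, so no single worker sees the inputs.

import secrets

P = 2**61 - 1  # prime modulus for additive sharing (illustrative choice)


def share(x, n=3):
    """Additively secret-share x among n workers."""
    parts = [secrets.randbelow(P) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % P)
    return parts


inputs = [5, 7]                      # the client's private inputs
shared = [share(x) for x in inputs]  # shared[i][w] = worker w's share of input i

coeffs, const = [3, 4], 10           # public linear function 3*x0 + 4*x1 + 10

# Each worker evaluates the public linear map on its own shares only,
# learning nothing about the underlying inputs.
worker_results = [
    sum(c * shared[i][w] for i, c in enumerate(coeffs)) % P
    for w in range(3)
]

# The client recombines the workers' results; the constant is added once.
result = (sum(worker_results) + const) % P
print(result)  # 53
```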
International Workshop on Security | 2012
Meilof Veeningen; Benne de Weger; Nicola Zannone
In recent years, a number of infrastructures have been proposed for the collection and distribution of medical data for research purposes. The design of such infrastructures is challenging: on the one hand, they should link patient data collected from different hospitals; on the other hand, they can only use anonymised data because of privacy regulations. In addition, they should allow data depseudonymisation in case research results provide information relevant for patients’ health. The privacy analysis of such infrastructures can be seen as a problem of data minimisation. In this work, we introduce coalition graphs, a graphical representation of knowledge of personal information to study data minimisation. We show how this representation allows identification of privacy issues in existing infrastructures. To validate our approach, we use coalition graphs to formally analyse data minimisation in two (de)-pseudonymisation infrastructures proposed by the Parelsnoer initiative.
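The flavour of a coalition-based analysis can be conveyed with a small, hypothetical example: for every coalition of actors, combine what its members learn from the infrastructure and check which items of personal information the coalition can link to the patient. The actors, knowledge sets and linking rule below are illustrative only and do not reproduce the paper's formal coalition graphs.

```python
# Illustrative sketch of a coalition-style data-minimisation check (not the
# paper's formal coalition graphs). Actor names and knowledge are hypothetical.

from itertools import combinations

# What each actor learns from the infrastructure (hypothetical example).
knowledge = {
    "hospital":   {("patient-id", "name"), ("patient-id", "sample")},
    "registry":   {("pseudonym", "sample")},
    "researcher": {("pseudonym", "result")},
}


def linkable(pairs):
    """Items reachable from 'name' via the coalition's combined links."""
    reachable, changed = {"name"}, True
    while changed:
        changed = False
        for a, b in pairs:
            if a in reachable and b not in reachable:
                reachable.add(b); changed = True
            if b in reachable and a not in reachable:
                reachable.add(a); changed = True
    return reachable - {"name"}


actors = sorted(knowledge)
for r in range(1, len(actors) + 1):
    for coalition in combinations(actors, r):
        pairs = set().union(*(knowledge[a] for a in coalition))
        print(coalition, "can link to the patient:", sorted(linkable(pairs)))
```

On this toy input, the hospital alone links only the sample, while the coalition of all three actors links the research result to the patient, which is the kind of data-minimisation issue such an analysis is meant to surface.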
International Conference on Progress in Cryptology | 2016
Sebastiaan de Hoogh; Berry Schoenmakers; Meilof Veeningen
For many applications of secure multiparty computation it is natural to demand that the output of the protocol is verifiable. Verifiability should ensure that incorrect outputs are always rejected, even if all parties executing the secure computation collude. Since the inputs to a secure computation are private, and potentially the outputs are private as well, adding verifiability is in general hard and costly. In this paper we focus on privacy-preserving linear programming as a typical and practically relevant case for verifiable secure multiparty computation. We introduce certificate validation as an effective technique for achieving verifiable linear programming. Rather than verifying the computation proper, which involves many iterations of the simplex algorithm, we extend the output of the secure computation with a certificate. The certificate allows for efficient and direct validation of the correctness of the output. The overhead incurred by the computation of the certificate is marginal. For the validation of a certificate we design particularly efficient distributed-prover zero-knowledge proofs, fully exploiting the fact that we can use ElGamal encryption for this purpose, hence avoiding the use of more elaborate cryptosystems such as Paillier encryption. We also formulate appropriate security definitions for our approach, and prove security for our protocols in this model, paying special attention to ensuring properties such as input independence. By means of several experiments performed in a real multi-cloud-provider environment, we show that the overall performance for verifiable linear programming is very competitive, incurring minimal overhead compared to protocols providing no correctness guarantees at all.
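The certificate idea can be shown in the clear for a tiny linear program: rather than re-running the simplex algorithm, a verifier checks a claimed optimum x against a dual certificate y, using feasibility of both and equality of the objective values (weak duality). The sketch below illustrates only this plaintext check; the paper performs such validation on secret-shared data with distributed-prover zero-knowledge proofs, which is not reproduced here.

```python
# Plaintext illustration of certificate validation for linear programming:
# verify a claimed optimum via a dual certificate instead of re-running simplex.

# Problem: maximize c^T x  subject to  A x <= b, x >= 0.
A = [[1.0, 1.0],
     [2.0, 1.0]]
b = [4.0, 6.0]
c = [3.0, 2.0]

x = [2.0, 2.0]   # claimed optimal primal solution
y = [1.0, 1.0]   # dual certificate: A^T y >= c, y >= 0

EPS = 1e-9

primal_feasible = (all(sum(A[i][j] * x[j] for j in range(2)) <= b[i] + EPS
                       for i in range(2))
                   and all(v >= -EPS for v in x))
dual_feasible = (all(sum(A[i][j] * y[i] for i in range(2)) >= c[j] - EPS
                     for j in range(2))
                 and all(v >= -EPS for v in y))
# By weak duality, equal primal and dual objective values certify optimality.
objectives_match = abs(sum(c[j] * x[j] for j in range(2))
                       - sum(b[i] * y[i] for i in range(2))) <= EPS

print(primal_feasible and dual_feasible and objectives_match)  # True
```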
International Conference for Internet Technology and Secured Transactions | 2015
Daniel Pletea; Saeed Sedghi; Meilof Veeningen; Milan Petkovic
Nowadays, cloud computing is increasing in popularity, and this raises new data protection challenges. In such distributed systems it is unrealistic to assume that the servers are fully trusted in enforcing the access policies. Attribute-Based Encryption (ABE) is one of the solutions proposed to tackle these trust problems. In ABE the data is encrypted under the access policy, and authorized users can decrypt the data only using a secret key that is associated with their attributes. The secret key is generated by a Key Generation Authority (KGA), which in small systems can be constantly audited and therefore fully trusted. In contrast, in large and distrusted systems, trusting the KGAs is questionable. This paper presents a solution which increases the trust in ABE KGAs. The solution uses several KGAs which issue secret keys only for a limited number of users. One KGA issues a secret key associated with the user's attributes, and the other authorities independently issue secret keys associated with generalized values of those attributes. Decryption is possible only if the secret keys associated with the non-generalized and generalized attributes are consistent. This mitigates the risk of unauthorized data disclosure when a few authorities are compromised.
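The consistency requirement between non-generalized and generalized attributes can be illustrated with a toy check: one authority certifies the exact attributes, another certifies generalized versions, and the two must agree under a public generalization map. The attribute names, the map and the check below are hypothetical and do not represent the ABE construction itself.

```python
# Toy illustration of the consistency idea (not the ABE construction): the
# exact attributes certified by one KGA must agree, under a public
# generalization map, with the generalized attributes certified by another.

generalize = {
    "role": {"cardiologist": "physician", "nurse": "nurse"},
    "age":  {"34": "30-39", "47": "40-49"},
}


def consistent(exact_attrs, general_attrs):
    """Check that the generalized attributes match the exact ones."""
    return all(generalize[k][v] == general_attrs.get(k)
               for k, v in exact_attrs.items())


exact = {"role": "cardiologist", "age": "34"}        # certified by one KGA
general = {"role": "physician", "age": "30-39"}      # certified by another KGA

print(consistent(exact, general))                              # True
print(consistent(exact, {"role": "physician", "age": "40-49"}))  # False
```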
International Journal of Information Security | 2014
Meilof Veeningen; Benne de Weger; Nicola Zannone
With the growing amount of personal information exchanged over the Internet, privacy is becoming more and more of a concern for users. One of the key principles in protecting privacy is data minimisation. This principle requires that only the minimum amount of information necessary to accomplish a certain goal is collected and processed. “Privacy-enhancing” communication protocols have been proposed to guarantee data minimisation in a wide range of applications. However, currently, there is no satisfactory way to assess and compare the privacy they offer in a precise way: existing analyses are either too informal and high level or specific to one particular system. In this work, we propose a general formal framework to analyse and compare communication protocols with respect to privacy by data minimisation. Privacy requirements are formalised independently of a particular protocol in terms of the knowledge of (coalitions of) actors in a three-layer model of personal information. These requirements are then verified automatically for particular protocols by computing this knowledge from a description of their communication. We validate our framework in an identity management (IdM) case study. As IdM systems are used more and more to satisfy the increasing need for reliable online identification and authentication, privacy is becoming an increasingly critical issue. We use our framework to analyse and compare four identity management systems. Finally, we discuss the completeness and (re)usability of the proposed framework.
International Conference on Trust Management | 2013
Meilof Veeningen; Benne de Weger; Nicola Zannone
More and more personal information is exchanged on-line using communication protocols. This makes it increasingly important that such protocols satisfy privacy by data minimisation. Formal methods have been used to verify privacy properties of protocols; but so far, mostly in an ad-hoc way. In previous work, we provided general definitions for the fundamental privacy concepts of linkability and detectability. However, this approach is only able to verify privacy properties for given protocol instances. In this work, by generalising the approach, we formally analyse privacy of communication protocols independently from any instance. We implement the model; identify its assumptions by relating it to the instantiated model; and show how to visualise results. To demonstrate our approach, we analyse privacy in Identity Mixer.
Computer and Communications Security | 2013
Meilof Veeningen; Mayla Brusò; Jerry den Hartog; Nicola Zannone
Systems dealing with personal information are legally required to satisfy the principle of data minimisation. Privacy-enhancing protocols use cryptographic primitives to minimise the amount of personal information exposed by communication. However, the complexity of these primitives and their interplay makes it hard for non-cryptography experts to understand the privacy implications of their use. In this paper, we present TRIPLEX, a framework for the analysis of data minimisation in privacy-enhancing protocols.