
Publications

Featured research published by Jerry den Hartog.


IEEE Computer Security Foundations Symposium | 2010

Formal Verification of Privacy for RFID Systems

Mayla Brusò; Konstantinos Chatzikokolakis; Jerry den Hartog

RFID tags are being widely employed in a variety of applications, ranging from barcode replacement to electronic passports. Their extensive use, however, in combination with their wireless nature, introduces privacy concerns as a tag could leak information about the owner’s behaviour. In this paper we define two privacy notions, unlinkability and forward privacy, using a formal model based on the applied pi calculus, and we show the relationship between them. Then we focus on a generic class of simple privacy protocols, giving sufficient and necessary conditions for unlinkability and forward privacy for this class. These conditions are based on the concept of frame independence that we develop in this paper. Finally, we apply our techniques to two identification protocols, formally proving their privacy guarantees.


Principles of Security and Trust | 2015

Analysis of XACML Policies with SMT

Fatih Turkmen; Jerry den Hartog; Silvio Ranise; Nicola Zannone

The eXtensible Access Control Markup Language (XACML) is an extensible and flexible XML language for the specification of access control policies. However, the richness and flexibility of the language, along with the verbose syntax of XML, come at a price: errors are easy to make and difficult to detect when policies grow in size. If these errors are not detected and rectified, they can result in serious data leakage and/or privacy violations, leading to significant legal and financial consequences. To assist policy authors in the analysis of their policies, several policy analysis tools have been proposed, based on different underlying formalisms. However, most of these tools either abstract away functions over non-Boolean domains, and hence cannot provide information about them, or produce very large encodings that hinder performance. In this paper, we present a generic policy analysis framework that employs SMT as the underlying reasoning mechanism. The use of SMT not only allows more fine-grained analysis of policies but also improves performance. We demonstrate that a wide range of security properties proposed in the literature can be easily modeled within the framework. A prototype implementation and its evaluation are also provided.
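To make the kind of property concrete, here is a rough sketch (not the paper's encoding): a safety property such as "no request is both permitted and denied" can be stated as a constraint over request attributes. An SMT-based analyzer would hand that constraint to a solver and ask for a model; over the toy attribute domains below (all names invented for illustration) exhaustive search suffices.

```python
from itertools import product

# Hypothetical attribute domains (not from the paper).
ROLES = ["admin", "user", "guest"]
ACTIONS = ["read", "write"]
LEVELS = range(4)  # clearance level 0..3

# Two rules written as predicates over a request.
def permit(role, action, level):
    return role == "admin" or (action == "read" and level >= 1)

def deny(role, action, level):
    return action == "write" and level < 3

# Property: no request is simultaneously permitted and denied.
# An SMT-based tool would assert permit(r) AND deny(r) and ask the
# solver for a satisfying model; here we enumerate the small domain.
conflicts = [r for r in product(ROLES, ACTIONS, LEVELS)
             if permit(*r) and deny(*r)]
print(conflicts)  # any element is a concrete conflicting request
```

Each element of `conflicts` is a counterexample request, which is exactly the kind of concrete feedback a solver model gives a policy author.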


International Conference on Selected Areas in Cryptography | 2010

Improving DPA by peak distribution analysis

Jing Pan; Jasper G. J. van Woudenberg; Jerry den Hartog; Marc F. Witteman

Differential Power Analysis (DPA) attacks extract secret key information from cryptographic devices by comparing power consumption with predicted values based on key candidates and looking for peaks that indicate a correct prediction. A general obstacle in the use of DPA is the occurrence of so-called ghost peaks, which may appear when evaluating incorrect key candidates. Some ghost peaks can be expected from the structure of the cipher and may actually leak information. We introduce a DPA enhancement technique, Euclidean Differential Power Analysis (EDPA), which uses the information leaked by the ghost peaks to diminish the ghost peaks themselves and bring forward the correct key candidate. EDPA can be combined with any standard DPA attack irrespective of the distinguisher used. We illustrate that EDPA improves on DPA with both simulations and experiments on smart cards.


Workshop on Privacy in the Electronic Society | 2012

A machine learning solution to assess privacy policy completeness: (short paper)

Elisa Costante; Yuanhao Sun; Milan Petkovic; Jerry den Hartog

A privacy policy is a legal document, used by websites to communicate how the personal data that they collect will be managed. By accepting it, the user agrees to release their data under the conditions stated by the policy. Privacy policies should provide enough information to enable users to make informed decisions. Privacy regulations support this by specifying what kind of information has to be provided. As privacy policies can be long and difficult to understand, users tend not to read them. Because of this, users generally agree to a policy without knowing what it states or whether the aspects important to them are covered at all. In this paper we present a solution to assist the user by providing a structured way to browse the policy content and by automatically assessing the completeness of a policy, i.e., the degree of coverage of the privacy categories important to the user. The privacy categories are extracted from privacy regulations, while text categorization and machine learning techniques are used to verify which categories are covered by a policy. The results show the feasibility of our approach: an automatic classifier, able to associate the right category to paragraphs of a policy with an accuracy approximating that obtainable by a human judge, can be effectively created.
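As a minimal sketch of the idea (the paper trains a proper supervised text classifier on real policy corpora; the categories and cue words below are invented), each paragraph is assigned the category it matches best, and completeness is the fraction of required categories the policy covers:

```python
# Hypothetical privacy categories and cue words, for illustration only.
CATEGORIES = {
    "data_collection": {"collect", "gather", "obtain", "information"},
    "data_sharing":    {"share", "third", "party", "disclose"},
    "cookies":         {"cookie", "cookies", "tracking"},
    "security":        {"encrypt", "secure", "protect"},
}

def classify(paragraph):
    """Assign the category whose cue words overlap the paragraph most."""
    words = set(paragraph.lower().split())
    scores = {c: len(words & cues) for c, cues in CATEGORIES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

policy = [
    "We collect personal information when you register.",
    "We may share your data with third party advertisers.",
]
covered = {classify(p) for p in policy} - {None}
# Completeness = fraction of required categories the policy covers.
completeness = len(covered) / len(CATEGORIES)
print(covered, completeness)
```

Here the toy policy covers two of four categories, so its completeness score is 0.5; the paper's classifier plays the role of `classify`, with categories drawn from privacy regulations.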


Collaboration Technologies and Systems | 2014

CollAC: Collaborative access control

Stan Damen; Jerry den Hartog; Nicola Zannone

Recent years have seen an increasing number of collaborative systems and platforms available online. As the importance of online collaboration grows, so does the need to protect the information used in these systems. In collaborative environments different users may be related to the information in different capacities. This means that traditional access control mechanisms are usually not suitable for collaborative environments as they assume that a single entity is in control of information. Moreover, when different entities can concurrently specify policies for the same resources, the decision making process is not transparent to the users who expect their policies to be enforced by the system. In this paper, we introduce a novel access control framework for collaborative systems. The framework is based on a notion of control which goes beyond the notion of ownership by accounting for the relation of a user with an object. We also make access control decisions transparent by showing where and why collaborative decisions deviate from the policies of single users.
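The transparency idea can be sketched as follows (a hypothetical illustration, not CollAC's actual combination algorithm): every user related to an object contributes a decision, the decisions are combined, and the framework reports whose individual policy the collective outcome deviates from.

```python
# Sketch: combine the decisions of all users related to an object and
# report whose policy was overridden by the collective outcome.
def combine(decisions):
    """decisions: dict user -> 'permit'/'deny'. Deny-overrides combining."""
    outcome = "deny" if "deny" in decisions.values() else "permit"
    overridden = [u for u, d in decisions.items() if d != outcome]
    return outcome, overridden

# Example: Alice is tagged in Bob's photo, so both are related to it
# in different capacities and both contribute a policy decision.
outcome, overridden = combine({"bob": "permit", "alice": "deny"})
print(outcome, overridden)
```

Reporting `overridden` back to Bob is the transparency step: he can see that the access was denied despite his own permit, and why.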


Frontiers in ICT | 2015

SAFAX – An Extensible Authorization Service for Cloud Environments

Samuel Paul Kaluvuri; Alexandru-Ionut Egner; Jerry den Hartog; Nicola Zannone

Cloud storage services have become increasingly popular in recent years. Users are often registered with multiple cloud storage services that suit different needs. However, the ad hoc manner in which data sharing between users is implemented leads to issues for these users. For instance, users are required to define different access control policies for each cloud service they use and are responsible for synchronizing their policies across different cloud providers. Users do not have access to a uniform and expressive method to deal with authorization. Current authorization solutions cannot be applied as-is, since they cannot cope with challenges specific to cloud environments. In this paper, we analyze the challenges of data sharing in multi-cloud environments and propose SAFAX, an XACML-based authorization service designed to address these challenges. SAFAX's architecture allows users to deploy their access control policies in a standard format, in a single location, and to augment policy evaluation with information from user-selectable external trust services. We describe the architecture of SAFAX and a prototype implementation based on it, illustrate its extensibility through external trust services, and discuss the benefits of using SAFAX from both the user's and the cloud provider's perspectives.
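The "external trust service" extension point can be illustrated with a small sketch (all names and the threshold logic are invented; SAFAX's real evaluation is XACML-based): the central service evaluates the user's policy and consults a pluggable trust service before permitting.

```python
# Stand-in for a user-selectable external trust service queried
# during policy evaluation (scores are made up for the example).
def reputation_service(subject):
    return {"alice": 0.9, "mallory": 0.2}.get(subject, 0.5)

def evaluate(request, policy, trust_service, threshold=0.5):
    """Permit only if the policy matches and the trust score is high enough."""
    if (request["action"], request["resource"]) not in policy:
        return "deny"
    return "permit" if trust_service(request["subject"]) >= threshold else "deny"

# One policy, deployed in a single location, used for every request.
policy = {("read", "report.pdf")}
print(evaluate({"subject": "alice", "action": "read",
                "resource": "report.pdf"}, policy, reputation_service))
print(evaluate({"subject": "mallory", "action": "read",
                "resource": "report.pdf"}, policy, reputation_service))
```

Because `trust_service` is passed in as a parameter, swapping in a different external service changes the evaluation without touching the policy, which is the extensibility point the architecture emphasizes.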


IEEE Symposium on Security and Privacy | 2010

Towards Static Flow-Based Declassification for Legacy and Untrusted Programs

Bruno Pontes Soares Rocha; S. Bandhakavi; Jerry den Hartog; William H. Winsborough; Sandro Etalle

Simple non-interference is too restrictive for specifying and enforcing information flow policies in most programs. Exceptions to non-interference are provided using declassification policies. Several approaches for enforcing declassification have been proposed in the literature. In most of these approaches, the declassification policies are embedded in the program itself or heavily tied to the variables in the program being analyzed, thereby providing little separation between the code and the policy. Consequently, the previous approaches essentially require that the code be trusted, since to trust that the correct policy is being enforced, we need to trust the source code. In this paper, we propose a novel framework in which declassification policies are related to the source code being analyzed via its I/O channels. The framework supports many of the declassification policies identified in the literature. Based on flow-based static analysis, it represents a first step towards a new approach that can be applied to untrusted and legacy source code to automatically verify that the analyzed program complies with the specified declassification policies. The analysis works by constructing a conservative approximation of expressions over input channel values that could be output by the program, and by determining whether all such expressions satisfy the declassification requirements stated in the policy. We introduce a representation of such expressions that resembles tree automata. We prove that if a program is considered safe according to our analysis then it satisfies a property we call Policy Controlled Release, which formalizes information-flow correctness according to our notion of declassification policy. We demonstrate, through examples, that our approach works for several interesting and useful declassification policies, including one involving declassification of the average of several confidential values.
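A heavily simplified illustration of the check (the real analysis builds a tree-automata-like approximation; here expressions are just tuples and names are invented): outputs are symbolic expressions over input channels, and the policy whitelists which expressions may flow out, e.g. the average of two salaries but not either salary on its own.

```python
# Policy: only the average of the two salary channels may be declassified.
ALLOWED = {("avg", ("salary1", "salary2"))}

# Conservative approximation of the expressions the program can output.
program_outputs = [
    ("avg", ("salary1", "salary2")),   # complies with the policy
    # ("id", ("salary1",)),            # would leak a raw salary: violation
]

violations = [e for e in program_outputs if e not in ALLOWED]
print("compliant" if not violations else f"violations: {violations}")
```

Uncommenting the raw-salary output makes `violations` non-empty, which is the situation the static analysis is designed to reject before the program runs.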


1st Workshop on Socio-Technical Aspects in Security and Trust (STAST) | 2011

On-line trust perception: What really matters

Elisa Costante; Jerry den Hartog; Milan Petkovic

Trust is an essential ingredient in our daily activities. The fact that these activities are increasingly carried out using the large number of available services on the Internet makes it necessary to understand how users perceive trust in the online environment. A wide body of literature concerning trust perception and ways to model it already exists. A trust perception model generally lists a set of factors influencing a person's trust in another person, a computer, or a website. Different models define different sets of factors, but a single unifying model, applicable to multiple scenarios in different settings, is still missing. Moreover, there are no firm conclusions on the importance each factor has for trust perception. In this paper, we review the existing literature and provide a general trust perception model, which is able to measure the trustworthiness of a website. Such a model takes into account a comprehensive set of trust factors, ranking them based on their importance, and can be easily adapted to different application domains. A user study has been used to determine the importance, or weight, of each factor. The results of the study show evidence that these weights differ from one application domain (e.g., e-banking or e-health) to another. We also demonstrate that the weight of certain factors is related to the user's knowledge of the IT security field. This paper constitutes a first step towards the ability to measure the trustworthiness of a website, helping developers create more trustworthy websites, and users make their trust decisions when using online services.
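A weighted-factor model of this kind reduces to a simple weighted sum; in the sketch below both the factor names and the weights are invented (the paper derives its weights from the user study and finds they differ per application domain):

```python
# Hypothetical factors and weights; the paper's weights come from a
# user study and vary by domain (e.g. e-banking vs. e-health).
WEIGHTS = {"security_seal": 0.3, "privacy_policy": 0.25,
           "design_quality": 0.2, "brand_reputation": 0.25}

def trust_score(factors):
    """factors: dict factor -> observed value in [0, 1]."""
    return sum(WEIGHTS[f] * v for f, v in factors.items())

site = {"security_seal": 1.0, "privacy_policy": 0.5,
        "design_quality": 0.8, "brand_reputation": 0.0}
print(round(trust_score(site), 3))
```

Swapping in a domain-specific weight table is what "adapting the model to different application domains" amounts to in this formulation.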


DPM/SETOP | 2012

What Websites Know About You

Elisa Costante; Jerry den Hartog; Milan Petkovic

The need for privacy protection on the Internet is well recognized. Every day, users are asked to release personal information in order to use online services and applications. Service providers do not always need all the data they gather to be able to offer a service. Thus users should be aware of what data is collected by a provider to judge whether this is too much for the services offered. Providers are obliged to describe how they treat personal data in privacy policies. By reading the policy users could discover, among other things, what personal data they agree to give away when choosing to use a service. Unfortunately, privacy policies are long legal documents that users notoriously refuse to read. In this paper we propose a solution which automatically analyzes privacy policy text and shows what personal information is collected. Our solution is based on the use of Information Extraction techniques and represents a step towards the more ambitious aim of automated grading of privacy policies.


IEEE Symposium on Security and Privacy | 2016

A Hybrid Framework for Data Loss Prevention and Detection

Elisa Costante; Davide Fauri; Sandro Etalle; Jerry den Hartog; Nicola Zannone

Data loss, i.e., the unauthorized or unwanted disclosure of data, is a major threat to modern organizations. Data Loss Protection (DLP) solutions in use nowadays either employ patterns of known attacks (signature-based) or try to find deviations from normal behavior (anomaly-based). While signature-based solutions provide accurate identification of known attacks and, thus, are suitable for the prevention of these attacks, they cannot cope with unknown attacks, nor with attackers who follow unusual paths (like those known only to insiders) to carry out their attack. On the other hand, anomaly-based solutions can find unknown attacks but typically have a high false positive rate, limiting their applicability to the detection of suspicious activities. In this paper, we propose a hybrid DLP framework that combines signature-based and anomaly-based solutions, enabling both detection and prevention. The framework uses an anomaly-based engine that automatically learns a model of normal user behavior, allowing it to flag when insiders carry out anomalous transactions. Typically, anomaly-based solutions stop at this stage. Our framework goes further in that it exploits an operator's feedback on alerts to automatically build and update signatures of attacks, which are used to block undesired transactions in time, before they can cause any damage.
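The detection-to-prevention loop can be sketched as follows (a hypothetical toy, not the paper's engine: the "model of normal behavior" is just a set of known-good transactions, and operator feedback promotes an alert into a blocking signature):

```python
# Learned model of normal behavior and the signature store built
# from operator feedback (both simplified to sets for illustration).
normal = {("alice", "SELECT", "orders"), ("alice", "SELECT", "customers")}
signatures = set()   # confirmed-malicious patterns

def handle(tx, operator_confirms_malicious=False):
    if tx in signatures:
        return "blocked"                    # prevention: known-bad pattern
    if tx not in normal:
        if operator_confirms_malicious:     # feedback turns alert into rule
            signatures.add(tx)
        return "alert"                      # detection: anomalous behavior
    return "allowed"

tx = ("alice", "SELECT", "salaries")        # insider browsing an unusual table
print(handle(tx, operator_confirms_malicious=True))  # first seen: alert
print(handle(tx))                                    # now a signature: blocked
```

The first occurrence only raises an alert (detection); once the operator confirms it, the same transaction is blocked outright (prevention), which is the hybrid behavior the framework targets.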

Collaboration

Top co-authors of Jerry den Hartog:

Nicola Zannone (Eindhoven University of Technology)
Elisa Costante (Eindhoven University of Technology)
Mayla Brusò (Eindhoven University of Technology)
Davide Fauri (Eindhoven University of Technology)
Alexandru-Ionut Egner (Eindhoven University of Technology)
Duc Luu (Eindhoven University of Technology)
Fred Spiessens (Eindhoven University of Technology)