Publication

Featured research published by Nick Papanikolaou.


IFIP PrimeLife International Summer School on Privacy and Identity Management for Life | 2010

A Conceptual Model for Privacy Policies with Consent and Revocation Requirements

Marco Casassa Mont; Siani Pearson; Sadie Creese; Michael Goldsmith; Nick Papanikolaou

This paper proposes a conceptual model for privacy policies that takes into account privacy requirements arising from different stakeholders with legal, business and technical backgrounds. Current approaches to privacy management are either high-level, enforcing privacy of personal data through legal compliance, risk and impact assessments, or low-level, focusing on the technical implementation of access controls to personal data held by an enterprise. High-level approaches tend to address privacy as an afterthought in ordinary business practice and involve ad hoc enforcement practices; low-level approaches often leave out important legal and business considerations, focusing solely on technical management of privacy policies. Hence neither is a panacea, and low-level approaches are often not adopted in real environments. Our conceptual model provides a means to express privacy policy requirements as well as users’ privacy preferences. It enables structured reasoning about containment and implementation between various policies at the high level, and easy traceability into the low-level policy implementations. It thus offers a means to reason about correctness that links low-level privacy management mechanisms to stakeholder requirements, thereby encouraging exploitation of the low-level methods. We also present the notion of a consent and revocation policy. A consent and revocation policy differs from a privacy policy in that it defines, not an enterprise’s practices with regard to personal data, but rather, for each item of personal data held by an enterprise, what consent preferences a user may express and to what degree, and in what ways the user can revoke that data. This builds on earlier work on defining the different forms of revocation for personal data, and on formal models of consent and revocation processes.
The work and approach discussed in this paper are being carried out in the context of the UK collaborative project EnCoRe (Ensuring Consent and Revocation).
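The distinction the paper draws — a consent and revocation policy attaches to each item of personal data, rather than to enterprise practice as a whole — can be illustrated with a minimal sketch. The class, option and revocation-form names below are invented for illustration and are not the paper's notation:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical revocation forms; the paper builds on earlier work defining
# several distinct forms of revocation for personal data.
class Revocation(Enum):
    DELETE = "delete"              # remove the data entirely
    STOP_PROCESSING = "stop"       # retain the data but cease using it
    WITHDRAW_SHARING = "unshare"   # stop disclosure to third parties

@dataclass
class ConsentRevocationPolicy:
    data_item: str                        # e.g. "email_address"
    consent_options: list[str]            # preferences the user may express
    revocation_options: list[Revocation]  # ways the user can revoke

# One policy per item of personal data held by the enterprise.
policy = ConsentRevocationPolicy(
    data_item="email_address",
    consent_options=["marketing", "service_notifications"],
    revocation_options=[Revocation.STOP_PROCESSING, Revocation.DELETE],
)
assert Revocation.DELETE in policy.revocation_options
```

The point of the sketch is only that consent choices and revocation options are enumerated per data item, which is what distinguishes this from a conventional enterprise-wide privacy policy.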


IEEE International Conference on Cloud Computing Technology and Science | 2013

Privacy Risk, Security, Accountability in the Cloud

Marianthi Theoharidou; Nick Papanikolaou; Siani Pearson; Dimitris Gritzalis

Migrating data, applications or services to the cloud exposes a business to a number of new threats and vulnerabilities, which need to be properly assessed. Assessing privacy risk in cloud environments remains a complex challenge, and mitigating this risk requires trusting a cloud service provider to implement suitable privacy controls. Furthermore, auditors and authorities need to be able to hold service providers accountable for their actions, enforcing rules and regulations through penalties and other mechanisms, and ensuring that any problems are remedied promptly and adequately. This paper examines privacy risk assessment for the cloud, and identifies threats, vulnerabilities and countermeasures that clients and providers should implement in order to achieve privacy compliance and accountability.


IFIP PrimeLife International Summer School on Privacy and Identity Management for Life | 2009

Reaching for Informed Revocation: Shutting Off the Tap on Personal Data

Ioannis Agrafiotis; Sadie Creese; Michael Goldsmith; Nick Papanikolaou

We introduce a revocation model for handling personal data in cyberspace. The model is motivated by a series of focus groups undertaken by the EnCoRe project, aimed at understanding the control requirements of a variety of data subjects. We observe that there is a lack of understanding of the various technical options available for implementing revocation preferences, and introduce the concept of informed revocation, by analogy to Faden and Beauchamp’s informed consent. We argue that we can overcome the limitations associated with informed consent via the implementation of EnCoRe technology solutions. Finally, we apply our model to a number of data-handling scenarios that have arisen in the context of the EnCoRe research project and demonstrate its validity. We have found that data subjects tend to alter their default privacy preferences when they are informed of all the different types of revocation available to them.


IEEE International Conference on Cloud Computing Technology and Science | 2014

A toolkit for automating compliance in cloud computing services

Nick Papanikolaou; Siani Pearson; Marco Casassa Mont; Ryan K.L. Ko

We present an integrated approach for automating service providers’ compliance with data protection laws and regulations, and with business and technical requirements, in cloud computing. The techniques we propose include natural language analysis (of legislative and regulatory texts, and of corporate security rulebooks) and extraction of enforceable rules, the use of sticky policies, automated policy enforcement, and active monitoring of data, particularly in cloud environments. We are currently developing a software tool for semantic annotation and natural language processing of cloud terms of service and other related policy texts. We describe our implementations of two parts of the proposed toolkit, namely the semantic annotation editor and the EnCoRe policy enforcement framework. We also identify opportunities for future software development in the area of cloud computing compliance.
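As a rough illustration of the "extraction of enforceable rules" step — not the authors' actual tool, which relies on semantic annotation and natural language processing — a toy pattern matcher for deontic phrases in policy text might look like this. The pattern, sample text and field names are invented for illustration:

```python
import re

# Match sentences of the form "<Actor> must/must not/shall <action>."
# "must not" is listed before "must" so the longer alternative wins.
RULE_PATTERN = re.compile(
    r"(?P<actor>[A-Z][\w ]*?)\s+(?P<modality>must not|must|shall)\s+(?P<action>[^.]+)\."
)

def extract_rules(text: str) -> list[dict]:
    """Return one dict (actor, modality, action) per deontic sentence found."""
    return [m.groupdict() for m in RULE_PATTERN.finditer(text)]

tos = ("The provider must not disclose customer data to third parties. "
       "The customer shall notify the provider of any breach.")
rules = extract_rules(tos)
assert rules[0]["modality"] == "must not"
assert rules[1]["actor"] == "The customer"
```

A real implementation would need proper linguistic analysis rather than regular expressions; the sketch only shows the shape of the extracted, machine-readable rules that a policy enforcement framework could consume.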


IFIP PrimeLife International Summer School on Privacy and Identity Management for Life | 2010

Applying Formal Methods to Detect and Resolve Ambiguities in Privacy Requirements

Ioannis Agrafiotis; Sadie Creese; Michael Goldsmith; Nick Papanikolaou

In this paper, we demonstrate how formal methods can be used to unambiguously express privacy requirements. We focus on requirements for consent and revocation controls in a real world case study that has emerged within the EnCoRe project. We analyse the ambiguities and issues that arise when requirements expressed in natural language are transformed into a formal notation, and propose solutions to address these issues. These ambiguities were brought to our attention only through the use of a formal notation, which we have designed specifically for this purpose.


International Workshop on Requirements Engineering and Law | 2013

Mapping legal requirements to IT controls

Travis D. Breaux; David G. Gordon; Nick Papanikolaou; Siani Pearson

Information technology (IT) controls are reusable system requirements that IT managers, administrators and developers use to demonstrate compliance with international standards, such as the ISO 27000 series. Because controls are reusable, they tend to cover best practice independently of what specific government laws may require. However, because IT companies have already invested considerable effort in linking controls to their existing systems, aligning controls with regulations can yield important savings by avoiding noncompliance or unnecessary redesign. We report the results of a case study to align legal requirements from the U.S. and India that govern healthcare systems with three popular control catalogues: NIST 800-53, ISO/IEC 27002:2009 and the Cloud Security Alliance CCM v1.3. The contributions include a repeatable protocol for mapping controls, heuristics to explain the types of mappings that may arise, and guidance for addressing incomplete mappings.
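The core idea — mapping each legal requirement onto entries from control catalogues and flagging requirements with no matching control — can be sketched as a simple table. The requirement texts and control identifiers below are invented for illustration (loosely modelled on the style of NIST 800-53 and CSA CCM entries), not taken from the paper's case study:

```python
# Each legal requirement maps to zero or more catalogue controls.
mapping = {
    "REQ-1 notify patients of a breach": ["NIST 800-53 IR-6", "CCM SEF-03"],
    "REQ-2 encrypt records at rest":     ["NIST 800-53 SC-28"],
    "REQ-3 retain records for 6 years":  [],  # incomplete mapping: no control found
}

def incomplete(mapping: dict[str, list[str]]) -> list[str]:
    """Requirements with no mapped control need bespoke treatment."""
    return [req for req, controls in mapping.items() if not controls]

assert incomplete(mapping) == ["REQ-3 retain records for 6 years"]
```

The paper's contribution is the repeatable protocol and heuristics for producing such mappings; the sketch only shows why detecting incomplete mappings matters, since an unmapped requirement signals a compliance gap no reused control covers.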


FTRA International Conference on Secure and Trust Computing, Data Management, and Application | 2011

Towards Natural-Language Understanding and Automated Enforcement of Privacy Rules and Regulations in the Cloud: Survey and Bibliography

Nick Papanikolaou; Siani Pearson; Marco Casassa Mont

In this paper we survey existing work on automatically processing legal, regulatory and other policy texts for the extraction and representation of privacy knowledge and rules. Our objective is to link and apply some of these techniques to policy enforcement and compliance, to provide a core means of achieving and maintaining customer privacy in an enterprise context, particularly where data is stored and processed in cloud data centres. We sketch our thoughts on how this might be done, given the many approaches, so far developed in strict isolation from one another, to natural-language analysis of legal and other prescriptive texts, knowledge extraction, semantic representation, and automated enforcement of privacy rules.


OTM Confederated International Conferences "On the Move to Meaningful Internet Systems" | 2012

Natural Language Processing of Rules and Regulations for Compliance in the Cloud

Nick Papanikolaou

We discuss ongoing work on developing tools and techniques for understanding natural-language descriptions of security and privacy rules, particularly in the context of cloud computing services. In particular, we present a three-part toolkit for analyzing and processing texts, and for enforcing privacy and security rules extracted from those texts. We are interested in developing efficient, accurate technologies to reduce the time spent analyzing and reasoning about new privacy laws and security rules within the enterprise. We describe the tools we have developed for semantic annotation and for information extraction; these are specifically intended for analysis of cloud terms of service, and are therefore designed to help with self-compliance, but the techniques involved should generalize to other relevant texts, especially rules and regulations for data protection.


SIGACT News | 2010

The Space and Motion of Communicating Agents, by Robin Milner. Cambridge University Press, 2009. ISBN 978-0-521-73833-0

Nick Papanikolaou

Milner develops in this book an approach to modeling ubiquitous systems; he begins by observing that modern computing, or rather informatics, is more about communication than it is about calculation. He emphasizes that, as computing devices increasingly pervade our lives, we will need means of understanding how they interoperate. Even though we may understand thoroughly the functionality of any one device or component, we need to be able to reason about the ways in which that device or component interacts with others, and importantly how an entire network of devices can meet high-level goals and requirements including, among several others, security and privacy constraints. An analogy is made in the book between properties of ubiquitous systems and the tonal qualities possessed by an orchestra; like a complex system of agents acting and interacting to achieve some goal, the instruments in an orchestra combine in various subtle ways to produce the overall sound. There are qualities of the overall sound which may be translated, or reduced, to qualities of individual instruments, or subgroups of instruments. A ubiquitous system comprises thousands of components, sensors, agents, all of which operate in unison, so the analogy to an orchestra works but there is a difference of scale. While methods for understanding complex systems exist in the natural sciences, it is still a challenge to develop sufficiently general informatic modeling techniques. Such techniques will be essential in order to design and analyze the information systems of tomorrow, and to this end Milner proposes the bigraph model. The purpose of the model is to express clearly, on one hand, the structure of ubiquitous systems. This is very significant as there are likely to be several common structures, or system architectures, that will emerge in practical applications; it will be useful to have means of identifying these common structures and extracting key properties. 
Furthermore, ubiquitous systems are self-organizing, in that they change their own structure; this poses various design and implementation challenges for humans, who will need ways of visualizing and reasoning about such changes. The two key aspects that need to be accounted for in order to describe these systems are locality and connectivity, or, as Milner prefers, placing and linking. A formalism suited to the description


SIGACT News | 2012

Review of Algorithms and Theory of Computation Handbook, edited by Mikhail J. Atallah and Marina Blanton

Nick Papanikolaou

This pair of volumes is an extensive compendium of survey articles on issues of current importance in theoretical computer science research. Without a shadow of a doubt it is a much-needed resource for any theory researcher, as well as a detailed introduction to the field for non-specialists. The first volume covers algorithms and algorithmic techniques, aspects of data structures, complexity theory, formal languages and computability, in addition to a handful of chapters on related but more specialized topics, such as learning theory, coding theory, and parallel and distributed computing. This volume is in itself a treasure trove of material in core computer science, and can serve as a replacement for more specialized texts when only a good foundation is required. The articles are written by experts in the field and are eminently readable. There are a couple of very useful chapters on searching and sorting algorithms, which can be used as a complement and gentle introduction to Knuth’s seminal tome [1] on the subject. But to have, in the very same volume, an equally accessible introduction to convex optimization, and another on simulated annealing, makes for a rich and enjoyable afternoon read. In fact, you may find yourself wondering many times whether you could change research topic or theme, as this volume is rather inspiring! While there is still an emphasis on algorithms and algorithmic approaches, Volume II has a broader remit and introduces several application areas, including AI and robotics, cryptography, voting and auction schemes, privacy and databases, computational biology, and grid and DNA computing. What a feast! The coverage of cryptography, cryptanalysis and security protocols is particularly extensive, which makes this a useful reference for security research. The chapter on privacy and anonymity is particularly relevant and a very timely review of k-anonymity techniques.
In the wake of Narayanan and Shmatikov’s recent award-winning results on the de-anonymisation of the Netflix prize database [2], this material is particularly interesting and should serve as a foundation for further research on anonymity and anonymisation methods. It is not possible to review this pair of volumes in the usual manner, due to their length and the sheer breadth of material. I will, however, provide the tables of contents for both volumes in Section 2, so that readers of this review can get an inkling of the variety therein. I will then give and justify my opinion of this text, concluding with references.

Collaboration

Top Co-Authors
David G. Gordon

Carnegie Mellon University
