Jason R. C. Nurse
University of Oxford
Publications
Featured research published by Jason R. C. Nurse.
IEEE Symposium on Security and Privacy | 2014
Jason R. C. Nurse; Oliver Buckley; Philip A. Legg; Michael Goldsmith; Sadie Creese; Gordon R. T. Wright; Monica T. Whitty
The threat that insiders pose to businesses, institutions and governmental organisations continues to be of serious concern. Recent industry surveys and academic literature provide unequivocal evidence to support the significance of this threat and its prevalence. Despite this, however, there is still no unifying framework to fully characterise insider attacks and to facilitate an understanding of the problem, its many components and how they all fit together. In this paper, we focus on this challenge and put forward a grounded framework for understanding and reflecting on the threat that insiders pose. Specifically, we propose a novel conceptualisation that is heavily grounded in insider-threat case studies, existing literature and relevant psychological theory. The framework identifies several key elements within the problem space, concentrating not only on noteworthy events and indicators (technical and behavioural) of potential attacks, but also on attackers (e.g., the motivation behind malicious threats and the human factors related to unintentional ones), and on the range of attacks being witnessed. The real value of our framework is in its emphasis on bringing together and defining clearly the various aspects of insider threat, all based on real-world cases and pertinent literature. This can therefore act as a platform for general understanding of the threat, and also for reflection, modelling past attacks and looking for useful patterns.
2011 Third International Workshop on Cyberspace Safety and Security (CSS) | 2011
Jason R. C. Nurse; Sadie Creese; Michael Goldsmith; Koen Lamberts
Usability is arguably one of the most significant social topics and issues within the field of cybersecurity today. Supported by the need for confidentiality, integrity, availability and other concerns, security features have become standard components of the digital environment that pervades our lives, requiring use by novices and experts alike. As security features are exposed to wider cross-sections of society, it is imperative that these functions are highly usable. This is especially important because poor usability in this context typically translates into inadequate application of cybersecurity tools and functionality, thereby ultimately limiting their effectiveness. With this goal of highly usable security in mind, there has been a plethora of studies in the literature focused on identifying security usability problems and proposing guidelines and recommendations to address them. Our paper aims to contribute to the field by consolidating a number of existing design guidelines and defining an initial core list for future reference. Whilst investigating this topic, we take the opportunity to provide an up-to-date review of pertinent cybersecurity usability issues and evaluation techniques applied to date. We expect this research paper to be of use to researchers and practitioners with an interest in cybersecurity systems who appreciate the human and social elements of design.
Trust, Security and Privacy in Computing and Communications | 2012
Sadie Creese; Michael Goldsmith; Jason R. C. Nurse; Elizabeth Phillips
Privacy and security within Online Social Networks (OSNs) have become a major concern over recent years. As individuals continue to actively use and engage with these mediums, one of the key questions that arises pertains to what unknown risks users face as a result of unchecked publishing and sharing of content and information in this space. There are numerous tools and methods under development that claim to facilitate the extraction of specific classes of personal data from online sources, either directly or through correlation across a range of inputs. In this paper we present a model which specifically aims to understand the potential risks faced should all of these tools and methods be accessible to a malicious entity. The model enables easy and direct capture of the data extraction methods through the encoding of a data-reachability matrix in which each row represents an inference or data-derivation step. Specifically, the model elucidates potential linkages between data typically exposed on social-media and networking sites, and other potentially sensitive data which may prove to be damaging in the hands of malicious parties, e.g., fraudsters, stalkers and other online and offline criminals. In essence, we view this work as a key method by which we might make cyber risk more tangible to users of OSNs.
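The core idea of a data-reachability matrix, where each row is an inference step from attributes an attacker already holds to a new attribute, can be sketched as a transitive-closure computation. The rules and attribute names below are purely illustrative assumptions, not the matrix from the paper:

```python
# Hypothetical inference rules in the spirit of a data-reachability matrix:
# each rule says that if an adversary holds all of the input attributes,
# one derivation step yields the output attribute.
RULES = [
    ({"name", "employer"}, "work_location"),
    ({"name", "date_of_birth"}, "age"),
    ({"work_location", "check_in_posts"}, "daily_routine"),
    ({"daily_routine", "home_town"}, "home_address"),
]

def reachable(seed):
    """Return every attribute derivable from the seed set (transitive closure)."""
    known = set(seed)
    changed = True
    while changed:
        changed = False
        for inputs, output in RULES:
            if output not in known and inputs <= known:
                known.add(output)
                changed = True
    return known
```

Running `reachable` over the attributes a profile exposes shows how seemingly innocuous items can chain into sensitive ones several derivation steps away.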
Computer Fraud & Security | 2015
Ioannis Agrafiotis; Jason R. C. Nurse; Oliver Buckley; Phil Legg; Sadie Creese; Michael Goldsmith
The threat that insiders pose to businesses, institutions and governmental organisations continues to be of serious concern. Recent industry surveys provide unequivocal evidence to support the significance of this threat and its prevalence in enterprises today. In an attempt to address this challenge, several approaches and systems have been proposed by practitioners and researchers. These range from defining the insider threat and exploring the human and psychological factors involved, through to the detection and deterrence of these threats via technological and behavioural theories. Few solutions, however, consider all the technical, organisational and behavioural aspects. In this research, Ioannis Agrafiotis, Jason R. C. Nurse, Oliver Buckley, Phil Legg, Sadie Creese and Michael Goldsmith define attack patterns that could be key in assisting insider-threat detection, based on 120 real-world case studies. They present their findings, representing each case study as a series of attack steps and identifying common trends between different attacks.
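Representing each case study as an ordered sequence of attack steps and then surfacing recurring patterns can be illustrated by counting shared contiguous step pairs across cases. The step vocabulary below is an assumption for illustration only, not the taxonomy used in the article:

```python
from collections import Counter

# Hypothetical encodings of insider-threat cases as ordered attack steps.
cases = [
    ["disgruntlement", "privilege_abuse", "data_exfiltration"],
    ["recruitment", "privilege_abuse", "data_exfiltration", "cover_up"],
    ["privilege_abuse", "sabotage"],
]

def common_subsequences(cases, n=2):
    """Count contiguous step n-grams to surface recurring attack patterns."""
    counts = Counter()
    for steps in cases:
        for i in range(len(steps) - n + 1):
            counts[tuple(steps[i:i + n])] += 1
    return counts
```

A pair such as privilege abuse followed by data exfiltration that appears in many cases becomes a candidate detection pattern.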
Journal of Trust Management | 2014
Jason R. C. Nurse; Ioannis Agrafiotis; Michael Goldsmith; Sadie Creese; Koen Lamberts
Information is the currency of the digital age: it is constantly communicated, exchanged and bartered, most commonly to support human understanding and decision-making. While the Internet and Web 2.0 have been pivotal in streamlining many of the information creation and dissemination processes, they have significantly complicated matters for users as well. Most notably, the substantial increase in the amount of content available online has introduced an information overload problem, while also exposing content with largely unknown levels of quality, leaving many users with the difficult question of what information to trust. In this article we approach this problem from two perspectives, both aimed at supporting human decision-making using online information. First, we focus on the task of measuring the extent to which individuals should trust a piece of openly-sourced information (e.g., from Twitter, Facebook or a blog); this considers a range of factors and metrics in information provenance, quality and infrastructure integrity, and the person's own preferences and opinion. Having calculated a measure of trustworthiness for an information item, we then consider how this rating and the related content could be communicated to users in a cognitively-enhanced manner, so as to build confidence in the information only where and when appropriate. This work concentrates on a range of potential visualisation techniques for trust, with special focus on radar graphs, and draws inspiration from the fields of Human-Computer Interaction (HCI), System Usability and Risk Communication. The novelty of our contribution stems from the comprehensive approach taken to address this very topical problem, ensuring that the trustworthiness of openly-sourced information is adequately measured and effectively communicated to users, thus enabling them to make informed decisions.
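Combining per-factor scores such as provenance, quality and infrastructure integrity into a single trustworthiness rating can be sketched as a weighted aggregation. The factor names, values and weights below are illustrative assumptions, not the metric defined in the article:

```python
def trust_score(factors, weights):
    """Weighted average of per-factor scores in [0, 1]; weights need not sum to 1."""
    total_w = sum(weights[f] for f in factors)
    return sum(factors[f] * weights[f] for f in factors) / total_w

# Illustrative scores for a single openly-sourced information item.
factors = {"provenance": 0.8, "quality": 0.6, "infrastructure": 0.9, "user_opinion": 0.5}
# Illustrative weights, e.g. reflecting a user who values provenance most.
weights = {"provenance": 3, "quality": 2, "infrastructure": 1, "user_opinion": 1}
```

The resulting score stays in [0, 1], and the per-factor breakdown is exactly the kind of decomposition a radar-graph visualisation could then present to the user.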
IEEE International Conference on Technologies for Homeland Security | 2013
Sadie Creese; Thomas Gibson-Robinson; Michael Goldsmith; Duncan Hodges; Dee Kim; Oriana J. Love; Jason R. C. Nurse; Bill Pike; Jean Scholtz
We present two tools for analysing identity in support of homeland security. Both are based upon the SuperIdentity model that brings together cyber and physical spaces into a single understanding of identity. Between them, the tools provide support for defensive, information gathering and capability planning operations. The first tool allows an analyst to explore and understand the model, and to apply it to risk-exposure assessment activities for a particular individual, e.g. an influential person in the intelligence or government community, or a commercial company board member. It can also be used to understand critical capabilities in an organization's identity-attribution process, and so used to plan resource investment. The second tool, referred to as Identity Map, is designed to support investigations requiring enrichment of identities and the making of attributions. Both are currently working prototypes.
Computer and Communications Security | 2016
Tabish Rashid; Ioannis Agrafiotis; Jason R. C. Nurse
The threat that malicious insiders pose towards organisations is a significant problem. In this paper, we investigate the task of detecting such insiders through a novel method of modelling a user's normal behaviour in order to detect anomalies in that behaviour which may be indicative of an attack. Specifically, we make use of Hidden Markov Models to learn what constitutes normal behaviour, and then use them to detect significant deviations from that behaviour. Our results show that this approach is indeed successful at detecting insider threats, and in particular is able to accurately learn a user's behaviour. These initial tests improve on existing research and may provide a useful approach in addressing this part of the insider-threat challenge.
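The core mechanism, scoring an observation sequence's likelihood under an HMM of normal behaviour and flagging low-likelihood sequences as anomalous, can be sketched with a tiny discrete HMM and the forward algorithm. The states, activity labels and probabilities below are hand-set assumptions for illustration; in the paper the parameters would be learned from a user's historical activity:

```python
import math

# Tiny illustrative HMM: hidden states and hand-set parameters (assumptions).
states = ["routine", "unusual"]
start = {"routine": 0.9, "unusual": 0.1}
trans = {"routine": {"routine": 0.95, "unusual": 0.05},
         "unusual": {"routine": 0.3, "unusual": 0.7}}
emit = {"routine": {"login": 0.5, "email": 0.4, "bulk_copy": 0.1},
        "unusual": {"login": 0.1, "email": 0.1, "bulk_copy": 0.8}}

def log_likelihood(obs):
    """Forward algorithm: log P(observations); low values flag anomalous days."""
    alpha = {s: start[s] * emit[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: sum(alpha[p] * trans[p][s] for p in states) * emit[s][o]
                 for s in states}
    return math.log(sum(alpha.values()))
```

A day dominated by bulk copying scores markedly lower than a routine login-and-email day, which is the deviation signal an insider-threat detector would act on.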
ACM Symposium on Applied Computing | 2016
Richard Everett; Jason R. C. Nurse; Arnau Erola
Technology is rapidly evolving, and with it comes increasingly sophisticated bots (i.e. software robots) which automatically produce content to inform, influence, and deceive genuine users. This is a particular problem for social media networks, where content tends to be extremely short, informally written, and full of inconsistencies. Motivated by the rise of bots on these networks, we investigate the ease with which a bot can deceive a human. In particular, we focus on deceiving a human into believing that an automatically generated sample of text was written by a human, as well as analysing which factors affect how convincing the text is. To accomplish this, we train a set of models to write text about several distinct topics, to simulate a bot's behaviour, which are then evaluated by a panel of judges. We find that: (1) typical Internet users are twice as likely to be deceived by automated content than security researchers; (2) text that disagrees with the crowd's opinion is more believably human; (3) light-hearted topics such as Entertainment are significantly easier to deceive with than factual topics such as Science; and (4) automated text on Adult content is the most deceptive regardless of a user's background.
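The idea of a model trained on a topic's text that then generates new, plausible-looking content can be illustrated with a word-level Markov chain, a far simpler stand-in for the generative models evaluated in the study (the corpus and function names here are assumptions):

```python
import random
from collections import defaultdict

def build_chain(corpus):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length=8, seed=0):
    """Random walk over the chain, producing a short run of text."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)
```

Even such a crude model produces locally fluent word sequences, which hints at why short, informal social-media text is fertile ground for deception.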
Conference on Privacy, Security and Trust | 2014
Jason R. C. Nurse; Jess Pumphrey; Thomas Gibson-Robinson; Michael Goldsmith; Sadie Creese
Technology is present in every area of our lives and, for many, life without it has become unthinkable. As a consequence of this dependence and the extent to which technology devices (computers, tablets and smartphones) are being used for work and social activities, a clear coupling between devices and their owners can now be observed. By coupling, we specifically refer to the fact that information present on a person's device, be it user-generated or created by the native OS, can produce great insight into their life. In this paper, we look to exploit this coupling to investigate whether connections between technology devices recorded in system log-files can be used to make inferences about the social relationships between device owners. A key motivation here is to better understand and elucidate the privacy risks associated with the digital footprints that we as humans (often inadvertently) create. Our work draws upon Social Network Analysis and basic Computer Forensics to develop and achieve the inference goals. From our preliminary experimentation, we demonstrate that human social relationships can indeed be inferred even within our limited initial scope. To further investigate the level of privacy exposure from technology-level links, we outline a more comprehensive plan of experimentation that will be conducted in future work.
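Inferring social ties from device connections in log files can be sketched as building a co-occurrence graph: owners whose devices repeatedly appear on the same network are linked. The log records, owner names and threshold below are illustrative assumptions, not the paper's data or method:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical (network, device_owner) records, standing in for the device
# connections a forensic examination of system log-files might recover.
log = [
    ("home_wifi_A", "alice"), ("home_wifi_A", "bob"),
    ("office_lan", "alice"), ("office_lan", "carol"),
    ("cafe_wifi", "carol"),
]

def infer_ties(log, threshold=1):
    """Link owners whose devices share a network at least `threshold` times,
    as a crude proxy for a social relationship between them."""
    seen = defaultdict(set)
    for place, owner in log:
        seen[place].add(owner)
    ties = defaultdict(int)
    for owners in seen.values():
        for a, b in combinations(sorted(owners), 2):
            ties[(a, b)] += 1
    return {pair for pair, n in ties.items() if n >= threshold}
```

Raising the threshold trades recall for precision, since one-off co-appearances on a shared network (a cafe, an airport) say little about an actual relationship.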
Trust, Security and Privacy in Computing and Communications | 2013
Jason R. C. Nurse; Ioannis Agrafiotis; Sadie Creese; Michael Goldsmith; Koen Lamberts
In light of the significant amount of information available online today and its potential application to a range of situations, the importance of, first, identifying trustworthy information and, second, building user confidence in that information is paramount. With this in mind, we have developed a novel trustworthiness metric which is designed to provide a relative score based on several key factors that influence trust, such as information's provenance and quality, and the integrity of the infrastructure through which the information passes. In this paper we consider whether providing insight into the various factors that make up the resulting trustworthiness score actually helps to build trust in the metric itself, and whether users can successfully understand the advice being conveyed. Specifically, we present here the results of experiments which explore whether or not the visual interface that enables users to understand how the metric is composed of a combination of scores, across a range of factors, is a feature they are cognitively able to process, and whether it might help to build confidence in the trustworthiness advice being provided.