Tristan Caulfield
University College London
Publications
Featured research published by Tristan Caulfield.
IEEE Symposium on Security and Privacy | 2015
Tristan Caulfield; David J. Pym
A rigorous methodology, grounded in mathematical systems modeling and the economics of decision making, can help security managers explore the operational consequences of their design choices and make better decisions.
Decision and Game Theory for Security | 2017
Tristan Caulfield; Christos Ioannidis; David J. Pym
The U.S. Vulnerabilities Equities Process (VEP) is used by the government to decide whether to retain or disclose zero-day vulnerabilities in its possession. Both actions carry costs and benefits: disclosing a vulnerability allows it to be patched and systems to be made more secure, while retaining it allows the government to conduct intelligence, offensive national security, and law enforcement activities. While redacted documents give some information about the organization of the VEP, very little is publicly known about the decision-making process itself; most of the detail about the criteria used comes from a blog post by Michael Daniel, the former White House Cybersecurity Coordinator. Although the decision to disclose or retain a vulnerability is often framed as a binary choice, it should instead be seen as a decision about timing: determining when to disclose. In this paper, we present a model that shows how the criteria could be combined to determine the optimal time for the government to disclose a vulnerability, with the aim of providing insight into how a more formal, repeatable decision-making process might be achieved. We show how the recent case of the WannaCry malware, which made use of a leaked NSA zero-day exploit, EternalBlue, can be interpreted using the model.
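The timing framing can be sketched with a toy calculation. This is not the paper's model: the parameters (`intel_value`, `rediscovery_rate`, `damage`) and the linear utility are invented for illustration, standing in for the VEP criteria the paper combines.

```python
# Illustrative sketch: choosing a disclosure time for a vulnerability.
# All parameters are hypothetical; the paper combines the actual VEP
# criteria, not these toy quantities.

def optimal_disclosure_time(horizon, intel_value, rediscovery_rate, damage):
    """Return the week that maximises the expected net benefit of retention.

    Each week retained yields `intel_value`, but the chance that an
    adversary independently rediscovers the vulnerability grows with
    time, and a rediscovery before disclosure costs `damage`.
    """
    best_t, best_utility = 0, float("-inf")
    for t in range(horizon + 1):
        p_rediscovered = 1 - (1 - rediscovery_rate) ** t  # by week t
        utility = intel_value * t - damage * p_rediscovered
        if utility > best_utility:
            best_t, best_utility = t, utility
    return best_t

# With low rediscovery risk, retaining for the whole horizon pays off:
print(optimal_disclosure_time(52, intel_value=1.0, rediscovery_rate=0.01, damage=100))  # 52
# With high rediscovery risk, immediate disclosure is optimal:
print(optimal_disclosure_time(52, intel_value=1.0, rediscovery_rate=0.2, damage=100))  # 0
```

Even this toy version reproduces the qualitative point of the timing view: the answer is not "disclose or retain" but "how long to retain", and it moves continuously with the rediscovery risk.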
Financial Cryptography | 2016
Tristan Caulfield; Christos Ioannidis; David J. Pym
We introduce a model for examining the factors that lead to the adoption of new encryption technologies. Building on the work of Brock and Durlauf, the model describes how agents make choices, in the presence of social interaction, between competing technologies given their relative cost, functionality, and usability. We apply the model to examples of the adoption of encryption in communication (email and messaging) and storage technologies (self-encrypting drives), and also consider our model's predictions for the evolution of technology adoption over time.
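The Brock–Durlauf framework the abstract refers to can be illustrated with its textbook mean-field form: each agent's choice depends on a private utility term plus a social-interaction term proportional to the average choice, giving a fixed-point equation for equilibrium adoption. The parameter names (`beta`, `h`, `J`) are the conventional ones, not the paper's; the numbers are invented.

```python
# Minimal sketch of a Brock–Durlauf-style binary choice with social
# interaction: equilibrium mean choice m solves m = tanh(beta*(h + J*m)).
import math

def equilibrium_adoption(h, J, beta=1.0, iterations=200):
    """Fixed point of m = tanh(beta * (h + J * m)) by iteration.

    h: private net utility of the new technology (cost, functionality, usability)
    J: strength of social interaction (conformity pressure)
    Returns mean adoption m in [-1, 1] (-1 = nobody adopts, +1 = everybody).
    """
    m = 0.1  # small initial adoption level
    for _ in range(iterations):
        m = math.tanh(beta * (h + J * m))
    return m

# Weak social effects: adoption roughly tracks private utility.
print(round(equilibrium_adoption(h=0.2, J=0.5), 3))  # ≈ 0.365
# Strong social effects can sustain high adoption even with no private edge:
print(round(equilibrium_adoption(h=0.0, J=2.0), 3))  # ≈ 0.958
```

The second case shows why social interaction matters for encryption tools: with strong enough conformity effects, a messaging technology can reach near-universal adoption (or fail entirely) independent of small differences in intrinsic utility.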
Proceedings of the 6th Workshop on Socio-Technical Aspects in Security and Trust | 2016
Tristan Caulfield; Simon Parkin
We investigate a planned physical security intervention at a partner organisation's site, replacing a secure door with a turnstile, to determine the potential individual cost of security to employees. Systems modelling techniques are applied to model the lobby area of the site, and to guide data collection to situate the model. Managers at the site were consulted during preference elicitation to identify meaningful model parameters. Direct observation of regular employee behaviours from pre-recorded CCTV footage provided localised data: 1800 sequences of behaviour events were logged over one working day for approximately 600 employees and visitors. This included responses to security events, such as returning to the card reader or moving to a different turnstile. Model results showed that if one turnstile were installed at the observed site, an average of 0.5 seconds would be added to each employee's entry time, amounting to over sixty hours for the site as a whole over a year. With three turnstiles, the time cost approaches that of the secure door.
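The yearly figure follows directly from the observed numbers. A back-of-envelope check, assuming roughly 250 working days per year (the working-day count is an assumption, not from the abstract):

```python
# Back-of-envelope check of the abstract's yearly time cost.
# The abstract reports ~1800 entry sequences per day and +0.5 s per
# entry with one turnstile; 250 working days/year is an assumption.

entries_per_day = 1800
extra_seconds_per_entry = 0.5
working_days_per_year = 250  # assumed

extra_hours_per_year = (entries_per_day * extra_seconds_per_entry
                        * working_days_per_year) / 3600
print(extra_hours_per_year)  # 62.5 — consistent with "over sixty hours"
```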
Journal of Cybersecurity | 2015
Tristan Caulfield; Andrew Fielder
The presence of unpatched, exploitable vulnerabilities in software is a prerequisite for many forms of cyberattack. Because vulnerabilities are almost inevitably discovered, and exploits created, for all types of software, multiple layers of security are usually used to protect vital systems from compromise. Accordingly, attackers seeking to access protected systems must circumvent all of these layers. Resource- and budget-constrained defenders must choose when to execute actions such as patching, monitoring, and cleaning infected systems in order to best protect their networks. Similarly, attackers must decide when to attempt to penetrate a system and which exploit to use when doing so. We present an approach to modelling computer networks and vulnerabilities that can be used to find the optimal allocation of time to different system defence tasks. The vulnerabilities, the state of the system, and the actions of the attacker and defender are used to build partially observable stochastic games. These games capture the uncertainty about the current state of the system and the uncertainty about the future. The solution to these games is a policy, which indicates the optimal actions to take for a given belief about the current state of the system. We demonstrate this approach using several different network configurations and types of player. We consider a trade-off for the system administrator, who must allocate their time to performing either security-related tasks or other required non-security tasks. The results highlight that, with the requirement for other tasks to be performed, following the optimal policy means spending time on only the most essential security-related tasks, while the majority of time is spent on non-security tasks.
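The "belief about the current state" that the policy conditions on is the standard Bayesian belief update of partially observable models. A minimal sketch, not the paper's game: two states ("clean"/"compromised"), one noisy alert observation, and invented detection probabilities.

```python
# Minimal sketch of the belief update a policy in a partially observable
# model conditions on. Two hidden states and one noisy observation; the
# alert probabilities are illustrative, not from the paper.

def update_belief(p_compromised, observation,
                  p_alert_given_compromised=0.9,
                  p_alert_given_clean=0.1):
    """Bayes update of the probability that the system is compromised."""
    if observation == "alert":
        likelihood_comp = p_alert_given_compromised
        likelihood_clean = p_alert_given_clean
    else:
        likelihood_comp = 1 - p_alert_given_compromised
        likelihood_clean = 1 - p_alert_given_clean
    numerator = likelihood_comp * p_compromised
    return numerator / (numerator + likelihood_clean * (1 - p_compromised))

belief = 0.05  # prior: the system is probably clean
belief = update_belief(belief, "alert")
print(round(belief, 3))  # 0.321: one alert raises suspicion
belief = update_belief(belief, "alert")
print(round(belief, 3))  # 0.81: repeated alerts may justify cleaning
```

A defender's policy maps such beliefs to actions: below some threshold, keep doing non-security work; above it, switch time to monitoring or cleaning. Solving for that threshold is what the stochastic game formulation provides.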
Decision and Game Theory for Security | 2016
Tristan Caulfield; Christos Ioannidis; David J. Pym
We propose a model, based on the work of Brock and Durlauf, which looks at how agents make choices between competing technologies, as a framework for exploring aspects of the economics of the adoption of privacy-enhancing technologies. In order to formulate a model of decision-making among choices of technologies by these agents, we consider the following: context, the setting in which and the purpose for which a given technology is used; requirement, the level of privacy that the technology must provide for an agent to be willing to use the technology in a given context; belief, an agent's perception of the level of privacy provided by a given technology in a given context; and the relative value of privacy, how much an agent cares about privacy in this context and how willing an agent is to trade off privacy for other attributes. We introduce these concepts into the model, admitting heterogeneity among agents in order to capture variations in requirement, belief, and relative value in the population. We illustrate the model with two examples: the possible effects of the recent Apple–FBI case on the adoption of iOS devices; and the effects of recent revelations about the non-deletion of images on the adoption of Snapchat.
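The requirement/belief heterogeneity can be illustrated with a simple population sketch: each agent adopts only if their believed privacy level meets their requirement, and a revelation (such as images not being deleted) shifts beliefs downward. The distributions and numbers are invented, not the paper's calibration.

```python
# Illustrative sketch of the heterogeneity the model admits: each agent
# adopts a technology only if their belief about the privacy it provides
# in a context meets their requirement. Distributions are invented.
import random

random.seed(0)

def adoption_fraction(n_agents, belief_mean, belief_spread,
                      requirement_mean, requirement_spread):
    """Fraction of agents whose believed privacy level meets their requirement."""
    adopters = 0
    for _ in range(n_agents):
        belief = random.gauss(belief_mean, belief_spread)
        requirement = random.gauss(requirement_mean, requirement_spread)
        if belief >= requirement:
            adopters += 1
    return adopters / n_agents

# A revelation that images are not deleted lowers mean belief; adoption falls:
before = adoption_fraction(10_000, belief_mean=0.7, belief_spread=0.1,
                           requirement_mean=0.5, requirement_spread=0.1)
after = adoption_fraction(10_000, belief_mean=0.4, belief_spread=0.1,
                          requirement_mean=0.5, requirement_spread=0.1)
print(before > after)  # True
```

Because requirements vary across the population, the shock does not drive adoption to zero: agents with low privacy requirements in that context keep using the technology, which is the kind of partial-adoption outcome the heterogeneous model captures.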
Internet Measurement Conference | 2017
Savvas Zannettou; Tristan Caulfield; Emiliano De Cristofaro; Nicolas Kourtellis; Ilias Leontiadis; Michael Sirivianos; Gianluca Stringhini; Jeremy Blackburn
Simulation Tools and Techniques for Communications, Networks and Systems | 2015
Tristan Caulfield; David J. Pym
International Conference on Human-Computer Interaction | 2014
Tristan Caulfield; David J. Pym; Julian M. Williams
Journal of Econometrics | 2014
David Blake; Tristan Caulfield; Christos Ioannidis; Ian Tonks