Publication


Featured research published by Ryan Heartfield.


ACM Computing Surveys | 2016

A Taxonomy of Attacks and a Survey of Defence Mechanisms for Semantic Social Engineering Attacks

Ryan Heartfield; George Loukas

Social engineering is used as an umbrella term for a broad spectrum of computer exploitations that employ a variety of attack vectors and strategies to psychologically manipulate a user. Semantic attacks are the specific type of social engineering attacks that bypass technical defences by actively manipulating object characteristics, such as platform or system applications, to deceive rather than directly attack the user. Commonly observed examples include obfuscated URLs, phishing emails, drive-by downloads, spoofed websites and scareware. This article presents a taxonomy of semantic attacks, as well as a survey of applicable defences. By contrasting the threat landscape and the associated mitigation techniques in a single comparative matrix, we identify the areas where further research can be particularly beneficial.


International Symposium on Computer and Information Sciences | 2013

On the Feasibility of Automated Semantic Attacks in the Cloud

Ryan Heartfield; George Loukas

While existing security mechanisms often work well against most known attack types, they are typically incapable of addressing semantic attacks. Such attacks bypass technical protection systems by exploiting the emotional response of the users in unusual technical configurations rather than by focussing on specific technical vulnerabilities. We show that semantic attacks can easily be performed in a cloud environment, where applications that would traditionally be run locally may now require interaction with an online system shared by several users. We illustrate the feasibility of an automated semantic attack in a popular cloud storage environment, evaluate its impact and provide recommendations for defending against such attacks.


IEEE Access | 2018

Cloud-Based Cyber-Physical Intrusion Detection for Vehicles Using Deep Learning

George Loukas; Tuan Vuong; Ryan Heartfield; Georgia Sakellari; Yongpil Yoon; Diane Gan

Detection of cyber attacks against vehicles is of growing interest. As vehicles typically afford limited processing resources, proposed solutions are rule-based or lightweight machine learning techniques. We argue that this limitation can be lifted with computational offloading, commonly used for resource-constrained mobile devices. The increased processing resources available in this manner allow access to more advanced techniques. Using a small four-wheel robotic land vehicle as a case study, we demonstrate the practicality and benefits of offloading the continuous task of intrusion detection based on deep learning. This approach achieves high accuracy much more consistently than standard machine learning techniques and is not limited to a single type of attack or to the in-vehicle CAN bus, as previous work has been. As input, it uses data captured in real time that relate to both cyber and physical processes, which it feeds as time series to a neural network architecture. We use both a deep multilayer perceptron and a recurrent neural network architecture, with the latter benefitting from a long short-term memory hidden layer, which proves very useful for learning the temporal context of different attacks. We employ denial of service, command injection and malware as examples of cyber attacks that are meaningful for a robotic vehicle. The practicality of computation offloading depends on the resources afforded onboard and remotely, and on the reliability of the communication means between them. Using detection latency as the criterion, we have developed a mathematical model to determine when computation offloading is beneficial, given parameters related to the operation of the network and the processing demands of the deep learning model. The more reliable the network and the greater the processing demands, the greater the reduction in detection latency achieved through offloading.
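The abstract above describes feeding real-time cyber and physical measurements to a recurrent network as time-series data. A minimal sketch of how such input might be assembled into fixed-length windows, with illustrative feature names and window sizes that are assumptions, not the paper's actual pipeline:

```python
# Hypothetical sketch: merge per-timestep cyber metrics (e.g. network rate,
# CPU load) and physical metrics (e.g. wheel-speed error, power draw) into
# overlapping windows, the shape a recurrent (LSTM) classifier consumes.

def sliding_windows(samples, window=10, step=1):
    """Split a list of per-timestep feature vectors into overlapping windows."""
    return [samples[i:i + window]
            for i in range(0, len(samples) - window + 1, step)]

# Each timestep: [network_rate, cpu_load, disk_io, wheel_speed_error, power_draw]
stream = [[0.1 * t, 0.5, 0.0, 0.01, 4.2] for t in range(20)]
windows = sliding_windows(stream, window=10, step=5)
# 20 timesteps with window 10 and step 5 yield 3 windows of 10 x 5 features
```

Each window would then be a single training or inference example for the sequence model.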


2016 International Conference On Cyber Situational Awareness, Data Analytics And Assessment (CyberSA) | 2016

Evaluating the reliability of users as human sensors of social media security threats

Ryan Heartfield; George Loukas

While the human-as-a-sensor concept has been utilised extensively for the detection of threats to safety and security in physical space, especially in emergency response and crime reporting, the concept is largely unexplored in the area of cyber security. Here, we evaluate the potential of utilising users as human sensors for the detection of cyber threats, specifically on social media. For this, we have conducted an online test and accompanying questionnaire-based survey, which was taken by 4,457 users. The test included eight realistic social media scenarios (four attack and four non-attack) in the form of screenshots, which the participants were asked to categorise as “likely attack” or “likely not attack”. We present the overall performance of human sensors in our experiment for each exhibit, and also apply logistic regression to evaluate the feasibility of predicting that performance based on different characteristics of the participants. Such prediction would be useful where the accuracy of human sensors in detecting and reporting social media security threats is important. We identify features that are good predictors of a human sensor's performance and evaluate them in both a theoretical ideal case and two more realistic cases, the latter corresponding to limited access to a user's characteristics.
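The abstract above uses logistic regression over participant characteristics to predict detection performance. A self-contained sketch of that kind of model, using plain stochastic gradient descent and a single made-up feature (prior security training) rather than the study's actual predictors:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.3, epochs=3000):
    """Fit logistic-regression weights by SGD on log loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            g = p - yi  # gradient of log loss w.r.t. the logit
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    """Predicted probability that this participant classifies correctly."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)

# Toy data (illustrative, not the study's): feature 1 = prior security
# training; label 1 = participant categorised the exhibit correctly.
X = [[0.0], [0.0], [0.0], [1.0], [1.0], [1.0]]
y = [0, 0, 1, 1, 1, 1]
w, b = train_logistic(X, y)
```

On this toy data the model learns a higher predicted probability of correct classification for trained participants, which is the shape of prediction the abstract describes.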


Computers & Security | 2018

Detecting semantic social engineering attacks with the weakest link: Implementation and empirical evaluation of a human-as-a-security-sensor framework

Ryan Heartfield; George Loukas

The notion that the human user is the weakest link in information security has been strongly, and, we argue, rightly contested in recent years. Here, we take a step further showing that the human user can in fact be the strongest link for detecting attacks that involve deception, such as application masquerading, spearphishing, WiFi evil twin and other types of semantic social engineering. Towards this direction, we have developed a human-as-a-security-sensor framework and a practical implementation in the form of Cogni-Sense, a Microsoft Windows prototype application, designed to allow and encourage users to actively detect and report semantic social engineering attacks against them. Experimental evaluation with 26 users of different profiles running Cogni-Sense on their personal computers for a period of 45 days has shown that human sensors can consistently outperform technical security systems. Making use of a machine learning based approach, we also show that the reliability of each report, and consequently the performance of each human sensor, can be predicted in a meaningful and practical manner. In an organisation that employs a human-as-a-security-sensor implementation, such as Cogni-Sense, an attack is considered to have been detected if at least one user has reported it. In our evaluation, a small organisation consisting only of the 26 participants of the experiment would have exhibited a missed detection rate below 10%, down from 81% if only technical security systems had been used. The results strongly point towards the need to actively involve the user not only in prevention through cyber hygiene and user-centric security design, but also in active cyber threat detection and reporting.
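The organisational detection rule above (an attack counts as detected if at least one user reports it) can be sketched in a few lines. Note the independence assumption between sensors is ours for illustration; the study's figures come from real, heterogeneous participants:

```python
def org_missed_rate(individual_miss_rates):
    """Probability that an attack is missed by the whole organisation:
    every sensor must miss it (sensors assumed independent)."""
    p = 1.0
    for m in individual_miss_rates:
        p *= m
    return p

# Two sensors that each miss half the time jointly miss a quarter of attacks.
print(org_missed_rate([0.5, 0.5]))  # 0.25
```

Under this simplified model, even sensors with high individual miss rates drive the joint miss rate down quickly as the organisation grows, which mirrors the intuition behind the reported drop from 81% to below 10%.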


Software Engineering Research and Applications | 2017

An eye for deception: A case study in utilizing the human-as-a-security-sensor paradigm to detect zero-day semantic social engineering attacks

Ryan Heartfield; George Loukas; Diane Gan

In a number of information security scenarios, human beings can be better than technical security measures at detecting threats. This is particularly the case when a threat is based on deception of the user rather than exploitation of a specific technical flaw, as is the case of spear-phishing, application spoofing, multimedia masquerading and other semantic social engineering attacks. Here, we put the concept of the human-as-a-security-sensor to the test with a first case study on a small number of participants subjected to different attacks in a controlled laboratory environment and provided with a mechanism to report these attacks if they spot them. A key challenge is to estimate the reliability of each report, which we address with a machine learning approach. For comparison, we evaluate the ability of known technical security countermeasures in detecting the same threats. This initial proof of concept study shows that the concept is viable.


Simulation Modelling Practice and Theory | 2017

Computation offloading of a vehicle’s continuous intrusion detection workload for energy efficiency and performance

George Loukas; Yongpil Yoon; Georgia Sakellari; Tuan Vuong; Ryan Heartfield

Computation offloading has been used and studied extensively in relation to mobile devices. That is because their relatively limited processing power and reliance on a battery render the concept of offloading any processing/energy-hungry tasks to a remote server, cloudlet or cloud infrastructure particularly attractive. However, the mobile device’s tasks that are typically offloaded are not time-critical and tend to be one-off. We argue that the concept can be practical also for continuous tasks run on more powerful cyber-physical systems where timeliness is a priority. As a case study, we use the process of real-time intrusion detection on a robotic vehicle. Typically, such detection would employ lightweight statistical learning techniques that can run onboard the vehicle without severely affecting its energy consumption. We show that by offloading this task to a remote server, we can utilise approaches of much greater complexity and detection strength based on deep learning. We show both mathematically and experimentally that this allows not only greater detection accuracy, but also significant energy savings, which improve the operational autonomy of the vehicle. In addition, the overall detection latency is reduced in most of our experiments. This can be very important for vehicles and other cyber-physical systems where cyber attacks can directly affect physical safety. In fact, in some cases, the reduction in detection latency thanks to offloading is not only beneficial but necessary. An example is when detection latency onboard the vehicle would be higher than the detection period, and as a result a detection run cannot complete before the next one is scheduled, increasingly delaying consecutive detection decisions. Offloading to a remote server is an effective and energy-efficient solution to this problem too.
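The trade-off described above, onboard computation versus network transfer plus remote computation, can be sketched as a simple latency comparison. The parameter names and the linear cost model are illustrative assumptions, not the paper's actual mathematical model:

```python
def onboard_latency(workload_flops, onboard_flops_per_s):
    """Detection latency when the model runs on the vehicle itself."""
    return workload_flops / onboard_flops_per_s

def offloaded_latency(workload_flops, remote_flops_per_s,
                      payload_bits, bandwidth_bps, rtt_s):
    """Detection latency when input data is shipped to a remote server."""
    transfer = payload_bits / bandwidth_bps + rtt_s
    return transfer + workload_flops / remote_flops_per_s

def should_offload(workload_flops, onboard_flops_per_s, remote_flops_per_s,
                   payload_bits, bandwidth_bps, rtt_s):
    """Offload when it strictly reduces detection latency."""
    return offloaded_latency(workload_flops, remote_flops_per_s,
                             payload_bits, bandwidth_bps,
                             rtt_s) < onboard_latency(workload_flops,
                                                      onboard_flops_per_s)
```

Consistent with the abstract's conclusion, a heavy deep learning workload on a weak onboard processor favours offloading, while a lightweight workload does not justify the transfer cost.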


Software Engineering Research and Applications | 2017

Assessing the cyber-trustworthiness of human-as-a-sensor reports from mobile devices

Syed Sadiqur Rahman; Ryan Heartfield; William Oliff; George Loukas; Avgoustinos Filippoupolitis

The Human-as-a-Sensor (HaaS) paradigm, where it is human users rather than automated sensor systems that detect and report events or incidents, has gained considerable traction over the last decades, especially as Internet-connected smartphones have helped develop an information sharing culture in society. In the law enforcement and civil protection space, HaaS is typically used to harvest information that enhances situational awareness regarding physical hazards, crimes and evolving emergencies. The trustworthiness of this information is typically studied in relation to the trustworthiness of the human sensors. However, malicious modification, prevention or delay of reports can also be the result of cyber or cyber-physical security breaches affecting the mobile devices and network infrastructure used to deliver HaaS reports. Examples include denial of service attacks, where the timely delivery of reports is important, and location spoofing attacks, where the accuracy of the location of an incident is important. The aim of this paper is to introduce this cyber-trustworthiness aspect in HaaS and propose a mechanism for scoring reports in terms of their cyber-trustworthiness based on features of the mobile device that are monitored in real time. Our initial results show that this is a promising line of work that can enhance the reliability of HaaS.
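A minimal sketch of the kind of report scoring the abstract describes, combining device features monitored in real time into a single cyber-trustworthiness value. The features, weights and the weighted-average rule are illustrative assumptions, not the paper's actual mechanism:

```python
def trust_score(features, weights):
    """Weighted average of normalised device-health indicators in [0, 1]."""
    total = sum(weights.values())
    return sum(weights[k] * features[k] for k in weights) / total

# Hypothetical indicators for one HaaS report, each normalised to [0, 1]:
report = {
    "delivery_timeliness": 0.9,  # report arrived within the expected delay
    "gps_consistency": 0.8,      # GPS agrees with network-derived position
    "device_integrity": 1.0,     # no signs of compromise on the handset
}
weights = {"delivery_timeliness": 2, "gps_consistency": 2, "device_integrity": 1}
score = trust_score(report, weights)  # 0.0 (untrusted) .. 1.0 (trusted)
```

A delayed report or an inconsistent location would lower the corresponding indicator and hence the report's overall score, matching the denial-of-service and location-spoofing examples above.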


Archive | 2018

Protection Against Semantic Social Engineering Attacks

Ryan Heartfield; George Loukas

Phishing, drive-by downloads, file and multimedia masquerading, domain typosquatting, malvertising and other semantic social engineering attacks aim to deceive the user rather than exploit a technical flaw to breach a system’s security. We start with a chronological overview to illustrate the growing prevalence of such attacks from their early inception 30 years ago, and identify key milestones and indicative trends which have established them as primary weapons of choice for hackers, cyber-criminals and state actors today. To demonstrate the scale and widespread nature of the threat space, we identify over 35 individually recognised types of semantic attack, existing within and cross-contaminating between a vast range of different computer platforms and user interfaces. Their extreme diversity and the little to no technical traces they leave make them particularly difficult to protect against. Technical protection systems typically focus on a single attack type on a single platform type rather than the wider landscape of deception-based attacks. To address this issue, we discuss three high-level defence approaches for preemptive and proactive protection: adopting the semantic attack kill-chain concept, which simplifies targeted defence; principles for preemptive and proactive protection against passive threats; and a platform-based defence-in-depth lifecycle designed to harness the technical and non-technical defence capabilities of platform providers and their user base. Here, the human-as-a-security-sensor paradigm can prove particularly useful by leveraging the collective natural ability of users themselves in detecting deception attempts against them.


Computers & Security | 2018

A taxonomy of cyber-physical threats and impact in the smart home

Ryan Heartfield; George Loukas; Sanja Budimir; Anatolij Bezemskij; Johnny R. J. Fontaine; Avgoustinos Filippoupolitis; Etienne B. Roesch

In the past, home automation was a small market for technology enthusiasts. Interconnectivity between devices was down to the owner’s technical skills and creativity, while security was non-existent or primitive, because cyber threats were also largely non-existent or primitive. This is not the case any more. The adoption of Internet of Things technologies, cloud computing, artificial intelligence and an increasingly wide range of sensing and actuation capabilities has led to smart homes that are more practical, but also genuinely attractive targets for cyber attacks. Here, we classify applicable cyber threats according to a novel taxonomy, focusing not only on the attack vectors that can be used, but also the potential impact on the systems and ultimately on the occupants and their domestic life. Utilising the taxonomy, we classify twenty-five different smart home attacks, providing further examples of legitimate, yet vulnerable smart home configurations which can lead to second-order attack vectors. We then review existing smart home defence mechanisms and discuss open research problems.

Collaboration


Top co-authors of Ryan Heartfield, all at the University of Greenwich:

Diane Gan

Tuan Vuong

Yongpil Yoon