Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where David Aspinall is active.

Publication


Featured research published by David Aspinall.


International Conference on Software Testing, Verification and Validation Workshops | 2015

Security testing for Android mHealth apps

Konstantin Knorr; David Aspinall

Mobile health (mHealth) apps are an ideal tool for monitoring and tracking long-term health conditions; they are becoming incredibly popular despite posing risks to personal data privacy and security. In this paper, we propose a testing method for Android mHealth apps which is designed using a threat analysis, considering possible attack scenarios and vulnerabilities specific to the domain. To demonstrate the method, we have applied it to apps for managing hypertension and diabetes, discovering a number of serious vulnerabilities in the most popular applications. Here we summarise the results of that case study, and discuss the experience of using a testing method dedicated to the domain, rather than out-of-the-box Android security testing methods. We hope that details presented here will help design further, more automated, mHealth security testing tools and methods.


Information Security Conference | 2015

On the Privacy, Security and Safety of Blood Pressure and Diabetes Apps

Konstantin Knorr; David Aspinall; Maria Wolters

Mobile health (mHealth) apps are an ideal tool for monitoring and tracking long-term health conditions. In this paper, we examine whether mHealth apps succeed in ensuring the privacy, security, and safety of the health data entrusted to them. We investigate 154 apps from Android app stores using both automatic code and metadata analysis and a manual analysis of functionality and data leakage. Our study focuses on hypertension and diabetes, two common health conditions that require careful tracking of personal health data.


IEEE International Conference on Pervasive Computing and Communications | 2015

Sensor use and usefulness: Trade-offs for data-driven authentication on mobile devices

Nicholas Micallef; Hilmi Gunes Kayacik; Mike Just; Lynne Baillie; David Aspinall

Modern mobile devices come with an array of sensors that support many interesting applications. However, sensors have different sampling costs (e.g., battery drain) and benefits (e.g., accuracy) under different circumstances. In this work we investigate the trade-off between the cost of using a sensor and the benefit gained from its use, with application to data-driven authentication on mobile devices. Current authentication practice, where user behaviour is first learned from the sensor data and then used to detect anomalies, typically assumes a fixed sampling rate and does not consider the battery consumption and usefulness of sensors. In this work we study how battery consumption and sensor effectiveness (e.g., for detecting attacks) vary when using different sensors and different sensor sampling rates. We use data from both controlled lab studies, as well as field trials, for our experiments. We also propose an adaptive sampling technique that adjusts the sampling rate based on an expected device vigilance level. Our results show that it is possible to reduce the battery consumption tenfold without significantly impacting the detection of attacks.
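
The adaptive sampling idea can be sketched as follows. This is a minimal illustration, not the paper's actual technique: the interval bounds, the vigilance scale, and the update rule are all invented for the sake of the example.

```python
# Hypothetical sketch of adaptive sensor sampling: the polling interval
# shrinks when the device's vigilance level rises (e.g. after an anomalous
# reading) and relaxes again while behaviour looks normal, trading battery
# drain against detection latency. All constants are illustrative.

BASE_INTERVAL_S = 60.0   # relaxed polling interval (seconds)
MIN_INTERVAL_S = 2.0     # most aggressive polling interval

def sampling_interval(vigilance: float) -> float:
    """Map a vigilance level in [0, 1] to a polling interval.

    vigilance = 0.0 -> BASE_INTERVAL_S (device looks normal, save battery)
    vigilance = 1.0 -> MIN_INTERVAL_S  (suspected attack, sample densely)
    """
    vigilance = max(0.0, min(1.0, vigilance))
    # Linear interpolation between the two extremes.
    return BASE_INTERVAL_S - vigilance * (BASE_INTERVAL_S - MIN_INTERVAL_S)

def update_vigilance(vigilance: float, anomaly_score: float,
                     threshold: float = 0.5, decay: float = 0.9) -> float:
    """Raise vigilance on anomalous sensor readings, decay it otherwise."""
    if anomaly_score > threshold:
        return min(1.0, vigilance + anomaly_score)
    return vigilance * decay
```

The tenfold battery saving reported in the abstract comes from spending most of the time near the relaxed end of such a schedule.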


Wireless Network Security | 2016

More Semantics More Robust: Improving Android Malware Classifiers

Wei Chen; David Aspinall; Andrew D. Gordon; Charles A. Sutton; Igor Muttik

Automatic malware classifiers often perform badly on the detection of new malware, i.e., their robustness is poor. We study machine-learning-based mobile malware classifiers and reveal one reason: the input features used by these classifiers can't capture general behavioural patterns of malware instances. We extract the best-performing syntax-based features, like permissions and API calls, and some semantics-based features, like happen-befores and unwanted behaviours, and train classifiers using popular supervised and semi-supervised learning methods. By comparing their classification performance on industrial datasets collected across several years, we demonstrate that using semantics-based features can dramatically improve the robustness of malware classifiers.


Automated Technology for Verification and Analysis | 2015

EviCheck: Digital Evidence for Android

Mohamed Nassim Seghir; David Aspinall

We present EviCheck, a tool for the verification, certification and generation of lightweight fine-grained security policies for Android. It applies static analysis to check the conformance between an application and a given policy. A distinguishing feature of EviCheck is its ability to generate digital evidence: a certificate for the analysis algorithm asserting the conformance between the application and the policy. This certificate can be independently checked by another component (tool) to validate or refute the result of the analysis. The checking process is generally very efficient compared to certificate generation as experiments on 20,000 real-world applications show.
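
The generate-then-check pattern behind digital evidence can be illustrated with a toy sketch. The certificate and policy shapes below are invented for illustration, not EviCheck's actual formats: the point is only that the generator does the costly analysis once, while an independent checker merely confirms the certificate respects the policy.

```python
# Hypothetical illustration of certificate checking: the (expensive)
# analysis emits a certificate mapping each app component to the
# permissions it uses; the (cheap) checker validates that every entry
# stays within what the policy allows for that component's kind.
from typing import Dict, Set

Policy = Dict[str, Set[str]]       # component kind -> allowed permissions
Certificate = Dict[str, Set[str]]  # component name -> permissions used

def check_certificate(policy: Policy, cert: Certificate,
                      kinds: Dict[str, str]) -> bool:
    """Accept the certificate iff every component uses only permissions
    that its kind is allowed by the policy."""
    for component, used in cert.items():
        allowed = policy.get(kinds.get(component, ""), set())
        if not used <= allowed:  # subset check: no permission outside policy
            return False
    return True
```

Checking is a per-entry subset test, which is why validating a certificate can be far cheaper than regenerating it.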


Interactive Theorem Proving | 2016

What’s in a Theorem Name?

David Aspinall; Cezary Kaliszyk

ITPs use names for proved theorems. Good names are either widely known or descriptive, corresponding to a theorem’s statement. Good names should be consistent with conventions, and be easy to remember. But thinking of names like this for every intermediate result is a burden: some developers avoid this by using consecutive integers or random hashes instead. We ask: is it possible to relieve the naming burden and automatically suggest sensible theorem names? We present a method to do this. It works by learning associations between existing theorem names in a large library and the names of defined objects and term patterns occurring in their corresponding statements.
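
The statistical core of such name suggestion can be sketched as follows. This is a deliberately minimal illustration with invented data, not the paper's actual learning method: it counts co-occurrences between statement symbols and name tokens in an existing library, then ranks candidate tokens for a new statement.

```python
# Toy sketch of learning associations between the symbols occurring in a
# theorem's statement and the tokens of its name, then suggesting a name
# for a new statement from the strongest associations.
from collections import defaultdict
from typing import Dict, List, Tuple

def learn_associations(library: List[Tuple[List[str], List[str]]]):
    """library: (statement symbols, name tokens) pairs,
    e.g. (["append", "nil"], ["append", "nil"]) for a lemma append_nil."""
    counts: Dict[str, Dict[str, int]] = defaultdict(lambda: defaultdict(int))
    for symbols, name_tokens in library:
        for s in symbols:
            for t in name_tokens:
                counts[s][t] += 1
    return counts

def suggest_name(symbols: List[str], counts) -> str:
    """Score name tokens by total association with the statement's symbols
    and join the top-ranked tokens into a candidate name."""
    scores: Dict[str, int] = defaultdict(int)
    for s in symbols:
        for t, c in counts.get(s, {}).items():
            scores[t] += c
    ranked = sorted(scores, key=lambda t: (-scores[t], t))
    return "_".join(ranked[:3]) if ranked else "unnamed"
```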


Integrated Formal Methods | 2016

On Robust Malware Classifiers by Verifying Unwanted Behaviours

Wei Chen; David Aspinall; Andrew D. Gordon; Charles A. Sutton; Igor Muttik

Machine-learning-based Android malware classifiers perform badly on the detection of new malware, in particular, when they take API calls and permissions as input features, which are the best performing features known so far. This is mainly because signature-based features are very sensitive to the training data and cannot capture general behaviours of identified malware. To improve the robustness of classifiers, we study the problem of learning and verifying unwanted behaviours abstracted as automata. They are common patterns shared by malware instances but rarely seen in benign applications, e.g., intercepting and forwarding incoming SMS messages. We show that by taking the verification results against unwanted behaviours as input features, the classification performance of detecting new malware is improved dramatically. In particular, the precision and recall are respectively 8 and 51 points better than those using API calls and permissions, measured against industrial datasets collected across several years. Our approach integrates several methods: formal methods, machine learning and text mining techniques. It is the first to automatically generate unwanted behaviours for Android malware detection. We also demonstrate unwanted behaviours constructed for well-known malware families. They compare well to those described in human-authored descriptions of these families.
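
The feature-construction step can be illustrated with a toy sketch: each unwanted behaviour is modelled as a small automaton over abstract app events, and an app's feature vector records which behaviours are matched. The names and the trace-based matching here are simplifications invented for illustration (the paper verifies behaviours against app code, not concrete traces).

```python
# Toy sketch of using verification results as classifier features: one
# 0/1 feature per unwanted-behaviour automaton, set if any app trace
# matches it, e.g. intercepting then forwarding an incoming SMS.
from typing import Dict, List, Set, Tuple

# An automaton: (transitions, start state, accepting states).
Automaton = Tuple[Dict[Tuple[str, str], str], str, Set[str]]

# Invented example behaviour: receive an SMS, then send one on.
SMS_FORWARD: Automaton = (
    {("q0", "receive_sms"): "q1", ("q1", "send_sms"): "q2"},
    "q0",
    {"q2"},
)

def accepts(automaton: Automaton, trace: List[str]) -> bool:
    transitions, state, accepting = automaton
    for event in trace:
        # Events with no transition are ignored (self-loop).
        state = transitions.get((state, event), state)
        if state in accepting:
            return True
    return state in accepting

def behaviour_features(traces: List[List[str]],
                       behaviours: List[Automaton]) -> List[int]:
    """One 0/1 feature per unwanted behaviour: matched by any trace?"""
    return [int(any(accepts(b, t) for t in traces)) for b in behaviours]
```

The resulting vectors, rather than raw API calls or permissions, are what the classifier is trained on.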


European Workshop on System Security | 2016

A text-mining approach to explain unwanted behaviours

Wei Chen; David Aspinall; Andrew D. Gordon; Charles A. Sutton; Igor Muttik

Current machine-learning-based malware detection seldom provides information about why an app is considered bad. We study the automatic explanation of unwanted behaviours in mobile malware, e.g., sending premium SMS messages. Our approach combines machine learning and text mining techniques to produce explanations in natural language. It selects keywords from features used in malware classifiers, and presents the sentences chosen from human-authored malware analysis reports by using these keywords. The explanation elaborates how a system decision was made. As far as we know, this is the first attempt to generate explanations in natural language by mining the reports written by human malware analysts, resulting in a scalable and entirely data-driven method.
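
The keyword-to-sentence retrieval step can be sketched as follows. The scoring is a deliberately naive illustration with invented data, not the paper's actual text-mining pipeline: take the top-weighted classifier features as keywords, then surface the report sentences that mention the most of them.

```python
# Toy sketch of explanation by retrieval: rank sentences from
# human-authored malware analysis reports by how many of the classifier's
# top feature keywords they contain, and return the best few as a
# natural-language explanation.
from typing import List

def explain(top_features: List[str], report_sentences: List[str],
            max_sentences: int = 3) -> List[str]:
    keywords = [f.lower() for f in top_features]
    scored = []
    for sentence in report_sentences:
        lower = sentence.lower()
        hits = sum(1 for k in keywords if k in lower)
        if hits:
            scored.append((hits, sentence))
    scored.sort(key=lambda pair: -pair[0])  # most keyword hits first
    return [s for _, s in scored[:max_sentences]]
```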


Engineering Secure Software and Systems | 2016

AppPAL for Android

Joseph Hallett; David Aspinall

It can be difficult to find mobile apps that respect one's security and privacy. Businesses rely on employees enforcing company mobile device policies correctly. Users must judge apps by the information shown to them by the store. Studies have found that most users do not pay attention to an app's permissions during installation [19] and most users do not understand how permissions relate to the capabilities of an app [30]. To address these problems and more, we present AppPAL: a machine-readable policy language for Android that describes precisely when apps are acceptable. AppPAL goes beyond existing policy enforcement tools, like Kirin [16], adding delegation relationships to allow a variety of authorities to contribute to a decision. AppPAL also acts as a glue, allowing connection to a variety of local constraint checkers (e.g., static analysis tools, package manager checks) to combine their results. As well as introducing AppPAL and some examples, we apply it to explore whether users follow certain intended policies in practice, finding that privacy preferences and actual behaviour are not always aligned in the absence of a rigorous enforcement mechanism.
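
The delegation idea can be illustrated with a toy model. The syntax and semantics here are invented and far simpler than AppPAL's actual logic; the point is only that one authority's decision may rest on an assertion delegated to another principal.

```python
# Toy delegation sketch: "store says app is acceptable if av-checker says
# app is clean". A query holds either because it is a known fact or
# because a delegation rule reduces it to another principal's assertion.
from typing import Dict, Set, Tuple

# Facts: (principal, predicate, app).
facts: Set[Tuple[str, str, str]] = {("av-checker", "clean", "com.example.app")}

# Delegation rules: (principal, predicate) is derived from (other, predicate).
rules: Dict[Tuple[str, str], Tuple[str, str]] = {
    ("store", "acceptable"): ("av-checker", "clean"),
}

def holds(principal: str, predicate: str, app: str) -> bool:
    if (principal, predicate, app) in facts:
        return True
    delegated = rules.get((principal, predicate))
    return delegated is not None and holds(delegated[0], delegated[1], app)
```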


Computer and Communications Security | 2016

POSTER: Weighing in eHealth Security

Martin Krämer; David Aspinall; Maria Wolters

eHealth devices such as smart scales and wearable fitness trackers are a key part of many health technology solutions. However, these eHealth devices can be vulnerable to privacy and security related attacks. In this poster, we propose a security analysis framework for eHealth devices, called mH-PriSe, that will yield useful information for security analysts, vendors, health care providers, and consumers. We demonstrate our framework by analysing scales from 6 vendors. Our results show that while vendors strive to address security and privacy issues correctly, challenges remain in many cases. Only 5 out of 8 solutions can be recommended with some caveats whereas the remaining 3 solutions expose severe vulnerabilities.

Collaboration


Dive into David Aspinall's collaborations.

Top Co-Authors


Wei Chen

University of Edinburgh


Konstantin Knorr

Trier University of Applied Sciences
