Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Patrick D. McDaniel is active.

Publication


Featured research published by Patrick D. McDaniel.


Computer and Communications Security | 2009

On lightweight mobile phone application certification

William Enck; Machigar Ongtang; Patrick D. McDaniel

Users have begun downloading an increasingly large number of mobile phone applications in response to advancements in handsets and wireless networks. The increased number of applications results in a greater chance of installing Trojans and similar malware. In this paper, we propose the Kirin security service for Android, which performs lightweight certification of applications to mitigate malware at install time. Kirin certification uses security rules, which are templates designed to conservatively match undesirable properties in the security configuration bundled with applications. We use a variant of security requirements engineering techniques to perform an in-depth security analysis of Android and produce a set of rules that match malware characteristics. In a sample of 311 of the most popular applications downloaded from the official Android Market, Kirin and our rules found five applications that implement dangerous functionality and therefore should be installed with extreme caution. Upon close inspection, another five applications asserted dangerous rights, but were within the scope of reasonable functional needs. These results indicate that the security configuration bundled with Android applications provides a practical means of detecting malware.
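
The rule matching Kirin performs at install time can be pictured as checking an application's requested permissions against conservative permission combinations. A minimal Python sketch, assuming hypothetical rules (the two rules below are illustrative examples, not Kirin's actual rule set):

```python
# Kirin-style install-time certification, sketched: each rule conservatively
# names a combination of permissions that, taken together, suggests dangerous
# functionality. Rule names and combinations here are hypothetical.
RULES = [
    ("eavesdropper", {"android.permission.RECORD_AUDIO",
                      "android.permission.INTERNET"}),
    ("sms-interceptor", {"android.permission.RECEIVE_SMS",
                         "android.permission.WRITE_SMS"}),
]

def certify(requested_permissions: set[str]) -> list[str]:
    """Return the names of rules the application violates; empty means pass."""
    return [name for name, combo in RULES if combo <= requested_permissions]

violations = certify({"android.permission.RECEIVE_SMS",
                      "android.permission.WRITE_SMS"})
print(violations)  # ['sms-interceptor'] -> flag the app before installation
```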


Programming Language Design and Implementation | 2014

FlowDroid: precise context, flow, field, object-sensitive and lifecycle-aware taint analysis for Android apps

Steven Arzt; Siegfried Rasthofer; Christian Fritz; Eric Bodden; Alexandre Bartel; Jacques Klein; Yves Le Traon; Damien Octeau; Patrick D. McDaniel

Today's smartphones are a ubiquitous source of private and confidential data. At the same time, smartphone users are plagued by carelessly programmed apps that leak important data by accident, and by malicious apps that exploit their given privileges to copy such data intentionally. While existing static taint-analysis approaches have the potential to detect such data leaks ahead of time, all approaches for Android use a number of coarse-grained approximations that can yield high numbers of missed leaks and false alarms. In this work we thus present FlowDroid, a novel and highly precise static taint analysis for Android applications. A precise model of Android's lifecycle allows the analysis to properly handle callbacks invoked by the Android framework, while context, flow, field, and object sensitivity allow the analysis to reduce the number of false alarms. Novel on-demand algorithms help FlowDroid maintain high efficiency and precision at the same time. We also propose DroidBench, an open test suite for evaluating the effectiveness and accuracy of taint-analysis tools specifically for Android apps. As we show through a set of experiments using SecuriBench Micro, DroidBench, and a set of well-known Android test applications, FlowDroid finds a very high fraction of data leaks while keeping the rate of false positives low. On DroidBench, FlowDroid achieves 93% recall and 86% precision, greatly outperforming the commercial tools IBM AppScan Source and Fortify SCA. FlowDroid successfully finds leaks in a subset of 500 apps from Google Play and about 1,000 malware apps from the VirusShare project.
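
The class of bug FlowDroid reports is a flow from a privacy source to a public sink. A toy flow-sensitive propagation over straight-line statements gives the flavor; FlowDroid itself is context-, field-, and object-sensitive and analyzes Dalvik bytecode, not this made-up three-statement program:

```python
# Toy forward taint propagation illustrating a source-to-sink leak report.
SOURCES = {"getDeviceId"}      # calls whose result is tainted
SINKS = {"sendTextMessage"}    # calls where a tainted argument is a leak

# Statements as (lhs, op, args), meaning lhs = op(args).
program = [
    ("imei", "getDeviceId", []),
    ("msg", "concat", ["'id='", "imei"]),
    (None, "sendTextMessage", ["'+1555'", "msg"]),
]

tainted: set[str] = set()
for lhs, op, args in program:
    if op in SINKS and any(a in tainted for a in args):
        print(f"leak: tainted data reaches sink {op}")
    if (op in SOURCES or any(a in tainted for a in args)) and lhs is not None:
        tainted.add(lhs)  # taint propagates through assignment
```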


IEEE Symposium on Security and Privacy | 2009

Understanding Android Security

William Enck; Machigar Ongtang; Patrick D. McDaniel

Google's Android platform is a widely anticipated open source operating system for mobile phones. This article describes Android's security model and attempts to unmask the complexity of secure application development. The authors conclude by identifying lessons and opportunities for future enhancements.


Annual Computer Security Applications Conference | 2009

Semantically Rich Application-Centric Security in Android

Machigar Ongtang; Stephen E. McLaughlin; William Enck; Patrick D. McDaniel

Smartphones are now ubiquitous. However, the security requirements of these relatively new systems and the applications they support are still being understood. As a result, the security infrastructure available in current smartphone operating systems is largely underdeveloped. In this paper, we consider the security requirements of smartphone applications and augment the existing Android operating system with a framework to meet them. We present Secure Application INTeraction (Saint), a modified infrastructure that governs install-time permission assignment and their run-time use as dictated by application provider policy. An in-depth description of the semantics of application policy is presented. The architecture and technical details of Saint are given, and areas for extension, optimization, and improvement are explored. As we show through concrete examples, Saint provides the necessary utility for applications to assert and control security decisions on the platform.
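
The checks Saint mediates can be sketched as a provider-supplied policy that conditions access to an interface on properties of the caller. The policy fields and names below are invented for illustration; the paper defines Saint's actual policy semantics:

```python
# A Saint-style provider policy, sketched with invented fields: the provider of
# an interface states what a caller must (and must not) hold to interact with it.
policy = {
    "com.bank.pay.DEPOSIT": {
        "required_caller_permission": "com.bank.pay.ACCESS",
        "min_caller_version": 2,
        "forbidden_caller_permissions": {"android.permission.INTERNET"},
    }
}

def allow_interaction(interface: str, caller: dict) -> bool:
    rule = policy.get(interface)
    if rule is None:
        return True  # no provider policy: fall back to default Android checks
    return (rule["required_caller_permission"] in caller["permissions"]
            and caller["version"] >= rule["min_caller_version"]
            and not (rule["forbidden_caller_permissions"] & caller["permissions"]))

caller = {"permissions": {"com.bank.pay.ACCESS"}, "version": 3}
print(allow_interaction("com.bank.pay.DEPOSIT", caller))  # True
```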


ACM Transactions on Computer Systems | 2014

TaintDroid: An Information-Flow Tracking System for Realtime Privacy Monitoring on Smartphones

William Enck; Peter Gilbert; Seungyeop Han; Vasant Tendulkar; Byung-Gon Chun; Landon P. Cox; Jaeyeon Jung; Patrick D. McDaniel; Anmol Sheth

Today’s smartphone operating systems frequently fail to provide users with visibility into how third-party applications collect and share their private data. We address these shortcomings with TaintDroid, an efficient, system-wide dynamic taint tracking and analysis system capable of simultaneously tracking multiple sources of sensitive data. TaintDroid enables realtime analysis by leveraging Android’s virtualized execution environment. TaintDroid incurs only 32% performance overhead on a CPU-bound microbenchmark and imposes negligible overhead on interactive third-party applications. Using TaintDroid to monitor the behavior of 30 popular third-party Android applications, in our 2010 study we found that 20 applications potentially misused users’ private information; a similar fraction of the tested applications did so in our 2012 study. Monitoring the flow of privacy-sensitive data with TaintDroid provides valuable input for smartphone users and security service firms seeking to identify misbehaving applications.
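
TaintDroid's core mechanism is a taint tag attached to each value and propagated through computation inside the Dalvik interpreter. A minimal Python model of tag propagation (the Tainted wrapper and source names are our illustration, not TaintDroid's implementation):

```python
# Taint tags as a bit vector: each bit marks one sensitive source, and tags are
# OR-ed together as values flow through computation.
TAINT_LOCATION = 1 << 0
TAINT_IMEI = 1 << 1

class Tainted:
    def __init__(self, value, tag=0):
        self.value, self.tag = value, tag
    def __add__(self, other):  # propagation rule: the result carries both tags
        return Tainted(self.value + other.value, self.tag | other.tag)

def network_send(data: Tainted):
    if data.tag:  # taint sink: report which sources reach the network
        print(f"privacy alert: tags {bin(data.tag)} leave the device")

loc = Tainted("40.79,-77.86", TAINT_LOCATION)
msg = Tainted("pos=") + loc
network_send(msg)  # privacy alert: tags 0b1 leave the device
```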


IEEE European Symposium on Security and Privacy | 2016

The Limitations of Deep Learning in Adversarial Settings

Nicolas Papernot; Patrick D. McDaniel; Somesh Jha; Matthew Fredrikson; Z. Berkay Celik; Ananthram Swami

Deep learning takes advantage of large datasets and computationally efficient training algorithms to outperform other approaches at various machine learning tasks. However, imperfections in the training phase of deep neural networks make them vulnerable to adversarial samples: inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. In this work, we formalize the space of adversaries against deep neural networks (DNNs) and introduce a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs. In an application to computer vision, we show that our algorithms can reliably produce samples correctly classified by human subjects but misclassified into specific targets by a DNN, with a 97% adversarial success rate while modifying on average only 4.02% of the input features per sample. We then evaluate the vulnerability of different sample classes to adversarial perturbations by defining a hardness measure. Finally, we describe preliminary work outlining defenses against adversarial samples by defining a predictive measure of distance between a benign input and a target classification.
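
The crafting algorithms are built on the forward derivative: the Jacobian of the model's outputs with respect to its inputs, from which a saliency measure selects which features to perturb. A numpy sketch of the idea on a toy linear-softmax model; single-feature greedy steps here stand in for the paper's pairwise saliency maps over a real DNN:

```python
import numpy as np

rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 10)), rng.normal(size=3)  # toy 10-feature, 3-class model

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(x):
    return softmax(W @ x + b)

x = rng.uniform(0, 1, size=10)   # input to perturb
target = 2                        # adversary-selected class
theta = 0.1                       # per-step feature perturbation

for _ in range(100):
    p = predict(x)
    if p.argmax() == target:
        break
    # Forward derivative of the target output w.r.t. the input; for softmax
    # over linear logits: dF_t/dx = p_t * (W_t - sum_k p_k W_k).
    grad = p[target] * (W[target] - p @ W)
    i = int(np.abs(grad).argmax())  # most salient single feature
    # (A fuller implementation would skip features already saturated at a bound.)
    x[i] = np.clip(x[i] + theta * np.sign(grad[i]), 0.0, 1.0)

print("classified as", predict(x).argmax(), "- target was", target)
```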


IEEE Symposium on Security and Privacy | 2016

Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks

Nicolas Papernot; Patrick D. McDaniel; Xi Wu; Somesh Jha; Ananthram Swami

Deep learning algorithms have been shown to perform extremely well on many classical machine learning problems. However, recent studies have shown that deep learning, like other machine learning techniques, is vulnerable to adversarial samples: inputs crafted to force a deep neural network (DNN) to provide adversary-selected outputs. Such attacks can seriously undermine the security of the system supported by the DNN, sometimes with devastating consequences. For example, autonomous vehicles can be crashed, illicit or illegal content can bypass content filters, or biometric authentication systems can be manipulated to allow improper access. In this work, we introduce a defensive mechanism called defensive distillation to reduce the effectiveness of adversarial samples on DNNs. We analytically investigate the generalizability and robustness properties granted by the use of defensive distillation when training DNNs. We also empirically study the effectiveness of our defense mechanism on two DNNs placed in adversarial settings. The study shows that defensive distillation can reduce the effectiveness of adversarial sample creation from 95% to less than 0.5% on a studied DNN. Such dramatic gains can be explained by the fact that distillation reduces the gradients used in adversarial sample creation by a factor of 10^30. We also find that distillation increases the average minimum number of features that need to be modified to create adversarial samples by about 800% on one of the DNNs we tested.
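
The mechanism rests on the softmax at temperature T, softmax(z/T): the initial network is trained at high temperature, its soft probability vectors become the training labels for the distilled network, and the distilled network is deployed at T = 1. A minimal sketch of the temperature softmax and its smoothing effect:

```python
import numpy as np

def softmax_T(logits, T=1.0):
    """Softmax at temperature T; higher T spreads probability mass."""
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

logits = [8.0, 2.0, 0.5]
print(softmax_T(logits, T=1))   # near one-hot: sharp gradients for an attacker
print(softmax_T(logits, T=20))  # soft labels used to train the distilled network
```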


Computer and Communications Security | 2009

On cellular botnets: measuring the impact of malicious devices on a cellular network core

Patrick Traynor; Michael Lin; Machigar Ongtang; Vikhyath Rao; Trent Jaeger; Patrick D. McDaniel; Thomas F. La Porta

The vast expansion of interconnectivity with the Internet and the rapid evolution of highly-capable but largely insecure mobile devices threatens cellular networks. In this paper, we characterize the impact of the large scale compromise and coordination of mobile phones in attacks against the core of these networks. Through a combination of measurement, simulation and analysis, we demonstrate the ability of a botnet composed of as few as 11,750 compromised mobile phones to degrade service to area-code sized regions by 93%. As such attacks are accomplished through the execution of network service requests and not a constant stream of phone calls, users are unlikely to be aware of their occurrence. We then investigate a number of significant network bottlenecks, their impact on the density of compromised nodes per base station and how they can be avoided. We conclude by discussing a number of countermeasures that may help to partially mitigate the threats posed by such attacks.
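
A back-of-the-envelope capacity model conveys why modest bot populations suffice: once the signaling requests offered to a shared core element exceed its service capacity, the excess fraction of all requests is denied. The numbers below are made up for illustration and are not the paper's measured parameters:

```python
# Illustrative capacity model (made-up numbers, not the paper's measurements).
capacity = 1000.0    # signaling requests/sec a shared core element can serve
legitimate = 800.0   # background requests/sec from normal users
per_bot = 1.0        # requests/sec issued by each compromised phone

def denied_fraction(bots: int) -> float:
    offered = legitimate + bots * per_bot
    return max(0.0, (offered - capacity) / offered)

for bots in (0, 1000, 10000):
    print(f"{bots:>6} bots -> {denied_fraction(bots):.0%} of requests denied")
```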


Computer and Communications Security | 2011

Protecting consumer privacy from electric load monitoring

Stephen E. McLaughlin; Patrick D. McDaniel; William Aiello

The smart grid introduces concerns for the loss of consumer privacy; recently deployed smart meters retain and distribute highly accurate profiles of home energy use. These profiles can be mined by Non-Intrusive Load Monitors (NILMs) to expose much of the human activity within the served site. This paper introduces a new class of algorithms and systems, called Non-Intrusive Load Leveling (NILL), to combat potential invasions of privacy. NILL uses an in-residence battery to mask variance in load on the grid, thus eliminating exposure of the appliance-driven information used to compromise consumer privacy. We use real residential energy-use profiles to drive four simulated deployments of NILL. The simulations show that NILL exposes only 1.1 to 5.9 useful energy events per day, hidden amongst hundreds or thousands of similar battery-suppressed events. Thus, the energy profiles exhibited by NILL are largely useless for current NILM algorithms. Surprisingly, such privacy gains can be achieved using battery systems whose storage capacity is far lower than the residence's aggregate load average. We conclude by discussing how the costs of NILL can be offset by energy savings under tiered energy schedules.
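
NILL's core idea can be sketched as a controller that charges or discharges the battery so the grid sees a near-constant target load instead of the appliance-driven demand profile. A simplified Python sketch, assuming one-hour timesteps and ignoring the paper's handling of sustained high- or low-demand states:

```python
def level_load(demand, K, capacity, rate):
    """Yield (grid_load_kW, state_of_charge_kWh) per timestep.

    demand: appliance-driven load in kW; K: target grid load in kW;
    capacity: battery size in kWh; rate: max charge/discharge in kW.
    """
    soc = capacity / 2  # start with a half-full battery
    for d in demand:
        transfer = K - d                              # >0 charge, <0 discharge
        transfer = max(-rate, min(rate, transfer))    # respect power limits
        transfer = max(-soc, min(capacity - soc, transfer))  # respect capacity
        soc += transfer
        yield d + transfer, soc  # grid sees demand plus battery charging

demand = [0.3, 0.3, 2.1, 2.1, 0.4, 1.5, 0.4]  # toy profile with appliance events
for load, soc in level_load(demand, K=1.0, capacity=4.0, rate=2.0):
    print(f"grid={load:.2f} kW  soc={soc:.2f} kWh")  # grid stays flat at 1.00 kW
```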


Computer and Communications Security | 2017

Practical Black-Box Attacks against Machine Learning

Nicolas Papernot; Patrick D. McDaniel; Ian J. Goodfellow; Somesh Jha; Z. Berkay Celik; Ananthram Swami

Machine learning (ML) models, e.g., deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs, while appearing unmodified to human observers. Potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior. Yet, all existing adversarial example attacks require knowledge of either the model internals or its training data. We introduce the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge. Indeed, the only capability of our black-box adversary is to observe labels given by the DNN to chosen inputs. Our attack strategy consists of training a local model to substitute for the target DNN, using inputs synthetically generated by an adversary and labeled by the target DNN. We use the local substitute to craft adversarial examples, and find that they are misclassified by the targeted DNN. To perform a real-world and properly blinded evaluation, we attack a DNN hosted by MetaMind, an online deep learning API. We find that their DNN misclassifies 84.24% of the adversarial examples crafted with our substitute. We demonstrate the general applicability of our strategy to many ML techniques by conducting the same attack against models hosted by Amazon and Google, using logistic regression substitutes. They yield adversarial examples misclassified by Amazon and Google at rates of 96.19% and 88.94%, respectively. We also find that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.
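
The attack loop can be sketched schematically: query the victim's API for labels on synthetic inputs, fit a local substitute, augment the synthetic set along the substitute's gradient (the paper's Jacobian-based dataset augmentation), then transfer adversarial examples crafted on the substitute. Everything below (the oracle, the one-pass learner, the numbers) is a toy stand-in:

```python
import numpy as np

def oracle(x):
    # Stand-in for the remote victim model: the attacker observes labels only.
    return int(x.sum() > 5)

def train_substitute(X, y):
    # Placeholder learner: one logistic-regression-style pass over the data.
    w = np.zeros_like(X[0])
    for x, label in zip(X, y):
        p = 1.0 / (1.0 + np.exp(-w @ x))
        w += (label - p) * x
    return w

def build_substitute(seeds, rounds=3, lam=0.1):
    X = [np.asarray(s, dtype=float) for s in seeds]
    for _ in range(rounds):
        y = [oracle(x) for x in X]              # query the victim for labels
        w = train_substitute(X, y)
        # Jacobian-based dataset augmentation, simplified to a sign step:
        X = X + [x + lam * np.sign(w) for x in X]
    return w

w = build_substitute([np.full(10, v) for v in (0.1, 0.4, 0.7)])
benign = np.full(10, 0.4)
adv = benign + 0.15 * np.sign(w)  # FGSM-style step computed on the substitute
print(oracle(benign), "->", oracle(adv))  # transferred example flips the label
```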

Collaboration


Dive into Patrick D. McDaniel's collaborations.

Top Co-Authors

Nicolas Papernot (Pennsylvania State University)
William Enck (North Carolina State University)
Trent Jaeger (Pennsylvania State University)
Stephen E. McLaughlin (Pennsylvania State University)
Z. Berkay Celik (Pennsylvania State University)
Thomas F. La Porta (Pennsylvania State University)
Damien Octeau (Pennsylvania State University)