Publication


Featured research published by Thomas Moyer.


IEEE Transactions on Computers | 2012

Scalable Web Content Attestation

Thomas Moyer; Kevin R. B. Butler; Joshua Schiffman; Patrick D. McDaniel; Trent Jaeger

The web is a primary means of information sharing for most organizations and people. Currently, a recipient of web content knows nothing about the environment in which that information was generated other than the specific server it came from (and even that information can be unreliable). In this paper, we develop and evaluate the Spork system that uses the Trusted Platform Module (TPM) to tie the web server integrity state to the web content delivered to browsers, thus allowing a client to verify that the origin of the content was functioning properly when the received content was generated and/or delivered. We discuss the design and implementation of the Spork service and its browser-side Firefox validation extension. In particular, we explore the challenges and solutions of scaling the delivery of mixed static and dynamic content to a large number of clients using exceptionally slow TPM hardware. We perform an in-depth empirical analysis of the Spork system within Apache web servers. This analysis shows Spork can deliver nearly 8,000 static or over 6,500 dynamic integrity-measured web objects per second. More broadly, we identify how TPM-based content web services can scale to large client loads and deliver integrity-measured content with manageable overhead.
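The core idea of tying server integrity state to delivered content can be sketched as below. This is a simplified illustration, not Spork's actual protocol: a real deployment uses a signed TPM quote over PCR values (and Merkle-tree batching for throughput), whereas here a bare hash stands in for the signature, and all function names are hypothetical.

```python
import hashlib

def attest_content(pcr_state: bytes, nonce: bytes, content: bytes) -> bytes:
    # Server side (stand-in for a signed TPM quote): bind the server's
    # integrity state and a client-supplied nonce to the hash of the page.
    content_hash = hashlib.sha256(content).digest()
    return hashlib.sha256(pcr_state + nonce + content_hash).digest()

def verify_content(proof: bytes, expected_pcrs: bytes, nonce: bytes,
                   content: bytes) -> bool:
    # Client side: recompute the binding from known-good PCR values, the
    # nonce the client issued, and the content it actually received.
    content_hash = hashlib.sha256(content).digest()
    expected = hashlib.sha256(expected_pcrs + nonce + content_hash).digest()
    return proof == expected
```

Verification fails if either the content was tampered with in transit or the server's measured integrity state differs from the client's expectation.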


International World Wide Web Conference (WWW) | 2017

Transparent Web Service Auditing via Network Provenance Functions

Adam M. Bates; Wajih Ul Hassan; Kevin R. B. Butler; Alin Dobra; Bradley Reaves; Patrick T. Cable; Thomas Moyer; Nabil Schear

Detecting and explaining the nature of attacks in distributed web services is often difficult -- determining the nature of suspicious activity requires following the trail of an attacker through a chain of heterogeneous software components including load balancers, proxies, worker nodes, and storage services. Unfortunately, existing forensic solutions cannot provide the necessary context to link events across complex workflows, particularly in instances where application layer semantics (e.g., SQL queries, RPCs) are needed to understand the attack. In this work, we present a transparent provenance-based approach for auditing web services through the introduction of Network Provenance Functions (NPFs). NPFs are a distributed architecture for capturing detailed data provenance for web service components, leveraging the key insight that mediation of an application's protocols can be used to infer its activities without requiring invasive instrumentation or developer cooperation. We design and implement NPF with consideration for the complexity of modern cloud-based web services, and evaluate our architecture against a variety of applications including DVDStore, RUBiS, and WikiBench to show that our system imposes as little as 9.3% average end-to-end overhead on connections for realistic workloads. Finally, we consider several scenarios in which our system can be used to concisely explain attacks. NPF thus enables the hassle-free deployment of semantically rich provenance-based auditing for complex application workflows in the cloud.
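The key insight of protocol mediation can be illustrated as a shim around a component's request handler: every mediated call emits a provenance record, while the application itself is never modified. This is a minimal sketch under assumed names (`mediate`, the record fields), not the NPF implementation.

```python
import itertools

_ids = itertools.count()

def mediate(component, operation, handler, log):
    # Wrap a component's handler so each call it mediates emits a
    # provenance record; the wrapped application is untouched.
    def shim(request):
        log.append({"id": next(_ids), "component": component,
                    "operation": operation, "request": request})
        return handler(request)
    return shim
```

A proxy sitting in front of a database, for example, would parse each SQL query off the wire and record it as a provenance event linked to the upstream web request.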


IEEE High Performance Extreme Computing Conference (HPEC) | 2016

High-throughput ingest of data provenance records into Accumulo

Thomas Moyer; Vijay Gadepally

Whole-system data provenance provides deep insight into the processing of data on a system, including detecting data integrity attacks. The downside to systems that collect whole-system data provenance is the sheer volume of data that is generated under many heavy workloads. In order to make provenance metadata useful, it must be stored where it can be queried. This problem becomes even more challenging when considering a network of provenance-aware machines all collecting this metadata. In this paper, we investigate the use of D4M and Accumulo to support high-throughput data ingest of whole-system provenance data. We find that we are able to ingest 3,970 graph components per second. Centrally storing the provenance metadata allows us to build systems that can detect and respond to data integrity attacks that are captured by the provenance system.
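In the D4M model, a graph is stored as an associative array: each directed provenance edge becomes one (row, column, value) triple, and triples are grouped into batched mutations before being written to Accumulo. The sketch below illustrates that shape only; the function names are hypothetical and no Accumulo client API is shown.

```python
def edges_to_triples(edges):
    # D4M-style adjacency representation: one (row, column, value)
    # triple per directed provenance edge, ready to load into a table.
    return [(src, dst, "1") for src, dst in edges]

def batches(triples, size):
    # Group triples into fixed-size batches; batching writes is the
    # main lever for ingest throughput.
    for i in range(0, len(triples), size):
        yield triples[i:i + size]
```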


Annual Computer Security Applications Conference (ACSAC) | 2016

Bootstrapping and maintaining trust in the cloud

Nabil Schear; Patrick T. Cable; Thomas Moyer; Bryan Richard; Robert Rudd

Today's infrastructure as a service (IaaS) cloud environments rely upon full trust in the provider to secure applications and data. Cloud providers do not offer the ability to create hardware-rooted cryptographic identities for IaaS cloud resources or sufficient information to verify the integrity of systems. Trusted computing protocols and hardware like the TPM have long promised a solution to this problem. However, these technologies have not seen broad adoption because of their complexity of implementation, low performance, and lack of compatibility with virtualized environments. In this paper we introduce keylime, a scalable trusted cloud key management system. keylime provides an end-to-end solution for both bootstrapping hardware rooted cryptographic identities for IaaS nodes and for system integrity monitoring of those nodes via periodic attestation. We support these functions in both bare-metal and virtualized IaaS environments using a virtual TPM. keylime provides a clean interface that allows higher level security services like disk encryption or configuration management to leverage trusted computing without being trusted computing aware. We show that our bootstrapping protocol can derive a key in less than two seconds, we can detect system integrity violations in as little as 110ms, and that keylime can scale to thousands of IaaS cloud nodes.
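The periodic-attestation half of this design amounts to a monitoring loop that compares each node's quoted PCR values against expected "golden" measurements and flags the first deviation. This is a minimal sketch of that check under assumed names; the real keylime verifier additionally validates quote signatures, nonces, and IMA measurement lists.

```python
def quote_ok(quoted_pcrs: dict, golden: dict) -> bool:
    # Compare a (signature-verified) quote's PCR values against the
    # expected golden measurements for this node.
    return all(golden.get(i) == v for i, v in quoted_pcrs.items())

def monitor(quotes, golden):
    # Periodic attestation loop: return the index of the first quote
    # that deviates from policy, or None if every quote passes.
    for n, q in enumerate(quotes):
        if not quote_ok(q, golden):
            return n
    return None
```

On a failed check, a verifier would revoke the node's bootstrapped key so that dependent services (e.g. disk encryption) lose access automatically.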


Symposium on Cloud Computing (SoCC) | 2017

Practical whole-system provenance capture

Thomas F. J.-M. Pasquier; Xueyuan Michael Han; Mark A. Goldstein; Thomas Moyer; David M. Eyers; Margo I. Seltzer; Jean Bacon

Data provenance describes how data came to be in its present form. It includes data sources and the transformations that have been applied to them. Data provenance has many uses, from forensics and security to aiding the reproducibility of scientific experiments. We present CamFlow, a whole-system provenance capture mechanism that integrates easily into a PaaS offering. While there have been several prior whole-system provenance systems that captured a comprehensive, systemic and ubiquitous record of a system's behavior, none have been widely adopted. They either A) impose too much overhead, B) are designed for long-outdated kernel releases and are hard to port to current systems, C) generate too much data, or D) are designed for a single system. CamFlow addresses these shortcomings by: 1) leveraging the latest kernel design advances to achieve efficiency; 2) using a self-contained, easily maintainable implementation relying on a Linux Security Module, NetFilter, and other existing kernel facilities; 3) providing a mechanism to tailor the captured provenance data to the needs of the application; and 4) making it easy to integrate provenance across distributed systems. The provenance we capture is streamed and consumed by tenant-built auditor applications. We illustrate the usability of our implementation by describing three such applications: demonstrating compliance with data regulations; performing fault/intrusion detection; and implementing data loss prevention. We also show how CamFlow can be leveraged to capture meaningful provenance without modifying existing applications.
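A tenant-built auditor consuming the streamed provenance might look like the sketch below: it propagates a "sensitive" taint along information-flow edges and alerts when tainted data reaches an external sink, a minimal data-loss-prevention policy. The function and record shapes here are illustrative assumptions, not CamFlow's record format.

```python
def audit_stream(stream, sensitive, external_sinks):
    # Consume (source, destination) flow records in arrival order,
    # propagating taint from sensitive entities and alerting when
    # tainted data reaches an external sink.
    tainted = set(sensitive)
    alerts = []
    for src, dst in stream:
        if src in tainted:
            if dst in external_sinks:
                alerts.append((src, dst))
            else:
                tainted.add(dst)
    return alerts
```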


ACM Transactions on Internet Technology | 2017

Taming the Costs of Trustworthy Provenance through Policy Reduction

Adam M. Bates; Dave Tian; Grant Hernandez; Thomas Moyer; Kevin R. B. Butler; Trent Jaeger

Provenance is an increasingly important tool for understanding and even actively preventing system intrusion, but the excessive storage burden imposed by automatic provenance collection threatens to undermine its value in practice. This situation is made worse by the fact that the majority of this metadata is unlikely to be of interest to an administrator, instead describing system noise or other background activities that are not germane to the forensic investigation. To date, storing data provenance in perpetuity has been a necessary concession in even the most advanced provenance tracking systems in order to ensure the completeness of the provenance record for future analyses. In this work, we overcome this obstacle by proposing a policy-based approach to provenance filtering, leveraging the confinement properties provided by Mandatory Access Control (MAC) systems in order to identify and isolate subdomains of system activity for which to collect provenance. We introduce the notion of minimal completeness for provenance graphs, and design and implement a system that provides this property by exclusively collecting provenance for the trusted computing base of a target application. In evaluation, we discover that, while the efficacy of our approach is domain dependent, storage costs can be reduced by as much as 89% in critical scenarios such as provenance tracking in cloud computing data centers. To the best of our knowledge, this is the first policy-based provenance monitor to appear in the literature.
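The filtering step reduces to a label test: keep only events whose MAC label falls inside the target application's trusted computing base, and measure how much storage that saves. A minimal sketch, with hypothetical event and label names:

```python
def reduce_by_policy(events, tcb_labels):
    # Retain only provenance events whose MAC label belongs to the
    # target application's trusted computing base; everything else is
    # background activity that need not be stored.
    kept = [e for e in events if e["label"] in tcb_labels]
    saved = (1.0 - len(kept) / len(events)) if events else 0.0
    return kept, saved
```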


ACM Conference on Computer and Communications Security (CCS) | 2018

Runtime Analysis of Whole-System Provenance

Thomas F. J.-M. Pasquier; Xueyuan Han; Thomas Moyer; Adam M. Bates; Olivier Hermant; David M. Eyers; Jean Bacon; Margo I. Seltzer

Identifying the root cause and impact of a system intrusion remains a foundational challenge in computer security. Digital provenance provides a detailed history of the flow of information within a computing system, connecting suspicious events to their root causes. Although existing provenance-based auditing techniques provide value in forensic analysis, they assume that such analysis takes place only retrospectively. Such post-hoc analysis is insufficient for realtime security applications; moreover, even for forensic tasks, prior provenance collection systems exhibited poor performance and scalability, jeopardizing the timeliness of query responses. We present CamQuery, which provides inline, realtime provenance analysis, making it suitable for implementing security applications. CamQuery is a Linux Security Module that offers support for both userspace and in-kernel execution of analysis applications. We demonstrate the applicability of CamQuery to a variety of runtime security applications including data loss prevention, intrusion detection, and regulatory compliance. In evaluation, we demonstrate that CamQuery reduces the latency of realtime query mechanisms, while imposing minimal overheads on system execution. CamQuery thus enables the further deployment of provenance-based technologies to address central challenges in computer security.
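The inline-analysis model can be pictured as queries registered as callbacks that see each provenance edge the moment it is generated, rather than interrogating a stored graph after the fact. This is a conceptual sketch with invented names, not CamQuery's in-kernel API.

```python
class InlineAnalysis:
    # Registered handlers run on every new provenance edge as it is
    # captured, so alerts fire at runtime instead of post hoc.
    def __init__(self):
        self.handlers = []

    def register(self, handler):
        self.handlers.append(handler)

    def on_edge(self, src, dst):
        alerts = []
        for handler in self.handlers:
            result = handler(src, dst)
            if result is not None:
                alerts.append(result)
        return alerts
```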


2016 IEEE Cybersecurity Development (SecDev) | 2016

Leveraging Data Provenance to Enhance Cyber Resilience

Thomas Moyer; Karishma Chadha; Robert K. Cunningham; Nabil Schear; Warren Smith; Adam M. Bates; Kevin R. B. Butler; Frank Capobianco; Trent Jaeger; Patrick T. Cable

Building secure systems used to mean ensuring a secure perimeter, but that is no longer the case. Today's systems are ill-equipped to deal with attackers that are able to pierce perimeter defenses. Data provenance is a critical technology in building resilient systems that will allow systems to recover from attackers that manage to overcome the hard-shell defenses. In this paper, we provide background on data provenance, along with details of provenance collection, analysis, and storage techniques and their challenges. Data provenance is well situated to address the challenging problem of allowing a system to fight-through an attack, and we help to identify necessary work to ensure that future systems are resilient.


USENIX Security Symposium | 2015

Trustworthy whole-system provenance for the Linux kernel

Adam M. Bates; Dave Tian; Kevin R. B. Butler; Thomas Moyer


TaPP '15: Proceedings of the 7th USENIX Conference on Theory and Practice of Provenance | 2015

Take only what you need: leveraging mandatory access control policy to reduce provenance storage costs

Adam M. Bates; Kevin R. B. Butler; Thomas Moyer

Collaboration


Dive into Thomas Moyer's collaborations.

Top Co-Authors

Nabil Schear, Massachusetts Institute of Technology
Patrick T. Cable, Massachusetts Institute of Technology
Trent Jaeger, Pennsylvania State University
Dave Tian, University of Florida
Robert K. Cunningham, Massachusetts Institute of Technology
Vijay Gadepally, Massachusetts Institute of Technology
Warren Smith, Massachusetts Institute of Technology
Jean Bacon, University of Cambridge