Network


Latest external collaborations at the country level. Click the dots to dive into the details.

Hotspot


Dive into the research topics where Michele Co is active.

Publication


Featured research published by Michele Co.


Engineering Secure Software and Systems | 2009

MEDS: The Memory Error Detection System

Jason D. Hiser; Clark L. Coleman; Michele Co; Jack W. Davidson

Memory errors continue to be a major source of software failure. To address this issue, we present MEDS (Memory Error Detection System), a system for detecting memory errors within binary executables. MEDS can detect buffer overflows, uninitialized data reads, double frees, and accesses to deallocated memory. It works by using static analysis to prove memory accesses safe; any access that cannot be proven safe falls back to run-time analysis. The system improves on previous work with dramatic reductions in false positives and coverage of all memory segments (stack, static, heap).
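The hybrid strategy MEDS describes, proving what it can statically and instrumenting only the rest, can be illustrated with a toy model. All names here are hypothetical, not the MEDS implementation:

```python
# Toy model of MEDS-style hybrid checking: accesses that "static analysis"
# can prove in-bounds run unchecked; all others get a runtime bounds check.

class Buffer:
    def __init__(self, size):
        self.size = size
        self.data = [0] * size

def statically_safe(index, size):
    """Stand-in for static analysis: here, an access is provably safe
    only when the index is a known constant within the buffer bounds."""
    return isinstance(index, int) and 0 <= index < size

def checked_read(buf, index):
    if statically_safe(index, buf.size):
        return buf.data[index]          # proven safe: no runtime check
    # fall back to a runtime check, as MEDS does for unproven accesses
    if not (0 <= index < buf.size):
        raise MemoryError(f"out-of-bounds read at index {index}")
    return buf.data[index]

buf = Buffer(4)
checked_read(buf, 2)      # in bounds: succeeds
try:
    checked_read(buf, 9)  # out of bounds: caught by the runtime check
except MemoryError as e:
    print(e)
```

In the real system the "static" and "dynamic" halves operate on binary code rather than Python objects, but the division of labor is the same.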


Runtime Verification | 2012

Defense against Stack-Based Attacks Using Speculative Stack Layout Transformation

Benjamin D. Rodes; Anh Nguyen-Tuong; Jason D. Hiser; John C. Knight; Michele Co; Jack W. Davidson

This paper describes a novel technique to defend binaries against intra-frame stack-based attacks, including overflows into local variables, when source code is unavailable. The technique infers a specification of a function's stack layout, i.e., variable locations and boundaries, and then applies a combination of transformations, including variable reordering, random-sized padding between variables, and placement of canaries. To overcome the imprecision of static binary analysis, yet be as aggressive as possible in the transformations applied to the stack layout, the technique is speculative: a stack frame is aggressively transformed based on static analysis, and the validity of the inferred stack layout is assessed through regression testing. If a transformation changes a program's semantics because of imprecision in the inference of the stack layout, a less aggressive layout is inferred until the transformed program passes the supplied regression tests. We present an overview of the technique and preliminary results on its feasibility and security effectiveness.
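The speculate-and-validate loop at the heart of this technique can be sketched as follows. The function names and the notion of numeric "aggressiveness levels" are illustrative assumptions, not the authors' implementation:

```python
# Sketch of the speculative transform-and-validate loop: try the most
# aggressive stack-layout transformation first, and back off until the
# transformed program passes the supplied regression tests.

def transform(layout, aggressiveness):
    """Stand-in for the layout transformation: reorder variables, insert
    random-sized padding, and place canaries per the aggressiveness level."""
    return {"layout": layout, "level": aggressiveness}

def passes_regression_tests(candidate):
    """Stand-in for running the regression suite. Here we pretend only
    levels <= 2 preserve the program's semantics."""
    return candidate["level"] <= 2

def speculative_transform(layout, max_level=3):
    for level in range(max_level, -1, -1):      # most aggressive first
        candidate = transform(layout, level)
        if passes_regression_tests(candidate):
            return candidate                    # validated transformation
    return None                                 # nothing safe: leave frame alone

result = speculative_transform("frame-of-foo")
print(result["level"])   # speculation fails at level 3, succeeds at 2
```

The key design point is that regression testing substitutes for the precision that static binary analysis cannot provide.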


International Conference on Software Engineering | 2011

PEASOUP: preventing exploits against software of uncertain provenance (position paper)

Michele Co; Jack W. Davidson; Jason D. Hiser; John C. Knight; Anh Nguyen-Tuong; David R. Cok; Denis Gopan; David Melski; Wenke Lee; Chengyu Song; Thomas Bracewell; David Hyde; Brian Mastropietro

Because software provides much of the critical services for modern society, it is vitally important to provide methodologies and tools for building and deploying reliable software. While there have been many advances towards this goal, much research remains to be done. For example, a recent evaluation of five state-of-the-art C/C++ static analysis tools applied to a corpus of code containing common weaknesses revealed that 41% of the potential vulnerabilities were not detected by any tool. The problem of deploying resilient software is further complicated because modern software is often assembled from components obtained from many sources. Consequently, it is difficult to know who built a particular component and what processes were used in its construction. Our research goal is to develop and demonstrate technology that provides comprehensive, automated techniques that allow end users to safely execute new software of uncertain provenance. This paper presents an overview of our vision for realizing these goals and outlines some of the challenging research problems that must be addressed to realize our vision. We call our vision PEASOUP and have begun implementing and evaluating these ideas.


ACM Transactions on Architecture and Code Optimization | 2006

Evaluating trace cache energy efficiency

Michele Co; Dee A. B. Weikle; Kevin Skadron

Future fetch engines need to be energy efficient. Much research has focused on improving fetch bandwidth. In particular, previous research shows that storing concatenated basic blocks to form instruction traces can significantly improve fetch performance. This work evaluates whether this concatenation of basic blocks translates to significant energy-efficiency gains. We compare the processor performance and energy efficiency of trace-cache and instruction-cache fetch engines. We find that, although trace caches modestly outperform instruction-cache-only alternatives, it is branch-prediction accuracy that really determines performance and energy efficiency. When access delay and area restrictions are considered, our results show that sequential trace caches (STCs) achieve performance and energy efficiency very similar to instruction-cache-based fetch engines, and that the trace cache's failure to significantly outperform the instruction-cache-based fetch organizations stems from the poorer implicit branch prediction of the next-trace predictor at smaller areas. Because access delay limits the theoretical performance of the evaluated fetch engines, we also propose a novel ahead-pipelined next-trace predictor. Our results show that an STC fetch organization with a three-stage, ahead-pipelined next-trace predictor can achieve 5--17% IPC and 29% ED2 improvements over conventional, unpipelined organizations.
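The core idea of a trace cache, concatenating basic blocks along the predicted path so the fetch engine can deliver them in one access, can be sketched in miniature. The data structures here are illustrative, not the paper's simulated hardware:

```python
# Toy trace construction: follow the predicted successor of each basic
# block, concatenating blocks until the trace-line capacity is reached.

def build_trace(blocks, start, next_block, max_insts=16):
    """blocks: {block name: instruction count};
    next_block: predicted successor of each block (the 'next-trace
    prediction'); max_insts: capacity of one trace-cache line."""
    trace, total, cur = [], 0, start
    while cur is not None and total + blocks[cur] <= max_insts:
        trace.append(cur)
        total += blocks[cur]
        cur = next_block.get(cur)   # follow the predicted path
    return trace

blocks = {"A": 5, "B": 6, "C": 4, "D": 8}
next_block = {"A": "B", "B": "C", "C": "D"}
print(build_trace(blocks, "A", next_block))   # ['A', 'B', 'C']: D won't fit
```

As the abstract notes, the quality of `next_block` (the implicit branch prediction) dominates how useful the stored traces are.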


European Dependable Computing Conference | 2014

To B or not to B: Blessing OS Commands with Software DNA Shotgun Sequencing

Anh Nguyen-Tuong; Jason D. Hiser; Michele Co; Nathan Taylor Kennedy; David Melski; William Ella; David Hyde; Jack W. Davidson; John C. Knight

We introduce Software DNA Shotgun Sequencing (S3), a novel, biologically inspired approach to combat OS injection attacks, the #2 most dangerous software error as identified by MITRE. To thwart such attacks, researchers have advocated various forms of taint-tracking techniques. Despite promising results, e.g., few missed attacks and few false alarms, taint tracking has not seen widespread adoption; impediments include high overhead and difficulty of deployment. S3 is based on a novel technique, positive taint inference, which dynamically reassembles string fragments from a binary to infer the blessed, i.e., trusted, parts of an OS command. S3 incurs negligible performance overhead and is easy to deploy, as it operates directly on binary programs.
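Positive taint inference can be sketched with a toy model: parts of a command that match string fragments recovered from the binary are blessed, and shell metacharacters outside blessed spans indicate injected content. This is an illustrative simplification, not the S3 implementation:

```python
# Toy positive-taint check in the spirit of S3: bless the spans of an OS
# command that match string literals found in the program binary, then
# reject the command if any shell metacharacter falls outside a blessed span.

METACHARS = set(";|&`$")

def blessed_mask(command, literals):
    """Mark every character of `command` covered by a known literal."""
    mask = [False] * len(command)
    for lit in literals:
        start = command.find(lit)
        while start != -1:
            for i in range(start, start + len(lit)):
                mask[i] = True
            start = command.find(lit, start + 1)
    return mask

def is_safe(command, literals):
    mask = blessed_mask(command, literals)
    return all(mask[i] or command[i] not in METACHARS
               for i in range(len(command)))

literals = ["ls -l ", "cat "]   # fragments recovered from the binary
print(is_safe("ls -l /home/user", literals))        # True: no unblessed metachars
print(is_safe("ls -l /tmp; rm -rf /", literals))    # False: ';' is unblessed
```

Blessing what is known-good and rejecting everything else inverts conventional taint tracking, which tries to mark what is untrusted.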


2nd International Symposium on Resilient Control Systems | 2009

A lightweight software control system for cyber awareness and security

Michele Co; Clark L. Coleman; Jack W. Davidson; Sudeep Ghosh; Jason D. Hiser; John C. Knight; Anh Nguyen-Tuong

Designing and building software that is free of defects that can be exploited by malicious adversaries is a difficult task. Despite extensive efforts via the application of formal methods, use of automated software engineering tools, and extensive pre-deployment testing, exploitable errors still appear in software. The problem of cyber resilience is further compounded by the growing sophistication of adversaries who can marshal substantial resources to compromise systems. This paper describes a novel, promising approach to improving the resilience of software. The approach is to impose a process-level software control system that continuously monitors an application for signs of attack or failure and responds accordingly. The system uses software dynamic translation to seamlessly insert arbitrary sensors and actuators into an executing binary. The control system employs the sensors to detect attacks and the actuators to effect an appropriate response. Using this approach, several novel monitoring and response systems have been developed. The paper describes our lightweight process-level software control system and our experience using it to increase the resilience of systems, and discusses future research directions for extending and enhancing this powerful approach to achieving cyber awareness and resilience.
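The sense-decide-actuate loop described above can be sketched as follows. The sensor and actuator names are hypothetical; in the real system they are injected into the running binary via software dynamic translation rather than called from Python:

```python
# Minimal sketch of a process-level control loop: sensors inspect runtime
# events for signs of attack; when one fires, the matching actuator
# carries out the response.

def monitor(events, sensors, actuators):
    """Feed each runtime event to every sensor; when a sensor flags an
    attack, invoke the corresponding actuator and record its response."""
    responses = []
    for event in events:
        for name, detect in sensors.items():
            if detect(event):
                responses.append(actuators[name](event))
    return responses

sensors = {
    # flag a write to a return address as a stack-smashing attempt
    "stack_smash": lambda e: e.get("target") == "return_address",
}
actuators = {
    "stack_smash": lambda e: f"terminated process {e['pid']}",
}

events = [{"pid": 7, "target": "heap"},
          {"pid": 7, "target": "return_address"}]
print(monitor(events, sensors, actuators))
# one response: the offending process is terminated
```

Because sensors and actuators are arbitrary code, the same loop can host many different monitoring and response policies.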


Dependable Systems and Networks | 2017

Zipr: Efficient Static Binary Rewriting for Security

William H. Hawkins; Jason D. Hiser; Michele Co; Anh Nguyen-Tuong; Jack W. Davidson

To quickly patch security vulnerabilities, there has been keen interest in securing binaries in situ. Unfortunately, the state of the art in static binary rewriting does not allow the transformed program to be both space and time efficient. A primary limitation is that leading static rewriters require that the original copy of the code remain in the transformed binary, thereby incurring a file-size overhead of at least 100%. This paper presents Zipr, a static binary rewriter that removes this limitation and enables both space- and time-efficient transformation of arbitrary binaries. We describe results from applying Zipr in the DARPA Cyber Grand Challenge (CGC), the first fully automated cyber-hacking contest. The CGC rules penalized competitors for producing a patched binary whose on-disk size was 20% larger than the original, whose CPU utilization was 5% more than the original, or whose memory use was 5% more than the original. Zipr's efficiency enabled our automated system, Xandra, to apply both code diversity and control-flow integrity security techniques to the challenge binaries provided by DARPA, resulting in Xandra achieving the best security score in the competition, remaining within the required space and time performance envelope, and winning a $1M cash prize.


Proceedings of the 11th Annual Cyber and Information Security Research Conference | 2016

Double Helix and RAVEN: A System for Cyber Fault Tolerance and Recovery

Michele Co; Jack W. Davidson; Jason D. Hiser; John C. Knight; Anh Nguyen-Tuong; Westley Weimer; Jonathan Burket; Gregory L. Frazier; Tiffany M. Frazier; Bruno Dutertre; Ian A. Mason; Natarajan Shankar; Stephanie Forrest

Cybersecurity research has produced numerous artificial diversity techniques, such as address-space layout randomization, heap randomization, instruction-set randomization, and instruction-location randomization. To be most effective, these techniques must be high entropy and secure from information leakage, which, in practice, is often difficult to achieve. Indeed, it has been demonstrated that well-funded, determined adversaries can often circumvent these defenses. To allow the use of low-entropy diversity, prevent information leakage, and provide provable security against attacks, previous research proposed using low-entropy but carefully structured artificial diversity to create variants of an application and then running these constructed variants within a fault-tolerant environment that executes each variant in parallel and cross-checks their results to detect and mitigate faults. If the variants are carefully constructed, it is possible to prove that certain classes of attack are not possible. This paper presents an overview and status of a cyber fault-tolerant system that uses a low-overhead multi-variant execution environment together with precise static binary analysis and efficient rewriting technology to produce structured variants, allowing automated verification techniques to prove security properties of the system. Preliminary results demonstrate that the system is capable of detecting unknown faults and mitigating attacks.


IEEE Computer | 2016

Diversity in Cybersecurity

John C. Knight; Jack W. Davidson; Anh Nguyen-Tuong; Jason D. Hiser; Michele Co

Randomizing software characteristics can help thwart cyberattacks by denying critical information about a target system previously known to an attacker.


European Dependable Computing Conference | 2014

A Framework for Creating Binary Rewriting Tools (Short Paper)

Jason D. Hiser; Anh Nguyen-Tuong; Michele Co; Benjamin D. Rodes; Matthew Hall; Clark L. Coleman; John C. Knight; Jack W. Davidson
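Several of the systems above, notably Double Helix and RAVEN, rely on multi-variant execution with output cross-checking. A minimal sketch of that idea, with purely illustrative "variants", looks like this:

```python
# Toy multi-variant execution: run structured variants of a computation
# (here, sequentially rather than in parallel) and cross-check their
# outputs; a divergence signals a fault or an attack on one variant.

def cross_check(variants, input_value):
    outputs = [v(input_value) for v in variants]
    if len(set(outputs)) == 1:
        return outputs[0]                  # all variants agree: deliver result
    raise RuntimeError(f"divergence detected: {outputs}")

variant_a = lambda x: x * 2                # baseline computation
variant_b = lambda x: x + x                # diversified, same semantics
compromised = lambda x: 999                # behaves differently under attack

print(cross_check([variant_a, variant_b], 21))        # agreeing variants: 42
try:
    cross_check([variant_a, compromised], 21)
except RuntimeError as e:
    print(e)                               # divergence is detected
```

The security argument rests on constructing the variants so that any single exploit necessarily makes them diverge, which the cross-check then catches.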

Collaboration


Dive into Michele Co's collaborations.
