Publication


Featured research published by Robert K. Cunningham.


Workshop on Rapid Malcode | 2003

A taxonomy of computer worms

Nicholas Weaver; Vern Paxson; Stuart Staniford; Robert K. Cunningham

To understand the threat posed by computer worms, it is necessary to understand the classes of worms, the attackers who may employ them, and the potential payloads. This paper describes a preliminary taxonomy based on worm target discovery and selection strategies, worm carrier mechanisms, worm activation, possible payloads, and plausible attackers who would employ a worm.


DARPA Information Survivability Conference and Exposition | 2000

Evaluating intrusion detection systems: the 1998 DARPA off-line intrusion detection evaluation

Richard P. Lippmann; David J. Fried; Isaac Graf; Joshua W. Haines; Kristopher R. Kendall; David McClung; Dan Weber; Seth E. Webster; Dan Wyschogrod; Robert K. Cunningham; Marc A. Zissman

An intrusion detection evaluation test bed was developed which generated normal traffic similar to that on a government site containing hundreds of users on thousands of hosts. More than 300 instances of 38 different automated attacks were launched against victim UNIX hosts in seven weeks of training data and two weeks of test data. Six research groups participated in a blind evaluation, and results were analyzed for probe, denial-of-service (DoS), remote-to-local (R2L), and user-to-root (U2R) attacks. The best systems detected old attacks included in the training data at moderate detection rates ranging from 63% to 93% at a false alarm rate of 10 false alarms per day. Detection rates were much worse for new and novel R2L and DoS attacks included only in the test data. The best systems failed to detect roughly half of these new attacks, which included damaging access to root-level privileges by remote users. These results suggest that further research should focus on developing techniques to find new attacks instead of extending existing rule-based approaches.


Archive | 2002

Fusing A Heterogeneous Alert Stream Into Scenarios

Oliver Dain; Robert K. Cunningham

An algorithm for fusing the alerts produced by multiple heterogeneous intrusion detection systems is presented. The algorithm runs in real-time, combining the alerts into scenarios; each is composed of a sequence of alerts produced by a single actor or organization. The software is capable of discovering scenarios even if stealthy attack methods, such as forged IP addresses or long attack latencies, are employed. The algorithm generates scenarios by estimating the probability that a new alert belongs to a given scenario. Alerts are then added to the most likely candidate scenario. Two alternative probability estimation techniques are compared to an algorithm that builds scenarios using a set of rules. Both probability estimate approaches make use of training data to learn the appropriate probability measures. Our algorithm can determine the scenario membership of a new alert in time proportional to the number of candidate scenarios.
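The scenario-building loop described above can be sketched as follows. This is a simplified illustration, not the paper's algorithm: the membership probability here is a hand-written similarity heuristic, whereas the paper learns its probability estimates from training data, and `Alert`, `Scenario`, and the 0.3 threshold are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    src_ip: str
    dst_ip: str
    time: float  # seconds

@dataclass
class Scenario:
    alerts: list = field(default_factory=list)

def membership_prob(alert, scenario):
    """Toy stand-in for a learned probability estimate: score the new
    alert's similarity to the scenario's most recent alert."""
    last = scenario.alerts[-1]
    ip_match = 1.0 if alert.src_ip == last.src_ip else 0.2
    # Decay with the time gap, but keep long-latency attacks above zero.
    gap = alert.time - last.time
    time_score = 1.0 / (1.0 + gap / 3600.0)
    return ip_match * time_score

def fuse(alerts, threshold=0.3):
    """Assign each alert to its most likely scenario, or start a new one.
    The scan over candidates is linear, so cost per alert is proportional
    to the number of candidate scenarios."""
    scenarios = []
    for a in sorted(alerts, key=lambda x: x.time):
        scored = [(membership_prob(a, s), s) for s in scenarios]
        best = max(scored, default=(0.0, None), key=lambda p: p[0])
        if best[0] >= threshold:
            best[1].alerts.append(a)
        else:
            scenarios.append(Scenario(alerts=[a]))
    return scenarios
```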


Recent Advances in Intrusion Detection | 2000

Improving intrusion detection performance using keyword selection and neural networks

Richard P. Lippmann; Robert K. Cunningham

The most common computer intrusion detection systems detect signatures of known attacks by searching for attack-specific keywords in network traffic. Many of these systems suffer from high false-alarm rates (often hundreds of false alarms per day) and poor detection of new attacks. Poor performance can be improved using a combination of discriminative training and generic keywords. Generic keywords are selected to detect attack preparations, the actual break-in, and actions after the break-in. Discriminative training weights keyword counts to discriminate between the few attack sessions where keywords are known to occur and the many normal sessions where keywords may occur in other contexts. This approach was used to improve the baseline keyword intrusion detection system used to detect user-to-root attacks in the 1998 DARPA Intrusion Detection Evaluation. It reduced the false-alarm rate required to obtain 80% correct detections by two orders of magnitude to roughly one false alarm per day. The improved keyword system detects new as well as old attacks in this database and has roughly the same computation requirements as the original baseline system. Both generic keywords and discriminant training were required to obtain this large performance improvement.
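The combination of generic keyword counts and discriminative weighting can be illustrated with a toy single-neuron (logistic regression) sketch. This is an assumption for illustration only: the keyword list, learning rate, and training loop below are invented and do not reproduce the paper's keyword set or neural network.

```python
import math

# Illustrative stand-ins for generic attack keywords, not the paper's list.
KEYWORDS = ["passwd", "chmod", "root", "uudecode"]

def counts(session: str):
    """Count each generic keyword's occurrences in a session transcript."""
    return [session.count(k) for k in KEYWORDS]

def train(sessions, labels, epochs=200, lr=0.5):
    """Discriminative training: learn keyword weights that separate the
    few attack sessions (label 1) from the many normal ones (label 0)."""
    w = [0.0] * len(KEYWORDS)
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(map(counts, sessions), labels):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of the logistic loss
            b -= lr * g
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return w, b

def score(session, w, b):
    """Weighted keyword count squashed to an attack probability."""
    z = b + sum(wi * xi for wi, xi in zip(w, counts(session)))
    return 1.0 / (1.0 + math.exp(-z))
```

A session whose keyword counts resemble the attack sessions scores high, while keyword-free normal sessions score low; raising the decision threshold trades detections for false alarms.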


Military Communications Conference | 2006

Validating and Restoring Defense in Depth Using Attack Graphs

Richard P. Lippmann; Kyle Ingols; Chris E. Scott; Keith Piwowarski; Kendra Kratkiewicz; Mike Artz; Robert K. Cunningham

Defense in depth is a common strategy that uses layers of firewalls to protect supervisory control and data acquisition (SCADA) subnets and other critical resources on enterprise networks. A tool named NetSPA is presented that analyzes firewall rules and vulnerabilities to construct attack graphs. These show how inside and outside attackers can progress by successively compromising exposed vulnerable hosts with the goal of reaching critical internal targets. NetSPA generates attack graphs and automatically analyzes them to produce a small set of prioritized recommendations to restore defense in depth. Field trials on networks with up to 3,400 hosts demonstrate that firewalls often do not provide defense in depth due to misconfigurations and critical unpatched vulnerabilities on hosts. In all cases, a small number of recommendations was provided to restore defense in depth. Simulations on networks with up to 50,000 hosts demonstrate that this approach scales well to enterprise-size networks.
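The attack-graph idea can be sketched as a breadth-first expansion under a deliberately simple model in which an attacker compromises any reachable, remotely vulnerable host. NetSPA's actual firewall-rule and vulnerability analysis is far richer; the host names and data structures here are invented for the example.

```python
from collections import deque

def attack_graph(reachable, vulnerable, start):
    """Breadth-first sketch of attack-graph generation.
    reachable: dict host -> set of hosts its traffic can reach (firewall rules)
    vulnerable: set of hosts with a remotely exploitable vulnerability
    Returns dict compromised_host -> host it was compromised from."""
    compromised = {start: None}
    frontier = deque([start])
    while frontier:
        src = frontier.popleft()
        for dst in reachable.get(src, set()):
            if dst in vulnerable and dst not in compromised:
                compromised[dst] = src   # an edge in the attack graph
                frontier.append(dst)
    return compromised
```

Re-running the analysis after patching a host or tightening a firewall rule shows whether a critical target (e.g. a SCADA subnet) drops out of the attacker's reach, which is the essence of validating defense in depth.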


Workshop on Rapid Malcode | 2003

Detection of injected, dynamically generated, and obfuscated malicious code

Jesse C. Rabek; Roger I. Khazan; Scott M. Lewandowski; Robert K. Cunningham

This paper presents DOME, a host-based technique for detecting several general classes of malicious code in software executables. DOME uses static analysis to identify the locations (virtual addresses) of system calls within the software executables, and then monitors the executables at runtime to verify that every observed system call is made from a location identified using static analysis. The power of this technique is that it is simple, practical, applicable to real-world software, and highly effective against injected, dynamically generated, and obfuscated malicious code.
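The runtime half of DOME's check can be sketched in a few lines, assuming static analysis has already produced the set of legitimate call-site addresses. The addresses and helper name below are illustrative, not taken from the paper.

```python
# Hypothetical call-site addresses recovered by static analysis of the binary.
ALLOWED_CALL_SITES = {0x401000, 0x40152C}

def check_syscall(syscall_name, caller_address, allowed=ALLOWED_CALL_SITES):
    """Return True if the observed system call originates from a call site
    identified statically. Injected, dynamically generated, or obfuscated
    code issues system calls from addresses outside this set."""
    return caller_address in allowed
```

At runtime a monitor intercepts each system call, looks up the caller's virtual address, and flags any call whose site was not seen during static analysis.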


IEEE Aerospace Conference | 2002

LARIAT: Lincoln adaptable real-time information assurance testbed

Lee M. Rossey; Robert K. Cunningham; David J. Fried; Jesse C. Rabek; Richard P. Lippmann; Joshua W. Haines; Marc A. Zissman

The Lincoln adaptable real-time information assurance testbed, LARIAT, is an extension of the testbed created for DARPA 1998 and 1999 intrusion detection (ID) evaluations. LARIAT supports real-time, automated and quantitative evaluations of ID systems and other information assurance (IA) technologies. Components of LARIAT generate realistic background user traffic and real network attacks, verify attack success or failure, score ID system performance, and provide a graphical user interface for control and monitoring. Emphasis was placed on making LARIAT easy to adapt, configure and run without requiring a detailed understanding of the underlying complexity. LARIAT is currently being exercised at four sites and is undergoing continued development and refinement.


Neural Networks | 1995

Neural processing of targets in visible, multispectral IR and SAR imagery

Allen M. Waxman; Michael Seibert; Alan N. Gove; David A. Fay; Ann Marie Bernardon; Carol H. Lazott; William R. Steele; Robert K. Cunningham

We have designed and implemented computational neural systems for target enhancement, detection, learning and recognition in visible, multispectral infrared (IR), and synthetic aperture radar (SAR) imagery. The system architectures are motivated by designs of biological vision systems, drawing insights from retinal processing of contrast and color, occipital lobe processing of shading, color and contour, and temporal lobe processing of pattern and shape. Distinguishing among similar targets and accumulating evidence across image sequences are also described. Similar neurocomputational principles and modules are used across these various sensing domains. We show how 3D target learning and recognition from visible silhouettes and SAR return patterns are related. We show how models of contrast enhancement, contour, shading and color vision can be used to enhance targets in multispectral IR and SAR imagery, aiding in target detection.


IEEE High Performance Extreme Computing Conference | 2014

Computing on masked data: a high performance method for improving big data veracity

Jeremy Kepner; Vijay Gadepally; Peter Michaleas; Nabil Schear; Mayank Varia; Arkady Yerukhimovich; Robert K. Cunningham

The growing gap between data and users calls for innovative tools that address the challenges faced by big data volume, velocity and variety. Along with these standard three Vs of big data, an emerging fourth “V” is veracity, which addresses the confidentiality, integrity, and availability of the data. Traditional cryptographic techniques that ensure the veracity of data can have overheads that are too large to apply to big data. This work introduces a new technique called Computing on Masked Data (CMD), which improves data veracity by allowing computations to be performed directly on masked data and ensuring that only authorized recipients can unmask the data. Using the sparse linear algebra of associative arrays, CMD can be performed with significantly less overhead than other approaches while still supporting a wide range of linear algebraic operations on the masked data. Databases with strong support of sparse operations, such as SciDB or Apache Accumulo, are ideally suited to this technique. Examples are shown for the application of CMD to a complex DNA matching algorithm and to database operations over social media data.
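One ingredient of the CMD idea, a deterministic keyed mask that preserves equality so operations like matching and joins still work on masked values, can be sketched as follows. This is a simplification: the paper builds on the sparse associative-array algebra of databases such as SciDB and Apache Accumulo, and `masked_join` is an invented helper for the demo.

```python
import hmac
import hashlib

def mask(value: str, key: bytes) -> str:
    """Deterministic keyed mask (HMAC-SHA256): equal plaintexts map to
    equal masks, so equality-based operations still work, while only
    key holders can link a mask back to its plaintext."""
    return hmac.new(key, value.encode(), hashlib.sha256).hexdigest()

def masked_join(left, right, key):
    """Match two record sets on a masked field without unmasking;
    plaintexts are kept only on the (authorized) local side."""
    lm = {mask(v, key): v for v in left}
    rset = {mask(v, key) for v in right}
    return sorted(v for m, v in lm.items() if m in rset)
```

A DNA-matching workload, for instance, could compare masked sequence fragments this way: the server storing the masked data never sees the plaintext, yet the intersection is computed directly on the masks.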


Recent Advances in Intrusion Detection | 2010

Generating client workloads and high-fidelity network traffic for controllable, repeatable experiments in computer security

Charles V. Wright; Christopher Connelly; Timothy Braje; Jesse C. Rabek; Lee M. Rossey; Robert K. Cunningham

Rigorous scientific experimentation in system and network security remains an elusive goal. Recent work has outlined three basic requirements for experiments, namely that hypotheses must be falsifiable, experiments must be controllable, and experiments must be repeatable and reproducible. Despite their simplicity, these goals are difficult to achieve, especially when dealing with client-side threats and defenses, where user input is often required as part of the experiment. In this paper, we present techniques for making security experiments involving client-side desktop applications such as web browsers and PDF readers, as well as host-based firewalls and intrusion detection systems, more controllable and more easily repeatable. First, we present techniques for using statistical models of user behavior to drive real, binary, GUI-enabled application programs in place of a human user. Second, we present techniques based on adaptive replay of application dialog that allow us to quickly and efficiently reproduce reasonable mock-ups of remotely hosted applications to give the illusion of Internet connectedness on an isolated testbed. We demonstrate the utility of these techniques in an example experiment comparing the system resource consumption of a Windows machine running anti-virus protection versus an unprotected system.
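The first technique, driving applications from a statistical model of user behavior, can be sketched as a small Markov chain over user actions. The states, transition probabilities, and `simulate_session` helper are invented for illustration; the paper drives real GUI-enabled binaries rather than emitting action names.

```python
import random

# Hypothetical Markov model of user actions (probabilities per state sum to 1).
TRANSITIONS = {
    "start":        [("open_browser", 0.7), ("open_pdf", 0.3)],
    "open_browser": [("click_link", 0.6), ("close", 0.4)],
    "open_pdf":     [("scroll", 0.8), ("close", 0.2)],
    "click_link":   [("click_link", 0.5), ("close", 0.5)],
    "scroll":       [("scroll", 0.5), ("close", 0.5)],
}

def simulate_session(rng, max_steps=50):
    """Walk the chain, emitting the action sequence a GUI driver would replay."""
    state, actions = "start", []
    for _ in range(max_steps):
        choices = TRANSITIONS.get(state)
        if not choices:
            break
        r, acc = rng.random(), 0.0
        for nxt, p in choices:
            acc += p
            if r <= acc:
                state = nxt
                break
        actions.append(state)
        if state == "close":
            break
    return actions
```

Because the random source is seedable, the same session can be replayed exactly, which is what makes such model-driven workloads controllable and repeatable.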

Collaboration

Dive into Robert K. Cunningham's collaborations.

Top Co-Authors

Marc A. Zissman (Massachusetts Institute of Technology)
Richard P. Lippmann (Massachusetts Institute of Technology)
David J. Fried (Massachusetts Institute of Technology)
Seth E. Webster (Massachusetts Institute of Technology)
Arkady Yerukhimovich (Massachusetts Institute of Technology)
Benjamin Fuller (Massachusetts Institute of Technology)
Isaac Graf (Massachusetts Institute of Technology)
Oliver Dain (Massachusetts Institute of Technology)
Allen M. Waxman (Massachusetts Institute of Technology)
Dan Wyschogrod (Massachusetts Institute of Technology)