Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where John McHugh is active.

Publication


Featured research published by John McHugh.


ACM Transactions on Information and System Security | 2000

Testing intrusion detection systems: a critique of the 1998 and 1999 DARPA intrusion detection system evaluations as performed by Lincoln Laboratory

John McHugh

In 1998 and again in 1999, the Lincoln Laboratory of MIT conducted a comparative evaluation of intrusion detection systems (IDSs) developed under DARPA funding. While this evaluation represents a significant and monumental undertaking, there are a number of issues associated with its design and execution that remain unsettled. Some methodologies used in the evaluation are questionable and may have biased its results. One problem is that the evaluators have published relatively little concerning some of the more critical aspects of their work, such as validation of their test data. The appropriateness of the evaluation techniques used needs further investigation. The purpose of this article is to attempt to identify the shortcomings of the Lincoln Lab effort in the hope that future efforts of this kind will be placed on a sounder footing. Some of the problems that the article points out might well be resolved if the evaluators were to publish a detailed description of their procedures and the rationale that led to their adoption, but other problems would clearly remain.


IEEE Software | 2000

Defending Yourself: The Role of Intrusion Detection Systems

John McHugh; Alan M. Christie; Julia H. Allen

Intrusion detection systems are an important component of defensive measures protecting computer systems and networks from abuse. This article considers the role of IDSs in an organization's overall defensive posture and provides guidelines for IDS deployment, operation, and maintenance.


IEEE Computer | 2000

Windows of vulnerability: a case study analysis

William A. Arbaugh; William L. Fithen; John McHugh

We propose a life-cycle model for system vulnerabilities, then apply it to three case studies to reveal how systems often remain vulnerable long after security fixes are available. For each case, we provide background information about the vulnerability, such as how attackers exploited it and which systems were affected. We then tie the case to the life-cycle model by identifying the dates for each state within the model. Finally, we use a histogram of reported intrusions to show the life of the vulnerability, and we conclude with an analysis specific to the particular vulnerability.
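To make the idea of tying a case to the life-cycle model concrete, here is a minimal sketch of recording per-state dates for a single vulnerability and deriving its exposure window; the state names, field names, and dates are hypothetical illustrations, not the article's data.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch of the life-cycle bookkeeping described above.
# State names and dates are illustrative, not taken from the article.
@dataclass
class VulnerabilityCase:
    name: str
    discovered: date               # flaw first found
    disclosed: date                # flaw made public
    fix_released: date             # vendor fix available
    last_reported_intrusion: date  # latest intrusion seen in incident reports

    def exposure_after_fix_days(self) -> int:
        """Days systems kept being attacked after a fix existed,
        i.e. how long the window of vulnerability stayed open."""
        return (self.last_reported_intrusion - self.fix_released).days

case = VulnerabilityCase("example-vuln",
                         date(1998, 1, 5), date(1998, 2, 1),
                         date(1998, 2, 20), date(1999, 6, 1))
print(case.exposure_after_fix_days())
```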


International Journal of Information Security | 2001

Intrusion and intrusion detection

John McHugh

Assurance technologies for computer security have failed to have significant impacts in the marketplace, with the result that most of the computers connected to the internet are vulnerable to attack. This paper looks at the problem of malicious users from both a historical and practical standpoint. It traces the history of intrusion and intrusion detection from the early 1970s to the present day, beginning with a historical overview. The paper describes the two primary intrusion detection techniques, anomaly detection and signature-based misuse detection, in some detail and describes a number of contemporary research and commercial intrusion detection systems. It ends with a brief discussion of the problems associated with evaluating intrusion detection systems and a discussion of the difficulties associated with making further progress in the field. With respect to the latter, it notes that, like many fields, intrusion detection has been based on a combination of intuition and brute-force techniques. We suspect that these have carried the field as far as they can and that further significant progress will depend on the development of an underlying theoretical basis for the field.


IEEE Symposium on Security and Privacy | 2001

A trend analysis of exploitations

Hilary K. Browne; William A. Arbaugh; John McHugh; William L. Fithen

We have conducted an empirical study of a number of computer security exploits and determined that the rates at which incidents involving the exploit are reported to CERT can be modeled using a common mathematical framework. Data associated with three significant exploits involving vulnerabilities in phf, imap, and bind can all be modeled using the formula C = I + S·√M, where C is the cumulative count of reported incidents, M is the time since the start of the exploit cycle, and I and S are the regression coefficients determined by analysis of the incident report data. Further analysis of two additional exploits involving vulnerabilities in mountd and statd confirms the model. We believe that the models will aid in predicting the severity of subsequent vulnerability exploitations, based on the rate of early incident reports.
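A minimal sketch of the regression described above, fitting C = I + S·√M by ordinary least squares; the incident counts below are invented for illustration and are not the CERT data used in the paper.

```python
import numpy as np

# Hypothetical cumulative incident counts by months since the start of
# the exploit cycle (illustrative only, not the paper's CERT data).
M = np.array([1, 2, 3, 4, 6, 9, 12, 18, 24], dtype=float)
C = np.array([5, 9, 11, 14, 17, 22, 26, 32, 37], dtype=float)

# C = I + S * sqrt(M) is linear in sqrt(M), so ordinary least squares
# on the transformed regressor recovers the coefficients I and S.
X = np.column_stack([np.ones_like(M), np.sqrt(M)])
(I, S), *_ = np.linalg.lstsq(X, C, rcond=None)

print(f"I = {I:.2f}, S = {S:.2f}")
print(f"predicted cumulative count at M = 36: {I + S * np.sqrt(36):.1f}")
```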


Information Hiding | 2002

Hiding Intrusions: From the Abnormal to the Normal and Beyond

Kymie M. C. Tan; John McHugh; Kevin S. Killourhy

Anomaly-based intrusion detection has been held out as the best (perhaps only) hope for detecting previously unknown exploits. We examine two anomaly detectors based on the analysis of sequences of system calls and demonstrate that the general information hiding paradigm also applies in this area. Given even a fairly restrictive definition of normal behavior, we were able to devise versions of several exploits that escape detection. This is done in several ways: by modifying the exploit so that its manifestations match normal behavior, by making a serious attack have the manifestations of a less serious but similar attack, and by making the attack look like an entirely different attack. We speculate that similar attacks are possible against other anomaly-based IDSs and that the results have implications for other areas of information hiding.
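For readers unfamiliar with this class of detector, here is a minimal sketch of a fixed-window system-call sequence anomaly detector in the spirit of the ones examined; the window size, traces, and scoring are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of a fixed-window system-call sequence anomaly detector.
# Window size and traces are illustrative assumptions, not the paper's setup.

def windows(trace, size=3):
    """Return every contiguous subsequence of `size` calls from a trace."""
    return {tuple(trace[i:i + size]) for i in range(len(trace) - size + 1)}

def train(normal_traces, size=3):
    """Build the set of call sequences considered 'normal'."""
    normal = set()
    for trace in normal_traces:
        normal |= windows(trace, size)
    return normal

def anomaly_score(trace, normal, size=3):
    """Fraction of windows in `trace` never seen during training."""
    seen = windows(trace, size)
    return len(seen - normal) / max(len(seen), 1)

normal_db = train([["open", "read", "write", "close"],
                   ["open", "read", "read", "close"]])
# An exploit rewritten so its windows match normal behavior scores 0.0,
# which is exactly the kind of evasion the paper demonstrates.
print(anomaly_score(["open", "read", "write", "close"], normal_db))  # 0.0
print(anomaly_score(["open", "exec", "socket", "close"], normal_db))  # 1.0
```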


Recent Advances in Intrusion Detection | 2000

The 1998 Lincoln Laboratory IDS Evaluation

John McHugh

In 1998 (and again in 1999), the Lincoln Laboratory of MIT conducted a comparative evaluation of intrusion detection systems developed under DARPA funding. While this evaluation represents a significant and monumental undertaking, there are a number of unresolved issues associated with its design and execution. Some of the methodologies used in the evaluation are questionable and may have biased its results. One of the problems with the evaluation is that the evaluators have published relatively little concerning some of the more critical aspects of their work, such as validation of their test data. The purpose of this paper is to attempt to identify the shortcomings of the Lincoln Lab effort in the hope that future efforts of this kind will be placed on a sounder footing. Some of the problems that the paper points out might well be resolved if the evaluators publish a detailed description of their procedures and the rationale that led to their adoption, but other problems clearly remain.


New Security Paradigms Workshop | 2003

Locality: a new paradigm for thinking about normal behavior and outsider threat

John McHugh; Carrie Gates

Locality is suggested as a unifying concept for understanding the normal behavior of benign users of computer systems, a paradigm that will support the detection of malicious anomalous behaviors. The paper notes that locality appears in many dimensions and applies to such diverse mechanisms as the working set of IP addresses contacted during a web browsing session, the set of email addresses with which one customarily corresponds, and the way in which pages are fetched from a web site. In every case, intrusive behaviors that violate locality are known to exist, and in some cases the violation is necessary for the intrusive behavior to achieve its goal. If this observation holds up under further investigation, we will have a powerful way of thinking about security and intrusive activity.
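One way to make the working-set dimension of locality concrete is the following sketch, which maintains a bounded set of recently contacted destinations and flags traffic that falls mostly outside it; the capacity, window, and alarm threshold are hypothetical, not values from the paper.

```python
from collections import OrderedDict

# Illustrative sketch of a working-set locality monitor; capacity,
# window, and alarm threshold are assumptions, not the paper's values.
class LocalityMonitor:
    def __init__(self, capacity=64, window=50, alarm_fraction=0.8):
        self.working_set = OrderedDict()  # destination -> None, in LRU order
        self.capacity = capacity
        self.window = window
        self.alarm_fraction = alarm_fraction
        self.recent_misses = []

    def observe(self, destination) -> bool:
        """Record one contacted destination; return True when recent
        activity mostly violates locality (e.g. a scanning sweep)."""
        miss = destination not in self.working_set
        self.working_set[destination] = None
        self.working_set.move_to_end(destination)
        if len(self.working_set) > self.capacity:
            self.working_set.popitem(last=False)  # evict least recently used

        self.recent_misses = (self.recent_misses + [miss])[-self.window:]
        return (len(self.recent_misses) == self.window and
                sum(self.recent_misses) / self.window >= self.alarm_fraction)

monitor = LocalityMonitor(capacity=8, window=10, alarm_fraction=0.8)
# Revisiting a small set of hosts stays quiet; sweeping many fresh
# addresses eventually trips the alarm.
for i in range(100):
    alarmed = monitor.observe(f"10.0.0.{i}")
print(alarmed)  # True: a scan violates locality
```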


IEEE Symposium on Security and Privacy | 1987

Coding for a Believable Specification to Implementation Mapping

William D. Young; John McHugh

One criterion for Beyond A1 certification according to the DoD Trusted Computer System Evaluation Criteria will be code-level verification. We argue that, while verification at the actual code level may be infeasible for large secure systems, it is possible to push the verification to a low level of abstraction and then map the specification in an intuitive manner to the source code. Providing a suitable mapping requires adhering to a strict discipline on both the specification and code sides. We discuss the issues involved in this problem, particularizing the discussion to a mapping from Gypsy specifications to C code.


Annual Computer Security Applications Conference | 1989

A risk driven process model for the development of trusted systems

Ann B. Marmor-Squires; John McHugh; Martha Branstad; Bonnie P. Danner; Lou Nagy; Pat Rougeau; Daniel F. Sterne

This paper presents the initial results of a DARPA-funded research effort to define a development paradigm for high-performance trusted systems in Ada. The paradigm is aimed at improving the construction process and the future products of Ada systems that require both broad trust and high performance. The need for a process model and the notions of trust and assurance are reviewed. The foundation for the process model and its elements are presented. The process model is contrasted with traditional development approaches. The combination of a risk driven approach with the integration of trust and performance engineering into a unified whole appears to offer substantial advantages to system builders.

Collaboration


Dive into John McHugh's collaborations.

Top Co-Authors

Nancy R. Mead, Software Engineering Institute
Richard C. Linger, Carnegie Mellon University
Xinming Ou, University of South Florida
Alan M. Christie, Carnegie Mellon University
Howard F. Lipson, Software Engineering Institute