Publications


Featured research published by Richard P. Lippmann.


IEEE ASSP Magazine | 1987

An introduction to computing with neural nets

Richard P. Lippmann

Artificial neural net models have been studied for many years in the hope of achieving human-like performance in the fields of speech and image recognition. These models are composed of many nonlinear computational elements operating in parallel and arranged in patterns reminiscent of biological neural nets. Computational elements or nodes are connected via weights that are typically adapted during use to improve performance. There has been a recent resurgence in the field of artificial neural nets caused by new net topologies and algorithms, analog VLSI implementation techniques, and the belief that massive parallelism is essential for high performance speech and image recognition. This paper provides an introduction to the field of artificial neural nets by reviewing six important neural net models that can be used for pattern classification. These nets are highly parallel building blocks that illustrate neural net components and design principles and can be used to construct more complex systems. In addition to describing these nets, a major emphasis is placed on exploring how some existing classification and clustering algorithms can be performed using simple neuron-like components. Single-layer nets can implement algorithms required by Gaussian maximum-likelihood classifiers and optimum minimum-error classifiers for binary patterns corrupted by noise. More generally, the decision regions required by any classification algorithm can be generated in a straightforward manner by three-layer feed-forward nets.
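
The abstract's claim that a single-layer net can realize a Gaussian maximum-likelihood classifier can be made concrete with a small sketch. The following Python snippet is only an illustration under the usual assumptions of Gaussian classes with a shared covariance matrix, not code from the paper: it builds one weight vector and bias per class so that a single layer of weighted sums reproduces the maximum-likelihood decision.

```python
# Minimal sketch (not from the paper): a single-layer "net" whose weights
# realize the Gaussian maximum-likelihood classifier for classes sharing a
# covariance matrix. All names and the example data are illustrative.
import numpy as np

def gaussian_ml_layer(means, cov, priors):
    """Return (W, b) so that scores = x @ W.T + b ranks classes like the
    Gaussian maximum-likelihood discriminant with a shared covariance."""
    cov_inv = np.linalg.inv(cov)
    W = means @ cov_inv                                   # one weight vector per class
    b = -0.5 * np.einsum('ij,ij->i', W, means) + np.log(priors)
    return W, b

# Two Gaussian classes in 2-D.
means = np.array([[0.0, 0.0], [2.0, 2.0]])
cov = np.eye(2)
priors = np.array([0.5, 0.5])
W, b = gaussian_ml_layer(means, cov, priors)

x = np.array([1.5, 1.8])
scores = x @ W.T + b          # a single layer: weighted sums plus biases
print("predicted class:", int(np.argmax(scores)))
```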


IEEE Symposium on Security and Privacy | 2002

Automated generation and analysis of attack graphs

Oleg Sheyner; Joshua W. Haines; Somesh Jha; Richard P. Lippmann; Jeannette M. Wing

An integral part of modeling the global view of network security is constructing attack graphs. Manual attack graph construction is tedious, error-prone, and impractical for attack graphs larger than a hundred nodes. In this paper we present an automated technique for generating and analyzing attack graphs. We base our technique on symbolic model checking algorithms, letting us construct attack graphs automatically and efficiently. We also describe two analyses to help decide which attacks would be most cost-effective to guard against. We implemented our technique in a tool suite and tested it on a small network example, which includes models of a firewall and an intrusion detection system.
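
The paper's technique relies on symbolic model checking; the sketch below is only a naive explicit-state illustration of the same idea, with a made-up four-exploit network: enumerate the states an attacker can reach by chaining exploits whose preconditions are satisfied, then keep the transitions that lie on some path to the violated security goal.

```python
# Hedged sketch (not the paper's tool chain): a brute-force search over
# attacker states. Exploit names, facts, and the goal are invented.
from collections import deque

# Each exploit: (name, preconditions, postconditions) over a set of facts.
exploits = [
    ("sshd_bof(attacker->web)",  {"reach(web)"},            {"root(web)"}),
    ("ftp_rhosts(web->db)",      {"root(web)", "reach(db)"}, {"trust(db)"}),
    ("rsh_login(db)",            {"trust(db)"},              {"user(db)"}),
    ("local_bof(db)",            {"user(db)"},               {"root(db)"}),
]
initial = frozenset({"reach(web)", "reach(db)"})
goal = "root(db)"                           # the safety property being violated

# Forward search over attacker states.
edges, frontier, seen = [], deque([initial]), {initial}
while frontier:
    state = frontier.popleft()
    for name, pre, post in exploits:
        if pre <= state and not post <= state:
            nxt = frozenset(state | post)
            edges.append((state, name, nxt))
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)

# The attack graph keeps only transitions that lie on some path to the goal.
on_path = {s for s in seen if goal in s}
changed = True
while changed:
    changed = False
    for src, _, dst in edges:
        if dst in on_path and src not in on_path:
            on_path.add(src)
            changed = True
for src, name, dst in edges:
    if src in on_path and dst in on_path:
        print(name)
```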


Neural Computation | 1991

Neural Network Classifiers Estimate Bayesian a posteriori Probabilities

Michael D. Richard; Richard P. Lippmann

Many neural network classifiers provide outputs which estimate Bayesian a posteriori probabilities. When the estimation is accurate, network outputs can be treated as probabilities and sum to one. Simple proofs show that Bayesian probabilities are estimated when desired network outputs are 1 of M (one output unity, all others zero) and a squared-error or cross-entropy cost function is used. Results of Monte Carlo simulations performed using multilayer perceptron (MLP) networks trained with backpropagation, radial basis function (RBF) networks, and high-order polynomial networks graphically demonstrate that network outputs provide good estimates of Bayesian probabilities. Estimation accuracy depends on network complexity, the amount of training data, and the degree to which training data reflect true likelihood distributions and a priori class probabilities. Interpretation of network outputs as Bayesian probabilities allows outputs from multiple networks to be combined for higher level decision making, simplifies creation of rejection thresholds, makes it possible to compensate for differences between pattern class probabilities in training and test data, allows outputs to be used to minimize alternative risk functions, and suggests alternative measures of network performance.
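
A minimal numpy sketch of the paper's central observation follows. It uses a linear softmax "network" rather than the MLP, RBF, and polynomial networks studied in the paper, synthetic one-dimensional Gaussian classes, and 1-of-M targets with a cross-entropy cost; after training, the network output at a test point should be close to the analytically computed Bayesian a posteriori probability.

```python
# Illustrative sketch (assumptions: synthetic 1-D Gaussian classes and a
# linear softmax "network", not the paper's MLP/RBF nets): with 1-of-M
# targets and a cross-entropy cost, the trained outputs approximate the
# Bayesian a posteriori probabilities P(class | x).
import numpy as np

rng = np.random.default_rng(0)
n = 4000
y = rng.integers(0, 2, n)                                     # equal priors
x = rng.normal(loc=np.where(y == 1, 1.0, -1.0), scale=1.0)    # class-conditional Gaussians

# One-hot (1-of-M) targets and a bias feature.
T = np.eye(2)[y]
X = np.column_stack([x, np.ones(n)])

W = np.zeros((2, 2))
for _ in range(2000):                         # gradient descent on cross-entropy
    z = X @ W.T
    p = np.exp(z - z.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.1 * (p - T).T @ X / n

# Compare the net output with the true posterior at a test point.
x0 = 0.3
z0 = np.array([x0, 1.0]) @ W.T
p0 = np.exp(z0 - z0.max()); p0 /= p0.sum()
def norm_pdf(v, mu): return np.exp(-0.5 * (v - mu) ** 2) / np.sqrt(2 * np.pi)
true_post1 = norm_pdf(x0, 1.0) / (norm_pdf(x0, 1.0) + norm_pdf(x0, -1.0))
print("net output P(class 1 | x):", round(float(p0[1]), 3),
      " true posterior:", round(float(true_post1), 3))
```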


Recent Advances in Intrusion Detection | 2000

The 1999 DARPA off-line intrusion detection evaluation

Richard P. Lippmann; Joshua W. Haines; David J. Fried; Jonathan Korba; Kumar Das

An important goal of the ongoing DARPA intrusion detection evaluations is to promote development of intrusion detection systems that can detect stealthy attacks which might be launched by well-funded hostile nations or terrorist organizations. This goal can only be reached if such stealthy attacks are included in the DARPA evaluations. This report describes new and known approaches and strategies that were used to make attacks stealthy for the 1999 DARPA Intrusion Detection Evaluation. It explains why some attacks used in the initial 1998 evaluation were easy to detect, presents general guidelines that were followed for the 1999 evaluation, includes many examples of stealthy scripts, and provides Perl and shell scripts that can be used to implement stealthy procedures.


Speech Communication | 1997

Speech recognition by machines and humans

Richard P. Lippmann

This paper reviews past work comparing modern speech recognition systems and humans to determine how far recent dramatic advances in technology have progressed toward the goal of human-like performance. Comparisons use six modern speech corpora with vocabularies ranging from 10 to more than 65,000 words and content ranging from read isolated words to spontaneous conversations. Error rates of machines are often more than an order of magnitude greater than those of humans for quiet, wideband, read speech. Machine performance degrades further below that of humans in noise, with channel variability, and for spontaneous speech. Humans can also recognize quiet, clearly spoken nonsense syllables and nonsense sentences with little high-level grammatical information. These comparisons suggest that the human-machine performance gap can be reduced by basic research on improving low-level acoustic-phonetic modeling, on improving robustness to noise and channel variability, and on more accurately modeling spontaneous speech.


DARPA Information Survivability Conference and Exposition | 2000

Evaluating intrusion detection systems: the 1998 DARPA off-line intrusion detection evaluation

Richard P. Lippmann; David J. Fried; Isaac Graf; Joshua W. Haines; Kristopher R. Kendall; David McClung; Dan Weber; Seth E. Webster; Dan Wyschogrod; Robert K. Cunningham; Marc A. Zissman

An intrusion detection evaluation test bed was developed which generated normal traffic similar to that on a government site containing hundreds of users on thousands of hosts. More than 300 instances of 38 different automated attacks were launched against victim UNIX hosts in seven weeks of training data and two weeks of test data. Six research groups participated in a blind evaluation, and results were analyzed for probe, denial-of-service (DoS), remote-to-local (R2L), and user-to-root (U2R) attacks. The best systems detected old attacks included in the training data at moderate detection rates ranging from 63% to 93% at a false alarm rate of 10 false alarms per day. Detection rates were much worse for new and novel R2L and DoS attacks included only in the test data. The best systems failed to detect roughly half of these new attacks, which included damaging attacks in which remote users gained root-level privileges. These results suggest that further research should focus on developing techniques to find new attacks instead of extending existing rule-based approaches.
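
As a rough illustration of how a figure such as "detection rate at 10 false alarms per day" is obtained, the sketch below sweeps an alert-score threshold over entirely synthetic scores; the data, score distributions, and two-week window are assumptions, not the evaluation's real alerts.

```python
# Hedged sketch (illustrative, not the evaluation's actual scoring code):
# reading a detection rate at a fixed false-alarm budget off a ROC-style
# threshold sweep. All scores and counts below are made up.
import numpy as np

rng = np.random.default_rng(1)
days = 14                                         # two weeks of test data
attack_scores = rng.normal(2.0, 1.0, 300)         # scores on attack instances
normal_scores = rng.normal(0.0, 1.0, 20000)       # scores on benign sessions

def detection_rate_at_budget(attack_scores, normal_scores, days, fa_per_day):
    """Pick the lowest threshold that keeps false alarms within budget,
    then report the fraction of attacks scoring above it."""
    allowed = int(fa_per_day * days)
    threshold = np.sort(normal_scores)[::-1][allowed]   # just below the allowed top alarms
    return float((attack_scores > threshold).mean())

print("detection rate at 10 FA/day:",
      round(detection_rate_at_budget(attack_scores, normal_scores, days, 10), 2))
```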


International Conference on Acoustics, Speech, and Signal Processing | 1987

Multi-style training for robust isolated-word speech recognition

Richard P. Lippmann; Edward A. Martin; Douglas B. Paul

A new training procedure called multi-style training has been developed to improve performance when a recognizer is used under stress or in high noise but cannot be trained in these conditions. Instead of speaking normally during training, talkers use different, easily produced talking styles. This technique was tested using a speech database that included speech produced under stress during a workload task and while intense noise was presented through earphones. A continuous-distribution, talker-dependent hidden Markov model (HMM) recognizer was trained both normally (five normally spoken tokens) and with multi-style training (one token each from normal, fast, clear, loud, and question-pitch talking styles). With multi-style training, the average error rate under stress and normal conditions fell by more than a factor of two, and the average error rate under conditions sampled during training fell by a factor of four.
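
A toy sketch of the training-set construction is shown below; the styles, one-dimensional "acoustic" feature, and single-Gaussian word models are all invented for illustration and are far simpler than the paper's HMM recognizer. The point is only that one token per talking style exposes the word models to more of the variability met under stressed test conditions than five normally spoken tokens do.

```python
# Entirely synthetic sketch of multi-style training-set construction:
# swap five normally spoken tokens per word for one token from each of
# five talking styles. Styles, features, and words are made up.
import numpy as np

rng = np.random.default_rng(2)
styles = {"normal": 0.0, "fast": 0.6, "clear": -0.4, "loud": 1.0, "question": 0.3}
word_means = {"go": 0.0, "stop": 1.8}             # 1-D "acoustic" feature per word

def token(word, style):                           # one training or test token
    return word_means[word] + styles[style] + rng.normal(0, 0.3)

def fit(tokens):                                  # single-Gaussian word model
    return np.mean(tokens), np.std(tokens) + 1e-3

def classify(x, models):
    return min(models, key=lambda w: abs(x - models[w][0]) / models[w][1])

# Normal training: five normal tokens; multi-style: one token per style.
normal_models = {w: fit([token(w, "normal") for _ in range(5)]) for w in word_means}
multi_models  = {w: fit([token(w, s) for s in styles]) for w in word_means}

# Test under a "stressed" condition, approximated here by loud/fast tokens.
tests = [(w, token(w, s)) for w in word_means for s in ("loud", "fast") for _ in range(200)]
for name, models in (("normal-trained", normal_models), ("multi-style", multi_models)):
    errors = sum(classify(x, models) != w for w, x in tests)
    print(name, "error rate:", round(errors / len(tests), 3))
```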


Annual Computer Security Applications Conference | 2006

Practical Attack Graph Generation for Network Defense

Kyle Ingols; Richard P. Lippmann; Keith Piwowarski

Attack graphs are a valuable tool for network defenders, illustrating paths an attacker can use to gain access to a targeted network. Defenders can then focus their efforts on patching the vulnerabilities and configuration errors that allow the attackers the greatest amount of access. We have created a new type of attack graph, the multiple-prerequisite graph, that scales nearly linearly as the size of a typical network increases. We have built a prototype system using this graph type. The prototype uses readily available source data to automatically compute network reachability, classify vulnerabilities, build the graph, and recommend actions to improve network security. We have tested the prototype on an operational network with over 250 hosts, where it helped to discover a previously unknown configuration error. It has processed complex simulated networks with over 50,000 hosts in under four minutes.
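
The sketch below loosely illustrates the multiple-prerequisite idea as described in the abstract; the node types, host names, and vulnerabilities are invented, and this is not the prototype's data model. State nodes grant prerequisite nodes, prerequisites enable vulnerability-instance nodes, and instances yield new state nodes, with each node expanded only once, which is what keeps the graph close to linear in network size.

```python
# Hedged sketch of a multiple-prerequisite-style graph (illustrative data,
# not the prototype's real model): states grant prerequisites, prerequisites
# enable vulnerability instances, instances yield new states.
from collections import deque

# prerequisite -> vulnerability instances it enables and the state they yield
unlocks = {
    "reach(web:80)":   [("CVE-A on web", "root@web")],
    "reach(db:1433)":  [("CVE-B on db", "user@db")],
    "cred(db_admin)":  [("login on db", "root@db")],
}
# state -> prerequisites the attacker gains in that state
grants = {
    "attacker":  ["reach(web:80)"],
    "root@web":  ["reach(db:1433)", "cred(db_admin)"],
    "user@db":   [],
    "root@db":   [],
}

edges, seen, queue = [], {"attacker"}, deque(["attacker"])
used_prereqs = set()
while queue:
    state = queue.popleft()
    for prereq in grants[state]:
        edges.append((state, prereq))
        if prereq in used_prereqs:
            continue                      # each prerequisite is expanded once
        used_prereqs.add(prereq)
        for instance, new_state in unlocks.get(prereq, []):
            edges.append((prereq, instance))
            edges.append((instance, new_state))
            if new_state not in seen:
                seen.add(new_state)
                queue.append(new_state)

for src, dst in edges:
    print(f"{src} -> {dst}")
```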


Recent Advances in Intrusion Detection | 2000

Improving intrusion detection performance using keyword selection and neural networks

Richard P. Lippmann; Robert K. Cunningham

The most common computer intrusion detection systems detect signatures of known attacks by searching for attack-specific keywords in network traffic. Many of these systems suffer from high false-alarm rates (often hundreds of false alarms per day) and poor detection of new attacks. Poor performance can be improved using a combination of discriminative training and generic keywords. Generic keywords are selected to detect attack preparations, the actual break-in, and actions after the break-in. Discriminative training weights keyword counts to discriminate between the few attack sessions where keywords are known to occur and the many normal sessions where keywords may occur in other contexts. This approach was used to improve the baseline keyword intrusion detection system used to detect user-to-root attacks in the 1998 DARPA Intrusion Detection Evaluation. It reduced the false-alarm rate required to obtain 80% correct detections by two orders of magnitude, to roughly one false alarm per day. The improved keyword system detects new as well as old attacks in this database and has roughly the same computation requirements as the original baseline system. Both generic keywords and discriminative training were required to obtain this large performance improvement.
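
A small sketch of the general approach follows; the keyword list, example sessions, and logistic-regression form of the discriminative weighting are assumptions for illustration, not the paper's exact system. Keyword counts are computed per session, and weights are trained so that keywords that also occur in normal sessions contribute less to the attack score.

```python
# Illustrative sketch (keywords, sessions, and the classifier form are
# assumptions): count generic keywords per session, then learn weights on
# those counts so that keywords common in normal sessions are down-weighted.
import numpy as np

keywords = ["passwd", "uname", "chmod", "xterm", "root"]

def counts(session_text):
    return np.array([session_text.count(k) for k in keywords], dtype=float)

sessions = [
    ("cat /etc/passwd; gcc sploit.c; chmod +x a.out; ./a.out; id root", 1),
    ("uname -a; wget x.c; chmod 755 x; ./x; whoami", 1),
    ("ls; cd src; make; less README; exit", 0),
    ("grep passwd notes.txt; man chmod; exit", 0),   # benign uses of keywords
]
X = np.array([counts(s) for s, _ in sessions])
y = np.array([label for _, label in sessions], dtype=float)

# Logistic-regression weighting of keyword counts (full-batch gradient descent).
w, b = np.zeros(len(keywords)), 0.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * float(np.mean(p - y))

test = "uname -a; cat /etc/passwd; chmod u+s /bin/sh"
score = 1.0 / (1.0 + np.exp(-(counts(test) @ w + b)))
print("attack score for new session:", round(float(score), 2))
```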


Recent Advances in Intrusion Detection | 2000

Analysis and Results of the 1999 DARPA Off-Line Intrusion Detection Evaluation

Richard P. Lippmann; Joshua W. Haines; David J. Fried; Jonathan Korba; Kumar Das

Eight sites participated in the second DARPA off-line intrusion detection evaluation in 1999. Three weeks of training and two weeks of test data were generated on a test bed that emulates a small government site. More than 200 instances of 58 attack types were launched against victim UNIX and Windows NT hosts. False alarm rates were low (less than 10 per day). Best detection was provided by network-based systems for old probe and old denial-of-service (DoS) attacks and by host-based systems for Solaris user-to-root (U2R) attacks. Best overall performance would have been provided by a combined system that used both host- and network-based intrusion detection. Detection accuracy was poor for previously unseen new, stealthy, and Windows NT attacks. Ten of the 58 attack types were completely missed by all systems. Systems missed attacks because protocols and TCP services were not analyzed at all or to the depth required, because signatures for old attacks did not generalize to new attacks, and because auditing was not available on all hosts.

Collaboration


An overview of Richard P. Lippmann's collaborations.

Top Co-Authors

Kyle Ingols, Massachusetts Institute of Technology
Robert K. Cunningham, Massachusetts Institute of Technology
David J. Fried, Massachusetts Institute of Technology
Joshua W. Haines, Massachusetts Institute of Technology
Seth E. Webster, Massachusetts Institute of Technology
Eric I. Chang, Massachusetts Institute of Technology
Marc A. Zissman, Massachusetts Institute of Technology
Stephen W. Boyer, Massachusetts Institute of Technology
William Y. Huang, Massachusetts Institute of Technology
Charles R. Jankowski, Massachusetts Institute of Technology