Steven A. Hofmeyr
Lawrence Berkeley National Laboratory
Publication
Featured research published by Steven A. Hofmeyr.
Journal of Computer Security | 1998
Steven A. Hofmeyr; Stephanie Forrest; Anil Somayaji
A method is introduced for detecting intrusions at the level of privileged processes. Evidence is given that short sequences of system calls executed by running processes are a good discriminator between normal and abnormal operating characteristics of several common UNIX programs. Normal behavior is collected in two ways: synthetically, by exercising as many normal modes of usage of a program as possible, and in a live user environment by tracing the actual execution of the program. In the former case several types of intrusive behavior were studied; in the latter case, results were analyzed for false positives.
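The detection method summarized above can be sketched as follows; the window length and the toy traces are illustrative assumptions, not data from the paper:

```python
from collections import deque

def sequences(trace, k=6):
    """Slide a window of length k over a system-call trace,
    yielding each short sequence as a tuple."""
    win = deque(maxlen=k)
    for call in trace:
        win.append(call)
        if len(win) == k:
            yield tuple(win)

def build_normal_db(traces, k=6):
    """Collect every length-k sequence observed during normal runs."""
    db = set()
    for t in traces:
        db.update(sequences(t, k))
    return db

def anomaly_score(trace, db, k=6):
    """Fraction of the trace's length-k sequences absent from the
    normal database; 0.0 means the trace looks entirely normal."""
    seqs = list(sequences(trace, k))
    if not seqs:
        return 0.0
    misses = sum(1 for s in seqs if s not in db)
    return misses / len(seqs)

# Hypothetical traces for illustration only.
normal = [["open", "read", "mmap", "read", "close", "exit"],
          ["open", "mmap", "read", "close", "exit", "exit"]]
db = build_normal_db(normal, k=3)
intrusive = ["open", "execve", "socket", "read", "close", "exit"]
print(anomaly_score(intrusive, db, k=3))  # → 0.75
```

Three of the intrusive trace's four length-3 sequences never occur in the normal database, so the score is 0.75, while traces drawn from normal behavior score 0.0.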
Communications of The ACM | 1997
Stephanie Forrest; Steven A. Hofmeyr; Anil Somayaji
This review describes a body of work on computational immune systems that behave analogously to the natural immune system. These artificial immune systems (AIS) simulate the behavior of the natural immune system and in some cases have been used to solve practical engineering problems such as computer security. AIS have several strengths that can complement wet lab immunology. It is easier to conduct simulation experiments and to vary experimental conditions, for example, to rule out hypotheses; it is easier to isolate a single mechanism to test hypotheses about how it functions; agent-based models of the immune system can integrate data from several different experiments into a single in silico experimental system.
Evolutionary Computation | 2000
Steven A. Hofmeyr; Stephanie Forrest
An artificial immune system (ARTIS) is described which incorporates many properties of natural immune systems, including diversity, distributed computation, error tolerance, dynamic learning and adaptation, and self-monitoring. ARTIS is a general framework for a distributed adaptive system and could, in principle, be applied to many domains. In this paper, ARTIS is applied to computer security in the form of a network intrusion detection system called LISYS. LISYS is described and shown to be effective at detecting intrusions, while maintaining low false positive rates. Finally, similarities and differences between ARTIS and Holland's classifier systems are discussed.
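A minimal sketch of the negative-selection idea underlying ARTIS/LISYS, assuming an r-contiguous-bits matching rule: random candidate detectors are censored if they match anything in the "self" set, and the survivors flag nonself patterns. The detector length, r value, and self set here are illustrative, not LISYS's actual parameters:

```python
import random

def r_contiguous_match(a, b, r):
    """True if bit strings a and b agree in at least r contiguous
    positions (an r-contiguous-bits matching rule)."""
    run = 0
    for x, y in zip(a, b):
        run = run + 1 if x == y else 0
        if run >= r:
            return True
    return False

def negative_selection(self_set, n_detectors, length, r, rng):
    """Generate random detectors, censoring any that match self."""
    detectors = []
    while len(detectors) < n_detectors:
        cand = tuple(rng.randrange(2) for _ in range(length))
        if not any(r_contiguous_match(cand, s, r) for s in self_set):
            detectors.append(cand)
    return detectors

def is_anomalous(pattern, detectors, r):
    """A pattern is flagged if any surviving detector matches it."""
    return any(r_contiguous_match(pattern, d, r) for d in detectors)

rng = random.Random(0)
self_set = [(0,) * 8, (0, 0, 0, 0, 1, 1, 1, 1)]  # hypothetical "self"
dets = negative_selection(self_set, 5, 8, 4, rng)
print(is_anomalous(self_set[0], dets, 4))  # → False (self is never flagged)
```

By construction the censoring step guarantees zero false positives on the self set used for training; coverage of nonself depends on how many detectors are generated.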
new security paradigms workshop | 1998
Anil Somayaji; Steven A. Hofmeyr; Stephanie Forrest
Natural immune systems provide a rich source of inspiration for computer security in the age of the Internet. Immune systems have many features that are desirable for the imperfect, uncontrolled, and open environments in which most computers currently exist. These include distributability, diversity, disposability, adaptability, autonomy, dynamic coverage, anomaly detection, multiple layers, identity via behavior, no trusted components, and imperfect detection. These principles suggest a wide variety of architectures for a computer immune system.
annual computer security applications conference | 2008
Stephanie Forrest; Steven A. Hofmeyr; Anil Somayaji
Computer security systems protect computers and networks from unauthorized use by external agents and insiders. The similarities between computer security and the problem of protecting a body against damage from externally and internally generated threats are compelling and were recognized as early as 1972 when the term computer virus was coined. The connection to immunology was made explicit in the mid 1990s, leading to a variety of prototypes, commercial products, attacks, and analyses. The paper reviews one thread of this active research area, focusing on system-call monitoring and its application to anomaly intrusion detection and response. The paper discusses the biological principles illustrated by the method, followed by a brief review of how system call monitoring was used in anomaly intrusion detection and the results that were obtained. Proposed attacks against the method are discussed, along with several important branches of research that have arisen since the original papers were published. These include other data modeling methods, extensions to the original system call method, and rate limiting responses. Finally, the significance of this body of work and areas of possible future investigation are outlined in the conclusion.
international parallel and distributed processing symposium | 2010
Costin Iancu; Steven A. Hofmeyr; Filip Blagojevic; Yili Zheng
Existing multicore systems already provide deep levels of thread parallelism; hybrid programming models and composability of parallel libraries are very active areas of research within the scientific programming community. As more applications and libraries become parallel, scenarios where multiple threads compete for a core are unavoidable. In this paper we evaluate the impact of task oversubscription on the performance of MPI, OpenMP and UPC implementations of the NAS Parallel Benchmarks on UMA and NUMA multi-socket architectures. We evaluate explicit thread affinity management against the default Linux load balancing and discuss sharing and partitioning system management techniques. Our results indicate that oversubscription provides beneficial effects for applications running in competitive environments. Sharing all the available cores between applications provides better throughput than explicit partitioning. Modest levels of oversubscription improve system throughput by 27% and provide better performance isolation of applications from their co-runners: best overall throughput is always observed when applications share cores and each is executed with multiple threads per core. Rather than "resource" symbiosis, our results indicate that the determining behavioral factor when applications share a system is the granularity of the synchronization operations.
ieee international conference on high performance computing data and analytics | 2011
Khaled Z. Ibrahim; Steven A. Hofmeyr; Costin Iancu; Eric Roman
Live migration is a widely used technique for resource consolidation and fault tolerance. KVM and Xen use iterative pre-copy approaches which work well in practice for commercial applications. In this paper, we study pre-copy live migration of MPI and OpenMP scientific applications running on KVM and present a detailed performance analysis of the migration process. We show that due to a high rate of memory changes, the current KVM rate control and target downtime heuristics do not cope well with HPC applications: statically choosing rate limits and downtimes is infeasible and current mechanisms sometimes provide sub-optimal performance. We present a novel on-line algorithm able to provide minimal downtime and minimal impact on end-to-end application performance. At the core of this algorithm is controlling migration based on the application memory rate of change.
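The pre-copy dynamics described above can be illustrated with a toy model: each round retransmits the pages dirtied during the previous round's copy, so the process converges only when the dirty rate is below the transfer rate. All numbers are illustrative assumptions, not measurements from the paper:

```python
def precopy_rounds(mem_pages, dirty_rate, xfer_rate, stop_pages, max_rounds=30):
    """Model iterative pre-copy live migration.

    Each round transfers the currently dirty pages; meanwhile the guest
    dirties more pages at dirty_rate (pages/s) while the link moves
    xfer_rate (pages/s). Returns (rounds, final_dirty_pages)."""
    dirty = mem_pages
    for rounds in range(1, max_rounds + 1):
        copy_time = dirty / xfer_rate                    # time to send dirty set
        dirty = min(mem_pages, dirty_rate * copy_time)   # pages redirtied meanwhile
        if dirty <= stop_pages:
            return rounds, dirty
    return max_rounds, dirty                             # failed to converge

# Illustrative numbers: 1M pages, guest dirties 25k pages/s, link moves
# 100k pages/s, stop-and-copy once <= 1000 pages remain.
rounds, remaining = precopy_rounds(1_000_000, 25_000, 100_000, 1000)
downtime = remaining / 100_000   # final stop-and-copy of the residual dirty set
print(rounds, remaining, downtime)  # → 5 976.5625 0.009765625
```

With a dirty rate one quarter of the link rate, the dirty set shrinks by 4x per round and converges quickly; raise the dirty rate to match the link rate and the loop never converges, which is the regime where the paper's HPC workloads defeat static rate-limit and downtime heuristics.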
design automation conference | 2013
Juan A. Colmenares; Gage Eads; Steven A. Hofmeyr; Sarah Bird; Miquel Moreto; David Chou; Brian Gluzman; Eric Roman; Davide B. Bartolini; Nitesh Mor; Krste Asanovic; John Kubiatowicz
Adaptive Resource-Centric Computing (ARCC) enables a simultaneous mix of high-throughput parallel, real-time, and interactive applications through automatic discovery of the correct mix of resource assignments necessary to achieve application requirements. This approach, embodied in the Tessellation manycore operating system, distributes resources to QoS domains called cells. Tessellation separates global decisions about the allocation of resources to cells from application-specific scheduling of resources within cells. We examine the implementation of ARCC in the Tessellation OS, highlight Tessellation's ability to provide predictable performance, and investigate the performance of Tessellation services within cells.
Journal of Cybersecurity | 2016
Benjamin Edwards; Steven A. Hofmeyr; Stephanie Forrest
Recent widely publicized data breaches have exposed the personal information of hundreds of millions of people. Some reports point to alarming increases in both the size and frequency of data breaches, spurring institutions around the world to address what appears to be a worsening situation. But, is the problem actually growing worse? In this paper, we study a popular public dataset and develop Bayesian Generalized Linear Models to investigate trends in data breaches. Analysis of the model shows that neither size nor frequency of data breaches has increased over the past decade. We find that the increases that have attracted attention can be explained by the heavy-tailed statistical distributions underlying the dataset. Specifically, we find that data breach size is log-normally distributed and that the daily frequency of breaches is described by a negative binomial distribution. These distributions may provide clues to the generative mechanisms that are responsible for the breaches. Additionally, our model predicts the likelihood of breaches of a particular size in the future. For example, we find that in the next year there is only a 31% chance of a breach of 10 million records or more in the US. Regardless of any trend, data breaches are costly, and we combine the model with two different cost models to project that in the next three years breaches could cost up to $55 billion.
ieee international conference on high performance computing data and analytics | 2015
Evangelos Georganas; Aydin Buluç; Jarrod Chapman; Steven A. Hofmeyr; Chaitanya Aluru; Rob Egan; Leonid Oliker; Daniel Rokhsar; Katherine A. Yelick
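Returning to the data-breach study by Edwards, Hofmeyr, and Forrest above: under its log-normal model of breach size, the probability of a breach of at least x records is a standard normal tail in ln x, computable with the complementary error function. The parameters below are illustrative assumptions, not the paper's fitted values:

```python
import math

def lognormal_tail(x, mu, sigma):
    """P(X >= x) for X ~ LogNormal(mu, sigma): the standard normal
    tail probability of z = (ln x - mu) / sigma."""
    z = (math.log(x) - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

# Assumed parameters for illustration: median breach size e^7 (about
# 1100 records) and sigma = 3, giving a very heavy right tail.
p = lognormal_tail(10_000_000, mu=7.0, sigma=3.0)
print(f"P(breach >= 10M records) ~ {p:.4f}")
```

At the median (x = e^mu) the tail probability is exactly 0.5, and it decays slowly for large x; that heavy tail is what makes occasional enormous breaches unsurprising even without any upward trend.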