Publications


Featured research published by Steve R. White.


IEEE Symposium on Security and Privacy | 1991

Directed-graph epidemiological models of computer viruses

Jeffrey O. Kephart; Steve R. White

The strong analogy between biological viruses and their computational counterparts has motivated the authors to adapt the techniques of mathematical epidemiology to the study of computer virus propagation. To allow for the most general patterns of program sharing, a standard epidemiological model is extended by placing it on a directed graph, and a combination of analysis and simulation is used to study its behavior. The conditions under which epidemics are likely to occur are determined, and, in cases where they do, the dynamics of the expected number of infected individuals are examined as a function of time. It is concluded that an imperfect defense against computer viruses can still be highly effective in preventing their widespread proliferation, provided that the infection rate does not exceed a well-defined critical epidemic threshold.
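
As a rough illustration of the epidemic-threshold behavior the abstract describes (not the paper's actual formulation), the sketch below runs a discrete-time SIS ("susceptible-infected-susceptible") process on a random directed graph. The graph construction, parameter names, and values are all invented for the example; the infection persists only when the expected infections per infected node per step (birth_rate times out_degree) exceeds the cure rate.

    import random

    def simulate_sis(n=1000, out_degree=4, birth_rate=0.05, cure_rate=0.1,
                     steps=500, seed=1):
        """Discrete-time SIS epidemic on a random directed graph.

        Each node gets `out_degree` random out-neighbors. Per step, an
        infected node infects each out-neighbor with probability
        birth_rate and is cured with probability cure_rate.
        """
        rng = random.Random(seed)
        edges = [[rng.randrange(n) for _ in range(out_degree)] for _ in range(n)]
        infected = {0}                      # a single initial infection
        history = []
        for _ in range(steps):
            nxt = set(infected)
            for node in infected:
                for nbr in edges[node]:
                    if rng.random() < birth_rate:
                        nxt.add(nbr)
                if rng.random() < cure_rate:
                    nxt.discard(node)
            infected = nxt
            history.append(len(infected))
        return history

    # Above threshold (0.05 * 4 > 0.1) the infection persists;
    # below it (0.01 * 4 < 0.1) it dies out.
    print(simulate_sis(birth_rate=0.05)[-1])
    print(simulate_sis(birth_rate=0.01)[-1])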


IEEE Symposium on Security and Privacy | 1993

Measuring and modeling computer virus prevalence

Jeffrey O. Kephart; Steve R. White

To understand the current extent of the computer virus problem and predict its future course, the authors have conducted a statistical analysis of computer virus incidents in a large, stable sample population of PCs and developed new epidemiological models of computer virus spread. Only a small fraction of all known viruses have appeared in real incidents, partly because many viruses are below the theoretical epidemic threshold. The observed sub-exponential rate of viral spread can be explained by models of localized software exchange. A surprisingly small fraction of machines in well-protected business environments are infected. This may be explained by a model in which, once a machine is found to be infected, neighboring machines are checked for viruses. This "kill signal" idea could be implemented in networks to greatly reduce the threat of viral spread. A similar principle has been incorporated into a cost-effective anti-virus policy for organizations, which works quite well in practice.
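
The "kill signal" idea lends itself to a small variation of the same kind of toy simulation: here, detection of an infected machine also triggers a check (modeled as an immediate cure) of its neighbors. Again, the graph model, detection rate, and neighbor rule are simplifications invented for the sketch, not the paper's model.

    import random

    def sis_with_kill_signal(n=1000, out_degree=4, birth_rate=0.05,
                             detect_rate=0.1, steps=500, seed=1):
        """SIS-style epidemic where detecting an infection triggers a
        'kill signal': the machine's neighbors are checked and cleaned."""
        rng = random.Random(seed)
        edges = [[rng.randrange(n) for _ in range(out_degree)] for _ in range(n)]
        infected = {0}
        for _ in range(steps):
            nxt = set(infected)
            for node in infected:
                for nbr in edges[node]:
                    if rng.random() < birth_rate:
                        nxt.add(nbr)
                if rng.random() < detect_rate:    # infection detected
                    nxt.discard(node)
                    for nbr in edges[node]:       # kill signal: check neighbors
                        nxt.discard(nbr)
            infected = nxt
        return len(infected)

    # With the same infection pressure as the plain SIS sketch, the
    # kill signal keeps the final infected count far lower.
    print(sis_with_kill_signal())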


International Conference on Autonomic Computing | 2004

An architectural approach to autonomic computing

Steve R. White; James E. Hanson; Ian Whalley; David M. Chess; Jeffrey O. Kephart

We describe an architectural approach to achieving the goals of autonomic computing. The architecture that we outline describes interfaces and behavioral requirements for individual system components, describes how interactions among components are established, and recommends design patterns that engender the desired system-level properties of self-configuration, self-optimization, self-healing and self-protection. We have validated many of these ideas in two prototype autonomic computing systems.
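
The paper specifies its interfaces and behavioral requirements precisely; the sketch below only gestures at the general shape of such an architecture, using a hypothetical sensor/effector interface and a monitor-analyze-plan-execute loop. All class and method names here are invented for illustration.

    from abc import ABC, abstractmethod

    class AutonomicElement(ABC):
        """Hypothetical element interface: expose sensors (monitoring)
        and effectors (control), and run a monitor-analyze-plan-execute
        loop against an element-specific policy."""

        @abstractmethod
        def sensors(self) -> dict:
            """Return current measurements of the managed resource."""

        @abstractmethod
        def effectors(self, actions: dict) -> None:
            """Apply configuration changes to the managed resource."""

        def analyze(self, state: dict):
            return None       # element-specific policy check

        def plan(self, problem) -> dict:
            return {}         # element-specific planning

        def control_loop(self) -> None:
            state = self.sensors()                  # monitor
            problem = self.analyze(state)           # analyze
            if problem:
                self.effectors(self.plan(problem))  # plan + execute

    class WebServerElement(AutonomicElement):
        """Toy self-healing element: restarts a worker when latency degrades."""
        def __init__(self):
            self.latency_ms = 250.0
        def sensors(self):
            return {"latency_ms": self.latency_ms}
        def effectors(self, actions):
            if actions.get("restart_worker"):
                self.latency_ms = 50.0    # simulated effect of the action
        def analyze(self, state):
            return "slow" if state["latency_ms"] > 200 else None
        def plan(self, problem):
            return {"restart_worker": True}

    elem = WebServerElement()
    elem.control_loop()
    print(elem.sensors())   # {'latency_ms': 50.0}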


IEEE Spectrum | 1993

Computers and epidemiology

Jeffrey O. Kephart; Steve R. White; David M. Chess

Analogies with biological disease, extended with topological considerations showing that the spread of computer viruses can be contained, and the resulting epidemiological model are examined. The findings of computer virus epidemiology show that computer viruses are far less rife than many have claimed, that many fail to thrive, that even successful viruses spread at nowhere near the exponential rate some have claimed, and that centralized reporting and response within an organization is an extremely effective defense. A case study is presented, and some steps for companies to take are suggested.


International Conference on Computer Design | 1984

Concepts of scale in simulated annealing

Steve R. White

Simulated annealing is a powerful technique for finding near-optimal solutions to NP-complete combinatorial optimization problems. In this technique, the states of a physical system are generalized to states of a system being optimized, the physical energy is generalized to the function being minimized, and the temperature is generalized to a control parameter for the optimization process. Wire length minimization in circuit placement is used as an example to show how ideas from statistical physics can elucidate the annealing process. The mean of the distribution of states in energy is a maximum energy scale of the system, its standard deviation defines the maximum temperature scale, and the minimum change in energy defines the minimum temperature scale. These temperature scales tell us where to begin and end an annealing schedule. The "size" of a class of moves within the state space of the system is defined as the average change in the energy induced by moves of that class. These move scales are relat...
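
The abstract's recipe for setting an annealing schedule can be tried directly: estimate the maximum temperature scale from the standard deviation of the energy over random states, and the minimum scale from the smallest move-induced energy change. The sketch below applies this to a small traveling-salesman instance as a stand-in for the paper's wire-length example; the problem size, sample counts, and the 0.95 cooling factor are arbitrary choices for the illustration.

    import math, random, statistics

    rng = random.Random(0)
    pts = [(rng.random(), rng.random()) for _ in range(30)]

    def tour_len(order):
        return sum(math.dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
                   for i in range(len(order)))

    # Maximum temperature scale: stdev of the energy over random states.
    samples = []
    for _ in range(200):
        order = list(range(len(pts)))
        rng.shuffle(order)
        samples.append(tour_len(order))
    t_start = statistics.stdev(samples)

    # Minimum temperature scale: smallest nonzero move-induced energy change.
    deltas, order = [], list(range(len(pts)))
    for _ in range(200):
        i, j = rng.sample(range(len(pts)), 2)
        before = tour_len(order)
        order[i], order[j] = order[j], order[i]
        d = abs(tour_len(order) - before)
        if d > 0:
            deltas.append(d)
    t_end = min(deltas)

    # Geometric schedule between the two scales, with Metropolis acceptance.
    order, t = list(range(len(pts))), t_start
    while t > t_end:
        for _ in range(100):
            i, j = rng.sample(range(len(pts)), 2)
            old = tour_len(order)
            order[i], order[j] = order[j], order[i]
            delta = tour_len(order) - old
            if delta > 0 and rng.random() >= math.exp(-delta / t):
                order[i], order[j] = order[j], order[i]   # reject the move
        t *= 0.95
    print(f"tour length after annealing: {tour_len(order):.3f}")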


Adaptive Agents and Multi-Agent Systems | 2004

A Multi-Agent Systems Approach to Autonomic Computing

Gerald Tesauro; David M. Chess; William E. Walsh; Rajarshi Das; Alla Segal; Ian Whalley; Jeffrey O. Kephart; Steve R. White

The goal of autonomic computing is to create computing systems capable of managing themselves to a far greater extent than they do today. This paper presents Unity, a decentralized architecture for autonomic computing based on multiple interacting agents called autonomic elements. We illustrate how the Unity architecture realizes a number of desired autonomic system behaviors, including goal-driven self-assembly, self-healing, and real-time self-optimization. We then present a realistic prototype implementation, showing how a collection of Unity elements self-assembles, recovers from certain classes of faults, and manages the use of computational resources (e.g., servers) in a dynamic multi-application environment. In Unity, an autonomic element within each application environment computes a resource-level utility function based on information specified in that application's service-level utility function. Resource-level utility functions from multiple application environments are sent to a Resource Arbiter element, which computes a globally optimal allocation of servers across the applications. We present illustrative empirical data showing the behavior of our implemented system in handling realistic Web-based transactional workloads running on a Linux cluster.
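
Unity's Resource Arbiter solves a richer optimization than this, but the core computation the abstract describes (combining per-application resource-level utility functions into a globally optimal server allocation) can be shown with a toy dynamic program over discrete server counts. The utility curves below are invented.

    def arbitrate(utilities, total_servers):
        """Toy arbiter: utilities[a][n] is application a's utility for n
        servers; return the allocation maximizing total utility."""
        best = {0: (0.0, [])}    # servers used -> (value, allocation)
        for u in utilities:
            nxt = {}
            for used, (val, alloc) in best.items():
                for n in range(len(u)):
                    if used + n > total_servers:
                        break
                    cand = (val + u[n], alloc + [n])
                    if used + n not in nxt or cand[0] > nxt[used + n][0]:
                        nxt[used + n] = cand
            best = nxt
        return max(best.values())

    # Two hypothetical application environments with diminishing returns:
    u1 = [0, 5, 8, 10, 11]      # utility of 0..4 servers
    u2 = [0, 7, 9, 10, 10.5]
    value, allocation = arbitrate([u1, u2], total_servers=5)
    print(value, allocation)    # 19.0 [3, 2]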


IEEE Symposium on Security and Privacy | 1987

ABYSS: A Trusted Architecture for Software Protection

Steve R. White

ABYSS (A Basic Yorktown Security System) is an architecture for the trusted execution of application software. It supports a uniform security service across the range of computing systems. The use of ABYSS discussed in this paper is oriented towards solving the software protection problem, especially in the lower end of the market. Both current and planned software distribution channels are supportable by the architecture, and the system is nearly transparent to legitimate users. A novel use-once authorization mechanism, called a token, is introduced as a solution to the problem of providing authorizations without direct communication. Software vendors may use the system to obtain technical enforcement of virtually any terms and conditions of the sale of their software, including such things as rental software. Software may be transferred between systems, and backed up to guard against loss in case of failure. We discuss the problem of protecting software on these systems, and offer guidelines to its solution. ABYSS is shown to be a general security base, in which many security applications may execute.
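
ABYSS enforces its token mechanism in trusted hardware; the toy below captures only the use-once semantics in software, with an invented token format (an HMAC over an application id and nonce) to make the idea concrete.

    import hashlib, hmac, secrets

    class TokenStore:
        """Use-once authorization tokens, software toy version: the
        vendor mints a token bound to an application; the system
        accepts each token exactly once."""

        def __init__(self, vendor_key: bytes):
            self.vendor_key = vendor_key
            self.consumed = set()

        def mint(self, app_id: str) -> str:
            nonce = secrets.token_hex(8)
            mac = hmac.new(self.vendor_key, f"{app_id}:{nonce}".encode(),
                           hashlib.sha256).hexdigest()
            return f"{app_id}:{nonce}:{mac}"

        def redeem(self, token: str) -> bool:
            app_id, nonce, mac = token.split(":")
            expect = hmac.new(self.vendor_key, f"{app_id}:{nonce}".encode(),
                              hashlib.sha256).hexdigest()
            if not hmac.compare_digest(mac, expect) or token in self.consumed:
                return False
            self.consumed.add(token)       # the token is now spent
            return True

    store = TokenStore(vendor_key=b"vendor-secret")
    token = store.mint("wordproc-1.0")
    print(store.redeem(token))   # True  (first use is authorized)
    print(store.redeem(token))   # False (use-once: replay is rejected)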


IBM Systems Journal | 2003

Security in an autonomic computing environment

David M. Chess; Charles C. Palmer; Steve R. White

System and network security are vital parts of any autonomic computing solution. The ability of a system to react consistently and correctly to situations ranging from benign but unusual events to outright attacks is key to the achievement of the goals of self-protection, self-healing, and self-optimization. Because they are often built around the interconnection of elements from different administrative domains, autonomic systems raise additional security challenges, including the establishment of a trustworthy system identity, automatically handling changes in system configuration and interconnections, and greatly increased configuration complexity. On the other hand, the techniques of autonomic computing offer the promise of making systems more secure, by effectively and automatically enforcing high-level security policies. In this paper, we discuss these and other security and privacy challenges posed by autonomic systems and provide some recommendations for how these challenges may be met.


Proceedings of the 1997 International Virus Bulletin Conference, San Francisco, California, October 1997 | 1997

Blueprint for a Computer Immune System

Jeffrey O. Kephart; Gregory B. Sorkin; Morton Swimmer; Steve R. White

There is legitimate concern that, within the next few years, the Internet will provide a fertile medium for new breeds of computer viruses capable of spreading orders of magnitude faster than today's viruses. To counter this threat, we have developed an immune system for computers that senses the presence of a previously unknown pathogen and, within minutes, automatically derives and deploys a prescription for detecting and removing it. The system is being integrated with a commercial anti-virus product, IBM AntiVirus, and will be available as a pilot in 1997.
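
The "derive a prescription" step can be caricatured as signature extraction: find a byte string common to the infected samples that never appears in known-clean software. The real system's extraction is statistical and far more careful; the function and sample data below are invented for illustration.

    def derive_signature(infected_samples, clean_corpus, length=16):
        """Toy signature extraction: return a `length`-byte substring
        present in every infected sample and absent from all clean
        programs, for use as a detection signature."""
        base = infected_samples[0]
        for i in range(len(base) - length + 1):
            candidate = base[i:i + length]
            if (all(candidate in s for s in infected_samples)
                    and not any(candidate in c for c in clean_corpus)):
                return candidate
        return None

    viral_code = b"\x90PUSH-PAYLOAD-XYZ\x90\x90"
    infected = [b"HOST-A" + viral_code, b"HOST-B" + viral_code + b"tail"]
    clean = [b"HOST-A", b"HOST-B-ORIGINAL", b"OTHER-PROGRAM"]
    print(derive_signature(infected, clean))   # a 16-byte slice of the virus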


IEEE Transactions on Software Engineering | 1990

ABYSS: an architecture for software protection

Steve R. White; Liam David Comerford

ABYSS (a basic Yorktown security system) is an architecture for protecting the execution of application software. It supports a uniform security service across the range of computing systems. The use of ABYSS in solving the software protection problem, especially in the lower end of the market, is discussed. Both current and planned software distribution channels are supportable by the architecture, and the system is nearly transparent to legitimate users. A novel use-once authorization mechanism, called a token, is introduced as a solution to the problem of providing authorizations without direct communication. Software vendors may use the system to obtain technical enforcement of virtually any terms and conditions of the sale of their software, including such things as rental software. Software may be transferred between systems, and backed up to guard against loss in case of failure. The problem of protecting software on these systems is discussed, and guidelines to its solution are offered.
