Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Kevin L. Mills is active.

Publication


Featured research published by Kevin L. Mills.


IEEE International Conference on Cloud Computing Technology and Science | 2011

Comparing VM-Placement Algorithms for On-Demand Clouds

Kevin L. Mills; James J. Filliben; Christopher E. Dabrowski

Much recent research has been devoted to investigating algorithms for allocating virtual machines (VMs) to physical machines (PMs) in infrastructure clouds. Many such algorithms address distinct problems, such as initial placement, consolidation, or tradeoffs between honoring service-level agreements and constraining provider operating costs. Even where similar problems are addressed, each individual research team evaluates proposed algorithms under distinct conditions, using various techniques, often targeted to a small collection of VMs and PMs. In this paper, we describe an objective method that can be used to compare VM-placement algorithms in large clouds, covering tens of thousands of PMs and hundreds of thousands of VMs. We demonstrate our method by comparing 18 algorithms for initial VM placement in on-demand infrastructure clouds. We compare algorithms inspired by open-source code for infrastructure clouds, and by the online bin-packing literature.
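To give a flavor of the bin-packing-inspired heuristics the paper compares, the sketch below places VM requests onto physical machines with a simple first-fit rule. It is a minimal illustration under assumed PM/VM representations and capacity figures; it is not one of the paper's 18 algorithms or its experimental setup.

    from dataclasses import dataclass, field

    @dataclass
    class PM:
        """A physical machine with fixed CPU/memory capacity (illustrative units)."""
        cpu: int
        mem: int
        used_cpu: int = 0
        used_mem: int = 0
        vms: list = field(default_factory=list)

        def fits(self, vm):
            return (self.used_cpu + vm[0] <= self.cpu and
                    self.used_mem + vm[1] <= self.mem)

        def place(self, vm):
            self.used_cpu += vm[0]
            self.used_mem += vm[1]
            self.vms.append(vm)

    def first_fit(pms, vm):
        """Place a VM demand (cpu, mem) on the first PM with room, or reject it."""
        for pm in pms:
            if pm.fits(vm):
                pm.place(vm)
                return True
        return False  # request rejected (or would trigger new capacity)

    # Example: three identical hosts and a stream of on-demand VM requests.
    hosts = [PM(cpu=16, mem=64) for _ in range(3)]
    requests = [(4, 8), (8, 32), (2, 4), (8, 16), (4, 32)]
    placed = sum(first_fit(hosts, vm) for vm in requests)
    print(f"placed {placed} of {len(requests)} VMs")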


IEEE Network | 2001

Expanding confidence in network simulations

John S. Heidemann; Kevin L. Mills; Sri Kumar

Networking engineers increasingly depend on simulation to design and deploy complex, heterogeneous networks. Similarly, networking researchers increasingly depend on simulation to investigate the behavior and performance of new protocol designs. Despite such widespread use of simulation, today there exists little common understanding of the degree of validation required for various applications of simulation. Further, only limited knowledge exists regarding the effectiveness of known validation techniques. To investigate these issues, in May 1999 DARPA and NIST organized a workshop on Network Simulation Validation. This article reports on discussions and consensus about issues that arose at the workshop. We describe best current practices for validating simulations and for validating TCP models across various simulation environments. We also discuss interactions between scale and model validation and future challenges for the community.


IEEE Transactions on Dependable and Secure Computing | 2005

Monitoring the macroscopic effect of DDoS flooding attacks

Jian Yuan; Kevin L. Mills

Creating defenses against flooding-based, distributed denial-of-service (DDoS) attacks requires real-time monitoring of network-wide traffic to obtain timely and significant information. Unfortunately, continuously monitoring network-wide traffic for suspicious activities presents difficult challenges because attacks may arise anywhere at any time and because attackers constantly modify attack dynamics to evade detection. In this paper, we propose a method for early attack detection. Using only a few observation points, our proposed method can monitor the macroscopic effect of DDoS flooding attacks. We show that such macroscopic-level monitoring might be used to capture shifts in spatial-temporal traffic patterns caused by various DDoS attacks and then to inform more detailed detection systems about where and when a DDoS attack possibly arises in transit or source networks. We also show that such monitoring enables DDoS attack detection without any traffic observation in the victim network.
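As a rough illustration of monitoring from only a few observation points, the sketch below flags traffic intervals whose volume deviates sharply from a smoothed baseline. This simple threshold detector is assumed for illustration only; it is not the spatial-temporal pattern analysis proposed in the paper.

    # A minimal sketch (not the paper's method): each observation point keeps an
    # exponentially weighted baseline of traffic volume and flags intervals whose
    # volume deviates sharply from it, a crude stand-in for detecting shifts in
    # traffic patterns caused by flooding attacks.
    class ObservationPoint:
        def __init__(self, alpha=0.1, threshold=3.0):
            self.alpha = alpha          # smoothing factor for the baseline
            self.threshold = threshold  # allowed ratio of current volume to baseline
            self.baseline = None

        def observe(self, volume):
            if self.baseline is None:
                self.baseline = volume
                return False
            alarm = volume > self.threshold * self.baseline
            # update the baseline only with non-alarming samples so an attack
            # does not poison the notion of normal traffic
            if not alarm:
                self.baseline = (1 - self.alpha) * self.baseline + self.alpha * volume
            return alarm

    point = ObservationPoint()
    traffic = [100, 110, 95, 105, 100, 480, 520, 100]   # synthetic per-interval volumes
    for t, volume in enumerate(traffic):
        if point.observe(volume):
            print(f"interval {t}: possible flooding attack (volume={volume})")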


Wireless Communications and Mobile Computing | 2007

A brief survey of self-organization in wireless sensor networks

Kevin L. Mills

Many natural and man-made systems exhibit self-organization, where interactions among components lead to system-wide patterns of behavior. This paper first introduces current scientific understanding of self-organizing systems and then identifies the main models investigated by computer scientists seeking to apply self-organization to design large, distributed systems. Subsequently, the paper surveys research that uses models of self-organization in wireless sensor networks to provide a variety of functions: sharing processing and communication capacity; forming and maintaining structures; conserving power; synchronizing time; configuring software components; adapting behavior associated with routing, with disseminating and querying for information, and with allocating tasks; and providing resilience by repairing faults and resisting attacks. The paper closes with a summary of open issues that must be addressed before self-organization can be applied routinely during design and deployment of sensor networks and other distributed computer systems.


Workshop on Self-Healing Systems | 2002

Understanding self-healing in service-discovery systems

Christopher E. Dabrowski; Kevin L. Mills

Service-discovery systems aim to provide consistent views of distributed components under varying network conditions. To achieve this aim, designers rely upon a variety of self-healing strategies, including: architecture and topology, failure-detection and recovery techniques, and consistency maintenance mechanisms. In previous work, we showed that various combinations of self-healing strategies lead to significant differences in the ability of service-discovery systems to maintain consistency during increasing network failure. Here, we ask whether the contribution of individual self-healing strategies can be quantified. We give results that quantify the effectiveness of selected combinations of architecture-topology and recovery techniques. Our results suggest that it should prove feasible to quantify the ability of individual self-healing strategies to overcome various failures. A full understanding of the interactions among self-healing strategies would provide designers of distributed systems with the knowledge necessary to build the most effective self-healing systems with minimum overhead.


Workshop on Software and Performance | 2002

Understanding consistency maintenance in service discovery architectures during communication failure

Christopher E. Dabrowski; Kevin L. Mills; J R. Elder

Current trends suggest future software systems will comprise collections of components that combine and recombine dynamically in reaction to changing conditions. Service-discovery protocols, which enable software components to locate available software services and to adapt to changing system topology, provide one foundation for such dynamic behavior. Emerging discovery protocols specify alternative architectures and behaviors, which motivate a rigorous investigation of the properties underlying their designs. Here, we assess the ability of selected designs for service-discovery protocols to maintain consistency in a distributed system during catastrophic communication failure. We use an architecture description language, called Rapide, to model two different architectures (two-party and three-party) and two different consistency-maintenance mechanisms (polling and notification). We use our models to investigate performance differences among combinations of architecture and consistency-maintenance mechanism as interface-failure rate increases. We measure system performance along three dimensions: (1) update responsiveness (How much latency is required to propagate changes?), (2) update effectiveness (What is the probability that a node receives a change?), and (3) update efficiency (How many messages must be sent to propagate a change throughout the topology?). We use Rapide to understand how failure-recovery strategies contribute to differences in performance. We also recommend improvements to architecture description languages.
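The sketch below shows one way the three performance metrics could be computed from a log of update-propagation records. The record fields and values are assumptions made for illustration and do not come from the paper's Rapide models.

    # A minimal sketch of the three metrics, assuming each record says when a
    # change was issued, whether/when a given node received it, and how many
    # messages were sent to propagate it. Field names are illustrative.
    def update_metrics(records):
        received = [r for r in records if r["received_at"] is not None]
        responsiveness = (sum(r["received_at"] - r["issued_at"] for r in received)
                          / len(received)) if received else float("inf")
        effectiveness = len(received) / len(records)       # P(node receives change)
        efficiency = sum(r["messages"] for r in records)    # total messages sent
        return responsiveness, effectiveness, efficiency

    log = [
        {"issued_at": 0.0, "received_at": 0.4, "messages": 3},
        {"issued_at": 0.0, "received_at": 1.1, "messages": 5},
        {"issued_at": 0.0, "received_at": None, "messages": 2},  # lost during failure
    ]
    latency, prob, msgs = update_metrics(log)
    print(f"responsiveness={latency:.2f}s  effectiveness={prob:.0%}  messages={msgs}")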


Proceedings Third Annual International Workshop on Active Middleware Services | 2001

Predicting and controlling resource usage in a heterogeneous active network

Virginie Galtier; Kevin L. Mills; Yannick Carlinet; Stephen F. Bush; Amit B. Kulkarni

Active network technology envisions deployment of virtual execution environments within network elements, such as switches and routers. As a result, inhomogeneous processing can be applied to network traffic. To use such technology safely and efficiently, individual nodes must provide mechanisms to enforce resource limits. This implies that each node must understand the varying resource requirements for specific network traffic. This paper presents an approach to model the CPU time requirements of active applications in a form that can be interpreted among heterogeneous nodes. Further, the paper demonstrates how this approach can be used successfully to control resources consumed at an active-network node and to predict load among nodes in an active network, when integrated within the Active Virtual Network Management Prediction system.
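A minimal sketch of the general idea follows, assuming an active application's CPU demand is expressed in node-neutral work units and each node carries a measured calibration factor. The numbers and function names are illustrative, not the model developed in the paper.

    # Not the paper's model: express CPU demand in node-neutral "work units" and
    # calibrate each node with a per-unit cost measured from reference benchmarks,
    # so one node can predict (and cap) the time the same active code needs elsewhere.
    def predict_cpu_time(work_units, node_factor):
        """Predicted local CPU time (seconds) for a node with the given calibration."""
        return work_units * node_factor

    def admit(work_units, node_factor, budget_seconds):
        """Enforce a per-packet resource limit before running active code."""
        return predict_cpu_time(work_units, node_factor) <= budget_seconds

    # Node A measured 2 us per work unit; node B is slower at 5 us per unit.
    app_demand = 40_000            # work units for one active packet (illustrative)
    print(admit(app_demand, 2e-6, budget_seconds=0.5))   # True on node A
    print(admit(app_demand, 5e-6, budget_seconds=0.1))   # False on node B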


ACM Computing Surveys | 1999

Introduction to the electronic symposium on computer-supported cooperative work

Kevin L. Mills

Computer-supported cooperative work (CSCW) holds great importance and promise for modern society. This paper provides an overview of seventeen papers comprising a symposium on CSCW. The overview also discusses some relationships among the contributions made by each paper, and places those contributions into a larger context by identifying some of the key challenges faced by computer science researchers who aim to help us work effectively as teams mediated through networks of computers. The paper also describes why the promise of CSCW holds particular salience for the U.S. military. In the context of a military setting, the paper describes five particular challenges for CSCW researchers. While most of these challenges might seem specific to military environments, many other organizations probably already face similar challenges, or soon will, when attempting to collaborate through networks of computers. To support this claim, the paper includes a military scenario that might hit fairly close to home for many readers, and certainly for civilian emergency response personnel. After discussing the military needs for collaboration technology, the paper briefly outlines the motivation for a recent DARPA research program along these lines. That program, called Intelligent Collaboration and Visualization, sponsored the work reported in this symposium.


Journal of Grid Computing | 2008

Can Economics-based Resource Allocation Prove Effective in a Computation Marketplace?

Kevin L. Mills; Christopher E. Dabrowski

Several companies offer computation on demand for a fee. More companies are expected to enter this business over the next decade, leading to a marketplace for computation resources. Resources will be allocated through economic mechanisms that establish the relative values of providers and customers. Society at large should benefit from discoveries obtained through the vast computing power that will become available. Given such a computation marketplace, can economics-based resource allocation provide benefits for providers, customers and society? To investigate this question, we simulate a Grid economy where individual providers and customers pursue their own ends and we measure resulting effects on system welfare. In our experiments, customers attempt to maximize their individual utilities, while providers pursue strategies chosen from three classes: information-free, utilization-based and economics-based. We find that, during periods of excess demand, economics-based strategies yield overall resource allocation that benefits system welfare. Further, economics-based strategies respond well to sudden overloads caused by temporary provider failures. During periods of moderate demand, we find that economics-based strategies provide ample system welfare, comparable with that of utilization-based strategies. We also identify and discuss key factors that arise when using economic mechanisms to allocate resources in a computation marketplace.
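As a toy illustration of an economics-based provider strategy, the sketch below prices capacity by current utilization and admits the highest bids first, so scarce capacity during excess demand goes to the customers who value it most. The pricing rule and job values are assumptions and do not reproduce the Grid-economy simulation used in the paper.

    def economics_based_allocate(capacity, jobs):
        """jobs: list of (size, bid_per_unit); returns the accepted jobs."""
        accepted, used = [], 0
        for size, bid in sorted(jobs, key=lambda j: j[1], reverse=True):
            price = 1.0 + 4.0 * (used / capacity)      # price rises with utilization
            if used + size <= capacity and bid >= price:
                accepted.append((size, bid))
                used += size
        return accepted

    jobs = [(10, 6.0), (20, 1.5), (15, 3.0), (30, 0.8)]   # (units, willingness to pay)
    print(economics_based_allocate(capacity=40, jobs=jobs))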


ACM Transactions on Software Engineering and Methodology | 2000

A knowledge-based method for inferring semantic concepts from visual models of system behavior

Kevin L. Mills; Hassan Gomaa

Software designers use visual models, such as data flow/control flow diagrams or object collaboration diagrams, to express system behavior in a form that can be understood easily by users and by programmers, and from which designers can generate a software architecture. The research described in this paper is motivated by a desire to provide an automated designer's assistant that can generate software architectures for concurrent systems directly from behavioral models expressed visually as flow diagrams. To achieve this goal, an automated designer's assistant must be capable of interpreting flow diagrams in semantic, rather than syntactic, terms. While semantic concepts can be attached manually to diagrams using labels, such as stereotypes in the Unified Modeling Language (UML), this paper considers the possibility of providing automated assistance to infer appropriate tags for symbols on a flow diagram. The approach relies upon constructing an underlying metamodel that defines semantic concepts based upon (1) syntactic relationships among visual symbols and (2) inheritance relationships among semantic concepts. Given such a metamodel, a rule-based inference engine can, in many situations, infer the presence of semantic concepts on a flow diagram and tag symbols accordingly. Further, an object-oriented query system can compare semantic tags on diagram instances for conformance with their definitions in the metamodel. To illustrate the approach, the paper describes a metamodel for data flow/control flow diagrams used in the context of a specific software modeling method, Concurrent Object-Based Real-time Analysis (COBRA). The metamodel is implemented using an expert-system shell, CLIPS V6.0, which integrates an object-oriented language with a rule-based inference engine. The paper applies the implemented metamodel to design software for an automobile cruise-control system and provides an evaluation of the approach based upon results from four case studies. For the case studies, the implemented metamodel recognized, automatically and correctly, 86% of all COBRA semantic concepts within the flow diagrams. Varying degrees of human assistance were used to correctly identify the remaining semantic concepts: in two percent of the cases the implemented metamodel reached tentative classifications that a designer was asked to confirm or override; in four percent of the cases a designer was asked to provide additional information before a concept was classified; in the remaining eight percent of the cases the designer was asked to identify the concept.
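To give a feel for rule-based concept inference, the sketch below matches a few classification rules against a flow-diagram symbol, written in Python rather than CLIPS. The rule names, concepts, and symbol fields are invented for illustration; they are not the COBRA metamodel itself.

    # Classify flow-diagram symbols into illustrative semantic concepts by matching
    # rules over their syntactic relationships, roughly in the spirit of the
    # metamodel described above.
    RULES = [
        # (concept, predicate over a symbol and its connections)
        ("periodic_function", lambda s: s["kind"] == "transformation"
                                         and "timer" in s["triggers"]),
        ("device_interface",  lambda s: s["kind"] == "transformation"
                                         and any(f.startswith("device:") for f in s["inputs"])),
        ("data_store",        lambda s: s["kind"] == "store"),
    ]

    def classify(symbol):
        """Return every concept whose rule fires; an empty list means 'ask the designer'."""
        return [concept for concept, rule in RULES if rule(symbol)]

    cruise_control_symbol = {
        "kind": "transformation",
        "triggers": ["timer"],
        "inputs": ["device:speed_sensor"],
    }
    print(classify(cruise_control_symbol))   # ['periodic_function', 'device_interface']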

Collaboration


Dive into Kevin L. Mills's collaborations.

Top Co-Authors

Christopher E. Dabrowski
National Institute of Standards and Technology

James J. Filliben
National Institute of Standards and Technology

Stephen Quirolgico
National Institute of Standards and Technology

Scott Rose
National Institute of Standards and Technology

R Aronoff
National Institute of Standards and Technology

Virginie Galtier
National Institute of Standards and Technology

Yannick Carlinet
National Institute of Standards and Technology

Hassan Gomaa
George Mason University

Richard Colella
National Institute of Standards and Technology