Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Christopher E. Dabrowski is active.

Publication


Featured research published by Christopher E. Dabrowski.


IEEE International Conference on Cloud Computing Technology and Science | 2011

Comparing VM-Placement Algorithms for On-Demand Clouds

Kevin L. Mills; James J. Filliben; Christopher E. Dabrowski

Much recent research has been devoted to investigating algorithms for allocating virtual machines (VMs) to physical machines (PMs) in infrastructure clouds. Many such algorithms address distinct problems, such as initial placement, consolidation, or tradeoffs between honoring service-level agreements and constraining provider operating costs. Even where similar problems are addressed, each individual research team evaluates proposed algorithms under distinct conditions, using various techniques, often targeted to a small collection of VMs and PMs. In this paper, we describe an objective method that can be used to compare VM-placement algorithms in large clouds, covering tens of thousands of PMs and hundreds of thousands of VMs. We demonstrate our method by comparing 18 algorithms for initial VM placement in on-demand infrastructure clouds. We compare algorithms inspired by open-source code for infrastructure clouds, and by the online bin-packing literature.
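
The bin-packing framing above can be made concrete with a toy comparison. The sketch below is not the authors' simulator or any of the 18 algorithms from the paper; it is a minimal illustration, with invented PM/VM sizes, of two common initial-placement heuristics (first-fit and best-fit) applied to the same request stream so their placement rates can be compared.

import random

def first_fit(pms, vm):
    # Place the VM on the first PM with enough free CPU and RAM.
    for pm in pms:
        if pm["cpu"] >= vm["cpu"] and pm["ram"] >= vm["ram"]:
            pm["cpu"] -= vm["cpu"]
            pm["ram"] -= vm["ram"]
            return True
    return False

def best_fit(pms, vm):
    # Place the VM on the feasible PM that would have the least CPU left over.
    feasible = [p for p in pms if p["cpu"] >= vm["cpu"] and p["ram"] >= vm["ram"]]
    if not feasible:
        return False
    pm = min(feasible, key=lambda p: p["cpu"] - vm["cpu"])
    pm["cpu"] -= vm["cpu"]
    pm["ram"] -= vm["ram"]
    return True

def placement_rate(algorithm, n_pms=200, n_vms=5000, seed=0):
    # Fraction of VM requests the heuristic manages to place on some PM.
    random.seed(seed)
    pms = [{"cpu": 32, "ram": 128} for _ in range(n_pms)]
    vms = [{"cpu": random.choice([1, 2, 4, 8]),
            "ram": random.choice([2, 4, 8, 16])} for _ in range(n_vms)]
    return sum(algorithm(pms, vm) for vm in vms) / n_vms

print("first-fit:", placement_rate(first_fit))
print("best-fit:", placement_rate(best_fit))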


Workshop on Self-Healing Systems | 2002

Understanding self-healing in service-discovery systems

Christopher E. Dabrowski; Kevin L. Mills

Service-discovery systems aim to provide consistent views of distributed components under varying network conditions. To achieve this aim, designers rely upon a variety of self-healing strategies, including: architecture and topology, failure-detection and recovery techniques, and consistency maintenance mechanisms. In previous work, we showed that various combinations of self-healing strategies lead to significant differences in the ability of service-discovery systems to maintain consistency during increasing network failure. Here, we ask whether the contribution of individual self-healing strategies can be quantified. We give results that quantify the effectiveness of selected combinations of architecture-topology and recovery techniques. Our results suggest that it should prove feasible to quantify the ability of individual self-healing strategies to overcome various failures. A full understanding of the interactions among self-healing strategies would provide designers of distributed systems with the knowledge necessary to build the most effective self-healing systems with minimum overhead.


Workshop on Software and Performance | 2002

Understanding consistency maintenance in service discovery architectures during communication failure

Christopher E. Dabrowski; Kevin L. Mills; J R. Elder

Current trends suggest future software systems will comprise collections of components that combine and recombine dynamically in reaction to changing conditions. Service-discovery protocols, which enable software components to locate available software services and to adapt to changing system topology, provide one foundation for such dynamic behavior. Emerging discovery protocols specify alternative architectures and behaviors, which motivate a rigorous investigation of the properties underlying their designs. Here, we assess the ability of selected designs for service-discovery protocols to maintain consistency in a distributed system during catastrophic communication failure. We use an architecture description language, called Rapide, to model two different architectures (two-party and three-party) and two different consistency-maintenance mechanisms (polling and notification). We use our models to investigate performance differences among combinations of architecture and consistency-maintenance mechanism as interface-failure rate increases. We measure system performance along three dimensions: (1) update responsiveness (How much latency is required to propagate changes?), (2) update effectiveness (What is the probability that a node receives a change?), and (3) update efficiency (How many messages must be sent to propagate a change throughout the topology?). We use Rapide to understand how failure-recovery strategies contribute to differences in performance. We also recommend improvements to architecture description languages.
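
For readers unfamiliar with the three measures, the snippet below shows one plausible way to compute them from a log of propagated changes; the record format and names are assumptions made for illustration, not the Rapide models or instrumentation used in the paper.

from statistics import mean

def update_metrics(records):
    # Each record is (issued_at, received_at, messages_sent);
    # received_at is None when the change never reached the node.
    delivered = [r for r in records if r[1] is not None]
    responsiveness = mean(r[1] - r[0] for r in delivered) if delivered else float("inf")
    effectiveness = len(delivered) / len(records)                     # P(node receives the change)
    efficiency = sum(r[2] for r in records) / max(len(delivered), 1)  # messages per delivered update
    return responsiveness, effectiveness, efficiency

# Hypothetical trace: two delivered updates and one that was lost.
trace = [(0.0, 1.5, 3), (0.0, None, 5), (2.0, 2.4, 2)]
print(update_metrics(trace))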


Journal of Grid Computing | 2008

Can Economics-based Resource Allocation Prove Effective in a Computation Marketplace?

Kevin L. Mills; Christopher E. Dabrowski

Several companies offer computation on demand for a fee. More companies are expected to enter this business over the next decade, leading to a marketplace for computation resources. Resources will be allocated through economic mechanisms that establish the relative values of providers and customers. Society at large should benefit from discoveries obtained through the vast computing power that will become available. Given such a computation marketplace, can economics-based resource allocation provide benefits for providers, customers and society? To investigate this question, we simulate a Grid economy where individual providers and customers pursue their own ends and we measure resulting effects on system welfare. In our experiments, customers attempt to maximize their individual utilities, while providers pursue strategies chosen from three classes: information-free, utilization-based and economics-based. We find that, during periods of excess demand, economics-based strategies yield overall resource allocation that benefits system welfare. Further, economics-based strategies respond well to sudden overloads caused by temporary provider failures. During periods of moderate demand, we find that economics-based strategies provide ample system welfare, comparable with that of utilization-based strategies. We also identify and discuss key factors that arise when using economic mechanisms to allocate resources in a computation marketplace.
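
To make the strategy classes concrete, here is a toy contrast (invented pricing rule and thresholds, not the paper's Grid-economy simulator): an economics-based provider raises its asking price as demand approaches capacity, so scarce resources flow toward customers who value them most, while a utilization-based provider simply admits work until a fixed utilization threshold is reached.

def economics_based_price(base_price, demand, capacity, elasticity=2.0):
    # Toy rule: price grows superlinearly as demand approaches capacity.
    load = min(demand / capacity, 1.0)
    return base_price * (1.0 + elasticity * load ** 2)

def utilization_based_admit(used, capacity, threshold=0.85):
    # Toy rule: admit a new job only while utilization stays below a fixed threshold.
    return (used + 1) / capacity <= threshold

# During excess demand, the economics-based provider signals scarcity through price,
# steering price-sensitive customers toward less loaded providers.
for demand in (10, 80, 120):
    print(demand,
          round(economics_based_price(1.0, demand, 100), 2),
          utilization_based_admit(demand, 100))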


Proceedings of Fourth Annual International Workshop on Active Middleware Services | 2002

Understanding consistency maintenance in service discovery architectures in response to message loss

Christopher E. Dabrowski; Kevin L. Mills; Jesse Elder

Current trends suggest future software systems will comprise collections of components that combine and recombine dynamically in reaction to changing conditions. Service-discovery protocols, which enable software components to locate available software services and to adapt to changing system topology, provide one foundation for such dynamic behavior. Emerging discovery protocols specify alternative architectures and behaviors, which motivate a rigorous investigation of the properties underlying their designs. Here, we assess the ability of selected designs for service-discovery protocols to maintain consistency in a distributed system during severe message loss. We use an architecture description language, called Rapide, to model two different architectures (two-party and three-party) and two different consistency-maintenance mechanisms (polling and notification). We use our models to investigate performance differences among combinations of architecture and consistency-maintenance mechanism as message-loss rate increases. We measure system performance along three dimensions: (1) update responsiveness (How much latency is required to propagate changes?), (2) update effectiveness (What is the probability that a node receives a change?), and (3) update efficiency (How many messages must be sent to propagate a change throughout the topology?).


International Conference on Communications | 2003

Adaptive jitter control for UPnP M-search

Kevin L. Mills; Christopher E. Dabrowski

Selected service-discovery systems allow clients to issue multicast queries to locate network devices and services. Qualifying devices and services respond directly to clients; thus in a large network, potential exists for responses to implode on a client, overrunning available resources. To limit implosion, one service-discovery system, UPnP, permits clients to include a jitter bound in multicast (M-search) queries. Qualifying devices use the jitter bound to randomize timing of their responses. Initially, clients lack sufficient knowledge to select an appropriate jitter bound, which varies with network size. In this paper, we characterize the performance of UPnP M-search for various combinations of jitter bound and network size. In addition, we evaluate the performance and costs of four algorithms that might be used for adaptive jitter control. Finally, we suggest an alternative to M-search for large networks.
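
The implosion problem and the role of the jitter bound can be illustrated with a small model. In UPnP, an M-search request carries a bound (the MX value) and each responding device waits a uniformly random delay within that bound before replying. The sketch below is a hypothetical model, not taken from the paper: it estimates how many responses a capacity-limited client would drop for a given network size and bound, and shows one naive adaptation rule that widens the bound until loss becomes acceptable.

import random

def msearch_loss(n_devices, jitter_bound, client_rate=100, seed=1):
    # Fraction of responses dropped when the client can absorb at most
    # client_rate responses per second (toy implosion model).
    random.seed(seed)
    arrivals = [random.uniform(0, jitter_bound) for _ in range(n_devices)]
    per_second = {}
    for t in arrivals:
        per_second[int(t)] = per_second.get(int(t), 0) + 1
    dropped = sum(max(0, count - client_rate) for count in per_second.values())
    return dropped / n_devices

def adapt_jitter_bound(n_devices, target_loss=0.01, bound=1):
    # Naive adaptation: double the bound until simulated loss is acceptable.
    while msearch_loss(n_devices, bound) > target_loss and bound < 128:
        bound *= 2
    return bound

print(msearch_loss(5000, 3))      # large network, small bound: heavy loss
print(adapt_jitter_bound(5000))   # bound the client would converge to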


International Conference on Cloud Computing | 2011

An Efficient Sensitivity Analysis Method for Large Cloud Simulations

Kevin L. Mills; James J. Filliben; Christopher E. Dabrowski

Simulations of large distributed systems, such as infrastructure clouds, usually entail a large space of parameters and responses that prove impractical to explore. To reduce the space of inputs, experimenters, guided by domain knowledge and ad hoc methods, typically select a subset of parameters and values to simulate. Similarly, experimenters typically use ad hoc methods to reduce the number of responses to analyze. Such ad hoc methods can result in experiment designs that miss significant parameter combinations and important responses, or that overweight selected parameters and responses. When this occurs, the experiment results and subsequent analyses can be misleading. In this paper, we apply an efficient sensitivity analysis method to demonstrate how relevant parameter combinations and behaviors can be identified for an infrastructure Cloud simulator that is intended to compare resource allocation algorithms. Researchers can use the techniques we demonstrate here to design experiments for large Cloud simulations, leading to improved quality in derived research results and findings.
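
One standard way to screen many simulation parameters with few runs is a two-level designed experiment whose main effects rank parameter importance. The sketch below is a generic illustration of that idea with invented factor names and a toy response; it is not the specific design method or cloud simulator used in the paper.

from itertools import product

def main_effects(factors, response):
    # Two-level full factorial over coded levels -1/+1; returns each factor's main effect.
    runs = list(product([-1, 1], repeat=len(factors)))
    ys = [response(dict(zip(factors, levels))) for levels in runs]
    effects = {}
    for i, factor in enumerate(factors):
        hi = [y for levels, y in zip(runs, ys) if levels[i] == +1]
        lo = [y for levels, y in zip(runs, ys) if levels[i] == -1]
        effects[factor] = sum(hi) / len(hi) - sum(lo) / len(lo)
    return effects

# Toy stand-in for a cloud-simulator response, e.g. number of rejected VM requests.
def toy_response(x):
    return (5 * x["arrival_rate"] + 2 * x["vm_size"] - 0.5 * x["heartbeat_period"]
            + 1.5 * x["arrival_rate"] * x["vm_size"])

print(main_effects(["arrival_rate", "vm_size", "heartbeat_period"], toy_response))

In practice a fractional design would typically replace the full factorial shown here, so the number of simulation runs grows far more slowly than the number of parameters.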


Lecture Notes in Computer Science | 2006

Investigating global behavior in computing grids

Kevin L. Mills; Christopher E. Dabrowski

We investigate effects of spoofing attacks on the scheduling and execution of basic application workflows in a moderately loaded grid computing system using a simulation model based on standard specifications. We conduct experiments to first subject this grid to spoofing attacks that reduce resource availability and increase relative load. A reasonable change in client behavior is then introduced to counter the attack, which unexpectedly causes global performance degradation. To understand the resulting global behavior, we adapt multidimensional analyses as a measurement approach for analysis of complex information systems. We use this approach to show that the surprising performance fall-off occurs because the change in client behavior causes a rearrangement of the global job execution schedule in which completion times inadvertently increase. Finally, we argue that viewing distributed resource allocation as a self-organizing process improves understanding of behavior in distributed systems such as computing grids.


Encyclopedia of Software Engineering | 1992

Database Management Systems in Engineering

Katherine C. Morris; Mary Mitchell; Christopher E. Dabrowski; Elizabeth N. Fong

Most engineering-related software addresses specific problems. These problems are typically computation-intensive and limited in scope. Until relatively recently this approach has been an effective use of computer and human resources. However, in the future, engineering and manufacturing processes will need more integrated product development environments. Both cultural and procedural changes are needed to support the engineering environments of the future, and these changes will require integrated software systems. Databases are essential for integrating software and for reliably sharing data among diverse groups of people and applications. Database technology will be an integral part of the emerging software environments. In this article the application of database technology to engineering problems is examined for different levels of complexity within the computing environment. This introduction provides some background on the topic and includes the description of an example that is used throughout the article. In the first section, the use of database technology for standalone applications is considered. Mechanisms for data representation to support engineering applications are particularly important for implementing engineering software. The second section discusses database techniques for managing changes within the software environment. The third section discusses considerations for supporting multiple engineers working cooperatively. The state of database technology is discussed in the concluding section. Keywords: engineering problem; database schema; physical organization; change management; schema evolution; cooperative engineering environment; tools; standard interfaces; commercial databases; state-of-the-art


Information & Management | 1989

Integrating a knowledge-based component into a physical database design system

Christopher E. Dabrowski; David K. Jefferson; John V. Carlis; Salvatore T. March

Physical database design is a difficult and complex process. Algorithmic approaches are appropriate for design subproblems, such as record segmentation and access path selection, but are infeasible for global design. A major problem with algorithmic approaches is that, for realistic databases, the number of alternative schema possibilities that must be evaluated to generate an optimal design is extremely large. Our design system addresses this problem by combining a knowledge-based component with an algorithmic component. The knowledge-based component reduces the solution space to a reasonable size by producing a small number of efficient schema alternatives. The algorithmic component develops a low cost design for each alternative. An example of the application of the KBS component of the design system is presented.
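
The division of labor between the two components can be sketched as a two-stage search. The rules and cost model below are hypothetical and purely illustrative, not those of the described system: a rule-based filter discards schema alternatives that violate simple design heuristics, and an algorithmic stage costs the survivors and keeps the cheapest.

def rule_based_filter(alternatives, workload):
    # Toy knowledge-based stage: prune alternatives that violate design heuristics.
    survivors = []
    for alt in alternatives:
        # Avoid heavily denormalized schemas under update-heavy workloads.
        if workload["update_ratio"] > 0.5 and alt["denormalized"]:
            continue
        # Require an index on any attribute that is queried very frequently.
        if workload["hot_attribute"] not in alt["indexes"]:
            continue
        survivors.append(alt)
    return survivors

def estimated_cost(alt, workload):
    # Toy algorithmic stage: weighted sum of scan and update costs.
    scan = alt["record_size"] * (0.2 if alt["indexes"] else 1.0)
    update = alt["record_size"] * workload["update_ratio"] * (2.0 if alt["denormalized"] else 1.0)
    return scan + update

def design(alternatives, workload):
    # Prune with rules first; fall back to the full set if every alternative is rejected.
    candidates = rule_based_filter(alternatives, workload) or alternatives
    return min(candidates, key=lambda alt: estimated_cost(alt, workload))

alternatives = [
    {"denormalized": True,  "indexes": ["part_id"], "record_size": 400},
    {"denormalized": False, "indexes": ["part_id"], "record_size": 250},
]
print(design(alternatives, {"update_ratio": 0.7, "hot_attribute": "part_id"}))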

Collaboration


Dive into Christopher E. Dabrowski's collaborations.

Top Co-Authors

Kevin L. Mills
National Institute of Standards and Technology

Fern Y. Hunt
National Institute of Standards and Technology

Elizabeth N. Fong
National Institute of Standards and Technology

James J. Filliben
National Institute of Standards and Technology

Katherine Morrison
University of Nebraska–Lincoln

Stephen Quirolgico
National Institute of Standards and Technology

David K. Jefferson
National Institute of Standards and Technology

Elena R. Messina
National Institute of Standards and Technology

Hui-Min Huang
National Institute of Standards and Technology

J R. Elder
National Institute of Standards and Technology