
Publication


Featured research published by Taylor Kidd.


Proceedings Seventh Heterogeneous Computing Workshop (HCW'98) | 1998

Scheduling resources in multi-user, heterogeneous, computing environments with SmartNet

Richard F. Freund; Michael Gherrity; Stephen L. Ambrosius; Mark Campbell; Mike Halderman; Debra A. Hensgen; Elaine G. Keith; Taylor Kidd; Matt Kussow; John D. Lima; Francesca Mirabile; Lantz Moore; Brad Rust; Howard Jay Siegel

It is increasingly common for computer users to have access to several computers on a network, and hence to be able to execute many of their tasks on any of several computers. The choice of which computers execute which tasks is commonly determined by users based on a knowledge of computer speeds for each task and the current load on each computer. A number of task scheduling systems have been developed that balance the load of the computers on the network, but such systems tend to minimize the idle time of the computers rather than minimize the idle time of the users. The paper focuses on the benefits that can be achieved when the scheduling system considers both the computer availabilities and the performance of each task on each computer. The SmartNet resource scheduling system is described and compared to two different resource allocation strategies: load balancing and user directed assignment. Results are presented where the operation of hundreds of different networks of computers running thousands of different mixes of tasks are simulated in a batch environment. These results indicate that, for the computer environments simulated, SmartNet outperforms both load balancing and user directed assignments, based on the maximum time users must wait for their tasks to finish.
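
A small worked example illustrates the difference between balancing load and also accounting for per-machine task performance. The two machines, two tasks, and run-times below are hypothetical and not taken from the paper; this is a minimal sketch under those assumptions.

```python
# Hypothetical expected run-times (seconds) for two tasks on two machines.
# Task A is much faster on machine M2; task B runs similarly on both.
etc = {                      # "expected time to compute" table
    ("A", "M1"): 100, ("A", "M2"): 10,
    ("B", "M1"): 40,  ("B", "M2"): 45,
}

def makespan(assignment):
    """Time at which the last task finishes, given {task: machine}."""
    finish = {}
    for task, machine in assignment.items():
        finish[machine] = finish.get(machine, 0) + etc[(task, machine)]
    return max(finish.values())

# Pure load balancing spreads tasks without looking at per-task speeds:
# one task per machine, but it may put A on its slow machine.
load_balanced = {"A": "M1", "B": "M2"}
# A performance-aware scheduler (in the spirit of SmartNet) also consults the
# expected run-times and places A where it runs fastest.
performance_aware = {"A": "M2", "B": "M1"}

print(makespan(load_balanced))      # 100
print(makespan(performance_aware))  # 40
```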


Proceedings Seventh Heterogeneous Computing Workshop (HCW'98) | 1998

The relative performance of various mapping algorithms is independent of sizable variances in run-time predictions

Robert Armstrong; Debra A. Hensgen; Taylor Kidd

The authors study the performance of four mapping algorithms. The four algorithms include two naive ones, opportunistic load balancing (OLB) and limited best assignment (LBA), and two intelligent greedy algorithms, an O(nm) greedy algorithm and an O(n²m) greedy algorithm. All of these algorithms, except OLB, use expected run-times to assign jobs to machines. As expected run-times are rarely deterministic in modern networked and server-based systems, the authors first use experimentation to determine some plausible run-time distributions. Using these distributions, they then run simulations to determine how the mapping algorithms perform. Performance comparisons show that the greedy algorithms produce schedules that, when executed, perform better than those of the naive algorithms, even though the exact run-times are not available to the schedulers. The authors conclude that the use of intelligent mapping algorithms is beneficial, even when the expected time for completion of a job is not deterministic.
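
For readers who want the mechanics, the following Python sketch reconstructs the general shape of the four mappers named above under simplifying assumptions: deterministic expected run-times, all machines idle at time zero, and a made-up expected-time-to-compute matrix. It is an illustration of the technique, not the authors' implementation.

```python
# Sketch of the four mappers, given etc[job][machine] = expected run-time.
def olb(etc):
    """Opportunistic load balancing: the next job goes to the machine that is
    free soonest, ignoring expected run-times."""
    ready = [0.0] * len(etc[0])
    for job_times in etc:
        m = min(range(len(ready)), key=lambda i: ready[i])
        ready[m] += job_times[m]
    return max(ready)

def lba(etc):
    """Limited best assignment: each job goes to its fastest machine,
    ignoring the load already assigned there."""
    ready = [0.0] * len(etc[0])
    for job_times in etc:
        m = min(range(len(job_times)), key=lambda i: job_times[i])
        ready[m] += job_times[m]
    return max(ready)

def greedy_onm(etc):
    """O(nm) greedy: each job, in the given order, goes to the machine that
    minimizes its completion time (machine ready time + expected run-time)."""
    ready = [0.0] * len(etc[0])
    for job_times in etc:
        m = min(range(len(ready)), key=lambda i: ready[i] + job_times[i])
        ready[m] += job_times[m]
    return max(ready)

def greedy_on2m(etc):
    """O(n^2 m) greedy (min-min style): repeatedly pick the unassigned job
    whose best completion time is smallest and assign it to that machine."""
    ready = [0.0] * len(etc[0])
    remaining = list(range(len(etc)))
    while remaining:
        j, m = min(
            ((j, m) for j in remaining for m in range(len(ready))),
            key=lambda jm: ready[jm[1]] + etc[jm[0]][jm[1]],
        )
        ready[m] += etc[j][m]
        remaining.remove(j)
    return max(ready)

# Made-up 5-job x 2-machine matrix: job 0 is very slow on machine 0, while
# jobs 1-4 are all slightly faster on machine 0.
etc = [[100, 1], [2, 3], [2, 3], [2, 3], [2, 3]]
for f in (olb, lba, greedy_onm, greedy_on2m):
    print(f.__name__, f(etc))   # olb 100.0, lba 8.0, both greedies 6.0
```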


Proceedings. Eighth Heterogeneous Computing Workshop (HCW'99) | 1999

An overview of MSHN: the Management System for Heterogeneous Networks

Debra A. Hensgen; Taylor Kidd; D. St. John; M.C. Schnaidt; Howard Jay Siegel; T.D. Braun; M. Maheswaran; S. Ali; Jong Kook Kim; Cynthia E. Irvine; Timothy E. Levin; R.F. Freund; Matt Kussow; Michael Godfrey; A. Duman; P. Carff; S. Kidd; Viktor K. Prasanna; Prashanth B. Bhat; Ammar H. Alhusaini

The Management System for Heterogeneous Networks (MSHN) is a resource management system for use in heterogeneous environments. This paper describes the goals of MSHN, its architecture, and both completed and ongoing research experiments. MSHN's main goal is to determine the best way to support the execution of many different applications, each with its own quality of service (QoS) requirements, in a distributed, heterogeneous environment. MSHN's architecture consists of seven distributed, potentially replicated components that communicate with one another using CORBA (Common Object Request Broker Architecture). MSHN's experimental investigations include: the accurate, transparent determination of the end-to-end status of resources; the identification of optimization criteria and how non-determinism and the granularity of models affect the performance of various scheduling heuristics that optimize those criteria; the determination of how security should be incorporated between components as well as how to account for security as a QoS attribute; and the identification of problems inherent in application and system characterization.


International Symposium on Parallel Architectures, Algorithms and Networks | 1996

SmartNet: a scheduling framework for heterogeneous computing

Richard F. Freund; Taylor Kidd; Debra A. Hensgen; Lantz Moore

SmartNet is a scheduling framework for heterogeneous systems. Preliminary conservative simulation results for one of the optimization criteria show a 1.21 improvement over Load Balancing and a 25.9 improvement over Limited Best Assignment, the two policies that evolved from homogeneous environments. SmartNet achieves these improvements through the implementation of several innovations. It recognizes and capitalizes on the inherent heterogeneity of computers in today's distributed environments; it recognizes and accounts for the underlying non-determinism of the distributed environment; it implements an original partitioning approach, making run-time prediction more accurate and useful; it effectively schedules based on all shared resource usage, including network characteristics; and it uses statistical and filtering techniques, making a greater amount of prediction information available to the scheduling engine. In this paper, the issues associated with automatically managing a heterogeneous environment are reviewed, SmartNet's architecture and implementation are described, and performance data are summarized.
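
The abstract mentions statistical and filtering techniques for run-time prediction without detailing them. The sketch below shows one plausible form of such bookkeeping, a running mean and variance per (task type, machine) pair maintained with Welford's algorithm; the structure and names are assumptions, not SmartNet's actual code.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class RunTimeStats:
    """Online mean/variance of observed run-times (Welford's algorithm)."""
    n: int = 0
    mean: float = 0.0
    m2: float = 0.0          # sum of squared deviations from the mean

    def observe(self, runtime: float) -> None:
        self.n += 1
        delta = runtime - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (runtime - self.mean)

    @property
    def variance(self) -> float:
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

# Hypothetical observations of one task type on one host, keyed per pair so a
# scheduler could query both the expected run-time and its spread.
history = defaultdict(RunTimeStats)
for observed in (41.0, 44.5, 39.8, 47.2):
    history[("render_job", "hostA")].observe(observed)

stats = history[("render_job", "hostA")]
print(stats.mean, stats.variance)
```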


Cluster Computing | 2006

A flexible multi-dimensional QoS performance measure framework for distributed heterogeneous systems

Jong Kook Kim; Debra A. Hensgen; Taylor Kidd; Howard Jay Siegel; David St. John; Cynthia E. Irvine; Timothy E. Levin; N. Wayne Porter; Viktor K. Prasanna; Richard F. Freund

When users' tasks in a distributed heterogeneous computing environment (e.g., a cluster of heterogeneous computers) are allocated resources, the total demand placed on some system resources by the tasks, for a given interval of time, may exceed the availability of those resources. In such a case, some tasks may receive degraded service or be dropped from the system. One part of a measure to quantify the success of a resource management system (RMS) in such a distributed environment is the collective value of the tasks completed during an interval of time, as perceived by the user, application, or policy maker. The Flexible Integrated System Capability (FISC) measure presented here quantifies this collective value. FISC is a flexible, multi-dimensional measure into which any task attribute can be incorporated; attributes may include priorities, versions of a task or data, deadlines, situational mode, security, application- and domain-specific QoS, and task dependencies. For an environment where it is important to investigate how well data communication requests are satisfied, the data communication requests satisfied can be the basis of the FISC measure instead of tasks completed. The motivation behind the FISC measure is to determine the performance of resource management schemes when tasks have multiple attributes that need to be satisfied. The goal of this measure is to compare the results of different resource management heuristics that are trying to achieve the same performance objective but with different approaches.
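
The abstract does not give the FISC formulation itself, so the following is only a heavily simplified, hypothetical illustration of a multi-attribute "collective value" of completed tasks; the attributes, weights, and scoring rule are invented for the example.

```python
def task_value(priority_weight, deadline_credit, version_quality):
    """Value contributed by one completed task (illustrative only).

    priority_weight : relative importance assigned by a policy maker
    deadline_credit : 1.0 if the deadline was met, else partial credit
    version_quality : 1.0 for the preferred version, less for degraded ones
    """
    return priority_weight * deadline_credit * version_quality

completed_tasks = [
    # (priority weight, deadline credit, version quality)
    (10.0, 1.0, 1.0),   # high-priority task, full service
    (1.0,  1.0, 0.5),   # low-priority task, degraded (cheaper) version
    (5.0,  0.2, 1.0),   # missed its deadline, partial credit only
]

# The collective value over an interval is the sum over completed tasks.
collective_value = sum(task_value(*t) for t in completed_tasks)
print(collective_value)   # 10.0 + 0.5 + 1.0 = 11.5
```

A real multi-dimensional measure would also handle dropped tasks, security levels, and task dependencies; the point here is only that heuristics with the same objective can then be compared on one scalar value.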


International Symposium on Parallel Architectures, Algorithms and Networks | 1999

Why the mean is inadequate for accurate scheduling decisions

Taylor Kidd; Debra A. Hensgen

In a distributed environment, the generalized scheduling problem attempts to optimize some performance criterion by assigning tasks to resources and by determining the order in which those tasks will be executed. Although most resource management systems in use today have the goal of maximizing the use of idle processors, several, such as LSF and SmartNet, attempt to minimize the time at which the last job, in each set of jobs, completes. They attempt to deliver better quality of service to jobs by using scheduling heuristics that calculate schedules based upon the expected run-times of each job on each machine. This paper analyzes an exhaustive scheduling algorithm that minimizes the time at which the last job completes, provided all jobs execute for exactly their expected run-times. The authors show that if this assumption is violated, that is, if jobs do not execute for exactly their expected run-times, then this algorithm will underestimate the time at which the last job is expected to finish, sometimes substantially. The authors conclude that an algorithm that uses not only the expected run-times, but also their distributions, can obtain better schedules.
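
The core observation, that a schedule built from expected run-times underestimates the expected finish time of the last job, can be reproduced with a few lines of simulation, since E[max(X, Y)] >= max(E[X], E[Y]). The distributions and numbers below are illustrative assumptions, not those measured in the paper.

```python
import random

# Two jobs, each with expected run-time 10 but with variance. The plan built
# from expected run-times predicts the last job finishes at t = 10; the
# simulation shows the expected finish time of the last job is much later.
# (Exponential run-times are an assumption made only for this illustration.)
random.seed(0)
EXPECTED = 10.0
TRIALS = 100_000

predicted_makespan = max(EXPECTED, EXPECTED)           # 10.0
simulated = sum(
    max(random.expovariate(1 / EXPECTED), random.expovariate(1 / EXPECTED))
    for _ in range(TRIALS)
) / TRIALS

print(predicted_makespan)          # 10.0
print(round(simulated, 1))         # roughly 15, i.e. ~50% later than predicted
```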


International Workshop on Quality of Service | 1998

SAAM: an integrated network architecture for integrated services

Geoffrey G. Xie; Debra A. Hensgen; Taylor Kidd; John Yarger

The current network architecture is based predominantly on stand-alone routers and is becoming overtaxed with the introduction of integrated services. In this paper, we propose the SAAM (Server and Agent-based Active network Management) architecture, which scales well with integrated services. SAAM relieves individual routers of most routing and network management tasks. Instead, it employs a small number of dedicated servers to perform these tasks on behalf of the routers. In particular, these servers maintain a path information base (PIB), with which network functions such as QoS routing and re-routing of real-time flows can be efficiently implemented. We describe a scalable architecture for organizing the servers as well as a concrete design of the PIB. SAAM has the potential to offer a common platform where multiple network functions, such as routing, resource reservation, network management, accounting, and security, can be integrated.
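
As a rough illustration of what a path information base might hold, the sketch below keeps candidate paths per source/destination pair with their current QoS metrics and answers a QoS routing query against them. The field names and selection rule are assumptions, not the PIB design from the paper.

```python
from dataclasses import dataclass

@dataclass
class PathRecord:
    hops: list            # ordered router identifiers along the path
    avail_bw_mbps: float  # currently available bandwidth on the path
    delay_ms: float       # current end-to-end delay

# Hypothetical PIB entries maintained by a SAAM-style server.
pib = {
    ("A", "D"): [
        PathRecord(["A", "B", "D"], avail_bw_mbps=20.0, delay_ms=30.0),
        PathRecord(["A", "C", "D"], avail_bw_mbps=80.0, delay_ms=55.0),
    ],
}

def find_path(src, dst, min_bw_mbps, max_delay_ms):
    """Return a known path meeting the flow's QoS request, preferring lower delay."""
    candidates = [
        p for p in pib.get((src, dst), [])
        if p.avail_bw_mbps >= min_bw_mbps and p.delay_ms <= max_delay_ms
    ]
    return min(candidates, key=lambda p: p.delay_ms, default=None)

# A real-time flow asking for 10 Mbps and at most 40 ms of delay:
print(find_path("A", "D", min_bw_mbps=10.0, max_delay_ms=40.0))
```

Because the servers hold this state centrally, re-routing a real-time flow becomes a lookup against already-maintained records rather than a per-router recomputation.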


High Performance Distributed Computing | 1999

Passive, domain-independent, end-to-end message passing performance monitoring to support adaptive applications in MSHN

M.C. Schnaidt; Debra A. Hensgen; J. Falby; Taylor Kidd; D. St. John

The Management System for Heterogeneous Networks (MSHN) is a system for managing a set of distributed resources. Integral to MSHN is the maintenance of status information concerning the resources available on those systems. The paper focuses on monitoring the end-to-end performance of message passing. The method used to gather this information is subject to three constraints: (1) the implementation must not require any changes to the operating system; (2) modifications to the application code must be minimized; and (3) the overhead imposed by the information gathering mechanism should not be excessive. We examined eight tools and application components, both commercial and research, that attempt to measure end-to-end message passing performance (W. Hayes-Roth and L. Erman, 1994; J. Kresho, 1997; C. Lee et al., 1998; N. Spring, 1997; R. Wolski et al., 1997).
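
One way to satisfy the first two constraints (no operating system changes, minimal application changes) is to interpose on the communication calls an application already makes, entirely at user level. The sketch below is an illustrative Python example of that idea; it is not MSHN's monitoring code, and the class and field names are assumptions.

```python
import socket
import time

class MonitoredSocket:
    """Wraps an existing socket and records per-message timing at user level."""

    def __init__(self, sock: socket.socket):
        self._sock = sock
        self.records = []          # (direction, bytes, seconds) tuples

    def sendall(self, data: bytes) -> None:
        start = time.perf_counter()
        self._sock.sendall(data)
        self.records.append(("send", len(data), time.perf_counter() - start))

    def recv(self, bufsize: int) -> bytes:
        start = time.perf_counter()
        data = self._sock.recv(bufsize)
        self.records.append(("recv", len(data), time.perf_counter() - start))
        return data

    def __getattr__(self, name):
        # Delegate everything else so existing application code keeps working.
        return getattr(self._sock, name)
```

Delegating all other attributes to the wrapped socket keeps the required application change to swapping in the wrapper, and the per-call bookkeeping keeps the monitoring overhead small.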


Proceedings. Eighth Heterogeneous Computing Workshop (HCW'99) | 1999

Are CORBA services ready to support resource management middleware for heterogeneous computing?

Alpay Duman; Debra A. Hensgen; D. St. John; Taylor Kidd

The goal of this paper is to report our findings as to which CORBA services are ready to support distributed system software in a heterogeneous environment. In particular, we implemented intercommunication between components in our Management System for Heterogeneous Networks (MSHN) using four different CORBA mechanisms: the Static Invocation Interface (SII), the Dynamic Invocation Interface (DII), Untyped Event Services, and Typed Event Services. MSHN's goals are to manage dynamically changing sets of heterogeneous adaptive applications in a heterogeneous environment. We found these mechanisms to be at various stages of maturity, resulting in some being less useful than others. In addition, we found that the overhead added by CORBA varied from a low of 10.6 milliseconds per service request to a high of 279.1 milliseconds per service request on workstations connected via 100 Mbits/sec Ethernet. We therefore conclude that using CORBA not only substantially decreases the amount of time required to implement distributed system software, but also need not degrade performance.
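
The per-request overhead figures quoted above suggest a simple measurement recipe: time many invocations through the communication mechanism, time the same work done locally, and report the difference per request. The sketch below is a generic harness in that spirit; the callables are placeholders, not CORBA invocations.

```python
import time

def measure_overhead(remote_call, local_call, n=1000):
    """Average extra seconds per request added by the communication mechanism."""
    start = time.perf_counter()
    for _ in range(n):
        remote_call()
    remote_total = time.perf_counter() - start

    start = time.perf_counter()
    for _ in range(n):
        local_call()
    local_total = time.perf_counter() - start

    return (remote_total - local_total) / n

# Example with stand-in callables; a real experiment would invoke the service
# through SII, DII, or an event channel in place of the first lambda.
overhead = measure_overhead(lambda: sum(range(1000)), lambda: sum(range(1000)))
print(f"{overhead * 1e3:.3f} ms per request")
```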


Euromicro Workshop on Parallel and Distributed Processing | 2000

A QoS performance measure framework for distributed heterogeneous networks

Jong Kook Kim; Debra A. Hensgen; Taylor Kidd; Howard Jay Siegel; D. St. John; Cynthia E. Irvine; T. Levin; N. W. Porter; Viktor K. Prasanna; Richard F. Freund

Collaboration


Dive into Taylor Kidd's collaborations.

Top Co-Authors

Viktor K. Prasanna (University of Southern California)

Matt Kussow (Naval Postgraduate School)

David St. John (Naval Postgraduate School)

Elaine G. Keith (Science Applications International Corporation)