Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Debra A. Hensgen is active.

Publication


Featured research published by Debra A. Hensgen.


Journal of Parallel and Distributed Computing | 2001

A Comparison of Eleven Static Heuristics for Mapping a Class of Independent Tasks onto Heterogeneous Distributed Computing Systems

Tracy D. Braun; Howard Jay Siegel; Noah Beck; Ladislau Bölöni; Muthucumaru Maheswaran; Albert Reuther; James P. Robertson; Mitchell D. Theys; Bin Yao; Debra A. Hensgen; Richard F. Freund

Mixed-machine heterogeneous computing (HC) environments utilize a distributed suite of different high-performance machines, interconnected with high-speed links, to perform different computationally intensive applications that have diverse computational requirements. HC environments are well suited to meet the computational demands of large, diverse groups of tasks. The problem of optimally mapping (defined as matching and scheduling) these tasks onto the machines of a distributed HC environment has been shown, in general, to be NP-complete, requiring the development of heuristic techniques. Selecting the best heuristic to use in a given environment, however, remains a difficult problem, because comparisons are often clouded by different underlying assumptions in the original study of each heuristic. Therefore, a collection of 11 heuristics from the literature has been selected, adapted, implemented, and analyzed under one set of common assumptions. It is assumed that the heuristics derive a mapping statically (i.e., off-line). It is also assumed that a metatask (i.e., a set of independent, noncommunicating tasks) is being mapped and that the goal is to minimize the total execution time of the metatask. The 11 heuristics examined are Opportunistic Load Balancing, Minimum Execution Time, Minimum Completion Time, Min-min, Max-min, Duplex, Genetic Algorithm, Simulated Annealing, Genetic Simulated Annealing, Tabu, and A*. This study provides a single, consistent basis for comparison and insights into circumstances where one technique will outperform another. The evaluation procedure is specified, the heuristics are defined, and then comparison results are discussed. It is shown that for the cases studied here, the relatively simple Min-min heuristic performs well in comparison to the other techniques.
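As a concrete illustration of the best-performing heuristic above, here is a minimal sketch of Min-min. It is not the authors' implementation; `etc` is an assumed expected-time-to-compute matrix in which `etc[t][m]` estimates task t's run-time on machine m.

```python
def min_min(etc):
    """Min-min static mapping sketch (illustrative, not the paper's code).

    etc[t][m]: expected time to compute task t on machine m.
    Returns mapping[t] = machine assigned to task t.
    """
    num_tasks, num_machines = len(etc), len(etc[0])
    ready = [0.0] * num_machines            # machine availability times
    unmapped = set(range(num_tasks))
    mapping = [None] * num_tasks
    while unmapped:
        best = None                         # (completion_time, task, machine)
        for t in unmapped:
            # Minimum-completion-time machine for this task.
            m = min(range(num_machines), key=lambda i: ready[i] + etc[t][i])
            ct = ready[m] + etc[t][m]
            if best is None or ct < best[0]:
                best = (ct, t, m)
        ct, t, m = best                     # commit the overall minimum
        mapping[t] = m
        ready[m] = ct
        unmapped.remove(t)
    return mapping
```

Max-min differs only in the final selection step: it commits the task whose minimum completion time is largest, which can help when a few long tasks dominate the metatask.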


Journal of Parallel and Distributed Computing | 1999

Dynamic Mapping of a Class of Independent Tasks onto Heterogeneous Computing Systems

Muthucumaru Maheswaran; Shoukat Ali; Howard Jay Siegel; Debra A. Hensgen; Richard F. Freund

Dynamic mapping (matching and scheduling) heuristics for a class of independent tasks using heterogeneous distributed computing systems are studied. Two types of mapping heuristics are considered: immediate-mode and batch-mode heuristics. Three new heuristics, one for batch mode and two for immediate mode, are introduced as part of this research. Simulation studies are performed to compare these heuristics with some existing ones. In total, five immediate-mode heuristics and three batch-mode heuristics are examined. The immediate-mode dynamic heuristics consider, to varying degrees and in different ways, task affinity for different machines and machine ready times. The batch-mode dynamic heuristics consider these factors, as well as the aging of tasks waiting to execute. The simulation results reveal that the choice of which dynamic mapping heuristic to use in a given heterogeneous environment depends on parameters such as (a) the structure of the heterogeneity among tasks and machines and (b) the arrival rate of the tasks.
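For flavor, the following is a minimal sketch of one classic immediate-mode heuristic, Minimum Completion Time (MCT), which assigns each task as it arrives. This is an illustrative reading of the approach, with task arrival times folded into the `ready` vector for simplicity.

```python
def mct_assign(etc_row, ready):
    """Immediate-mode Minimum Completion Time (MCT) sketch.

    etc_row[m]: expected run-time of the arriving task on machine m.
    ready[m]:   time at which machine m next becomes free.
    Assigns the task to the machine finishing it earliest; updates
    ready in place and returns (machine, completion_time).
    """
    m = min(range(len(ready)), key=lambda i: ready[i] + etc_row[i])
    ready[m] += etc_row[m]
    return m, ready[m]
```

Batch-mode heuristics instead collect arriving tasks into a set and re-map the whole set at each mapping event, which is what allows them to account for the aging of waiting tasks.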


Proceedings Seventh Heterogeneous Computing Workshop (HCW'98) | 1998

Scheduling resources in multi-user, heterogeneous, computing environments with SmartNet

Richard F. Freund; Michael Gherrity; Stephen L. Ambrosius; Mark Campbell; Mike Halderman; Debra A. Hensgen; Elaine G. Keith; Taylor Kidd; Matt Kussow; John D. Lima; Francesca Mirabile; Lantz Moore; Brad Rust; Howard Jay Siegel

It is increasingly common for computer users to have access to several computers on a network, and hence to be able to execute many of their tasks on any of several computers. The choice of which computers execute which tasks is commonly determined by users based on a knowledge of computer speeds for each task and the current load on each computer. A number of task scheduling systems have been developed that balance the load of the computers on the network, but such systems tend to minimize the idle time of the computers rather than minimize the idle time of the users. The paper focuses on the benefits that can be achieved when the scheduling system considers both the computer availabilities and the performance of each task on each computer. The SmartNet resource scheduling system is described and compared to two different resource allocation strategies: load balancing and user-directed assignment. Results are presented in which the operation of hundreds of different networks of computers, running thousands of different mixes of tasks, is simulated in a batch environment. These results indicate that, for the computer environments simulated, SmartNet outperforms both load balancing and user-directed assignments, based on the maximum time users must wait for their tasks to finish.
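The core contrast can be reproduced in a few lines: pure load balancing picks the least-loaded machine while ignoring task/machine affinity, whereas a SmartNet-style policy also weighs each task's run-time on each machine. The simulation below is a self-contained toy with synthetic, assumed run-times, not the paper's simulator.

```python
import random

def makespan(etc, policy):
    """Assign tasks in arrival order under the given policy and return
    the time at which the last machine finishes. etc[t][m] is task t's
    run-time on machine m."""
    ready = [0.0] * len(etc[0])
    for row in etc:
        if policy == "load_balance":       # ignores task/machine affinity
            m = min(range(len(ready)), key=lambda i: ready[i])
        else:                              # considers load AND run-time
            m = min(range(len(ready)), key=lambda i: ready[i] + row[i])
        ready[m] += row[m]
    return max(ready)

random.seed(0)
# Heterogeneous run-times: each task is fast on some machines, slow on others.
etc = [[random.uniform(1, 10) for _ in range(4)] for _ in range(100)]
print(makespan(etc, "load_balance"), makespan(etc, "completion_time"))
```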


Proceedings. Eighth Heterogeneous Computing Workshop (HCW'99) | 1999

A comparison study of static mapping heuristics for a class of meta-tasks on heterogeneous computing systems

Tracy D. Braun; H. J. Siegel; Noah Beck; Ladislau Bölöni; Muthucumaru Maheswaran; Albert Reuther; James P. Robertson; Mitchell D. Theys; Bin Yao; Debra A. Hensgen; Richard F. Freund

Heterogeneous computing (HC) environments are well suited to meet the computational demands of large, diverse groups of tasks (i.e., a meta-task). The problem of mapping (defined as matching and scheduling) these tasks onto the machines of an HC environment has been shown, in general, to be NP-complete, requiring the development of heuristic techniques. Selecting the best heuristic to use in a given environment, however, remains a difficult problem, because comparisons are often clouded by different underlying assumptions in the original studies of each heuristic. Therefore, a collection of eleven heuristics from the literature has been selected, implemented, and analyzed under one set of common assumptions. The eleven heuristics examined are opportunistic load balancing, user-directed assignment, fast greedy, min-min, max-min, greedy, genetic algorithm, simulated annealing, genetic simulated annealing, tabu, and A*. This study provides a single, consistent basis for comparison and insights into circumstances where one technique will outperform another. The evaluation procedure is specified, the heuristics are defined, and then selected results are compared.


Proceedings Seventh Heterogeneous Computing Workshop (HCW'98) | 1998

The relative performance of various mapping algorithms is independent of sizable variances in run-time predictions

Robert Armstrong; Debra A. Hensgen; Taylor Kidd

The authors study the performance of four mapping algorithms: two naive ones, opportunistic load balancing (OLB) and limited best assignment (LBA), and two intelligent greedy algorithms, an O(nm) greedy algorithm and an O(n^2m) greedy algorithm. All of these algorithms, except OLB, use expected run-times to assign jobs to machines. Because expected run-times are rarely deterministic in modern networked and server-based systems, the authors first use experimentation to determine some plausible run-time distributions. Using these distributions, they then run simulations to determine how the mapping algorithms perform. Performance comparisons show that the greedy algorithms produce schedules that, when executed, perform better than those of the naive algorithms, even though the exact run-times are not available to the schedulers. The authors conclude that the use of intelligent mapping algorithms is beneficial, even when the expected time for completion of a job is not deterministic.
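The paper's central claim, that intelligent mappers retain their edge even when actual run-times deviate from predictions, can be checked with a toy experiment: build the schedule from expected run-times, then evaluate it against run-times sampled from a high-variance distribution. Everything below (the distribution, the problem sizes) is an illustrative assumption, not the paper's experimental setup.

```python
import random

def realized_makespan(etc, sampler, policy):
    """Plan with *expected* run-times; evaluate with *sampled* actuals."""
    num_m = len(etc[0])
    plan, real = [0.0] * num_m, [0.0] * num_m
    for row in etc:
        if policy == "olb":                # OLB: next available machine
            m = min(range(num_m), key=lambda i: plan[i])
        else:                              # greedy: min expected completion time
            m = min(range(num_m), key=lambda i: plan[i] + row[i])
        plan[m] += row[m]
        real[m] += sampler(row[m])         # actual time differs from expected
    return max(real)

random.seed(1)
etc = [[random.uniform(1, 10) for _ in range(4)] for _ in range(200)]
noisy = lambda mean: random.expovariate(1.0 / mean)   # high-variance actuals
print(realized_makespan(etc, noisy, "olb"),
      realized_makespan(etc, noisy, "greedy"))
```

In runs of this toy, the greedy plan typically realizes a noticeably smaller makespan than OLB, mirroring the paper's conclusion.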


Proceedings. Eighth Heterogeneous Computing Workshop (HCW'99) | 1999

An overview of MSHN: the Management System for Heterogeneous Networks

Debra A. Hensgen; Taylor Kidd; D. St. John; M.C. Schnaidt; Howard Jay Siegel; T.D. Braun; M. Maheswaran; S. Ali; Jong Kook Kim; Cynthia E. Irvine; Timothy E. Levin; R.F. Freund; Matt Kussow; Michael Godfrey; A. Duman; P. Carff; S. Kidd; Viktor K. Prasanna; Prashanth B. Bhat; Ammar H. Alhusaini

The Management System for Heterogeneous Networks (MSHN) is a resource management system for use in heterogeneous environments. This paper describes the goals of MSHN, its architecture, and both completed and ongoing research experiments. MSHN's main goal is to determine the best way to support the execution of many different applications, each with its own quality of service (QoS) requirements, in a distributed, heterogeneous environment. MSHN's architecture consists of seven distributed, potentially replicated components that communicate with one another using CORBA (Common Object Request Broker Architecture). MSHN's experimental investigations include: the accurate, transparent determination of the end-to-end status of resources; the identification of optimization criteria and how non-determinism and the granularity of models affect the performance of various scheduling heuristics that optimize those criteria; the determination of how security should be incorporated between components as well as how to account for security as a QoS attribute; and the identification of problems inherent in application and system characterization.


International Symposium on Parallel Architectures, Algorithms and Networks | 1996

SmartNet: a scheduling framework for heterogeneous computing

Richard F. Freund; Taylor Kidd; Debra A. Hensgen; Lantz Moore

SmartNet is a scheduling framework for heterogeneous systems. Preliminary, conservative simulation results for one of the optimization criteria show an improvement by a factor of 1.21 over Load Balancing and by a factor of 25.9 over Limited Best Assignment, the two policies that evolved from homogeneous environments. SmartNet achieves these improvements through the implementation of several innovations. It recognizes and capitalizes on the inherent heterogeneity of computers in today's distributed environments; it recognizes and accounts for the underlying non-determinism of the distributed environment; it implements an original partitioning approach, making run-time prediction more accurate and useful; it effectively schedules based on all shared resource usage, including network characteristics; and it uses statistical and filtering techniques, making a greater amount of prediction information available to the scheduling engine. In this paper, the issues associated with automatically managing a heterogeneous environment are reviewed, SmartNet's architecture and implementation are described, and performance data are summarized.


Symposium on the Frontiers of Massively Parallel Computation | 1992

The concurrent execution of non-communicating programs on SIMD processors

Philip A. Wilsey; Debra A. Hensgen; Nael B. Abu-Ghazaleh; Charles E. Slusher; David Y. Hollinden

This paper explores the use of SIMD (single-instruction, multiple-data) or SIMD-like hardware to support the efficient interpretation of concurrent, noncommunicating programs. This approach places compiled programs into the local memory space of each distinct processing element (PE). Within each PE, a local program contour is initialized, and the instructions are interpreted in parallel across all of the PEs by control signals emanating from the central control unit. Initial experiments have been conducted with two distinct software architectures (MINTABs and MIPS R2000) on the MasPar MP-1 and two distinct applications (program mutation analysis and Monte Carlo simulation). While these experiments have shown only marginal performance improvement, it appears that, with several minor hardware modifications, SIMD-like hardware can be constructed that will cost-effectively support both SIMD and MIMD (multiple-instruction, multiple-data) processing.


Cluster Computing | 2006

A flexible multi-dimensional QoS performance measure framework for distributed heterogeneous systems

Jong Kook Kim; Debra A. Hensgen; Taylor Kidd; Howard Jay Siegel; David St. John; Cynthia E. Irvine; Timothy E. Levin; N. Wayne Porter; Viktor K. Prasanna; Richard F. Freund

When users’ tasks in a distributed heterogeneous computing environment (e.g., a cluster of heterogeneous computers) are allocated resources, the total demand placed on some system resources by the tasks, for a given interval of time, may exceed the availability of those resources. In such a case, some tasks may receive degraded service or be dropped from the system. One part of a measure to quantify the success of a resource management system (RMS) in such a distributed environment is the collective value of the tasks completed during an interval of time, as perceived by the user, application, or policy maker. The Flexible Integrated System Capability (FISC) measure presented here quantifies this collective value. The FISC measure is a flexible multi-dimensional measure into which any task attribute can be incorporated; attributes may include priorities, versions of a task or data, deadlines, situational mode, security, application- and domain-specific QoS, and task dependencies. For an environment where it is important to investigate how well data communication requests are satisfied, satisfied data communication requests, rather than completed tasks, can serve as the basis of the FISC measure. The motivation behind the FISC measure is to determine the performance of resource management schemes when tasks have multiple attributes that need to be satisfied. The goal of this measure is to compare the results of different resource management heuristics that are trying to achieve the same performance objective but with different approaches.
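The paper defines FISC formally; the snippet below conveys only the flavor of a multi-attribute value: each completed task contributes a priority-weighted product of per-attribute satisfaction scores, and the collective value is the sum over completed tasks. The attribute names, weights, and combining rule here are illustrative assumptions, not the paper's definitions.

```python
def task_value(priority, scores):
    """Illustrative worth of one completed task (assumed combining rule).

    priority: relative importance weight of the task.
    scores:   per-attribute satisfaction in [0, 1], e.g. how well the
              deadline, delivered version, or security level was met.
    """
    value = priority
    for s in scores.values():
        value *= s              # an unmet attribute proportionally reduces worth
    return value

completed = [
    (10.0, {"deadline": 1.0, "version": 0.8, "security": 1.0}),
    ( 2.0, {"deadline": 0.5, "version": 1.0, "security": 1.0}),
]
print(sum(task_value(p, s) for p, s in completed))   # collective value: 9.0
```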


International Symposium on Parallel Architectures, Algorithms and Networks | 1999

Why the mean is inadequate for accurate scheduling decisions

Taylor Kidd; Debra A. Hensgen

In a distributed environment, the generalized scheduling problem attempts to optimize some performance criterion by assigning tasks to resources and by determining the order in which those tasks will be executed. Although most resource management systems in use today have the goal of maximizing the use of idle processors, several, such as LSF and SmartNet, attempt to minimize the time at which the last job in each set of jobs completes. They attempt to deliver better quality of service to jobs by using scheduling heuristics that calculate schedules based upon the expected run-times of each job on each machine. This paper analyzes an exhaustive scheduling algorithm that minimizes the time at which the last job completes, provided that all jobs execute for exactly their expected run-times. The authors show that if this assumption is violated, that is, if jobs do not execute for exactly their expected run-times, then this algorithm will underestimate the time at which the last job is expected to finish, sometimes substantially. The authors conclude that an algorithm that uses not only the expected run-times, but also their distributions, can obtain better schedules.
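The underestimate has a simple source: the makespan is the maximum of random completion times, and E[max_i X_i] >= max_i E[X_i], with strict inequality whenever the variables have real spread. A quick Monte Carlo check, under the assumption of two machines whose queues finish at exponentially distributed times, makes the gap concrete.

```python
import random

random.seed(2)
means = [10.0, 10.0]                 # mean finish time of each machine's queue
predicted = max(means)               # mean-based makespan estimate: 10.0

trials = 100_000
realized = sum(
    max(random.expovariate(1.0 / mu) for mu in means)  # actual last finish
    for _ in range(trials)
) / trials

print(predicted, realized)  # realized averages about 15, well above 10
```

Using full run-time distributions lets a scheduler estimate this expected maximum directly instead of systematically understating it.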

Collaboration


Dive into Debra A. Hensgen's collaboration network.

Top Co-Authors

Taylor Kidd

Naval Postgraduate School

Viktor K. Prasanna

University of Southern California

Lantz Moore

Naval Postgraduate School
