Publications


Featured research published by Richard F. Freund.


Journal of Parallel and Distributed Computing | 2001

A Comparison of Eleven Static Heuristics for Mapping a Class of Independent Tasks onto Heterogeneous Distributed Computing Systems

Tracy D. Braun; Howard Jay Siegel; Noah Beck; Ladislau Bölöni; Muthucumaru Maheswaran; Albert Reuther; James P. Robertson; Mitchell D. Theys; Bin Yao; Debra A. Hensgen; Richard F. Freund

Mixed-machine heterogeneous computing (HC) environments utilize a distributed suite of different high-performance machines, interconnected with high-speed links, to perform different computationally intensive applications that have diverse computational requirements. HC environments are well suited to meet the computational demands of large, diverse groups of tasks. The problem of optimally mapping (defined as matching and scheduling) these tasks onto the machines of a distributed HC environment has been shown, in general, to be NP-complete, requiring the development of heuristic techniques. Selecting the best heuristic to use in a given environment, however, remains a difficult problem, because comparisons are often clouded by different underlying assumptions in the original study of each heuristic. Therefore, a collection of 11 heuristics from the literature has been selected, adapted, implemented, and analyzed under one set of common assumptions. It is assumed that the heuristics derive a mapping statically (i.e., off-line). It is also assumed that a metatask (i.e., a set of independent, noncommunicating tasks) is being mapped and that the goal is to minimize the total execution time of the metatask. The 11 heuristics examined are Opportunistic Load Balancing, Minimum Execution Time, Minimum Completion Time, Min-min, Max-min, Duplex, Genetic Algorithm, Simulated Annealing, Genetic Simulated Annealing, Tabu, and A*. This study provides one even basis for comparison and insights into circumstances where one technique will outperform another. The evaluation procedure is specified, the heuristics are defined, and then comparison results are discussed. It is shown that for the cases studied here, the relatively simple Min-min heuristic performs well in comparison to the other techniques.
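
As an illustration of the mapping problem summarized above, here is a minimal sketch of the Min-min heuristic, assuming an expected-time-to-compute matrix etc[t][m] (estimated run time of task t on machine m); the names and numbers are illustrative, not taken from the paper.

```python
# Minimal sketch of the Min-min static mapping heuristic (illustrative, not the
# paper's implementation). etc[t][m] is the expected execution time of task t on
# machine m; the goal is to minimize the metatask's total execution time (makespan).

def min_min(etc):
    num_tasks, num_machines = len(etc), len(etc[0])
    ready = [0.0] * num_machines          # machine ready (availability) times
    unmapped = set(range(num_tasks))
    mapping = {}

    while unmapped:
        # For every unmapped task, find its minimum-completion-time machine.
        best = None                       # (completion_time, task, machine)
        for t in unmapped:
            m = min(range(num_machines), key=lambda j: ready[j] + etc[t][j])
            ct = ready[m] + etc[t][m]
            if best is None or ct < best[0]:
                best = (ct, t, m)
        # Map the task with the overall minimum completion time (the "min of the mins").
        ct, t, m = best
        mapping[t] = m
        ready[m] = ct
        unmapped.remove(t)

    return mapping, max(ready)            # mapping and resulting makespan


if __name__ == "__main__":
    etc = [[10, 20], [15, 5], [8, 12]]    # 3 tasks x 2 machines (made-up numbers)
    print(min_min(etc))
```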


Journal of Parallel and Distributed Computing | 1999

Dynamic Mapping of a Class of Independent Tasks onto Heterogeneous Computing Systems

Muthucumaru Maheswaran; Shoukat Ali; Howard Jay Siegel; Debra A. Hensgen; Richard F. Freund

Dynamic mapping (matching and scheduling) heuristics for a class of independent tasks using heterogeneous distributed computing systems are studied. Two types of mapping heuristics are considered, immediate mode and batch mode heuristics. Three new heuristics, one for batch mode and two for immediate mode, are introduced as part of this research. Simulation studies are performed to compare these heuristics with some existing ones. In total five immediate mode heuristics and three batch mode heuristics are examined. The immediate mode dynamic heuristics consider, to varying degrees and in different ways, task affinity for different machines and machine ready times. The batch mode dynamic heuristics consider these factors, as well as aging of tasks waiting to execute. The simulation results reveal that the choice of which dynamic mapping heuristic to use in a given heterogeneous environment depends on parameters such as (a) the structure of the heterogeneity among tasks and machines and (b) the arrival rate of the tasks.
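
To make the immediate-mode idea concrete, below is a hedged sketch of a Minimum-Completion-Time-style mapper that assigns each task the moment it arrives to the machine that would finish it earliest; the function and data names are assumptions for illustration, not the paper's code.

```python
# Hedged sketch of an immediate-mode dynamic mapper in the spirit of the
# Minimum Completion Time heuristic: each arriving task is mapped at once to the
# machine that would finish it earliest, given current machine ready times.
# Names and numbers are assumptions, not taken from the paper.

def mct_assign(ready, etc_row):
    """Map one arriving task; etc_row[m] is its expected run time on machine m."""
    best_m = min(range(len(ready)), key=lambda m: ready[m] + etc_row[m])
    ready[best_m] += etc_row[best_m]      # that machine is now busy for that long
    return best_m

if __name__ == "__main__":
    ready = [0.0, 0.0, 0.0]                       # three machines, all idle
    arrivals = [[4, 9, 7], [6, 2, 8], [5, 5, 1]]  # one ETC row per arriving task
    for i, etc_row in enumerate(arrivals):
        m = mct_assign(ready, etc_row)
        print(f"task {i} -> machine {m}, ready times {ready}")
```

A batch-mode heuristic would instead buffer arriving tasks and remap the whole batch at each scheduling event, which is where factors such as task aging come into play.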


Proceedings Seventh Heterogeneous Computing Workshop (HCW'98) | 1998

Scheduling resources in multi-user, heterogeneous, computing environments with SmartNet

Richard F. Freund; Michael Gherrity; Stephen L. Ambrosius; Mark Campbell; Mike Halderman; Debra A. Hensgen; Elaine G. Keith; Taylor Kidd; Matt Kussow; John D. Lima; Francesca Mirabile; Lantz Moore; Brad Rust; Howard Jay Siegel

It is increasingly common for computer users to have access to several computers on a network, and hence to be able to execute many of their tasks on any of several computers. The choice of which computers execute which tasks is commonly determined by users based on a knowledge of computer speeds for each task and the current load on each computer. A number of task scheduling systems have been developed that balance the load of the computers on the network, but such systems tend to minimize the idle time of the computers rather than minimize the idle time of the users. The paper focuses on the benefits that can be achieved when the scheduling system considers both the computer availabilities and the performance of each task on each computer. The SmartNet resource scheduling system is described and compared to two different resource allocation strategies: load balancing and user directed assignment. Results are presented where the operation of hundreds of different networks of computers running thousands of different mixes of tasks are simulated in a batch environment. These results indicate that, for the computer environments simulated, SmartNet outperforms both load balancing and user directed assignments, based on the maximum time users must wait for their tasks to finish.
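
The contrast described above can be shown with a toy experiment comparing a pure load-balancing policy against an assignment that also accounts for each task's speed on each machine; this is a simplified illustration under assumed timings, not the SmartNet scheduler itself.

```python
# Toy comparison of the policies discussed above: pure load balancing versus an
# assignment that also considers each task's expected run time on each machine.
# Simplified illustration under assumed timings, not SmartNet itself.
# etc[t][m] = expected run time of task t on machine m.

def load_balance(etc):
    """Ignore per-task speeds: send each task to the currently least-loaded machine."""
    load = [0.0] * len(etc[0])
    for row in etc:
        m = min(range(len(load)), key=lambda j: load[j])
        load[m] += row[m]
    return max(load)                      # makespan

def performance_aware(etc):
    """Consider both machine availability and the task's speed on each machine."""
    load = [0.0] * len(etc[0])
    for row in etc:
        m = min(range(len(load)), key=lambda j: load[j] + row[j])
        load[m] += row[m]
    return max(load)                      # makespan

if __name__ == "__main__":
    etc = [[2, 10], [2, 10], [12, 3], [12, 3]]        # made-up heterogeneous timings
    print("load balancing makespan:   ", load_balance(etc))        # 14
    print("performance-aware makespan:", performance_aware(etc))   # 6
```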


Proceedings. Eighth Heterogeneous Computing Workshop (HCW'99) | 1999

A comparison study of static mapping heuristics for a class of meta-tasks on heterogeneous computing systems

Tracy D. Braun; H.J. Siegal; Noah Beck; Ladislau Bölöni; Muthucumaru Maheswaran; Albert Reuther; James P. Robertson; Mitchell D. Theys; Bin Yao; Debra A. Hensgen; Richard F. Freund

Heterogeneous computing (HC) environments are well suited to meet the computational demands of large, diverse groups of tasks (i.e., a meta-task). The problem of mapping (defined as matching and scheduling) these tasks onto the machines of an HC environment has been shown, in general, to be NP-complete, requiring the development of heuristic techniques. Selecting the best heuristic to use in a given environment, however, remains a difficult problem, because comparisons are often clouded by different underlying assumptions in the original studies of each heuristic. Therefore, a collection of eleven heuristics from the literature has been selected, implemented, and analyzed under one set of common assumptions. The eleven heuristics examined are opportunistic load balancing, user-directed assignment, fast greedy, min-min, max-min, greedy, genetic algorithm, simulated annealing, genetic simulated annealing, tabu, and A*. This study provides one even basis for comparison and insights into circumstances where one technique will outperform another. The evaluation procedure is specified, the heuristics are defined, and then selected results are compared.
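
As a companion to the Min-min sketch shown earlier, a hedged sketch of the max-min variant listed above follows; it maps first the task whose best (minimum) completion time is largest, so the longest tasks are placed early. Again, this is illustrative rather than the authors' implementation.

```python
# Hedged sketch of the max-min variant (illustrative, not the authors' code):
# compute each unmapped task's minimum completion time over all machines, then
# map the task whose minimum is LARGEST, so the longest tasks are placed first.

def max_min(etc):
    """etc[t][m]: expected run time of task t on machine m."""
    num_machines = len(etc[0])
    ready = [0.0] * num_machines          # machine availability times
    unmapped = set(range(len(etc)))
    mapping = {}
    while unmapped:
        candidates = []
        for t in unmapped:
            m = min(range(num_machines), key=lambda j: ready[j] + etc[t][j])
            candidates.append((ready[m] + etc[t][m], t, m))
        ct, t, m = max(candidates)        # the "max" of the per-task minima
        mapping[t] = m
        ready[m] = ct
        unmapped.remove(t)
    return mapping, max(ready)            # mapping and resulting makespan

print(max_min([[10, 20], [15, 5], [8, 12]]))   # same toy ETC matrix as the Min-min sketch
```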


Proceedings. Workshop on Heterogeneous Processing | 1992

Augmenting the Optimal Selection Theory for Superconcurrency

Mu-Cheng Wang; Shin-Dug Kim; Mark A. Nichols; Richard F. Freund; Howard Jay Siegel; Wayne G. Nation

An approach for finding the optimal configuration of heterogeneous computer systems to solve a supercomputing problem is presented. Superconcurrency, as a form of distributed heterogeneous supercomputing, is an approach for matching and managing an optimally configured suite of super-speed machines to minimize the execution time on a given task. The approach performs best when the computational requirements for a given set of tasks are diverse. A supercomputing application task is decomposed into a collection of code segments, where the processing requirement is homogeneous in each code segment. The optimal selection theory has been proposed to choose the optimal configuration of machines for a supercomputing problem. This technique is based on code profiling and analytical benchmarking. Here, the previously presented optimal selection theory approach is augmented in two ways: the performance of code segments on non-optimal machine choices is incorporated and non-uniform decompositions of code segments are considered.
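
A rough sketch of the selection step described above follows: each code segment with a homogeneous processing requirement is matched to the machine type with the smallest profiled time. The segment names, machine types, and timings are hypothetical, and the sketch ignores inter-segment data transfer and the non-uniform decompositions that the augmented approach considers.

```python
# Rough, hypothetical sketch of the selection step: each code segment with a
# homogeneous processing requirement is matched to the machine type with the
# smallest profiled/benchmarked execution time. Segment names, machine types,
# and timings are invented for illustration.

# Hypothetical profiled times (seconds) per code segment on each machine type.
segment_times = {
    "vectorizable_loop":   {"vector": 3.0, "mimd": 9.0, "scalar": 20.0},
    "irregular_branching": {"vector": 15.0, "mimd": 6.0, "scalar": 10.0},
    "serial_setup":        {"vector": 4.0, "mimd": 4.0, "scalar": 2.0},
}

# Choose, for every segment, the machine type with the minimum estimated time.
assignment = {seg: min(times, key=times.get) for seg, times in segment_times.items()}
total_time = sum(times[assignment[seg]] for seg, times in segment_times.items())

print(assignment)   # which machine type runs each segment
print(total_time)   # estimated time if segments execute back-to-back (3 + 6 + 2 = 11)
```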


international symposium on parallel architectures algorithms and networks | 1996

SmartNet: a scheduling framework for heterogeneous computing

Richard F. Freund; Taylor Kidd; Debra A. Hensgen; Lantz Moore

SmartNet is a scheduling framework for heterogeneous systems. Preliminary conservative simulation results for one of the optimization criteria show a 1.21 improvement over Load Balancing and a 25.9 improvement over Limited Best Assignment, the two policies that evolved from homogeneous environments. SmartNet achieves these improvements through the implementation of several innovations. It recognizes and capitalizes on the inherent heterogeneity of computers in today's distributed environments; it recognizes and accounts for the underlying non-determinism of the distributed environment; it implements an original partitioning approach, making runtime prediction more accurate and useful; it effectively schedules based on all shared resource usage, including network characteristics; and it uses statistical and filtering techniques, making a greater amount of prediction information available to the scheduling engine. In this paper, the issues associated with automatically managing a heterogeneous environment are reviewed, SmartNet's architecture and implementation are described, and performance data is summarized.
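
The abstract mentions statistical and filtering techniques for runtime prediction. As one hedged illustration of that general idea (not SmartNet's actual method), the sketch below maintains an exponentially smoothed runtime estimate per (task type, machine) pair.

```python
# Hedged illustration only, not SmartNet's actual technique: an exponentially
# smoothed estimate of how long a given task type runs on a given machine,
# updated each time a new observation arrives.

class RuntimePredictor:
    def __init__(self, alpha=0.3):
        self.alpha = alpha          # smoothing factor: weight given to the newest sample
        self.estimate = {}          # (task_type, machine) -> predicted seconds

    def observe(self, task_type, machine, seconds):
        key = (task_type, machine)
        old = self.estimate.get(key, seconds)
        self.estimate[key] = (1 - self.alpha) * old + self.alpha * seconds

    def predict(self, task_type, machine, default=None):
        return self.estimate.get((task_type, machine), default)


if __name__ == "__main__":
    p = RuntimePredictor()
    for obs in (10.0, 12.0, 9.0, 11.0):      # made-up runtimes of "fft" on "hostA"
        p.observe("fft", "hostA", obs)
    print(round(p.predict("fft", "hostA"), 2))
```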


Information Sciences | 1998

Generational scheduling for dynamic task management in heterogeneous computing systems

Brent R. Carter; Daniel W. Watson; Richard F. Freund; Elaine G. Keith; Francesca Mirabile; Howard Jay Siegel

Heterogeneous computing (HC) is the coordinated use of different types of machines, networks, and interfaces in order to maximize performance and/or cost effectiveness. In recent years, research related to HC has addressed one of its most fundamental challenges: how to develop a schedule of tasks on a set of heterogeneous hosts that minimizes the time required to execute the given tasks. The development of such a schedule is made difficult by diverse processing abilities among the hosts, data and precedence dependencies among the tasks, and other factors. This paper outlines a straightforward approach to solving this problem, termed generational scheduling (GS). GS provides fast, efficient matching of tasks to hosts and requires little overhead to implement. This study introduces the GS approach and illustrates its effectiveness in terms of the time to determine schedules and the quality of schedules produced. A communication-inclusive extension of GS is presented to illustrate how GS can be used when the overhead of transferring data produced by some tasks and consumed by others is significant. Finally, to illustrate the effectiveness of GS in a real-world environment, a series of experiments is presented using GS in the SmartNet scheduling framework, developed at the US Navy's Naval Command, Control, and Ocean Surveillance Center in San Diego, California.
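
A simplified reading of the generational idea is sketched below, assuming tasks whose predecessors have all completed form a generation of independent tasks that is mapped with a fast greedy heuristic; the data structures, timings, and completion-time bookkeeping are assumptions for illustration, not the authors' implementation.

```python
# Simplified sketch of the generational idea (illustrative, not the authors' code):
# tasks whose predecessors have all completed form a "generation" of independent
# tasks; each generation is mapped to hosts with a fast heuristic before the next
# generation is formed. Completion-time bookkeeping is deliberately simplified.

def generations(deps):
    """deps: task -> set of predecessor tasks. Yields lists of ready tasks."""
    remaining = {t: set(p) for t, p in deps.items()}
    done = set()
    while remaining:
        ready = [t for t, p in remaining.items() if p <= done]
        if not ready:
            raise ValueError("cycle in task dependencies")
        yield ready
        done.update(ready)
        for t in ready:
            del remaining[t]

def schedule(deps, etc, num_hosts):
    """Map each generation greedily to the host with the earliest completion time."""
    ready_time = [0.0] * num_hosts
    plan = {}
    for gen in generations(deps):
        for t in gen:
            h = min(range(num_hosts), key=lambda j: ready_time[j] + etc[t][j])
            plan[t] = h
            ready_time[h] += etc[t][h]
    return plan, max(ready_time)

if __name__ == "__main__":
    deps = {"a": set(), "b": set(), "c": {"a", "b"}}   # task c waits on a and b
    etc = {"a": [3, 6], "b": [5, 2], "c": [4, 4]}      # run times on 2 hosts
    print(schedule(deps, etc, num_hosts=2))
```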


Cluster Computing | 2006

A flexible multi-dimensional QoS performance measure framework for distributed heterogeneous systems

Jong Kook Kim; Debra A. Hensgen; Taylor Kidd; Howard Jay Siegel; David St. John; Cynthia E. Irvine; Timothy E. Levin; N. Wayne Porter; Viktor K. Prasanna; Richard F. Freund

When users’ tasks in a distributed heterogeneous computing environment (e.g., cluster of heterogeneous computers) are allocated resources, the total demand placed on some system resources by the tasks, for a given interval of time, may exceed the availability of those resources. In such a case, some tasks may receive degraded service or be dropped from the system. One part of a measure to quantify the success of a resource management system (RMS) in such a distributed environment is the collective value of the tasks completed during an interval of time, as perceived by the user, application, or policy maker. The Flexible Integrated System Capability (FISC) measure presented here is a measure for quantifying this collective value. The FISC measure is a flexible multi-dimensional measure such that any task attribute can be inserted and may include priorities, versions of a task or data, deadlines, situational mode, security, application- and domain-specific QoS, and task dependencies. For an environment where it is important to investigate how well data communication requests are satisfied, the data communication requests satisfied can be the basis of the FISC measure instead of tasks completed. The motivation behind the FISC measure is to determine the performance of resource management schemes if tasks have multiple attributes that need to be satisfied. The goal of this measure is to compare the results of different resource management heuristics that are trying to achieve the same performance objective but with different approaches.
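
The actual FISC formulation is defined in the paper; the toy computation below only conveys the flavor of a flexible multi-attribute value, where each completed task contributes a value scaled by its priority and discounted if a deadline was missed or a degraded version was run. All fields and weights here are assumptions.

```python
# Toy illustration of a multi-attribute collective value (the real FISC measure is
# defined in the paper): each completed task contributes a value that scales with
# its priority and is discounted for a missed deadline or a degraded version.
# All fields and weights are assumptions made for this example.

completed_tasks = [
    {"priority": 3, "met_deadline": True,  "version_quality": 1.0},
    {"priority": 1, "met_deadline": False, "version_quality": 1.0},
    {"priority": 2, "met_deadline": True,  "version_quality": 0.5},  # degraded version ran
]

def task_value(task, deadline_penalty=0.25):
    value = task["priority"] * task["version_quality"]
    if not task["met_deadline"]:
        value *= deadline_penalty
    return value

collective_value = sum(task_value(t) for t in completed_tasks)
print(collective_value)   # 3.0 + 0.25 + 1.0 = 4.25
```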


International Journal of Systems Science | 1997

Work-based performance measurement and analysis of virtual heterogeneous machines

Stephen L. Ambrosius; Richard F. Freund; Stephen L. Scott; Howard Jay Siegel

Presented here is a set of methods and tools developed to provide transportable measurements of performance in heterogeneous networks of machines operating together as a single virtual heterogeneous machine (VHM). The methods are work-based rather than time-based, and yield significant analytic information. A technique for normalizing the measure of useful work performed across a heterogeneous network is proposed and the reasons for using a normalized measure are explored. It is shown that work-based performance measures are better than time-based ones because they may be (1) taken while a task is currently executing on a machine; (2) taken without interrupting production operation of the machine network; (3) used to compare disparate tasks; and (4) used to perform second-order analysis of machine network operation. This set of performance tools has been used to monitor the utilization of high-performance computing networks, provide feedback on algorithm design and determine the veracity of compu...
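
As a hedged illustration of normalizing useful work across a heterogeneous suite (not the paper's exact method), the sketch below converts each machine's raw progress into machine-independent work units via a per-machine benchmark rate, so utilization can be compared across disparate hosts. All rates and counts are made up.

```python
# Hedged illustration of normalizing "useful work" across a heterogeneous suite
# (not the paper's exact method): raw progress on each machine is converted into
# machine-independent work units using a per-machine benchmark rate, so
# utilization can be compared and summed across disparate hosts.

# Hypothetical benchmark rates: reference operations per second for each machine.
benchmark_rate = {"cray": 1000.0, "sparc_farm": 250.0, "workstation": 50.0}

# Raw operations each machine actually completed during the measurement interval.
raw_work = {"cray": 420000.0, "sparc_farm": 120000.0, "workstation": 27000.0}

interval_seconds = 600.0

for machine, ops in raw_work.items():
    normalized = ops / benchmark_rate[machine]     # machine-independent "work seconds"
    utilization = normalized / interval_seconds    # share of the interval spent on useful work
    print(f"{machine:12s} normalized work = {normalized:6.1f} s, utilization = {utilization:.0%}")
```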


Journal of Parallel and Distributed Computing | 1995

Evaluation of two programming paradigms for heterogeneous computing

Song Chen; Mary Mehrnoosh Eshaghian; Richard F. Freund; Jerry L. Potter; Ying-Chieh Wu

In this paper, we evaluate two different programming paradigms for heterogeneous computing, Cluster-M and Heterogeneous Associative Computing (HAsC). These paradigms can efficiently support heterogeneous networks by preserving a level of abstraction without containing any architectural details. The paradigms are architecturally independent and scalable for various network and problem sizes. Cluster-M can be applied to both coarse-grained and fine-grained networks. Cluster-M provides an environment for porting heterogeneous tasks onto the machines in a heterogeneous suite such that resource utilization is maximized and the overall execution time is minimized. HAsC models a heterogeneous network as a coarse-grained associative computer. It is designed to optimize the execution of problems where the program size is small compared with the amount of data processed. Unlike other existing heterogeneous orchestration tools which are MIMD based, HAsC is for data-parallel SIMD associative computing. Ease of programming and execution speed are the primary goals of HAsC. We evaluate how these two paradigms can be used together to provide an efficient scheme for heterogeneous programming. Finally, their scalability issues are discussed.

Collaboration


Dive into Richard F. Freund's collaborations.

Top Co-Authors

Taylor Kidd

Naval Postgraduate School

Elaine G. Keith

Science Applications International Corporation

Mary Mehrnoosh Eshaghian

New Jersey Institute of Technology

Viktor K. Prasanna

University of Southern California
