Publications


Featured research published by Mitchell D. Theys.


Journal of Parallel and Distributed Computing | 2001

A Comparison of Eleven Static Heuristics for Mapping a Class of Independent Tasks onto Heterogeneous Distributed Computing Systems

Tracy D. Braun; Howard Jay Siegel; Noah Beck; Ladislau Bölöni; Muthucumaru Maheswaran; Albert Reuther; James P. Robertson; Mitchell D. Theys; Bin Yao; Debra A. Hensgen; Richard F. Freund

Mixed-machine heterogeneous computing (HC) environments utilize a distributed suite of different high-performance machines, interconnected with high-speed links, to perform different computationally intensive applications that have diverse computational requirements. HC environments are well suited to meet the computational demands of large, diverse groups of tasks. The problem of optimally mapping (defined as matching and scheduling) these tasks onto the machines of a distributed HC environment has been shown, in general, to be NP-complete, requiring the development of heuristic techniques. Selecting the best heuristic to use in a given environment, however, remains a difficult problem, because comparisons are often clouded by different underlying assumptions in the original study of each heuristic. Therefore, a collection of 11 heuristics from the literature has been selected, adapted, implemented, and analyzed under one set of common assumptions. It is assumed that the heuristics derive a mapping statically (i.e., off-line). It is also assumed that a metatask (i.e., a set of independent, noncommunicating tasks) is being mapped and that the goal is to minimize the total execution time of the metatask. The 11 heuristics examined are Opportunistic Load Balancing, Minimum Execution Time, Minimum Completion Time, Min-min, Max-min, Duplex, Genetic Algorithm, Simulated Annealing, Genetic Simulated Annealing, Tabu, and A*. This study provides one even basis for comparison and insights into circumstances where one technique will outperform another. The evaluation procedure is specified, the heuristics are defined, and then comparison results are discussed. It is shown that for the cases studied here, the relatively simple Min-min heuristic performs well in comparison to the other techniques.
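As a rough illustration of one of these techniques, the sketch below implements the Min-min idea in Python under simplified assumptions (a static matrix of expected execution times, etc[task][machine], and no inter-task communication); it is not the paper's code or simulation setup.

```python
# Minimal sketch of the Min-min mapping heuristic (illustrative only, not the
# paper's implementation). etc[t][m] is the expected execution time of task t
# on machine m; the goal is to minimize the metatask's total execution time.

def min_min(etc):
    num_machines = len(etc[0])
    ready = [0.0] * num_machines              # machine availability times
    unmapped = set(range(len(etc)))
    mapping = {}                              # task -> machine

    while unmapped:
        best = None                           # (completion_time, task, machine)
        for t in unmapped:
            # Minimum completion time of task t over all machines.
            ct, m = min((ready[m] + etc[t][m], m) for m in range(num_machines))
            if best is None or ct < best[0]:
                best = (ct, t, m)
        ct, t, m = best                       # commit the task with the smallest minimum
        mapping[t] = m
        ready[m] = ct
        unmapped.remove(t)
    return mapping, max(ready)                # mapping and resulting makespan

# Tiny example: 3 tasks, 2 machines.
etc = [[4.0, 9.0], [3.0, 1.0], [5.0, 2.0]]
print(min_min(etc))                           # ({1: 1, 2: 1, 0: 0}, 4.0)
```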


Proceedings of the Eighth Heterogeneous Computing Workshop (HCW'99) | 1999

A comparison study of static mapping heuristics for a class of meta-tasks on heterogeneous computing systems

Tracy D. Braun; H.J. Siegel; Noah Beck; Ladislau Bölöni; Muthucumaru Maheswaran; Albert Reuther; James P. Robertson; Mitchell D. Theys; Bin Yao; Debra A. Hensgen; Richard F. Freund

Heterogeneous computing (HC) environments are well suited to meet the computational demands of large, diverse groups of tasks (i.e., a meta-task). The problem of mapping (defined as matching and scheduling) these tasks onto the machines of an HC environment has been shown, in general, to be NP-complete, requiring the development of heuristic techniques. Selecting the best heuristic to use in a given environment, however, remains a difficult problem, because comparisons are often clouded by different underlying assumptions in the original studies of each heuristic. Therefore, a collection of eleven heuristics from the literature has been selected, implemented, and analyzed under one set of common assumptions. The eleven heuristics examined are opportunistic load balancing, user-directed assignment, fast greedy, min-min, max-min, greedy, genetic algorithm, simulated annealing, genetic simulated annealing, tabu, and A*. This study provides one even basis for comparison and insights into circumstances where one technique will outperform another. The evaluation procedure is specified, the heuristics are defined, and then selected results are compared.
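To contrast two of the heuristics named above, the Max-min variant follows the same structure as the Min-min sketch shown earlier but commits, at each step, the task whose best completion time is largest. Again, this is only an illustration under the same simplified expected-execution-time matrix assumption, not the study's implementation.

```python
# Sketch of the max-min variant (illustrative only): identical to Min-min except
# that the task committed at each step is the one whose best (minimum)
# completion time is largest, so long tasks are placed early rather than last.

def max_min(etc):
    num_machines = len(etc[0])
    ready = [0.0] * num_machines
    unmapped = set(range(len(etc)))
    mapping = {}

    while unmapped:
        best = None                           # (completion_time, task, machine)
        for t in unmapped:
            ct, m = min((ready[m] + etc[t][m], m) for m in range(num_machines))
            if best is None or ct > best[0]:  # "max" over the per-task minima
                best = (ct, t, m)
        ct, t, m = best
        mapping[t] = m
        ready[m] = ct
        unmapped.remove(t)
    return mapping, max(ready)
```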


Symposium on Reliable Distributed Systems | 1998

A taxonomy for describing matching and scheduling heuristics for mixed-machine heterogeneous computing systems

Tracy D. Braun; Howard Jay Siegel; Noah Beck; Ladislau Bölöni; Muthucumaru Maheswaran; Albert Reuther; James P. Robertson; Mitchell D. Theys; Bin Yao

The problem of mapping (defined as matching and scheduling) tasks and communications onto multiple machines and networks in a heterogeneous computing (HC) environment has been shown to be NP-complete, in general, requiring the development of heuristic techniques. Many different types of mapping heuristics have been developed in recent years. However, selecting the best heuristic to use in any given scenario remains a difficult problem. Factors making this selection difficult are discussed. Motivated by these difficulties, a new taxonomy for classifying mapping heuristics for HC environments is proposed (the Purdue HC Taxonomy). The taxonomy is defined in three major parts: the models used for applications and communication requests; the models used for target hardware platforms; and the characteristics of mapping heuristics. Each part of the taxonomy is described, with examples given to help clarify the taxonomy. The benefits and uses of this taxonomy are also discussed.


Advances in Computers | 2005

Characterizing Resource Allocation Heuristics for Heterogeneous Computing Systems

Shoukat Ali; Tracy D. Braun; Howard Jay Siegel; Anthony A. Maciejewski; Noah Beck; Ladislau Bölöni; Muthucumaru Maheswaran; Albert Reuther; James P. Robertson; Mitchell D. Theys; Bin Yao

In many distributed computing environments, collections of applications need to be processed using a set of heterogeneous computing (HC) resources to maximize some performance goal. An important research problem in these environments is how to assign resources to applications (matching) and order the execution of the applications (scheduling) so as to maximize some performance criterion without violating any constraints. This process of matching and scheduling is called mapping. To make meaningful comparisons among mapping heuristics, a system designer needs to understand the assumptions made by the heuristics for (1) the model used for the application and communication tasks, (2) the model used for system platforms, and (3) the attributes of the mapping heuristics. This chapter presents a three-part classification scheme (3PCS) for HC systems. The 3PCS is useful for researchers who want to (a) understand a mapper given in the literature, (b) describe their design of a mapper more thoroughly by using a common standard, and (c) select a mapper to match a given real-world environment.
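Purely as an illustration of how a classification along these three parts might be recorded, the sketch below uses hypothetical field names and example values; the actual 3PCS categories and terminology are those defined in the chapter.

```python
# Hypothetical sketch of recording a mapper's classification along the three
# parts described above (application model, platform model, heuristic attributes).
# The field names and example strings are illustrative, not the chapter's 3PCS terms.
from dataclasses import dataclass

@dataclass
class MapperClassification:
    application_model: str     # e.g., "independent tasks, no inter-task communication"
    platform_model: str        # e.g., "dedicated heterogeneous machines, fully connected"
    heuristic_attributes: str  # e.g., "static (off-line), greedy, minimizes makespan"

# Example entry for a Min-min style mapper.
min_min_entry = MapperClassification(
    application_model="independent tasks, no inter-task communication",
    platform_model="dedicated heterogeneous machines, high-speed links",
    heuristic_attributes="static (off-line), greedy, minimizes makespan",
)
print(min_min_entry)
```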


Proceedings of the Seventh Heterogeneous Computing Workshop (HCW'98) | 1998

A mathematical model, heuristic, and simulation study for a basic data staging problem in a heterogeneous networking environment

Min Tan; Mitchell D. Theys; Howard Jay Siegel; Noah Beck; Michael Jurczyk

Data staging is an important data management problem for a distributed heterogeneous networking environment, where each data storage location and intermediate node may have specific data available, storage limitations, and communication links. Sites in the network request data items, and each item is associated with a specific deadline and priority. It is assumed that not all requests can be satisfied by their deadlines. The work concentrates on solving a basic version of the data staging problem in which all parameter values for the communication system and the data request information represent the best known information collected so far and stay fixed throughout the scheduling process. A mathematical model for the basic data staging problem is introduced. Then, a heuristic based on a multiple-source shortest-path algorithm for finding a suboptimal schedule of the communication steps for data staging is presented. A simulation study is provided, which evaluates the performance of the proposed heuristic. The results show the advantages of the proposed heuristic over two random-based scheduling techniques. This research, based on the simplified static model, serves as a necessary step toward solving the more realistic and complicated version of the data staging problem involving dynamic scheduling, fault tolerance, and determining where to stage data.
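The flavor of a shortest-path-driven staging step can be sketched as follows; this is a greatly simplified illustration (single-item requests, link times as edge weights, no storage constraints), not the paper's model or heuristic.

```python
# Greatly simplified sketch of shortest-path-based data staging (illustrative only;
# the paper's cost model, storage limits, and scheduling interactions are richer).
import heapq

def cheapest_delivery_time(graph, sources, dest):
    """Multiple-source Dijkstra: minimum total link time from any node already
    holding the data item to the requesting site. graph[u] = [(v, link_time), ...]."""
    dist = {s: 0.0 for s in sources}
    heap = [(0.0, s) for s in sources]
    heapq.heapify(heap)
    while heap:
        d, u = heapq.heappop(heap)
        if u == dest:
            return d
        if d > dist.get(u, float("inf")):
            continue                                   # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")                                # destination unreachable

def schedule_requests(graph, requests):
    """Serve requests in (priority, deadline) order and keep those whose cheapest
    delivery time meets the deadline. Each request is a dict with 'sources',
    'dest', 'deadline', and 'priority' keys (an illustrative format)."""
    satisfied = []
    for req in sorted(requests, key=lambda r: (-r["priority"], r["deadline"])):
        t = cheapest_delivery_time(graph, req["sources"], req["dest"])
        if t <= req["deadline"]:
            satisfied.append((req["dest"], t))
    return satisfied

# Tiny example: an item stored at A and B, requested at D before time 5.
graph = {"A": [("C", 2.0)], "B": [("C", 4.0)], "C": [("D", 1.0)]}
print(schedule_requests(graph, [{"sources": ["A", "B"], "dest": "D",
                                 "deadline": 5.0, "priority": 1}]))
```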


IEEE Transactions on Education | 2003

Computer engineering curriculum in the new millennium

Andrew D. McGettrick; Mitchell D. Theys; David L. Soldan; Pradip K. Srimani

Currently there is a joint activity (referred to as Computing Curricula 2001, shortened to CC2001) involving the Association for Computing Machinery and the IEEE Computer Society, which is producing curriculum guidance for the broad area of computing. Within this activity, a volume on computer engineering is being developed. This volume addresses the important area of the design and development of computers and computer-based systems. Current curricula must be capable of evolving to meet the more immediate needs of students and industry. The purpose of this paper is to look at areas of future development in computer engineering over the next ten years (to 2013) and beyond and to consider the work of the Computer Engineering volume of CC2001 in this context.


International Symposium on Parallel Architectures, Algorithms and Networks | 1996

The PASM project: a study of reconfigurable parallel computing

Howard Jay Siegel; Tracy D. Braun; Henry G. Dietz; Mark Bernd Kulaczewski; Muthucumaru Maheswaran; Pierre H. Pero; Janet M. Siegel; John E. So; Min Tan; Mitchell D. Theys; Lee Wang

PASM is a concept for a parallel processing system that allows experimentation with different architectural design alternatives. PASM is dynamically reconfigurable along three dimensions: partitionability into independent or communicating submachines, variable interprocessor connections, and mixed-mode SIMD/MIMD parallelism. With mixed-mode parallelism, a program can switch between SIMD (synchronous) and MIMD (asynchronous) parallelism at instruction-level granularity, allowing the use of both modes in a single machine. The PASM concept is presented, showing the ways in which reconfiguration can be accomplished. Trade-offs among SIMD, MIMD, and mixed-mode parallelism are explored. The small-scale PASM prototype with 16 processing elements is described. The ELP mixed-mode programming language used on the prototype is discussed. An example of a prototype-based study that demonstrates the potential of mixed-mode parallelism is given.
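PASM's mode switching is a hardware capability, but the idea of alternating lockstep (data-parallel) and asynchronous (task-parallel) phases can be loosely mimicked in software; the sketch below is only an analogy, not PASM, ELP, or the prototype.

```python
# Loose software analogy of mixed-mode execution (not PASM itself): a lockstep,
# data-parallel phase followed by an asynchronous, task-parallel phase whose
# tasks follow divergent control flow.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

data = np.arange(16, dtype=float)

# "SIMD-like" phase: one operation applied to every element in lockstep.
data = data * 2.0 + 1.0

# "MIMD-like" phase: independent tasks run asynchronously on chunks of the data.
def independent_task(chunk):
    # Each task takes its own branch, something lockstep SIMD cannot do cheaply.
    return chunk.sum() if chunk[0] > 10 else chunk.prod()

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(independent_task, np.split(data, 4)))
print(results)
```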


international conference on parallel processing | 1997

Background compensation and an active-camera motion tracking algorithm

Rohit Gupta; Mitchell D. Theys; Howard Jay Siegel

Motion tracking using an active camera is a very computationally complex problem. Existing serial algorithms have provided frame rates that are much lower than those desired, mainly because of the lack of computational resources. Parallel computers are well suited to image processing tasks and can provide the computational power that is required for real-time motion tracking algorithms. This paper develops a parallel implementation of a known serial motion tracking algorithm, with the goals of achieving greater-than-real-time frame rates and of studying the effects of data layout, choice of parallel mode of execution, and machine size on the execution time of this algorithm. A distinguishing feature of this application study is that the portion of each image frame that is relevant changes from one frame to the next based on the camera motion. This impacts the effect of the chosen data layout on the needed inter-processor data transfers and the way in which work is distributed among the processors. Experiments were performed to determine which data layout performs better for a given image size and number of processors. The parallel computers used in this study are the MasPar MP-1, Intel Paragon, and PASM. Different modes are examined, and it is determined that the mixed-mode implementation is faster than the SIMD or MIMD implementations.
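The data-layout trade-off mentioned in the abstract (how each frame is partitioned across processors, and how much data must move when the relevant region shifts with camera motion) can be illustrated with a toy proxy; the sketch below is not the paper's algorithm or its actual layouts, just a rough comparison of row-striped versus block partitioning.

```python
# Toy comparison of two data layouts for an image distributed over P processors
# (row strips vs. square blocks): count how many pixels of a shifted region of
# interest are owned by a different processor after the shift. This is a rough
# proxy for layout-dependent inter-processor transfers, not the paper's method.
import numpy as np

H = W = 256                                 # frame size (assumed for illustration)
P = 16                                      # number of processors (a perfect square here)

def owner_rows(y, x):                       # row-striped layout: strips of H // P rows
    return y // (H // P)

def owner_blocks(y, x):                     # block layout: sqrt(P) x sqrt(P) grid
    side = int(P ** 0.5)
    return (y // (H // side)) * side + x // (W // side)

def moved_pixels(owner, roi_before, roi_after):
    """Count ROI pixels whose owning processor differs between the two frames."""
    (y0, x0, h, w), (y1, x1, _, _) = roi_before, roi_after
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    return int(np.count_nonzero(owner(y0 + ys, x0 + xs) != owner(y1 + ys, x1 + xs)))

roi_before = (100, 100, 64, 64)             # relevant region in frame k (y, x, height, width)
roi_after = (110, 120, 64, 64)              # same region after camera motion in frame k+1
print("row strips :", moved_pixels(owner_rows, roi_before, roi_after))
print("blocks     :", moved_pixels(owner_blocks, roi_before, roi_after))
```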


Frontiers in Education | 2003

Lessons learned from teaching computer architecture to computer science students

Mitchell D. Theys; Patrick A. Troy

Computer science students require more detail about computer architecture than a black-box approach can provide. Teaching the appropriate level of detail and assuring that students understand why the subject is taught are nontrivial tasks. In the Computer Science Department at the University of Illinois at Chicago, the approach taken is to present the material from the typical three-course computer architecture sequence as a two-course sequence. In addition, a variety of simulators are utilized to strengthen the material and help control the topic flow. The simulators used include a programmable logic array software package, a MIPS assembly simulator, and a locally created control code simulator. Teaching the two-course sequence has proven to be challenging. This paper presents lessons learned concerning: (1) the level of coverage required; (2) the simulators used; (3) how to maintain topic flow; and (4) future plans for improving the sequence.


Journal of Parallel and Distributed Computing | 2001

What Are the Top Ten Most Influential Parallel and Distributed Processing Concepts of the Past Millennium?

Mitchell D. Theys; Shoukat Ali; Howard Jay Siegel; K. Mani Chandy; Kai Hwang; Ken Kennedy; Lui Sha; Kang G. Shin; Marc Snir; Larry Snyder; Thomas Lawrence Sterling

This is a report on a panel titled “What are the top ten most influential parallel and distributed processing concepts of the last millennium?” that was held at the IEEE Computer Society-sponsored “14th International Parallel and Distributed Processing Symposium (IPDPS 2000).” The panelists were chosen to represent a variety of perspectives and technical areas. After the panelists had presented their choices for the top ten, an open discussion was held among the audience and panelists. At the end of the discussion, a ballot was distributed for the audience to vote on the top ten concepts (in arbitrary order). The voting identified the following ten most influential parallel and distributed processing concepts of the last millennium: (1) Amdahl's law and scalability, (2) Arpanet and Internet, (3) pipelining, (4) divide-and-conquer approach, (5) multiprogramming, (6) synchronization (including semaphores), (7) load balancing, (8) message passing and packet switching, (9) cluster computing, and (10) multithreaded (lightweight) program execution.
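For reference, the first concept on the list has a compact standard form: Amdahl's law bounds the speedup obtainable when only a fraction p of a program benefits from N processors (a textbook statement, not something derived in the panel report).

```latex
S(N) = \frac{1}{(1 - p) + \frac{p}{N}}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - p}
```

For example, with p = 0.9 the speedup can never exceed 10, no matter how many processors are used.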

Collaboration


Dive into Mitchell D. Theys's collaborations.

Top Co-Authors

Noah Beck

Advanced Micro Devices


Albert Reuther

Massachusetts Institute of Technology


Ladislau Bölöni

University of Central Florida
