Uwe Schwiegelshohn
Technical University of Dortmund
Publications
Featured research published by Uwe Schwiegelshohn.
Job Scheduling Strategies for Parallel Processing | 1997
Dror G. Feitelson; Larry Rudolph; Uwe Schwiegelshohn; Kenneth C. Sevcik; Parkson Wong
The scheduling of jobs on parallel supercomputers is becoming the subject of much research. However, there is concern about the divergence of theory and practice. We review theoretical research in this area and give recommendations based on recent results. This is contrasted with a proposal for standard interfaces among the components of a scheduling system, which has grown from requirements in the field.
Grid Computing | 2000
Volker Hamscher; Uwe Schwiegelshohn; Achim Streit; Ramin Yahyapour
In this paper, we discuss typical scheduling structures that occur in computational grids. Scheduling algorithms and selection strategies applicable to these structures are introduced and classified. Simulations were used to evaluate these aspects for combinations of different job and machine models. Some of the results are presented in this paper and discussed in a qualitative and quantitative way. For hierarchical scheduling, a common scheduling structure, the simulation results confirmed the benefit of backfilling. An unexpected result was that FCFS proves to perform better than backfilling when a central job pool is used.
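A minimal sketch of such a comparison, assuming a single machine with P identical processors and hypothetical (submit time, processors, run time) job tuples; the backfilling rule here is an aggressive simplification without reservations, not the exact strategies or workloads evaluated in the paper:

```python
import heapq

def simulate(jobs, P, backfill=False):
    """Event-driven sketch. jobs: list of (submit, procs, run) tuples.
    backfill=False: strict FCFS, the queue head blocks all later jobs.
    backfill=True: any queued job that fits in the free processors may start
    (aggressive backfilling without reservations, a simplification).
    Returns the average response time (completion minus submission)."""
    jobs = sorted(jobs)
    queue, running = [], []            # running: heap of (end_time, procs)
    free, t, i, resp = P, 0.0, 0, []
    while i < len(jobs) or queue or running:
        while i < len(jobs) and jobs[i][0] <= t:   # admit submitted jobs
            queue.append(jobs[i]); i += 1
        started = True
        while started:                             # start whatever fits
            started = False
            for k, (submit, procs, run) in enumerate(queue):
                if procs <= free:
                    free -= procs
                    heapq.heappush(running, (t + run, procs))
                    resp.append(t + run - submit)
                    queue.pop(k)
                    started = True
                    break
                if not backfill:                   # FCFS: the head blocks
                    break
        nxt = []                                   # advance to next event
        if running: nxt.append(running[0][0])
        if i < len(jobs): nxt.append(jobs[i][0])
        if not nxt: break
        t = min(nxt)
        while running and running[0][0] <= t:      # release finished jobs
            _, procs = heapq.heappop(running)
            free += procs
    return sum(resp) / len(resp)

trace = [(0, 4, 10), (1, 8, 5), (2, 2, 3)]         # hypothetical jobs
print(simulate(trace, P=8), simulate(trace, P=8, backfill=True))
```

On this toy trace the backfilling variant improves the average response time, which matches the commonly observed benefit; the central job-pool effect reported in the paper is not modeled here.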
Job Scheduling Strategies for Parallel Processing | 2004
Dror G. Feitelson; Larry Rudolph; Uwe Schwiegelshohn
The popularity of research on the scheduling of parallel jobs demands a periodic review of the status of the field. Indeed, several surveys have been written on this topic in the context of parallel supercomputers [17, 20]. The purpose of the present paper is to update that material, and to extend it to include work concerning clusters and the grid.
Cluster Computing and the Grid | 2002
Carsten Ernemann; Volker Hamscher; Uwe Schwiegelshohn; Ramin Yahyapour; Achim Streit
This paper addresses the potential benefit of sharing jobs between independent sites in a grid computing environment. The aspect of parallel multi-site job execution across different sites is also discussed. To this end, various scheduling algorithms have been simulated for several machine configurations with different workloads that have been derived from real traces. The results showed that a significant improvement in terms of a smaller average response time is achievable. The usage of multi-site applications can additionally improve the results as long as the increase in execution time due to communication overhead is limited to about 25%.
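The 25% threshold can be illustrated with a toy decision rule, assuming a job may either wait for a single site with enough free processors or start immediately across sites with its execution time inflated by the communication overhead; the function and its inputs are hypothetical, not the paper's model:

```python
def choose_execution(job_runtime, single_site_wait, multisite_possible,
                     overhead=0.25):
    """Toy rule: compare the response time of waiting for one site against
    starting now across several sites with the run time inflated by the
    communication-overhead factor (the study's threshold is about 25%)."""
    single_response = single_site_wait + job_runtime
    multi_response = (job_runtime * (1.0 + overhead)
                      if multisite_possible else float("inf"))
    return "multi-site" if multi_response < single_response else "single-site"

# A job with 100 s run time and a 30 s single-site wait: 125 s multi-site
# beats 130 s single-site, so splitting pays off despite the overhead.
print(choose_execution(100.0, 30.0, True))        # -> multi-site
```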
Job Scheduling Strategies for Parallel Processing | 1999
Steve J. Chapin; Walfredo Cirne; Dror G. Feitelson; James Patton Jones; Scott T. Leutenegger; Uwe Schwiegelshohn; Warren Smith; David Talby
The evaluation of parallel job schedulers hinges on the workloads used. It is suggested that this be standardized, in terms of both format and content, so as to ease the evaluation and comparison of different systems. The question remains whether this can encompass both traditional parallel systems and metacomputing systems. This paper is based on a panel on this subject that was held at the workshop, and the ensuing discussion; its authors are both the panel members and participants from the audience. Naturally, not all of us agree with all the opinions expressed here...
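As a rough illustration of what a standardized trace enables, the sketch below reads jobs from an SWF-style file (';'-prefixed header lines, whitespace-separated numeric fields per job); the column positions are assumptions for illustration, not a definitive statement of the format:

```python
SUBMIT, RUNTIME, PROCS = 1, 3, 4     # assumed 0-based column positions

def read_trace(path):
    """Yield one (submit_time, processors, run_time) tuple per job line,
    skipping ';' header lines and records with missing (-1) values."""
    with open(path) as f:
        for line in f:
            if not line.strip() or line.startswith(";"):
                continue
            fields = line.split()
            submit = float(fields[SUBMIT])
            run = float(fields[RUNTIME])
            procs = int(fields[PROCS])
            if run < 0 or procs <= 0:
                continue
            yield (submit, procs, run)
```

A common trace format lets the same parser feed simulations of very different schedulers, which is the comparability the panel argues for.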
Job Scheduling Strategies for Parallel Processing | 1999
Jochen Krallmann; Uwe Schwiegelshohn; Ramin Yahyapour
In this paper we suggest a strategy for designing job scheduling systems. To this end, we first split a scheduling system into three components: scheduling policy, objective function, and scheduling algorithm. After discussing the relationship among these components, we explain our strategy with the help of a simple example. The main focus of this example is the selection and evaluation of several scheduling algorithms.
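A minimal sketch of that three-way split, with hypothetical placeholder implementations for each component (not the example used in the paper):

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

Job = Tuple[float, int, float]            # (submit time, processors, run time)
Schedule = List[Tuple[Job, float]]        # (job, start time) pairs

@dataclass
class SchedulingSystem:
    policy: Callable[[List[Job]], List[Job]]         # which jobs, in which order
    objective: Callable[[Schedule], float]           # what "good" means
    algorithm: Callable[[List[Job], int], Schedule]  # how jobs are placed

def fcfs_policy(jobs: List[Job]) -> List[Job]:
    return sorted(jobs)                              # order by submission time

def avg_response(sched: Schedule) -> float:
    return sum(start + j[2] - j[0] for j, start in sched) / len(sched)

def serial_list_schedule(jobs: List[Job], P: int) -> Schedule:
    """Deliberately naive algorithm: place one job at a time, ignoring P."""
    t, sched = 0.0, []
    for job in jobs:
        submit, _, run = job
        t = max(t, submit)
        sched.append((job, t))
        t += run
    return sched

system = SchedulingSystem(fcfs_policy, avg_response, serial_list_schedule)
jobs = [(0.0, 4, 10.0), (2.0, 2, 5.0)]
print(system.objective(system.algorithm(system.policy(jobs), 8)))   # -> 11.5
```

Keeping the three components behind separate interfaces makes it easy to swap in another algorithm and re-evaluate it under the same policy and objective, which is the evaluation workflow the abstract describes.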
ACM Symposium on Parallel Algorithms and Architectures | 1994
John Turek; Walter Ludwig; Joel L. Wolf; Lisa Fleischer; Prasoon Tiwari; Jason Glasgow; Uwe Schwiegelshohn; Philip S. Yu
A parallelizable (or malleable) task is one which can be run on an arbitrary number of processors, with a task execution time that depends on the number of processors allotted to it. Consider a system of M independent parallelizable tasks which are to be scheduled without preemption on a parallel computer consisting of P identical processors. For each task, the execution time is a known function of the number of processors allotted to it. The goal is to find (1) for each task i, an allotment of processors β_i, and (2) overall, a non-preemptive schedule assigning the tasks to the processors which minimizes the average response time of the tasks. Equivalently, we can minimize the flow time, which is the sum of the completion times of the tasks. In this paper we tackle the problem of finding a schedule with minimum average response time in the special case where each task in the system has sublinear speedup. This natural restriction on the task execution time means simply that the efficiency of a task decreases or remains constant as the number of processors allotted to it increases. The scheduling problem with sublinear speedups has been shown to be NP-complete in the strong sense. We therefore focus on finding a polynomial time algorithm whose solution comes within a fixed multiplicative constant of optimal. In particular, we give an algorithm which finds a schedule having a response time that is within 2 times that of the optimal schedule and which runs in O(M(M^2 + P)) time.
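The objective can be made concrete with a small check-and-evaluate sketch; this only verifies and scores a given schedule, it is not the paper's 2-approximation algorithm, and the execution-time function shown is a hypothetical example of sublinear speedup:

```python
def flow_time(tasks, P):
    """tasks: list of (start, allotted_procs, exec_time), with exec_time already
    evaluated at the chosen allotment. Verifies that at most P processors are in
    use at any time and returns the flow time, i.e. the sum of completion times."""
    events = []
    for start, procs, exec_time in tasks:
        events.append((start, procs))                # processors acquired
        events.append((start + exec_time, -procs))   # processors released
    in_use = 0
    for _, delta in sorted(events):                  # releases sort before acquires on ties
        in_use += delta
        if in_use > P:
            raise ValueError("allotment exceeds P processors")
    return sum(start + exec_time for start, _, exec_time in tasks)

# Hypothetical execution-time function with sublinear speedup:
# T(p) = W / sqrt(p), so efficiency W / (p * T(p)) = 1 / sqrt(p) decreases in p.
exec_time = lambda work, p: work / p ** 0.5
print(flow_time([(0.0, 4, exec_time(100.0, 4)), (0.0, 4, exec_time(50.0, 4))], P=8))
```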
Journal of Scheduling | 2000
Uwe Schwiegelshohn; Ramin Yahyapour
This paper introduces a new preemptive algorithm that is well suited for fair on-line scheduling of parallel jobs. Fairness is achieved by selecting job weights to be equal to the resource consumption of the job and by limiting the time span a job can be delayed by other jobs submitted after it. Further, the processing time of a job is not known when the job is released. It is proven that the algorithm achieves a constant competitive ratio for both the makespan and the weighted completion time for the given weight selection. Finally, the algorithm is also experimentally evaluated with the help of workload traces.
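The weight choice and the resulting objective can be written in a few lines; the sketch below assumes "resource consumption" means processors times processing time, which is the usual reading but should be taken from the paper itself:

```python
def weighted_completion_time(jobs):
    """jobs: list of (procs, run_time, completion_time) triples.
    Each weight w_j is the job's resource consumption, here procs_j * run_j."""
    return sum(procs * run * completion for procs, run, completion in jobs)

print(weighted_completion_time([(4, 10.0, 12.0), (1, 5.0, 20.0)]))  # 4*10*12 + 1*5*20
```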
SIAM Journal on Scientific and Statistical Computing | 1991
Jürgen Götze; Uwe Schwiegelshohn
This paper presents a square-root- and division-free Givens rotation (SDFG) to be applied to the QR decomposition (QRD) for solving linear least squares problems on systolic arrays. The SDFG is based on a special kind of number representation of the matrix elements and can be executed by mere application of multiplications and additions. Therefore, it is highly suited for the VLSI implementation of the QRD on systolic arrays. Roundoff error and stability analyses indicate that the SDFG is numerically as stable as known Givens rotation methods.
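For context, the conventional Givens rotation that the SDFG reformulates is sketched below; note the square root and divisions it needs, which are exactly the operations the paper eliminates (this is the textbook rotation, not the SDFG itself):

```python
import math

def givens_zero(A, j, i):
    """Apply a Givens rotation to rows j and i of matrix A (list of lists)
    so that A[i][j] becomes zero; uses one square root and two divisions."""
    a, b = A[j][j], A[i][j]
    r = math.hypot(a, b)
    if r == 0.0:
        return
    c, s = a / r, b / r
    for k in range(len(A[0])):
        ajk, aik = A[j][k], A[i][k]
        A[j][k] = c * ajk + s * aik
        A[i][k] = -s * ajk + c * aik

A = [[3.0, 1.0], [4.0, 2.0]]
givens_zero(A, 0, 1)
print(A)          # A[1][0] is now (numerically) zero
```

Repeating such rotations column by column yields the QRD; the SDFG replaces the sqrt/division steps with multiplications and additions, which maps far better onto systolic array cells.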
Grid Computing | 2011
Juan Manuel Ramírez-Alcaraz; Andrei Tchernykh; Ramin Yahyapour; Uwe Schwiegelshohn; Ariel Quezada-Pina; José Luis González-García; Adán Hirales-Carbajal
We address non-preemptive non-clairvoyant online scheduling of parallel jobs on a Grid. We consider a Grid scheduling model with two stages. At the first stage, jobs are allocated to a suitable Grid site, while at the second stage, local scheduling is independently applied to each site. We analyze allocation strategies depending on the type and amount of information they require. We conduct a comprehensive performance evaluation study using simulation and demonstrate that our strategies perform well with respect to several metrics that reflect both user- and system-centric goals. Unfortunately, user run time estimates and information on local schedules do not help to significantly improve the outcome of the allocation strategies. When examining the overall Grid performance based on real data, we determined that an appropriate distribution of job processor requirements over the Grid yields higher performance than an allocation of jobs based on user run time estimates and information on local schedules. In general, our experiments showed that rather simple schedulers with minimal information requirements can provide good performance.
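A sketch of the two-stage structure under minimal information requirements, assuming the first stage only sees each site's pending processor demand and the second stage is left to the local schedulers; the allocation rule and data are illustrative, not the exact strategies evaluated in the paper:

```python
def allocate(sites):
    """sites: dict mapping site name -> list of pending (procs, run_time) jobs.
    Stage-1 rule: pick the site with the least pending processor demand; the
    job's own requirements are assumed to fit at every site. Stage 2 (local
    scheduling, e.g. FCFS or backfilling) then runs independently per site."""
    return min(sites, key=lambda s: sum(p for p, _ in sites[s]))

grid = {"siteA": [(8, 100.0)], "siteB": [(4, 50.0), (2, 10.0)]}
print(allocate(grid))     # -> siteB (6 pending processors vs. 8 at siteA)
```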