
Publication


Featured research published by Hashim H. Mohamed.


international conference on cluster computing | 2004

An evaluation of the close-to-files processor and data co-allocation policy in multiclusters

Hashim H. Mohamed; Dick H. J. Epema

In multicluster systems, and more generally, in grids, jobs may require coallocation, i.e., the simultaneous allocation of resources such as processors and input files in multiple clusters. While such jobs may have reduced runtimes because they have access to more resources, waiting for processors in multiple clusters and for the input files to become available in the right locations may introduce inefficiencies. In previous work, we have studied processor coallocation through simulations only. Here, we extend this work with an analysis of the performance, in a real testbed, of our prototype processor and data coallocator with the close-to-files (CF) job-placement algorithm. CF tries to place job components on clusters that have enough idle processors and that are close to the sites where the input files reside. We present a comparison of the performance of CF and the worst-fit job-placement algorithm, with and without file replication, achieved with our prototype. Our most important findings are that CF with replication works best, and that the utilization in our testbed can be driven to about 80%.
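The core idea of the close-to-files policy described above can be sketched as follows. This is a minimal illustration only, not the actual KOALA implementation: the cluster fields, the `transfer_cost` function, and all names are assumptions made for the example.

```python
# Illustrative sketch of a Close-to-Files (CF) style placement decision:
# among clusters with enough idle processors, prefer the one with the
# cheapest estimated transfer from the site holding the input files.

def close_to_files(clusters, job, transfer_cost):
    """Pick the eligible cluster with the cheapest estimated file transfer.

    clusters: list of dicts with 'name' and 'idle_processors'
    job: dict with 'processors' (needed) and 'file_site' (where inputs reside)
    transfer_cost: function (file_site, cluster_name) -> estimated cost
    """
    eligible = [c for c in clusters if c["idle_processors"] >= job["processors"]]
    if not eligible:
        return None  # no cluster can host this job component right now
    return min(eligible, key=lambda c: transfer_cost(job["file_site"], c["name"]))

clusters = [
    {"name": "fs0", "idle_processors": 32},
    {"name": "fs1", "idle_processors": 8},
    {"name": "fs2", "idle_processors": 64},
]
# toy cost model: 0 if the files are already local, else 1
cost = lambda site, name: 0 if site == name else 1
job = {"processors": 16, "file_site": "fs2"}
print(close_to_files(clusters, job, cost)["name"])  # fs2: eligible and local
```

File replication, which the abstract reports as the winning variant, would correspond to lowering `transfer_cost` for clusters that hold a replica of the inputs.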


cluster computing and the grid | 2005

Experiences with the KOALA co-allocating scheduler in multiclusters

Hashim H. Mohamed; Dick H. J. Epema

In multicluster systems, and more generally, in grids, jobs may require co-allocation, i.e., the simultaneous allocation of resources such as processors and input files in multiple clusters. While such jobs may have reduced runtimes because they have access to more resources, waiting for processors in multiple clusters and for the input files to become available in the right locations may introduce inefficiencies. Moreover, as single jobs now have to rely on multiple resource managers, co-allocation introduces reliability problems. In this paper, we present two additions to the original design of our KOALA co-allocating scheduler (different priority levels of jobs and incrementally claiming processors), and we report on our experiences with KOALA in our multicluster testbed while the testbed was unstable.


international conference on cluster computing | 2007

Scheduling malleable applications in multicluster systems

Jérémy Buisson; Omer Ozan Sonmez; Hashim H. Mohamed; Wouter Lammers; Dick H. J. Epema

In large-scale distributed execution environments such as multicluster systems and grids, resource availability may vary due to resource failures and because resources may be added to or withdrawn from such environments at any time. In addition, single sites in such systems may have to deal with workloads originating from both local users and from many other sources. As a result, application malleability, that is, the property of applications to deal with a varying amount of resources during their execution, may be very beneficial for performance. In this paper we present the design of the support of and scheduling policies for malleability in our Koala multicluster scheduler with the help of our Dynaco framework for application malleability. In addition, we show the results of experiments with scheduling malleable workloads with Koala in our DAS multicluster testbed.


grid computing | 2005

The design and implementation of the KOALA co-allocating grid scheduler

Hashim H. Mohamed; Dick H. J. Epema

In multicluster systems, and more generally, in grids, jobs may require co-allocation, i.e., the simultaneous allocation of resources such as processors and input files in multiple clusters. While such jobs may have reduced runtimes because they have access to more resources, waiting for processors in multiple clusters and for the input files to become available in the right locations may introduce inefficiencies. In this paper we present the design of KOALA, a prototype for processor and data co-allocation that tries to minimize these inefficiencies through the use of its Close-to-Files placement policy and its Incremental Claiming Policy. The latter policy tries to solve the problem of a lack of support for reservation by local resource managers.


IEEE Transactions on Parallel and Distributed Systems | 2010

On the Benefit of Processor Coallocation in Multicluster Grid Systems

Omer Ozan Sonmez; Hashim H. Mohamed; Dick H. J. Epema

In multicluster grid systems, parallel applications may benefit from processor coallocation, that is, the simultaneous allocation of processors in multiple clusters. Although coallocation allows the allocation of more processors than available in a single cluster, it may severely increase the execution time of applications due to the relatively slow wide-area communication. The aim of this paper is to investigate the benefit of coallocation in multicluster grid systems, despite this drawback. To this end, we have conducted experiments in a real multicluster grid environment, as well as in a simulated environment, and we evaluate the performance of coallocation for various applications that range from computation-intensive to communication-intensive and for various system load settings. In addition, we compare the performance of scheduling policies that are specifically designed for coallocation. We demonstrate that considering latency in the resource selection phase improves the performance of coallocation, especially for communication-intensive parallel applications.
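The abstract's point that latency should be considered during resource selection can be illustrated with a small sketch: among cluster combinations that together offer enough idle processors, prefer the one with the lowest worst-case inter-cluster latency. The data layout, cost model, and names here are assumptions for the example, not the paper's actual policies.

```python
from itertools import combinations

def select_clusters(clusters, needed, latency):
    """Return the feasible cluster combination minimizing worst pairwise latency."""
    best, best_lat = None, float("inf")
    for size in range(1, len(clusters) + 1):
        for combo in combinations(clusters, size):
            if sum(c["idle"] for c in combo) < needed:
                continue  # not enough processors in this combination
            names = [c["name"] for c in combo]
            worst = max((latency(a, b) for a, b in combinations(names, 2)), default=0)
            if worst < best_lat:
                best, best_lat = combo, worst
    return best

clusters = [{"name": "fs0", "idle": 8}, {"name": "fs1", "idle": 8}, {"name": "fs2", "idle": 8}]
# toy symmetric wide-area latency (ms) between cluster pairs
table = {frozenset({"fs0", "fs1"}): 1, frozenset({"fs0", "fs2"}): 5, frozenset({"fs1", "fs2"}): 5}
lat = lambda a, b: table[frozenset({a, b})]
chosen = select_clusters(clusters, 12, lat)
print(sorted(c["name"] for c in chosen))  # the low-latency pair: ['fs0', 'fs1']
```

For a communication-intensive application, the difference between picking the 1 ms pair and a 5 ms pair is exactly the effect the experiments in the paper quantify.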


cluster computing and the grid | 2009

Scheduling Strategies for Cycle Scavenging in Multicluster Grid Systems

Omer Ozan Sonmez; Bart Grundeken; Hashim H. Mohamed; Alexandru Iosup; Dick H. J. Epema

The use of today's multicluster grids alternates between periods of submission bursts, periods of normal use, and even periods of idleness. To avoid resource contention, many users employ observational scheduling, that is, they postpone the submission of relatively low-priority jobs until a cluster becomes (largely) idle. However, observational scheduling leads to resource contention when several such users crowd the same idle cluster. Moreover, this job execution model either delays the execution of more important jobs or requires extensive administrative support for job and user priorities. Instead, in this work we investigate the use of cycle scavenging to run jobs on grid resources politely yet efficiently, and at an acceptable administrative cost. We design a two-level cycle-scavenging scheduling architecture that runs unobtrusively alongside regular grid scheduling. We equip this scheduler with two novel cycle-scavenging scheduling policies that enforce fair resource sharing among competing cycle-scavenging users. We show through experiments with real and synthetic applications in a real multicluster grid that the proposed architecture can execute jobs politely yet efficiently.
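Fair sharing among competing cycle-scavenging users, as described above, can be illustrated with a simple equipartitioning sketch: divide the currently idle nodes roughly equally among users, capped by each user's demand, and redistribute leftover capacity. This is an illustration of the general idea only; the paper's two policies are not reproduced here, and all names are hypothetical.

```python
def equipartition(idle_nodes, demands):
    """Split idle nodes roughly equally among cycle-scavenging users.

    idle_nodes: number of nodes currently free for scavenging
    demands: dict user -> number of nodes that user could use
    """
    alloc = {u: 0 for u in demands}
    remaining = idle_nodes
    active = [u for u, d in demands.items() if d > 0]
    while remaining > 0 and active:
        share = max(1, remaining // len(active))
        for u in list(active):
            give = min(share, demands[u] - alloc[u], remaining)
            alloc[u] += give
            remaining -= give
            if alloc[u] >= demands[u]:
                active.remove(u)  # user is satisfied; stop allocating to them
            if remaining == 0:
                break
    return alloc

# bob only wants 3 nodes, so his surplus flows back to the heavy users
print(equipartition(10, {"alice": 8, "bob": 3, "carol": 8}))
# -> {'alice': 4, 'bob': 3, 'carol': 3}
```

The "politeness" requirement would sit on top of this: scavenged nodes are preempted whenever regular grid jobs need them, shrinking `idle_nodes` and triggering a re-run of the policy.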


CoreGRID | 2007

Virtual Domain Sharing in e-Science based on Usage Service Level Agreements

Catalin Dumitrescu; Alexandru Iosup; Omer Ozan Sonmez; Hashim H. Mohamed; Dick H. J. Epema

Today's Grids, Peer-to-Peer infrastructures, and other large computing collaborations are managed as individual virtual domains (VDs) that focus on their specific problems. However, the research world is starting to shift towards world-wide collaborations and much bigger problems. For this trend to be realized, the already existing collection of many resources and services needs to be shared across the owning VDs in secure and efficient ways, and at the lowest administrative cost. In this paper we identify the requirements for, and propose a specific solution based on, usage service level agreements (uSLAs) for this problem of VD sharing. Further, we propose an integrated architecture that provides uSLA-based access to resources, supports the recurrent delegation of usage rights, and provides fault-tolerant resource co-allocation.


Future Generation Grids | 2006

Co-Allocation in Grids: Experiences and Issues

Anca I. D. Bucur; Dick H. J. Epema; Hashim H. Mohamed

Jobs submitted to a grid may require more resources than those available at any time in any single subsystem making up the grid. Therefore, grid schedulers may employ co-allocation, that is, the simultaneous allocation of possibly multiple resources in multiple subsystems to a single job. Over the last few years we have done extensive simulations of processor co-allocation, and we have built a grid scheduler called KOALA that employs data and processor co-allocation. In this paper we summarize our experiences with co-allocation, and we review some of the main issues that still remain before co-allocation can be considered an accepted solution in future-generation grids.


Concurrency and Computation: Practice and Experience | 2008

KOALA: a co-allocating grid scheduler

Hashim H. Mohamed; Dick H. J. Epema


CoreGRID integration workshop | 2006

Simulating Grid Schedulers with Deadlines and Co-Allocation

Alexis Ballier; Eddy Caron; Dick H. J. Epema; Hashim H. Mohamed

Collaboration


Dive into Hashim H. Mohamed's collaborations.

Top Co-Authors

Dick H. J. Epema (Delft University of Technology)
Omer Ozan Sonmez (Delft University of Technology)
Alexandru Iosup (Delft University of Technology)
Bart Grundeken (Delft University of Technology)
Wouter Lammers (Delft University of Technology)
Ian T. Foster (Argonne National Laboratory)
Ioan Raicu (Illinois Institute of Technology)
Matei Ripeanu (University of British Columbia)
Nicolae Tapus (Politehnica University of Bucharest)