Holly Dail
University of California, San Diego
Publications
Featured research published by Holly Dail.
IEEE Transactions on Parallel and Distributed Systems | 2003
Francine Berman; Richard Wolski; Henri Casanova; Walfredo Cirne; Holly Dail; Marcio Faerman; Silvia Figueira; Jim Hayes; Graziano Obertelli; Jennifer M. Schopf; Gary Shao; Shava Smallen; Neil Spring; Alan Su; Dmitrii Zagorodnov
Ensembles of distributed, heterogeneous resources, also known as computational grids, have emerged as critical platforms for high-performance and resource-intensive applications. Such platforms provide the potential for applications to aggregate enormous bandwidth, computational power, memory, secondary storage, and other resources during a single execution. However, achieving this performance potential in dynamic, heterogeneous environments is challenging. Recent experience with distributed applications indicates that adaptivity is fundamental to achieving application performance in dynamic grid environments. The AppLeS (Application Level Scheduling) project provides a methodology, application software, and software environments for adaptively scheduling and deploying applications in heterogeneous, multiuser grid environments. We discuss the AppLeS project and outline our findings.
International Journal of Parallel Programming | 2005
Fran Berman; Henri Casanova; Andrew A. Chien; Keith D. Cooper; Holly Dail; Anshuman Dasgupta; W. Deng; Jack J. Dongarra; Lennart Johnsson; Ken Kennedy; Charles Koelbel; Bo Liu; Xin Liu; Anirban Mandal; Gabriel Marin; Mark Mazina; John M. Mellor-Crummey; Celso L. Mendes; A. Olugbile; Jignesh M. Patel; Daniel A. Reed; Zhiao Shi; Otto Sievert; Huaxia Xia; A. YarKhan
The goal of the Grid Application Development Software (GrADS) Project is to provide programming tools and an execution environment to ease program development for the Grid. This paper presents recent extensions to the GrADS software framework: a new approach to scheduling workflow computations, applied to a 3-D image reconstruction application; a simple stop/migrate/restart approach to rescheduling Grid applications, applied to a QR factorization benchmark; and a process-swapping approach to rescheduling, applied to an N-body simulation. Experiments validating these methods were carried out on both the GrADS MacroGrid (a small but functional Grid) and the MicroGrid (a controlled emulation of the Grid).
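The stop/migrate/restart rescheduling idea described above can be sketched as a simple cost comparison. The sketch below is illustrative only, not the GrADS API: `predict_time` is an assumed application-supplied model of remaining execution time on a host set, and the fixed migration penalty is a stand-in for the real cost of checkpointing and restarting.

```python
# Hypothetical sketch of stop/migrate/restart rescheduling: stay on the
# current hosts unless another resource set is predicted to finish sooner
# even after paying a migration penalty.
def choose_hosts(current_hosts, candidate_sets, predict_time,
                 migration_cost=5.0):
    """Return the host set with the best predicted finish time,
    charging a fixed penalty against any move off current_hosts."""
    def cost(hosts):
        penalty = 0.0 if hosts == current_hosts else migration_cost
        return predict_time(hosts) + penalty
    # current_hosts is listed first, so ties favor staying put.
    return min([current_hosts] + candidate_sets, key=cost)
```

In practice the penalty would itself come from a model (checkpoint size, transfer bandwidth, restart overhead) rather than a constant.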
International Parallel and Distributed Processing Symposium | 2002
Ken Kennedy; Mark Mazina; John M. Mellor-Crummey; Keith D. Cooper; Linda Torczon; Francine Berman; Andrew A. Chien; Holly Dail; Otto Sievert; David Sigfredo Angulo; Ian T. Foster; R. Aydt; Daniel A. Reed
This paper describes the program execution framework being developed by the Grid Application Development Software (GrADS) Project. The goal of this framework is to provide good resource allocation for Grid applications and to support adaptive reallocation if performance degrades because of changes in the availability of Grid resources. At the heart of this strategy is the notion of a configurable object program, which contains, in addition to application code, strategies for mapping the application to different collections of resources and a resource selection model that provides an estimate of the performance of the application on a specific collection of Grid resources. This model must be accurate enough to distinguish collections of resources that will deliver good performance from those that will not. The GrADS execution framework also provides a contract monitoring mechanism for interrupting and remapping an application execution when performance falls below acceptable levels.
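The contract-monitoring mechanism above can be illustrated with a small sketch. All names here are assumptions for illustration, not GrADS code: the monitor compares a measured progress rate against the performance model's prediction and signals a remap after sustained underperformance, so that transient load spikes do not trigger costly remapping.

```python
# Illustrative contract monitor (not the GrADS implementation): flag a
# violation when measured progress stays below `tolerance` times the
# model's predicted rate for `patience` consecutive measurements.
class ContractMonitor:
    def __init__(self, predicted_rate, tolerance=0.5, patience=3):
        self.predicted_rate = predicted_rate
        self.tolerance = tolerance
        self.patience = patience
        self.strikes = 0

    def report(self, measured_rate):
        """Record one progress measurement; True means remap now."""
        if measured_rate < self.tolerance * self.predicted_rate:
            self.strikes += 1
        else:
            self.strikes = 0  # recovered, reset the streak
        return self.strikes >= self.patience
```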
Journal of Parallel and Distributed Computing | 2003
Holly Dail; Francine Berman; Henri Casanova
In this paper we propose an adaptive scheduling approach designed to improve the performance of parallel applications in Computational Grid environments. A primary contribution of our work is that our design is decoupled, thus providing a separation of the scheduler itself from the application-specific components needed for the scheduling process. As part of the scheduler, we have also developed an application-generic resource selection procedure that effectively and efficiently identifies desirable resources. As test cases for our approach, we selected two applications from the class of iterative, mesh-based applications. We used a prototype of our approach with these applications to perform validation experiments in production Grid environments. Our results show that our scheduler, albeit decoupled, provides significantly better application performance than conventional scheduling strategies. We also show that our scheduler gracefully handles degraded levels of availability of application and Grid resource information. Finally, we demonstrate that the overhead associated with our methodology is reasonable. This work evolved in the context of the Grid Application Development Software Project (GrADS). Our approach has been integrated with other GrADS software tools and, in that context, has been applied to three real-world applications by other members of the project.
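The application-generic resource selection idea can be sketched as a greedy search. This is a hedged illustration, not the paper's algorithm: it assumes the application plugs in a performance model mapping a candidate host set to predicted execution time, and grows the set only while the prediction keeps improving.

```python
# Illustrative resource selection (names are assumptions): rank hosts by
# their solo predicted time, then greedily add hosts while the
# application-supplied model predicts an improvement.
def select_resources(hosts, perf_model, max_hosts=8):
    ranked = sorted(hosts, key=lambda h: perf_model([h]))
    chosen = [ranked[0]]
    best = perf_model(chosen)
    for h in ranked[1:max_hosts]:
        t = perf_model(chosen + [h])
        if t >= best:
            break  # adding more hosts no longer helps
        chosen.append(h)
        best = t
    return chosen
```

A toy model with a per-host communication overhead shows why the greedy loop stops: beyond some set size, coordination cost outweighs added compute power.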
International Parallel and Distributed Processing Symposium | 2004
Greg Chun; Holly Dail; Henri Casanova; Allan Snavely
Like all computing platforms, grids need a suite of benchmarks by which they can be evaluated, compared, and characterized. As a first step toward this goal, we have developed a set of probes that exercise basic grid operations, with the goal of measuring the performance, the performance variability, and the failure rates of these operations. We present measurement data obtained by running our probes on a grid testbed that spans five clusters at three institutions. These measurements quantify compute times, network transfer times, and Globus middleware overhead. Our results provide insight into the stability, robustness, and performance of our testbed, and lead us to make some recommendations for future grid development.
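A probe harness of the kind described can be sketched in a few lines. This is a generic illustration, not the authors' tool: each probe is a callable wrapping one basic grid operation, and the harness reports mean time, variability, and failure rate over repeated trials.

```python
# Illustrative probe runner (not the paper's code): time a basic-operation
# probe over many trials, counting exceptions as failures.
import statistics
import time

def run_probe(probe, trials):
    times, failures = [], 0
    for _ in range(trials):
        start = time.perf_counter()
        try:
            probe()
        except Exception:
            failures += 1
            continue  # a failed trial contributes no timing sample
        times.append(time.perf_counter() - start)
    return {
        "mean": statistics.mean(times) if times else None,
        "stdev": statistics.stdev(times) if len(times) > 1 else 0.0,
        "failure_rate": failures / trials,
    }
```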
Grid Resource Management | 2004
Holly Dail; Otto Sievert; Fran Berman; Henri Casanova; Asim YarKhan; Sathish S. Vadhiyar; Jack J. Dongarra; Chuang Liu; Lingyun Yang; Dave Angulo; Ian T. Foster
Developing Grid applications is a challenging endeavor that at the moment requires both extensive labor and expertise. The Grid Application Development Software Project (GrADS) provides a system to simplify Grid application development. This system incorporates tools at all stages of the application development and execution cycle. In this chapter we focus on application scheduling, and present the three scheduling approaches developed in GrADS: development of an initial application schedule (launch-time scheduling), modification of the execution platform during execution (rescheduling), and negotiation between multiple applications in the system (metascheduling). These approaches have been developed and evaluated for platforms that consist of distributed networks of shared workstations, and applied to real-world parallel applications.
Conference on High Performance Computing (Supercomputing) | 2002
Holly Dail; Henri Casanova; Francine Berman
Program development environments are instrumental in providing users with easy and efficient access to parallel computing platforms. While a number of such environments have been widely accepted and used for traditional HPC systems, there are currently no widely used environments for Grid programming. The goal of the Grid Application Development Software (GrADS) project is to develop a coordinated set of tools, libraries and run-time execution facilities for Grid program development. In this paper, we describe a Grid scheduler component that is integrated as part of the GrADS software system. Traditionally, application-level schedulers (e.g. AppLeS) have been tightly integrated with the application itself and were not easily applied to other applications. Our design is generic: we decouple the scheduler core (the search procedure) from the application-specific (e.g. application performance models) and platform-specific (e.g. collection of resource information) components used by the search procedure. We provide experimental validation of our approach for two representative regular, iterative parallel programs in a variety of real-world Grid testbeds. Our scheduler consistently outperforms static and user-driven scheduling methods.
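The decoupling described above, separating the scheduler core's search procedure from pluggable application and platform components, can be sketched as an exhaustive search over candidate resource sets. The names below are illustrative assumptions, not the GrADS interface.

```python
# Illustrative scheduler core (names are assumptions): a generic search
# over candidate resource sets, parameterized by an application-supplied
# performance model. The search procedure itself knows nothing about the
# application or the platform.
from itertools import combinations

def schedule_search(hosts, perf_model, max_set=3):
    best_set, best_t = None, float("inf")
    for k in range(1, max_set + 1):
        for cand in combinations(hosts, k):
            t = perf_model(list(cand))
            if t < best_t:
                best_set, best_t = list(cand), t
    return best_set, best_t
```

Exhaustive enumeration is only feasible for small host pools; a real scheduler would prune the search, but the decoupled structure (search core plus plug-in model) is the point of the sketch.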
Challenges of Large Applications in Distributed Environments | 2004
Huaxia Xia; Holly Dail; Henri Casanova; Andrew A. Chien
Improvements in networking and middleware technology are enabling large-scale grids that aggregate resources over wide-area networks to support applications at unprecedented levels of scale and performance. Unfortunately, existing middleware and tools provide little information to users as to the suitability of a given grid topology for a specific grid application. Instead, users generally use ad-hoc performance models to evaluate mappings of their applications to resource and network topologies. Grid application behavior alone is complex, and adding resource and network behavior makes the situation even worse. As a result, users typically employ nearly blind experimentation to find good deployments of their applications in each new grid environment. Only through actual deployment and execution can a user discover whether the mapping was a good one. Further, even after finding a good configuration, there is no basis to determine whether a much better configuration has been missed. This approach slows effective grid application development and deployment. We present a richer methodology for evaluating grid software and diverse grid environments based on the MicroGrid grid online simulator. With the MicroGrid, users, grid researchers, or grid operators can define and simulate arbitrary collections of resources and networks. This allows study of an existing grid testbed under controlled conditions, or even study of the efficacy of higher-performance environments than are available today. Further, the MicroGrid supports direct execution of grid applications unchanged. These applications can be written with MPI, C, C++, Perl, and/or Python and use the Globus middleware. This enables detailed and accurate study of application behavior.
This work presents: (1) the first validation of the MicroGrid for studying whole-program performance of MPI grid applications and (2) a demonstration of the MicroGrid as a tool for predicting the performance of applications on a range of grid resources and novel network topologies.
Proceedings of the 9th Heterogeneous Computing Workshop (HCW 2000) | 2000
Holly Dail; Graziano Obertelli; Francine Berman; Richard Wolski; Andrew S. Grimshaw
Computational grids have become an important and popular computing platform for both scientific and commercial distributed computing communities. However, users of such systems typically find achievement of application execution performance remains challenging. Although grid infrastructures such as Legion and Globus provide basic resource selection functionality, work allocation functionality, and scheduling mechanisms, applications must interpret system performance information in terms of their own requirements in order to develop performance-efficient schedules. We describe a new high-performance scheduler that incorporates dynamic system information, application requirements, and a detailed performance model in order to create performance efficient schedules. While the scheduler is designed to provide improved performance for a magneto-hydrodynamics simulation in the Legion Computational Grid infrastructure, the design is generalizable to other systems and other data-parallel iterative codes. We describe the adaptive performance model, resource selection strategies, and scheduling policies employed by the scheduler. We demonstrate the improvement in application performance achieved by the scheduler in dedicated and shared Legion environments.
Archive | 2001
Ken Kennedy; Mark Mazina; John M. Mellor-Crummey; Ruth A. Aydt; Celso L. Mendes; Holly Dail; Otto Sievert