
Publication


Featured research published by John C. Good.


Scientific Programming | 2005

Pegasus: A framework for mapping complex scientific workflows onto distributed systems

Ewa Deelman; Gurmeet Singh; Mei-Hui Su; Jim Blythe; Yolanda Gil; Carl Kesselman; Gaurang Mehta; Karan Vahi; G. Bruce Berriman; John C. Good; Anastasia C. Laity; Joseph C. Jacob; Daniel S. Katz

This paper describes the Pegasus framework that can be used to map complex scientific workflows onto distributed resources. Pegasus enables users to represent the workflows at an abstract level without needing to worry about the particulars of the target execution systems. The paper describes general issues in mapping applications and the functionality of Pegasus. We present the results of improving application performance through workflow restructuring which clusters multiple tasks in a workflow into single entities. A real-life astronomy application is used as the basis for the study.
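As a minimal illustration of the abstract-workflow idea, the toy DAG below records tasks and their data dependencies and derives an execution order that respects them. The task names and the dict-based representation are invented for this sketch and are not Pegasus's actual workflow format.

```python
# Toy abstract workflow: task -> set of tasks it depends on.
# Names are illustrative; a real Pegasus workflow is far richer.
workflow = {
    "project_1": set(),
    "project_2": set(),
    "diff":      {"project_1", "project_2"},
    "add":       {"diff"},
}

def topological_order(dag):
    """Return the tasks in an order that respects all dependencies."""
    order, done = [], set()
    while len(order) < len(dag):
        for task, deps in dag.items():
            if task not in done and deps <= done:
                order.append(task)
                done.add(task)
    return order

print(topological_order(workflow))
```

Mapping such an abstract order onto concrete resources (choosing sites, staging data, clustering tasks) is the part Pegasus automates.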


IEEE International Conference on High Performance Computing, Data and Analytics | 2008

The cost of doing science on the cloud: the Montage example

Ewa Deelman; Gurmeet Singh; Miron Livny; G. Bruce Berriman; John C. Good

Utility grids such as the Amazon EC2 cloud and Amazon S3 offer computational and storage resources that can be used on-demand for a fee by compute and data-intensive applications. The cost of running an application on such a cloud depends on the compute, storage and communication resources it will provision and consume. Different execution plans of the same application may result in significantly different costs. Using the Amazon cloud fee structure and a real-life astronomy application, we study via simulation the cost performance tradeoffs of different execution and resource provisioning plans. We also study these trade-offs in the context of the storage and communication fees of Amazon S3 when used for long-term application data archival. Our results show that by provisioning the right amount of storage and compute resources, cost can be significantly reduced with no significant impact on application performance.
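The cost structure described above (compute plus storage plus transfer fees) can be sketched with a back-of-the-envelope model. All prices and resource amounts below are illustrative placeholders, not actual Amazon rates or the paper's measured figures.

```python
# Toy utility-cloud cost model: compute + storage + transfer fees.
# All rates are made-up placeholders for illustration.
CPU_PER_HOUR   = 0.10   # $/CPU-hour
STORAGE_PER_GB = 0.15   # $/GB-month
XFER_PER_GB    = 0.10   # $/GB transferred out

def plan_cost(cpu_hours, gb_stored_months, gb_transferred):
    return (cpu_hours * CPU_PER_HOUR
            + gb_stored_months * STORAGE_PER_GB
            + gb_transferred * XFER_PER_GB)

# Two execution plans for the same result: keep intermediates in cloud
# storage vs. recompute them (more CPU) and store only the final product.
store_everything = plan_cost(cpu_hours=100, gb_stored_months=50, gb_transferred=5)
recompute        = plan_cost(cpu_hours=140, gb_stored_months=5,  gb_transferred=5)
print(store_everything, recompute)
```

Even this toy model shows the paper's point: different provisioning plans for the same application can differ noticeably in cost.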


IEEE International Conference on eScience | 2008

On the Use of Cloud Computing for Scientific Workflows

Christina Hoffa; Gaurang Mehta; Timothy Freeman; Ewa Deelman; Kate Keahey; G. Bruce Berriman; John C. Good

This paper explores the use of cloud computing for scientific workflows, focusing on a widely used astronomy application, Montage. The approach is to evaluate, from the point of view of a scientific workflow, the tradeoffs between running in a local environment, if one is available, and running in a virtual environment via remote, wide-area network resource access. Our results show that for Montage, a workflow with short job runtimes, the virtual environment can provide good compute time performance, but it can suffer from resource scheduling delays and wide-area communications.


Publications of the Astronomical Society of the Pacific | 2013

The NASA Exoplanet Archive: Data and Tools for Exoplanet Research

R. L. Akeson; X. Chen; David R. Ciardi; M. Crane; John C. Good; M. Harbut; E. Jackson; S. R. Kane; Anastasia C. Laity; Stephanie Leifer; M. Lynn; D. L. McElroy; M. Papin; Peter Plavchan; Solange V. Ramirez; R. Rey; K. von Braun; M. Wittman; M. Abajian; B. Ali; C. Beichman; A. Beekley; G. B. Berriman; S. Berukoff; G. Bryden; B. Chan; S. Groom; C. Lau; A. N. Payne; M. Regelson

We describe the contents and functionality of the NASA Exoplanet Archive, a database and toolset funded by NASA to support astronomers in the exoplanet community. The current content of the database includes interactive tables containing properties of all published exoplanets, Kepler planet candidates, threshold-crossing events, data validation reports and target stellar parameters, light curves from the Kepler and CoRoT missions and from several ground-based surveys, and spectra and radial velocity measurements from the literature. Tools provided to work with these data include a transit ephemeris predictor, both for single planets and for observing locations, light curve viewing and normalization utilities, and a periodogram and phased light curve service. The archive can be accessed at http://exoplanetarchive.ipac.caltech.edu.
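A transit ephemeris predictor rests on the linear ephemeris t_n = t0 + n*P. The helper below is a hypothetical sketch of that calculation, not the archive's actual service, and the epoch and period values are invented for illustration.

```python
import math

def next_transit(t0, period, t_now):
    """Next mid-transit time at or after t_now, given epoch t0 and
    orbital period (all in days, e.g. Julian Date).
    Linear ephemeris: t_n = t0 + n * period."""
    n = math.ceil((t_now - t0) / period)
    return t0 + n * period

# Illustrative values only, not real archive data.
print(next_transit(t0=2455000.0, period=3.5, t_now=2455010.0))
```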


Computational Science and Engineering | 2009

Montage: a grid portal and software toolkit for science-grade astronomical image mosaicking

Joseph C. Jacob; Daniel S. Katz; G. Bruce Berriman; John C. Good; Anastasia C. Laity; Ewa Deelman; Carl Kesselman; Gurmeet Singh; Mei-Hui Su; Thomas A. Prince; Roy Williams

Montage is a portable software toolkit to construct custom, science-grade mosaics that preserve the astrometry and photometry of astronomical sources. The user specifies the dataset, wavelength, sky location, mosaic size, coordinate system, projection, and spatial sampling. Montage supports massive astronomical datasets that may be stored in distributed archives. Montage can be run on both single- and multi-processor computers, including clusters and grids. Standard grid tools are used to access remote data or run Montage on remote computers. This paper describes the architecture, algorithms, performance, and usage of Montage as both a software toolkit and a grid portal.


Proceedings of the 15th ACM Mardi Gras conference on From lightweight mash-ups to lambda grids: Understanding the spectrum of distributed computing requirements, applications, tools, infrastructures, interoperability, and the incremental adoption of key capabilities | 2008

Workflow task clustering for best effort systems with Pegasus

Gurmeet Singh; Mei-Hui Su; Karan Vahi; Ewa Deelman; G. Bruce Berriman; John C. Good; Daniel S. Katz; Gaurang Mehta

Many scientific workflows are composed of thousands of fine-grained computational tasks and are data-intensive in nature, thus requiring resources such as the TeraGrid to execute efficiently. In order to improve the performance of such applications, we often employ task clustering techniques to increase the computational granularity of workflow tasks. The goal is to minimize the completion time of the workflow by reducing the impact of queue wait times. In this paper, we examine the performance impact of the clustering techniques using the Pegasus workflow management system. Experiments performed using an astronomy workflow on the NCSA TeraGrid cluster show that clustering can achieve a significant reduction in the workflow completion time (up to 97%).
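The amortization argument behind clustering can be sketched with a toy serial-execution model: every submitted job pays a queue wait, so bundling short tasks into fewer jobs spreads that overhead. All numbers below (queue wait, task runtime, counts) are made up and do not reproduce the paper's NCSA TeraGrid measurements.

```python
# Toy model: each submitted job pays one queue wait, then runs its
# bundled tasks back to back. Numbers are illustrative only.
def makespan(num_tasks, task_runtime, queue_wait, tasks_per_job):
    num_jobs = -(-num_tasks // tasks_per_job)  # ceiling division
    return num_jobs * (queue_wait + tasks_per_job * task_runtime)

unclustered = makespan(1000, task_runtime=2, queue_wait=300, tasks_per_job=1)
clustered   = makespan(1000, task_runtime=2, queue_wait=300, tasks_per_job=100)
print(1 - clustered / unclustered)   # fraction of completion time saved
```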


International Conference on Parallel Processing | 2005

A comparison of two methods for building astronomical image mosaics on a grid

Daniel S. Katz; Joseph C. Jacob; Ewa Deelman; Carl Kesselman; Gurmeet Singh; Mei-Hui Su; G.B. Berriman; John C. Good; Anastasia C. Laity; Thomas A. Prince

This paper compares two methods for running an application composed of a set of modules on a grid. The set of modules (collectively called Montage) generates large astronomical image mosaics by composing multiple small images. The workflow that describes a particular run of Montage can be expressed as a directed acyclic graph (DAG), or as a short sequence of parallel (MPI) and sequential programs. In the first case, Pegasus can be used to run the workflow. In the second case, a short shell script that calls each program can be run. In this paper, we discuss the Montage modules, the workflow run for a sample job, and the two methods of actually running the workflow. We examine the run time for each method and compare the portions that differ between the two methods.


ACM Symposium on Applied Computing | 2005

The Pegasus portal: web based grid computing

Gurmeet Singh; Ewa Deelman; Gaurang Mehta; Karan Vahi; Mei-Hui Su; G. Bruce Berriman; John C. Good; Joseph C. Jacob; Daniel S. Katz; Albert Lazzarini; K. Blackburn; S. Koranda

Pegasus is a planning framework for mapping abstract workflows for execution on the Grid. This paper presents the implementation of a web-based portal for submitting workflows to the Grid using Pegasus. The portal also includes components for generating abstract workflows based on a metadata description of the desired data products and application-specific services. We describe our experiences in using this portal for two Grid applications. A major contribution of our work is in introducing several components that can be useful for Grid portals and hence should be included in Grid portal development toolkits.


Proceedings of SPIE | 2004

Montage: a grid-enabled engine for delivering custom science-grade mosaics on demand

G. B. Berriman; Ewa Deelman; John C. Good; Joseph C. Jacob; Daniel S. Katz; Carl Kesselman; Anastasia C. Laity; Thomas A. Prince; Gurmeet Singh; Mei-Hui Su

This paper describes the design of a grid-enabled version of Montage, an astronomical image mosaic service, suitable for large scale processing of the sky. All the re-projection jobs can be added to a pool of tasks and performed by as many processors as are available, exploiting the parallelization inherent in the Montage architecture. We show how we can describe the Montage application in terms of an abstract workflow so that a planning tool such as Pegasus can derive an executable workflow that can be run in the Grid environment. The execution of the workflow is performed by the workflow manager DAGMan and the associated Condor-G. The grid processing will support tiling of images to a manageable size when the input images can no longer be held in memory. Montage will ultimately run operationally on the TeraGrid. We describe science applications of Montage, including its application to science product generation by Spitzer Legacy Program teams and large-scale, all-sky image processing projects.


Scientific Programming | 2007

Optimizing workflow data footprint

Gurmeet Singh; Karan Vahi; Arun Ramakrishnan; Gaurang Mehta; Ewa Deelman; Henan Zhao; Rizos Sakellariou; K. Blackburn; D. A. Brown; S. Fairhurst; David Meyers; G. Bruce Berriman; John C. Good; Daniel S. Katz

In this paper we examine the issue of optimizing disk usage and scheduling large-scale scientific workflows onto distributed resources where the workflows are data-intensive, requiring large amounts of data storage, and the resources have limited storage capacity. Our approach is two-fold: we minimize the amount of space a workflow requires during execution by removing data files at runtime when they are no longer needed, and we demonstrate that workflows may have to be restructured to reduce the overall data footprint of the workflow. We show the results of our data management and workflow restructuring solutions using a Laser Interferometer Gravitational-Wave Observatory (LIGO) application and an astronomy application, Montage, running on a large-scale production grid, the Open Science Grid. We show that although reducing the data footprint of Montage by 48% can be achieved with dynamic data cleanup techniques, LIGO Scientific Collaboration workflows require additional restructuring to achieve a 56% reduction in data space usage. We also examine the cost of the workflow restructuring in terms of the application's runtime.
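The dynamic cleanup idea can be sketched with a simple reference-counting model: delete an intermediate file once every task that reads it has finished, and track the peak footprint. This is a minimal sketch assuming a serially executed workflow; the scheme, file names, and sizes are illustrative, not the actual Pegasus implementation.

```python
# Sketch of dynamic data cleanup: drop a file when no remaining task
# reads it, and measure the peak storage footprint along the way.
def peak_footprint(files, tasks):
    """files: {name: size}; tasks: list of (inputs, outputs) in run order."""
    remaining = {f: sum(f in ins for ins, _ in tasks) for f in files}
    live, peak = {}, 0
    for inputs, outputs in tasks:
        for f in outputs:
            live[f] = files[f]
        peak = max(peak, sum(live.values()))
        for f in inputs:
            remaining[f] -= 1
            if remaining[f] == 0:
                live.pop(f, None)   # no more readers: clean up now
    return peak

# Tiny three-stage pipeline with made-up sizes (in GB).
files = {"raw": 10, "proj": 8, "mosaic": 2}
tasks = [((), ("raw",)), (("raw",), ("proj",)), (("proj",), ("mosaic",))]
print(peak_footprint(files, tasks))
```

Without cleanup the example would hold all 20 GB at once; removing "raw" as soon as the projection finishes lowers the peak.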

Collaboration


Explore John C. Good's collaborations.

Top Co-Authors

G. Bruce Berriman (California Institute of Technology)
Anastasia C. Laity (California Institute of Technology)
Joseph C. Jacob (California Institute of Technology)
Thomas A. Prince (California Institute of Technology)
Ewa Deelman (University of Southern California Information Sciences Institute)
Gurmeet Singh (University of Southern California)
Aidong Zhang (California Institute of Technology)
A. Alexov (Space Telescope Science Institute)
Roy Williams (California Institute of Technology)
Serge M. Monkewitz (California Institute of Technology)