Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Neill Miller is active.

Publication


Featured research published by Neill Miller.


Journal of Physics: Conference Series | 2006

Monitoring the grid with the Globus Toolkit MDS4

Jennifer M. Schopf; Laura Pearlman; Neill Miller; Carl Kesselman; Ian T. Foster; Mike D'Arcy; Ann L. Chervenak

The Globus Toolkit Monitoring and Discovery System (MDS4) defines and implements mechanisms for service and resource discovery and monitoring in distributed environments. MDS4 is distinguished from previous similar systems by its extensive use of interfaces and behaviors defined in the WS-Resource Framework and WS-Notification specifications, and by its deep integration into essentially every component of the Globus Toolkit. We describe the MDS4 architecture and the Web service interfaces and behaviors that allow users to discover resources and services, monitor resource and service states, receive updates on current status, and visualize monitoring results. We present two current deployments to provide insights into the functionality that can be achieved via the use of these mechanisms.
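
MDS4 publishes monitoring data as WSRF resource properties, which clients query as XML documents. The sketch below is purely illustrative: it runs an XPath-style selection over a made-up resource-properties document to find overloaded hosts. The element names and document layout are assumptions, not the actual MDS4 schema, and no Globus client library is used.

```python
# Illustrative only: the XML structure and element names below are invented to
# show the kind of query a client might run against an Index Service
# resource-properties document; they are not the real MDS4 schema.
import xml.etree.ElementTree as ET

SAMPLE_RESOURCE_PROPERTIES = """
<IndexRP>
  <Host name="compute-01.example.org"><LoadAverage>0.42</LoadAverage></Host>
  <Host name="compute-02.example.org"><LoadAverage>3.10</LoadAverage></Host>
</IndexRP>
"""

def hosts_over_load(xml_text: str, threshold: float):
    """Return (host, load) pairs whose load average exceeds the threshold."""
    root = ET.fromstring(xml_text)
    overloaded = []
    for host in root.findall("Host"):
        load = float(host.findtext("LoadAverage"))
        if load > threshold:
            overloaded.append((host.get("name"), load))
    return overloaded

if __name__ == "__main__":
    print(hosts_over_load(SAMPLE_RESOURCE_PROPERTIES, threshold=1.0))
```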


Lecture Notes in Computer Science | 2003

Implementing Fast and Reusable Datatype Processing

Robert Ross; Neill Miller; William Gropp

Methods for describing structured data are a key aid in application development. The MPI standard defines a system for creating “MPI types” at run time and using these types when passing messages, performing RMA operations, and accessing data in files. Similar capabilities are available in other middleware. Unfortunately, many implementations perform poorly when processing these structured data types. This situation leads application developers to avoid these components entirely, instead performing any necessary data processing by hand.
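
As a concrete illustration of the run-time datatypes the abstract refers to, the sketch below uses mpi4py (an assumed binding; the paper itself concerns the C-level datatype engine inside an MPI implementation) to describe a strided column of an array as an MPI vector type and communicate it without manual packing.

```python
# Minimal mpi4py sketch of run-time MPI datatype construction; runnable with
# `mpiexec -n 1 python this_script.py`. mpi4py is an assumption used for
# illustration, not the implementation discussed in the paper.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

rows, cols = 4, 5
a = np.arange(rows * cols, dtype="d").reshape(rows, cols)
b = np.zeros(rows, dtype="d")

# Describe one column of `a`: `rows` blocks of 1 double, `cols` doubles apart.
column_t = MPI.DOUBLE.Create_vector(rows, 1, cols)
column_t.Commit()

# Send the strided column to ourselves and receive it as a contiguous vector;
# the MPI library performs the gather/scatter implied by the datatype.
comm.Sendrecv([a, 1, column_t], dest=rank,
              recvbuf=[b, rows, MPI.DOUBLE], source=rank)
column_t.Free()

print("column 0 of a:", b)   # 0, 5, 10, 15
```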


Future Generation Computer Systems | 2014

The Earth System Grid Federation: An open infrastructure for access to distributed geospatial data

Luca Cinquini; Daniel J. Crichton; Chris A. Mattmann; John Harney; Galen M. Shipman; Feiyi Wang; Rachana Ananthakrishnan; Neill Miller; Sebastian Denvil; Mark Morgan; Zed Pobre; Gavin M. Bell; Charles Doutriaux; Robert S. Drach; Dean N. Williams; Philip Kershaw; Stephen Pascoe; Estanislao Gonzalez; Sandro Fiore; Roland Schweitzer

The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration that aims at developing the software infrastructure needed to facilitate and empower the study of climate change on a global scale. The ESGF’s architecture employs a system of geographically distributed peer nodes, which are independently administered yet united by the adoption of common federation protocols and application programming interfaces (APIs). The cornerstones of its interoperability are the peer-to-peer messaging that is continuously exchanged among all nodes in the federation; a shared architecture and API for search and discovery; and a security infrastructure based on industry standards (OpenID, SSL, GSI and SAML). The ESGF software stack integrates custom components (for data publishing, searching, user interface, security and messaging), developed collaboratively by the team, with popular application engines (Tomcat, Solr) available from the open source community. The full ESGF infrastructure has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the entire Fifth Coupled Model Intercomparison Project (CMIP5) output used by the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5) and a suite of satellite observations (obs4MIPs) and reanalysis data sets (ANA4MIPs). This paper presents ESGF as a successful example of integration of disparate open source technologies into a cohesive, widely functional system, and describes our experience in building and operating a distributed and federated infrastructure to serve the needs of the global climate science community.
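
The federation's search component is exposed over HTTP with Solr-style facets. Below is a hedged sketch of a faceted dataset query against an ESGF index node; the node URL, facet values, and response fields shown are assumptions for illustration and may differ between nodes and API versions.

```python
# Hedged sketch of a faceted query against an ESGF index node's search API.
# The node URL and facet values are illustrative assumptions; availability of
# any particular node or dataset is not guaranteed.
import requests

SEARCH_URL = "https://esgf-node.llnl.gov/esg-search/search"  # assumed index node

params = {
    "project": "CMIP5",
    "experiment": "historical",
    "variable": "tas",                   # near-surface air temperature
    "limit": 5,
    "format": "application/solr+json",   # request Solr-style JSON output
}

resp = requests.get(SEARCH_URL, params=params, timeout=30)
resp.raise_for_status()
result = resp.json()["response"]

print("datasets found:", result["numFound"])
for doc in result["docs"]:
    print(doc.get("id"))
```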


international conference on e science | 2006

Monitoring the Earth System Grid with MDS4

Ann L. Chervenak; Jennifer M. Schopf; Laura Pearlman; Mei-Hui Su; Shishir Bharathi; Luca Cinquini; Mike D'Arcy; Neill Miller; David E. Bernholdt

In production Grids for scientific applications, service and resource failures must be detected and addressed quickly. In this paper, we describe the monitoring infrastructure used by the Earth System Grid (ESG) project, a scientific collaboration that supports global climate research. ESG uses the Globus Toolkit Monitoring and Discovery System (MDS4) to monitor its resources. We describe how the MDS4 Index Service collects information about ESG resources and how the MDS4 Trigger Service checks specified failure conditions and notifies system administrators when failures occur. We present monitoring statistics for May 2006 and describe our experiences using MDS4 to monitor ESG resources over the last two years.
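
The trigger pattern described here, evaluating an administrator-specified failure condition over collected monitoring data and sending a notification when it fires, can be sketched generically. The code below is a hypothetical stand-in for that pattern in plain Python; it is not the MDS4 Trigger Service API.

```python
# Generic sketch of a trigger-style check: poll a probe, evaluate a failure
# condition, and emit a notification when it trips. The probe and notification
# hooks are hypothetical stand-ins, not the MDS4 Trigger Service interface.
import time

def notify(message: str) -> None:
    """Placeholder notification; a real deployment might email an administrator."""
    print("ALERT:", message)

def run_trigger(name, probe, max_failures=3, interval_s=60.0, cycles=10):
    """Fire a notification after `max_failures` consecutive failed probes."""
    failures = 0
    for _ in range(cycles):
        if probe():
            failures = 0
        else:
            failures += 1
            if failures == max_failures:   # the administrator-specified condition
                notify(f"{name} failed {failures} consecutive checks")
        time.sleep(interval_s)

if __name__ == "__main__":
    # Simulated probe that always fails, so the trigger fires on the third cycle.
    run_trigger("gridftp.example.org", probe=lambda: False,
                interval_s=0.1, cycles=5)
```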


Journal of Physics: Conference Series | 2007

Enabling Distributed Petascale Science

Andrew Baranovski; Shishir Bharathi; John Bresnahan; Ann L. Chervenak; Ian T. Foster; Dan Fraser; Timothy Freeman; Dan Gunter; Keith Jackson; Kate Keahey; Carl Kesselman; David E. Konerding; Nick LeRoy; Mike Link; Miron Livny; Neill Miller; Robert Miller; Gene Oleynik; Laura Pearlman; Jennifer M. Schopf; Robert Schuler; Brian Tierney

Petascale science is an end-to-end endeavour, involving not only the creation of massive datasets at supercomputers or experimental facilities, but the subsequent analysis of that data by a user community that may be distributed across many laboratories and universities. The new SciDAC Center for Enabling Distributed Petascale Science (CEDPS) is developing tools to support this end-to-end process. These tools include data placement services for the reliable, high-performance, secure, and policy-driven placement of data within a distributed science environment; tools and techniques for the construction, operation, and provisioning of scalable science services; and tools for the detection and diagnosis of failures in end-to-end data placement and distributed application hosting configurations. In each area, we build on a strong base of existing technology and have made useful progress in the first year of the project. For example, we have recently achieved order-of-magnitude improvements in transfer times (for lots of small files) and implemented asynchronous data staging capabilities; demonstrated dynamic deployment of complex application stacks for the STAR experiment; and designed and deployed end-to-end troubleshooting services. We look forward to working with SciDAC application and technology projects to realize the promise of petascale science.
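
One of the problems called out above, moving lots of small files efficiently, is commonly attacked by overlapping transfers so that per-file latency is hidden. The sketch below illustrates that idea with a local thread-pool copy; it is only a stand-in and is not the CEDPS data placement service or GridFTP pipelining.

```python
# Illustrative sketch of asynchronous staging of many small files with a thread
# pool. Overlapping the transfers hides per-file latency, which dominates when
# files are small. Not the CEDPS placement service; a local copy stands in for
# a real transfer.
import shutil
import tempfile
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def stage_file(src: Path, dest_dir: Path) -> Path:
    """Copy one small file into the staging area (stand-in for a real transfer)."""
    dest = dest_dir / src.name
    shutil.copy2(src, dest)
    return dest

def stage_all(sources, dest_dir: Path, workers: int = 8):
    dest_dir.mkdir(parents=True, exist_ok=True)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda s: stage_file(s, dest_dir), sources))

if __name__ == "__main__":
    src_dir = Path(tempfile.mkdtemp())
    files = []
    for i in range(20):
        p = src_dir / f"chunk_{i:02d}.dat"
        p.write_bytes(b"x" * 1024)
        files.append(p)
    staged = stage_all(files, Path(tempfile.mkdtemp()))
    print("staged", len(staged), "files")
```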


Lecture Notes in Computer Science | 2004

Providing Efficient I/O Redundancy in MPI Environments

William Gropp; Robert B. Ross; Neill Miller

Highly parallel applications often use either highly parallel file systems or large numbers of independent disks. Either approach can provide the high data rates necessary for parallel applications. However, the failure of a single disk or server can render the data useless. Conventional techniques, such as those based on applying erasure correcting codes to each file write, are prohibitively expensive for massively parallel scientific applications because of the granularity of access at which the codes are applied. In this paper we demonstrate a scalable method for recovering from single disk failures that is optimized for typical scientific data sets. This approach exploits coarser-grained (but precise) semantics to reduce the overhead of constructing recovery data and makes use of parallel computation (proportional to the data size and independent of the number of processors) to construct data. Experiments are presented showing the efficiency of this approach on a cluster with independent disks, and a technique is described for hiding the creation of redundant data within the MPI-IO implementation.
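
The core idea, computing recovery data across the data blocks written to different disks so that a single lost disk can be rebuilt, can be shown with simple XOR parity. The numpy sketch below reconstructs one lost block from the survivors plus a parity block; the paper's actual encoding and its integration with MPI-IO are not reproduced here.

```python
# Minimal sketch of single-failure recovery via XOR parity across equal-sized
# data blocks (one block per disk/server). Illustrates the idea of redundancy
# data for scientific output; it is not the paper's MPI-IO scheme.
import numpy as np

def make_parity(blocks):
    """Parity block: bitwise XOR of all data blocks."""
    parity = np.zeros_like(blocks[0])
    for b in blocks:
        parity ^= b
    return parity

def recover(blocks, parity, lost):
    """Rebuild the block at index `lost` from the survivors plus the parity block."""
    rebuilt = parity.copy()
    for i, b in enumerate(blocks):
        if i != lost:
            rebuilt ^= b
    return rebuilt

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    blocks = [rng.integers(0, 256, size=1024, dtype=np.uint8) for _ in range(4)]
    parity = make_parity(blocks)
    assert np.array_equal(recover(blocks, parity, lost=2), blocks[2])
    print("block 2 reconstructed from the surviving blocks and the parity block")
```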


Grid-Based Problem Solving Environments | 2007

Grid-based Image Registration

William Gropp; Eldad Haber; Stefan Heldmann; David E. Keyes; Neill Miller; Jennifer M. Schopf; Tianzhi Yang

We introduce and discuss preliminary experience with an application that has vast potential to exploit the Grid for social benefit and offers interesting resource assessment and allocation challenges, having real-time aspects: image registration. Image registration is generally formulated as an optimization problem that satisfies constraints, such as coordinate displacements that are affine or volume-preserving or that obey the laws of elasticity. Three-dimensional registration of high-resolution images is computationally complex and justifies parallel implementation. In turn, ensembles of registration tasks exploit concurrency in the simpler sense of job farming.
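
To make the optimization formulation concrete, the toy sketch below estimates a 2-D translation (the simplest affine case) by minimizing the sum of squared intensity differences between a fixed and a moving image with scipy. It is only an illustration; the paper's 3-D, constrained, Grid-distributed registration goes far beyond this.

```python
# Toy 2-D registration: recover a translation by minimizing the sum of squared
# differences (SSD) between the fixed image and the shifted moving image.
# Illustrative only; not the parallel 3-D affine/elastic method of the paper.
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

def ssd(params, fixed, moving):
    dy, dx = params
    warped = nd_shift(moving, (dy, dx), order=1, mode="nearest")
    return float(np.sum((warped - fixed) ** 2))

if __name__ == "__main__":
    # Synthetic images: a bright square, and the same square offset by (-3, 2).
    fixed = np.zeros((64, 64))
    fixed[20:40, 20:40] = 1.0
    moving = nd_shift(fixed, (-3.0, 2.0), order=1, mode="nearest")

    result = minimize(ssd, x0=[0.0, 0.0], args=(fixed, moving), method="Powell")
    print("estimated translation (dy, dx):", result.x)   # approximately (3, -2)
```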


high performance distributed computing | 2006

Monitoring and Discovery in a Web Services Framework: Functionality and Performance of Globus Toolkit MDS4

Jennifer M. Schopf; Ioan Raicu; Laura Pearlman; Neill Miller; Carl Kesselman; Ian T. Foster; Mike D’Arcy


Archive | 2005

The PVFS2 parallel file system

Robert B. Ross; Robert Latham; Neill Miller; Philip H. Carns; Walter B. Ligon; Pete Wyckoff


grid computing environments | 2009

Enhancing the earth system grid security infrastructure through single sign-on and autoprovisioning

Frank Siebenlist; Rachana Ananthakrishnan; David E. Bernholdt; Luca Cinquini; Ian T. Foster; Don Middleton; Neill Miller; Dean N. Williams

Collaboration


Dive into Neill Miller's collaborations.

Top Co-Authors

Jennifer M. Schopf (Argonne National Laboratory)
Ian T. Foster (Argonne National Laboratory)
Laura Pearlman (University of Southern California)
Robert B. Ross (Argonne National Laboratory)
Ann L. Chervenak (University of Southern California)
Carl Kesselman (University of Southern California)
Luca Cinquini (California Institute of Technology)
David E. Bernholdt (Oak Ridge National Laboratory)
Dean N. Williams (Lawrence Livermore National Laboratory)
Mike D'Arcy (University of Southern California)