Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Jonathan Geisler is active.

Publication


Featured research published by Jonathan Geisler.


Journal of Parallel and Distributed Computing | 1997

Managing Multiple Communication Methods in High-Performance Networked Computing Systems

Ian T. Foster; Jonathan Geisler; Carl Kesselman; Steven Tuecke

Modern networked computing environments and applications often require, or can benefit from, the use of multiple communication substrates, transport mechanisms, and protocols, chosen according to where communication is directed, what is communicated, or when communication is performed. We propose techniques that allow multiple communication methods to be supported transparently in a single application, with either automatic or user-specified selection criteria guiding the methods used for each communication. We explain how communication link and remote service request mechanisms facilitate the specification and implementation of multimethod communication. These mechanisms have been implemented in the Nexus multithreaded runtime system, and we use this system to illustrate solutions to various problems that arise when multimethod communication is implemented. We also illustrate the application of our techniques by describing a multimethod, multithreaded implementation of the Message Passing Interface (MPI) standard, constructed by integrating Nexus with the Argonne MPICH library. Finally, we present the results of experimental studies that reveal performance characteristics of multimethod communication, the Nexus-based MPI implementation, and a large scientific application running in a heterogeneous networked environment.
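The selection idea the abstract describes, choosing a communication method according to where communication is directed, can be illustrated with a toy sketch. The method names and selection rules below are invented for the example; this is not the Nexus API.

```python
from dataclasses import dataclass

@dataclass
class Destination:
    host: str
    same_node: bool
    same_lan: bool

def select_method(dest: Destination) -> str:
    """Pick a transport based on the destination's location."""
    if dest.same_node:
        return "shared-memory"   # fastest: same node
    if dest.same_lan:
        return "vendor-mpi"      # fast local interconnect
    return "tcp"                 # wide-area fallback

print(select_method(Destination("node7", same_node=True, same_lan=True)))
# prints shared-memory
```

In a multimethod runtime this decision is made per destination rather than once per application, which is what lets a single program mix, say, shared memory on-node with TCP across the wide area.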


High Performance Distributed Computing | 1996

Software infrastructure for the I-WAY high-performance distributed computing experiment

Ian T. Foster; Jonathan Geisler; Bill Nickless; Warren Smith; Steven Tuecke

High speed wide area networks are expected to enable innovative applications that integrate geographically distributed, high performance computing, database, graphics, and networking resources. However, there is as yet little understanding of the higher level services required to support these applications, or of the techniques required to implement these services in a scalable, secure manner. We report on a large scale prototyping effort that has yielded some insights into these issues. Building on the hardware base provided by the I-WAY, a national scale asynchronous transfer mode (ATM) network, we developed an integrated management and application programming system, called I-Soft. This system was deployed at most of the 17 I-WAY sites and used by many of the 60 applications demonstrated on the I-WAY network. We describe the I-Soft design and report on lessons learned from application experiments.


Parallel Computing | 1998

Wide-area implementation of the message passing interface

Ian T. Foster; Jonathan Geisler; William Gropp; Nicholas T. Karonis; Ewing L. Lusk; George K. Thiruvathukal; Steven Tuecke

The Message Passing Interface (MPI) can be used as a portable, high-performance programming model for wide-area computing systems. The wide-area environment introduces challenging problems for the MPI implementor, due to the heterogeneity of both the underlying physical infrastructure and the software environment at different sites. In this article, we describe an MPI implementation that incorporates solutions to these problems. This implementation has been constructed by extending the Argonne MPICH implementation of MPI to use communication services provided by the Nexus communication library and authentication, resource allocation, process creation/management, and information services provided by the I-Soft system (initially) and the Globus metacomputing toolkit (work in progress). Nexus provides multimethod communication mechanisms that allow multiple communication methods to be used in a single computation with a uniform interface; I-Soft and Globus provided standard authentication, resource management, and process management mechanisms. We describe how these various mechanisms are supported in the Nexus implementation of MPI and present performance results for this implementation on multicomputers and networked systems. We also discuss how more advanced services provided by the Globus metacomputing toolkit are being used to construct a second-generation wide-area MPI.


High Performance Distributed Computing | 2002

Using kernel couplings to predict parallel application performance

Valerie E. Taylor; Xingfu Wu; Jonathan Geisler; Rick Stevens

Performance models provide significant insight into the performance relationships between an application and the system used for execution. The major obstacle to developing performance models is the lack of knowledge about the performance relationships between the different functions that compose an application. This paper addresses the issue by using a coupling parameter, which quantifies the interaction between kernels, to develop performance predictions. The results, using three NAS parallel application benchmarks, indicate that predictions using the coupling parameter were greatly improved over the traditional technique of summing the execution times of the individual kernels in an application. In one case the coupling predictor had less than 1% relative error, in contrast to the summation methodology, which had over 20% relative error. Further, as the problem size and number of processors scale, the coupling values go through a finite number of major value changes that depends on the memory subsystem of the processor architecture.
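The comparison the abstract makes can be illustrated with a toy calculation. One common formulation defines the coupling of two adjacent kernels as the ratio of their combined running time to the sum of their isolated times; the numbers and the exact weighting below are invented for illustration, not taken from the paper.

```python
# Isolated times (s) for three kernels, and measured times for the
# two adjacent pairs run back-to-back -- all numbers are made up.
t = [2.0, 3.0, 1.5]
t_pair = {(0, 1): 5.5, (1, 2): 4.2}

def coupling(i, j):
    """Coupling value: pair's combined time over the sum of isolated
    times (> 1 = destructive interaction, < 1 = constructive reuse)."""
    return t_pair[(i, j)] / (t[i] + t[j])

# Traditional predictor: plain summation of isolated kernel times.
pred_sum = sum(t)                               # 6.5

# Coupling-based predictor (simplified): scale each kernel's time by
# the mean coupling of the adjacent pairs it participates in.
c01, c12 = coupling(0, 1), coupling(1, 2)
pred_coupled = t[0] * c01 + t[1] * (c01 + c12) / 2 + t[2] * c12
```

The point of the coupling term is that kernels interact (for example through the cache), so the whole rarely runs in exactly the sum of its parts.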


Proceedings Third Annual International Workshop on Active Middleware Services | 2001

Prophesy: automating the modeling process

Valerie E. Taylor; Xingfu Wu; Jonathan Geisler; Xin Li; Zhiling Lan

Performance models provide significant insight into the performance relationships between an application and the system, either parallel or distributed, used for execution. The development of models often requires significant time, sometimes in the range of months, to develop; this is especially the case for detailed models. This paper presents our approach to reducing the time required for model development. We present the concept of an automated model builder within the Prophesy infrastructure, which also includes automated instrumentation and extensive databases for archiving the performance data. In particular, we focus on the automation of the development of analytical performance models. The concepts include the automation of some well-established techniques, such as curve fitting, and a new technique that develops models as a composition of other models of core components or kernels in the application.
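The curve-fitting step mentioned in the abstract can be sketched in a few lines. The measurements and the choice of a quadratic model below are hypothetical; an automated model builder would select and fit such forms without manual effort.

```python
import numpy as np

# Hypothetical measurements of one kernel's execution time versus
# problem size (invented numbers, roughly quadratic in n).
n = np.array([64.0, 128.0, 256.0, 512.0, 1024.0])
t = np.array([0.9, 3.4, 13.8, 55.0, 221.0])

# Least-squares fit of t ~ a*n**2 + b*n + c, the kind of step an
# automated model builder could perform for the user.
model = np.poly1d(np.polyfit(n, t, deg=2))

# Extrapolate to an unmeasured problem size.
print(f"predicted t(2048) = {model(2048.0):.1f} s")
```

In practice the builder would also compare candidate model forms and report goodness of fit, not just emit one curve.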


High Performance Distributed Computing | 2000

Prophesy: an infrastructure for analyzing and modeling the performance of parallel and distributed applications

Valerie E. Taylor; Xingfu Wu; Jonathan Geisler; Xin Li; Zhiling Lan; Rick Stevens; Mark Hereld; Ivan R. Judson

Efficient execution of applications requires insight into how the system features impact the performance of the application. For distributed systems, the task of gaining this insight is complicated by the complexity of the system features. This insight generally results from significant experimental analysis and possibly the development of performance models. This paper presents the Prophesy project, an infrastructure that aids in gaining this needed insight based upon experience. The core component of Prophesy is a relational database that allows for the recording of performance data, system features and application details.
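The relational-database core described above can be sketched with a minimal, illustrative schema; the table and column names below are invented for this sketch, not Prophesy's actual schema.

```python
import sqlite3

# In-memory database recording applications, systems, and runs.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE application (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE system      (id INTEGER PRIMARY KEY, name TEXT,
                          processors INTEGER);
CREATE TABLE run         (id INTEGER PRIMARY KEY,
                          app_id  INTEGER REFERENCES application(id),
                          sys_id  INTEGER REFERENCES system(id),
                          problem_size INTEGER,
                          wall_time_s  REAL);
""")
con.execute("INSERT INTO application VALUES (1, 'NAS SP')")
con.execute("INSERT INTO system VALUES (1, 'example cluster', 64)")
con.execute("INSERT INTO run VALUES (1, 1, 1, 102400, 37.2)")

# A query a model builder might issue: times for one app on one system.
rows = con.execute("""SELECT problem_size, wall_time_s FROM run
                      WHERE app_id = 1 AND sys_id = 1""").fetchall()
print(rows)   # [(102400, 37.2)]
```

Keeping performance data, system features, and application details in one relational store is what lets later model-building steps query across runs instead of re-running experiments.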


Proceedings. Second MPI Developer's Conference | 1996

MPI on the I-WAY: a wide-area, multimethod implementation of the Message Passing Interface

Ian T. Foster; Jonathan Geisler; Steven Tuecke

High-speed wide-area networks enable innovative applications that integrate geographically distributed computing, database, graphics, and networking resources. The Message Passing Interface (MPI) can be used as a portable, high-performance programming model for such systems. However, the wide-area environment introduces challenging problems for the MPI implementor, because of the heterogeneity of both the underlying physical infrastructure and the authentication and software environment at different sites. We describe an MPI implementation that incorporates solutions to these problems. This implementation, which was developed for the I-WAY distributed-computing experiment, was constructed by layering MPICH on the Nexus multithreaded runtime system. Nexus provides automatic configuration mechanisms that can be used to select and configure authentication, process creation, and communication mechanisms in heterogeneous systems.


Conference on High Performance Computing (Supercomputing) | 1996

Multimethod Communication for High-Performance Metacomputing Applications

Ian T. Foster; Jonathan Geisler; Carl Kesselman; Steven Tuecke

Metacomputing systems use high-speed networks to connect supercomputers, mass storage systems, scientific instruments, and display devices with the objective of enabling parallel applications to access geographically distributed computing resources. However, experience shows that high performance often can be achieved only if applications can integrate diverse communication substrates, transport mechanisms, and protocols, chosen according to where communication is directed, what is communicated, or when communication is performed. In this article, we describe a software architecture that addresses this requirement. This architecture allows multiple communication methods to be supported transparently in a single application, with either automatic or user-specified selection criteria guiding the methods used for each communication. We describe an implementation of this architecture, based on the Nexus communication library, and use this implementation to evaluate performance issues. The implementation supported a wide variety of applications in the I-WAY metacomputing experiment at Supercomputing 95; we use one of these applications to provide a quantitative demonstration of the advantages of multimethod communication in a heterogeneous networked environment.


International Parallel and Distributed Processing Symposium | 2004

Isocoupling: reusing kernel coupling values to predict the performance of parallel applications

Xingfu Wu; Valerie E. Taylor; Jonathan Geisler; Rick Stevens

Summary form only given. Kernel coupling quantifies the interaction between adjacent kernels, and between chains of kernels, in an application. A kernel can be a loop, procedure, or file. In our previous work, we used kernel coupling values to identify how to combine the execution times of the individual kernels that compose an application to predict the execution time of the full application. The results of that work, using the NAS Parallel Benchmark SP, demonstrated that the use of coupling values produced very good predictions, with average errors in the range of only 1.18%, in contrast to simply summing the execution times of the kernels, which produced average errors in the range of 20.54%. The major concern with coupling values is that values are needed for each different problem size, number of processors, and machine. We explore the ability to reuse coupling values, in particular across the three-dimensional space whose axes are number of processors, problem size, and system architecture. The experimental results indicate that on parallel systems, as the number of processors and problem sizes increase, the coupling values go through clear transitions, which makes it possible to reuse values. Further, reusing coupling values is feasible across classes of systems such as clusters, distributed shared memory systems, and other distributed memory systems.


Journal of Parallel and Distributed Computing | 2002

Performance Coupling

Jonathan Geisler; Valerie E. Taylor

Traditional performance optimization techniques have focused on finding the kernel in an application that is the most time consuming and attempting to optimize it. In this paper, we focus on an optimization technique with a more global perspective of the application. In particular, we present a methodology for measuring the interaction, or coupling, between kernels within an application and describe how the measurements can be used to improve the performance of scientific applications. We discuss four case studies to demonstrate the use of this methodology. The first study involves the Conjugate Gradient Benchmark from the NAS Parallel Benchmarks. The coupling measurement aided in the development of a new hybrid data structure and corresponding algorithm that slightly increased the performance of the program. The second study involves the Block Tridiagonal NAS Parallel Benchmark, for which the coupling parameter aided in revising the program to reduce the level-two cache misses by 14%. Next, we introduce improvements to an application in the SpecJVM benchmark suite resulting in 41% reduction in level-one cache misses. Lastly, we present results from the Seis application from the SPEChpc Benchmarks to illustrate the coupling parameters that may result from large-scale scientific applications.

Collaboration


Dive into Jonathan Geisler's collaborations.

Top Co-Authors

Ian T. Foster, Argonne National Laboratory
Rick Stevens, Argonne National Laboratory
Bill Nickless, Argonne National Laboratory
Carl Kesselman, University of Southern California
Warren Smith, Argonne National Laboratory
Xin Li, Northwestern University
Zhiling Lan, Illinois Institute of Technology