Publication


Featured research published by Jerry C. Yan.


Software: Practice and Experience | 1995

Performance measurement, visualization and modeling of parallel and distributed programs using the AIMS toolkit

Jerry C. Yan; Sekhar R. Sarukkai; Pankaj Mehra

Writing large‐scale parallel and distributed scientific applications that make optimum use of the multiprocessor is a challenging problem. Typically, computational resources are underused due to performance failures in the application being executed. Performance‐tuning tools are essential for exposing these performance failures and for suggesting ways to improve program performance. In this paper, we first address fundamental issues in building useful performance‐tuning tools and then describe our experience with the AIMS toolkit for tuning parallel and distributed programs on a variety of platforms. AIMS supports source‐code instrumentation, run‐time monitoring, graphical execution profiles, performance indices and automated modeling techniques as ways to expose performance problems of programs. Using several examples representing a broad range of scientific applications, we illustrate AIMS's effectiveness in exposing performance problems in parallel and distributed programs.
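The source-code instrumentation and run-time monitoring that the abstract describes can be sketched in a few lines; the decorator, event format, and trace log below are illustrative only and are not AIMS APIs:

```python
import time

# Illustrative sketch of source-level instrumentation: each instrumented
# routine logs timestamped entry/exit events, the raw material from which
# execution profiles and performance models are later built.
trace = []  # global event log of (event, routine_name, timestamp) tuples

def instrumented(fn):
    """Wrap fn so every call emits entry and exit trace events."""
    def wrapper(*args, **kwargs):
        trace.append(("enter", fn.__name__, time.perf_counter()))
        try:
            return fn(*args, **kwargs)
        finally:
            trace.append(("exit", fn.__name__, time.perf_counter()))
    return wrapper

@instrumented
def solve_step():
    return sum(i * i for i in range(1000))  # stand-in for real work

solve_step()
solve_step()
# trace now holds two enter/exit pairs that a post-processor could turn
# into per-routine time profiles or feed into a performance model.
```

A real toolkit would additionally record process IDs and message events so that traces from multiple processors can be merged and visualized.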


Lecture Notes in Computer Science | 2000

An Infrastructure for Monitoring and Management in Computational Grids

Abdul Waheed; Warren Smith; Jude George; Jerry C. Yan

We present the design and implementation of an infrastructure that enables monitoring of resources, services, and applications in a computational grid and provides a toolkit to help manage these entities when faults occur. This infrastructure builds on three basic monitoring components: sensors to perform measurements, actuators to perform actions, and an event service to communicate events between remote processes. We describe how we apply our infrastructure to support a grid service and an application: (1) the Globus Metacomputing Directory Service; and (2) a long-running and coarse-grained parameter study application. We use these applications to show that our monitoring infrastructure is highly modular, conveniently retargetable, and extensible.
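The three-component split the abstract describes, sensors that measure, actuators that act, and an event service that carries events between them, can be sketched as a minimal in-process version; the class names and the load-threshold policy are my own illustration, not the paper's API:

```python
# Minimal sketch of the sensor / actuator / event-service architecture
# described above. Names and the threshold policy are illustrative only.

class EventService:
    """Routes events published by sensors to subscribed handlers."""
    def __init__(self):
        self.subscribers = []
    def subscribe(self, handler):
        self.subscribers.append(handler)
    def publish(self, event):
        for handler in self.subscribers:
            handler(event)

class LoadSensor:
    """Performs a measurement and reports it as an event."""
    def __init__(self, service):
        self.service = service
    def measure(self, load):
        self.service.publish({"type": "load", "value": load})

class RestartActuator:
    """Performs a recovery action when a fault condition is observed."""
    def __init__(self):
        self.restarts = 0
    def handle(self, event):
        if event["type"] == "load" and event["value"] > 0.9:
            self.restarts += 1  # stand-in for restarting a grid service

service = EventService()
actuator = RestartActuator()
service.subscribe(actuator.handle)
sensor = LoadSensor(service)
sensor.measure(0.5)   # healthy measurement: no action taken
sensor.measure(0.95)  # fault condition: actuator fires once
```

In the grid setting the event service would span remote processes, but the decoupling is the same: sensors and actuators never reference each other directly.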


High Performance Distributed Computing | 2000

An evaluation of alternative designs for a grid information service

Warren Smith; Abdul Waheed; David Meyers; Jerry C. Yan

Computational grids consisting of large and diverse sets of distributed resources have recently been adopted by organizations such as NASA and the NSF. One key component of a computational grid is an information service that provides information about resources, services, and applications to users and their tools. This information is required to use a computational grid and therefore should be available in a timely and reliable manner. In this work, we describe the Globus information service, describe how this service is used, analyze its current performance, and perform trace-driven simulations to evaluate alternative implementations of this grid information service. We find that the majority of the transactions with the information service are changes to the data maintained by the service. We also find that of the three servers we evaluate, one of the commercial products provides the best performance for our workload and that the response time of the information service was not improved during the single experiment we performed with data distributed across two servers.
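A toy version of the trace analysis behind the write-dominance finding might look like this; the transaction format is invented for illustration:

```python
# Toy trace analysis in the spirit of the study above: classify
# information-service transactions and compute the write fraction.
# The (operation, target) trace format is invented for illustration.
trace = [
    ("update", "host-a/cpu-load"),
    ("update", "host-b/cpu-load"),
    ("search", "hosts with free cpus"),
    ("update", "host-a/free-memory"),
]

writes = sum(1 for op, _ in trace if op == "update")
write_fraction = writes / len(trace)
# A write-heavy workload like this one favors servers optimized for
# updates over read-mostly directory servers, which is why the choice
# of server implementation matters for a grid information service.
```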


Parallel Computing | 1996

Analyzing parallel program performance using normalized performance indices and trace transformation techniques

Jerry C. Yan; Sekhar R. Sarukkai

Abstract In this paper we describe how a performance-tuning tool-set, AIMS, guides the user towards developing efficient and scalable production-level parallel programs by locating performance-improvement opportunities and determining optimization benefits. AIMS's Xisk helps identify potential optimizations by computing various pre-defined normalized performance indices from program traces. Inspection of these indices points to specific optimizations that may benefit program performance. After identifying and characterizing performance problems, AIMS's MK can provide quantitative estimates of performance benefits to help the user avoid arduous optimizations that may not lead to the expected performance improvements. MK also helps identify potential pitfalls or benefits of changing any of various system parameters. Based on MK's performance projection, an informed decision regarding the most beneficial program optimizations or upgrades in execution environments can be made.
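As a concrete, invented example of a normalized performance index, the communication fraction per process, time spent communicating divided by total elapsed time, can be computed directly from a trace; the event format below is my own, not AIMS's:

```python
# Hypothetical normalized performance index computed from a trace:
# the fraction of each process's elapsed time spent in communication.
# The (process, kind, start, end) event format is invented here.
events = [
    (0, "compute", 0.0, 4.0),
    (0, "comm",    4.0, 5.0),
    (1, "compute", 0.0, 2.0),
    (1, "comm",    2.0, 5.0),
]

def comm_index(events, proc):
    """0.0 means pure computation, 1.0 means pure communication."""
    comm = sum(e - s for p, k, s, e in events if p == proc and k == "comm")
    total = sum(e - s for p, k, s, e in events if p == proc)
    return comm / total

indices = {p: comm_index(events, p) for p in {p for p, *_ in events}}
# A high index on one process (here process 1, at 0.6) points at a
# load-balance or communication-optimization opportunity.
```

Because the index is normalized, it can be compared across runs, processors, and problem sizes, which is what makes such indices useful for guiding optimization.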


Parallel Computing | 1998

Constructing Space-Time Views from Fixed Size Trace Files — Getting the Best of Both Worlds

Jerry C. Yan; Melisa A. Schmidt

Abstract The performance data gathered and analyzed by monitoring tools currently available to the supercomputing community falls into two categories: statistics and event traces. Statistical data is much more compact but lacks the probative power event traces offer. Event traces, on the other hand, can be extremely large even for short executions of simple programs. In this paper, we propose an innovative methodology for performance data gathering and representation. The user will be able to limit the amount of trace data collected and, at the same time, carry out some of the analysis event traces offer. Two basic ideas were employed: the use of averages to approximate different execution instances of the same program construct and “formulae” to represent repetitive sequences associated with communication and control flow (such as message passing and branching). With the help of a few examples, we illustrate the use of these techniques and evaluate the quality of the performance data we collected.
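The two ideas, replacing repeated instances of a construct with averages and repetitive sequences with compact “formulae”, can be sketched as a simple run-length compressor over trace events; the encoding is illustrative, not the paper's actual format:

```python
# Sketch of fixed-size trace reduction: collapse consecutive repeats of
# the same construct into (event, count, average_duration) "formulae".
# The encoding is illustrative, not the paper's actual format.
def compress(events):
    """events: list of (name, duration); returns run-length summary."""
    out = []
    for name, dur in events:
        if out and out[-1][0] == name:
            prev_name, count, avg = out[-1]
            # fold this instance into the running average for the run
            out[-1] = (name, count + 1, (avg * count + dur) / (count + 1))
        else:
            out.append((name, 1, dur))
    return out

raw = [("send", 1.0), ("send", 3.0), ("recv", 2.0), ("send", 2.0)]
summary = compress(raw)
# Four raw events shrink to three entries; the repeated sends are
# represented by their count and average duration instead of being
# recorded individually, bounding trace growth for repetitive loops.
```

The trade-off the paper evaluates is exactly the one visible here: the summary loses per-instance variation (the 1.0 and 3.0 sends both become 2.0) in exchange for a bounded trace size.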


Modeling, Analysis and Simulation of Computer and Telecommunication Systems | 1996

Event-based study of the effect of execution environments on parallel program performance

Sekhar R. Sarukkai; Jerry C. Yan

In this paper we seek to demonstrate the importance of studying the effect of changes in execution environment parameters, on parallel applications executed on state-of-the-art multiprocessors. A comprehensive methodology for event-based analysis of program behavior is introduced. This methodology is used to study the performance significance of various system parameters such as processor speed, message-buffer size, buffer copy speed, network bandwidth, communication latency, interrupt overheads and other system parameters. With the help of a few CFD examples, we illustrate the use of our technique in determining suitable parameter values of the execution environment for three applications.
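In the same spirit, a drastically simplified event-based “what-if” replay might rescale the communication events in a trace under a hypothetical network-speed change; the event format and the linear scaling model are invented for illustration:

```python
# Drastically simplified event-based "what-if" analysis: replay a trace
# with communication durations rescaled for a hypothetically faster
# network. The event format and linear scaling model are invented here.
def predicted_runtime(events, comm_speedup):
    """Sum event durations, scaling only communication events."""
    total = 0.0
    for kind, duration in events:
        if kind == "comm":
            total += duration / comm_speedup
        else:
            total += duration
    return total

trace = [("compute", 6.0), ("comm", 4.0)]
baseline = predicted_runtime(trace, 1.0)
faster_net = predicted_runtime(trace, 2.0)
# Doubling network speed helps only the communication share of the
# trace, so the predicted benefit saturates once computation dominates,
# which is the kind of insight such parameter studies are after.
```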


Future Generation Computer Systems | 1999

Parallelization of NAS benchmarks for shared memory multiprocessors

Abdul Waheed; Jerry C. Yan

Abstract This paper presents our experiences of parallelizing the sequential implementation of NAS benchmarks using compiler directives on the SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of porting the code to parallelization tools and compilers. Due to the simplicity of programming shared-memory multiprocessors, compiler developers have provided various facilities to allow users to exploit parallelism. Native compilers on the SGI Origin2000 support multiprocessing directives that allow users to exploit loop-level parallelism in their programs. Additionally, supporting tools can accomplish this process automatically. We experimented with these compiler directives and supporting tools by parallelizing the sequential implementation of NAS benchmarks. Results reported in this paper indicate that with minimal effort, the performance gain is comparable with the hand-parallelized, carefully optimized, message-passing implementations of the same benchmarks.
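The directive style described above annotates loops for the compiler while leaving the loop body untouched. An analogous loop-level parallelization in Python, standing in for the compiler-directive approach rather than reproducing it, distributes iterations across workers:

```python
from concurrent.futures import ThreadPoolExecutor

# Analogy for loop-level parallelism: the sequential loop body stays
# unchanged while iterations are distributed across workers, much as a
# multiprocessing directive tells the compiler to split a loop.
def body(i):
    return i * i  # the unchanged "loop body"

def parallel_loop(n, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(body, range(n)))

result = parallel_loop(8)
# The correctness requirement for a parallelizable loop is that the
# result matches the sequential loop [body(i) for i in range(8)].
```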


Measurement and Modeling of Computer Systems | 1994

A comparison of two model-based performance-prediction techniques for message-passing parallel programs

Pankaj Mehra; Catherine H. Schulbach; Jerry C. Yan


Archive | 1997

Performance Data Gathering and Representation from Fixed-Size Statistical Data

Jerry C. Yan; Haoqiang H. Jin; Melisa A. Schmidt; Paul Kutler


Archive | 1994

Performance Measurement, Visualization and Modeling of Parallel and Distributed Programs

Jerry C. Yan; Sekhar R. Sarukkai; Pankaj Mehra; Henry Lum

Collaboration


Jerry C. Yan's top co-author: Abdul Waheed (Michigan State University).