Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jeffrey S. Vetter is active.

Publications


Featured research published by Jeffrey S. Vetter.


High Performance Distributed Computing | 1998

Autopilot: adaptive control of distributed applications

Randy L. Ribler; Jeffrey S. Vetter; Huseyin Simitci; Daniel A. Reed

With the increasing development of applications for heterogeneous, distributed computing grids, the focus of performance analysis has shifted from a posteriori optimization on homogeneous parallel systems to application tuning for heterogeneous resources with time-varying availability. This shift has profound implications for performance instrumentation and analysis techniques. Autopilot is a new infrastructure for dynamic performance tuning of heterogeneous computational grids based on closed-loop control. The paper describes the Autopilot model of distributed sensors, actuators, and decision procedures, reports preliminary performance benchmarks, and presents a case study in which the Autopilot library is used to develop an adaptive parallel input/output system.
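
The closed-loop pattern the abstract describes can be sketched in a few lines. The following is a minimal illustration of a sensor/decision-procedure/actuator cycle, assuming a hypothetical I/O-bandwidth metric and buffer-size parameter; it is not the Autopilot API, whose components are distributed across a grid rather than co-located in one process.

```c
/* Minimal sketch of a closed-loop control cycle in the style of
 * Autopilot's sensors, actuators, and decision procedures.
 * All names are hypothetical; the real Autopilot library distributes
 * these components across a computational grid. */
#include <stdio.h>

/* Sensor: extracts a performance metric from the running application. */
static double read_sensor(void) {
    static double bw = 80.0;     /* stub: pretend bandwidth improves as we tune */
    return bw += 5.0;
}

/* Decision procedure: maps observed metrics to a corrective action. */
static int decide(double observed_bw, double target_bw) {
    if (observed_bw < 0.9 * target_bw) return +1;  /* grow buffer size */
    if (observed_bw > 1.1 * target_bw) return -1;  /* shrink buffer size */
    return 0;                                      /* leave unchanged */
}

/* Actuator: applies the decision to a tunable application parameter. */
static void actuate(int adjustment, size_t *io_buffer_kb) {
    if (adjustment > 0) *io_buffer_kb *= 2;
    else if (adjustment < 0 && *io_buffer_kb > 64) *io_buffer_kb /= 2;
}

int main(void) {
    size_t io_buffer_kb = 256;
    const double target_bw = 100.0;   /* MB/s, hypothetical goal */
    for (int cycle = 0; cycle < 5; cycle++) {
        double bw = read_sensor();
        actuate(decide(bw, target_bw), &io_buffer_kb);
        printf("cycle %d: bw=%.1f MB/s, buffer=%zu KB\n",
               cycle, bw, io_buffer_kb);
    }
    return 0;
}
```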


Symposium on Frontiers of Massively Parallel Computation | 1995

Falcon: on-line monitoring and steering of large-scale parallel programs

Weiming Gu; Greg Eisenhauer; Eileen Kraemer; Karsten Schwan; John T. Stasko; Jeffrey S. Vetter; Nirupama Mallavarupu

Falcon is a system for on-line monitoring and steering of large-scale parallel programs. The purpose of such program steering is to improve the application's performance or to affect its execution behavior. This paper presents the framework of the Falcon system and its implementation, and then evaluates the performance of the system. A complex sample application, a molecular dynamics simulation program (MD), is used to motivate the research as well as to measure the performance of the Falcon system.


Computing in Science and Engineering | 2011

Keeneland: Bringing Heterogeneous GPU Computing to the Computational Science Community

Jeffrey S. Vetter; Richard Glassbrook; Jack J. Dongarra; Karsten Schwan; Bruce Loftis; Stephen McNally; Jeremy S. Meredith; James H. Rogers; Philip C. Roth; Kyle Spafford; Sudhakar Yalamanchili

The Keeneland project's goal is to develop and deploy an innovative, GPU-based high-performance computing system for the NSF computational science community.


SIGPLAN Notices | 1994

An annotated bibliography of interactive program steering

Weiming Gu; Jeffrey S. Vetter; Karsten Schwan

Scientists not only want to analyze data that results from super-computations; they also want to interpret what is happening to the data during super-computations. Researchers want to steer calculations in close-to-real-time; they want to be able to change parameters, resolution, or representation, and see the effects. They want to drive the scientific discovery process; they want to interact with their data... The most common mode of visualization at national supercomputer centers is batch.


IEEE International Conference on High Performance Computing, Data and Analytics | 2012

Classifying soft error vulnerabilities in extreme-scale scientific applications using a binary instrumentation tool

Dong Li; Jeffrey S. Vetter; Weikuan Yu

Extreme-scale scientific applications are at significant risk of being hit by soft errors on supercomputers as the scale of these systems and their component density continue to increase. In order to better understand the specific soft error vulnerabilities in scientific applications, we have built an empirical fault injection and consequence analysis tool, BIFIT, that allows us to evaluate how soft errors impact applications. In particular, BIFIT is designed with the capability to inject faults at very specific targets: an arbitrarily chosen execution point and any specific data structure. We apply BIFIT to three mission-critical scientific applications and investigate the applications' vulnerability to soft errors by performing thousands of statistical tests. We then classify each application's individual data structures based on their sensitivity to these vulnerabilities, and generalize these classifications across applications. Subsequently, these classifications can be used to apply appropriate resiliency solutions to each data structure within an application. Our study reveals that these scientific applications have a wide range of sensitivities to both the time and the location of a soft error; yet we are able to identify intrinsic relationships between application vulnerabilities and specific types of data objects. In this regard, BIFIT enables new opportunities for future resiliency research.
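
As a rough, self-contained illustration of the idea (BIFIT itself works by binary instrumentation and needs no source changes), the sketch below flips one bit of one element of a chosen data structure at a chosen execution point and shows how the error propagates; the `flip_bit` helper and the target choices are hypothetical.

```c
/* Sketch of soft-error injection in the spirit of BIFIT: flip a single
 * bit in a specific data structure at a specific execution point and
 * observe the consequence. Illustrative only; BIFIT does this via
 * binary instrumentation of unmodified executables. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Flip bit `bit` of the double pointed to by `target`. */
static void flip_bit(double *target, unsigned bit) {
    uint64_t raw;
    memcpy(&raw, target, sizeof raw);   /* avoid strict-aliasing issues */
    raw ^= (uint64_t)1 << bit;
    memcpy(target, &raw, sizeof raw);
}

int main(void) {
    double state[4] = {1.0, 2.0, 3.0, 4.0};  /* the "application" data */
    double sum = 0.0;

    for (int step = 0; step < 4; step++) {
        /* Injection target: chosen execution step, element, and bit. */
        if (step == 2)
            flip_bit(&state[2], 52);  /* low exponent bit: 3.0 becomes 6.0 */
        sum += state[step];
    }
    printf("result with injected fault: %g (fault-free: 10)\n", sum);
    return 0;
}
```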


International Parallel Processing Symposium | 1997

High performance computational steering of physical simulations

Jeffrey S. Vetter; Karsten Schwan

Computational steering allows researchers to monitor and manage long-running, resource-intensive applications at runtime. Limited research has addressed high-performance computational steering. High performance in computational steering is necessary for three reasons. First, a computational steering system must act intelligently at runtime in order to minimize its perturbation of the target application. Second, monitoring information extracted from the target must be analyzed and forwarded to the user in a timely fashion to allow fast decision making. Finally, steering actions must be executed with low latency to prevent undesirable feedback. The paper describes the use of language constructs, coined ACSL, within a system for computational steering. The steering system interprets ACSL statements and optimizes the requests for steering and monitoring. Specifically, the steering system, called Magellan, utilizes ACSL to intelligently control multithreaded, asynchronous steering servers that cooperatively steer applications. Experimental results compare favorably to those of our earlier Progress steering system.
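
ACSL's actual syntax is not reproduced here; the sketch below only illustrates the low-latency steering pattern the abstract motivates, with a steering thread applying a bounds-checked update to a registered parameter while the computation reads it lock-free. All names are hypothetical.

```c
/* Generic sketch of low-latency computational steering: a steering
 * thread changes a registered parameter while the simulation reads it
 * each timestep. Illustrates the pattern only, not Magellan or ACSL. */
#include <stdio.h>
#include <pthread.h>
#include <stdatomic.h>

static _Atomic double timestep_dt = 0.01;  /* the steerable parameter */

/* Steering server: applies a validated steering request asynchronously. */
static void *steering_server(void *arg) {
    (void)arg;
    double requested = 0.005;                 /* e.g., a request from the user */
    if (requested > 0.0 && requested < 0.1)   /* bounds check keeps it sane */
        atomic_store(&timestep_dt, requested);
    return NULL;
}

int main(void) {
    pthread_t srv;
    pthread_create(&srv, NULL, steering_server, NULL);

    for (int step = 0; step < 3; step++) {
        double dt = atomic_load(&timestep_dt);  /* lock-free, low-latency read */
        printf("step %d uses dt = %g\n", step, dt);
        /* ... advance the simulation by dt ... */
    }
    pthread_join(srv, NULL);
    return 0;
}
```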


IEEE International Conference on High Performance Computing, Data and Analytics | 2012

Early evaluation of directive-based GPU programming models for productive exascale computing

Seyong Lee; Jeffrey S. Vetter

Graphics Processing Unit (GPU)-based parallel computer architectures have shown increased popularity as a building block for high performance computing, and possibly for future exascale computing. However, their programming complexity remains a major hurdle to their widespread adoption. To provide better abstractions for programming GPU architectures, researchers and vendors have proposed several directive-based GPU programming models. These directive-based models provide different levels of abstraction and require different levels of programming effort to port and optimize applications. Understanding the differences among these new models provides valuable insights into their applicability and performance potential. In this paper, we evaluate existing directive-based models by porting thirteen application kernels from various scientific domains to use CUDA GPUs, which, in turn, allows us to identify important issues in the functionality, scalability, tunability, and debuggability of the existing models. Our evaluation shows that directive-based models can achieve reasonable performance compared to hand-written GPU codes.
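
For a sense of what "directive-based" means, here is a vector addition annotated with OpenACC, one well-known directive-based model (not necessarily among the specific models evaluated in this paper): the compiler derives the GPU kernel, data transfers, and launch configuration from the pragma, whereas a hand-written CUDA port would require explicit versions of all three.

```c
/* A directive-based GPU model annotates ordinary loops instead of
 * rewriting them in CUDA. OpenACC shown as one representative model;
 * without an OpenACC compiler the pragma is ignored and the loop
 * still runs correctly on the CPU. */
void vec_add(int n, const float *restrict a,
             const float *restrict b, float *restrict c) {
    #pragma acc parallel loop copyin(a[0:n], b[0:n]) copyout(c[0:n])
    for (int i = 0; i < n; i++)
        c[i] = a[i] + b[i];
}
```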


IEEE International Conference on High Performance Computing, Data and Analytics | 2012

Aspen: a domain specific language for performance modeling

Kyle Spafford; Jeffrey S. Vetter

We present a new approach to analytical performance modeling using Aspen, a domain specific langauge. Aspen (Abstract Scalable Performance Engineering Notation) fills an important gap in existing performance modeling techniques and is designed to enable rapid exploration of new algorithms and architectures. It includes a formal specification of an applications performance behavior and an abstract machine model. We provide an overview of Aspens features and demonstrate how it can be used to express a performance model for a three dimensional Fast Fourier Transform. We then demonstrate the composability and modularity of Aspen by importing and reusing the FFT model in a molecular dynamics model. We have also created a number of tools that allow scientists to balance application and system factors quickly and accurately.
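
The sketch below is not Aspen syntax; it only illustrates, under made-up machine parameters, the kind of calculation an Aspen model encodes: pair an application model (operation and byte counts for a complex 3D FFT, using the standard ~5n log2 n flop estimate) with an abstract machine model, and take the larger of the compute and memory times as the predicted runtime.

```c
/* Illustration of analytical performance modeling in the spirit of
 * Aspen (not Aspen syntax): an application model plus an abstract
 * machine model yields a runtime bound. Machine numbers are made up. */
#include <stdio.h>
#include <math.h>

int main(void) {
    /* Application model: complex double-precision 3D FFT on an N^3 grid. */
    double N = 1024.0;
    double points = N * N * N;
    double flops  = 5.0 * points * log2(points);  /* ~5 n log2 n for FFT */
    double bytes  = 2.0 * points * 16.0;          /* crude: one read + one
                                                     write of 16-byte values */

    /* Abstract machine model: peak flop rate and memory bandwidth. */
    double peak_flops = 1.0e12;   /* 1 Tflop/s, hypothetical */
    double peak_bw    = 2.0e11;   /* 200 GB/s, hypothetical  */

    /* Runtime is bounded by whichever resource saturates first. */
    double t_compute = flops / peak_flops;
    double t_memory  = bytes / peak_bw;
    printf("predicted runtime: %.3f s (%s-bound)\n",
           fmax(t_compute, t_memory),
           t_compute > t_memory ? "compute" : "memory");
    return 0;
}
```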


Concurrency and Computation: Practice and Experience | 1998

Falcon: On‐line monitoring for steering parallel programs

Weiming Gu; Greg Eisenhauer; Karsten Schwan; Jeffrey S. Vetter

Advances in high performance computing, communications, and user interfaces enable developers to construct increasingly interactive high performance applications. The Falcon system presented in this paper supports such interactivity by providing runtime libraries, tools, and user interfaces that permit the on-line monitoring and steering of large-scale parallel codes. The principal aspects of Falcon described in this paper are its abstractions and tools for capture and analysis of application-specific program information, performed on-line, with controlled latencies, and scalable to parallel machines of substantial size. In addition, Falcon provides support for the on-line graphical display of monitoring information, and it allows programs to be steered during their execution, by human users or algorithmically. This paper presents our basic research motivation, outlines the Falcon system's functionality, and includes a detailed evaluation of its performance characteristics in light of its principal contributions. Falcon's functionality and performance evaluation are driven by our experiences with large-scale parallel applications being developed with end users in physics and in atmospheric sciences. The sample application highlighted in this paper is a molecular dynamics simulation program (MD) used by physicists to study the statistical mechanics of liquids.
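
A minimal sketch of the capture path, assuming a hypothetical event record and buffer (Falcon's real implementation is threaded and distributed): the application deposits cheap, application-specific event records into a ring buffer, and a monitor drains them for analysis, keeping perturbation of the application low.

```c
/* Sketch of on-line monitoring in the spirit of Falcon: lightweight,
 * application-specific event capture decoupled from analysis/display.
 * All names are hypothetical. */
#include <stdio.h>
#include <stdatomic.h>

#define RING 256
typedef struct { int step; double energy; } event_t;  /* app-specific record */

static event_t ring[RING];
static atomic_int head = 0, tail = 0;

/* Called from the application: cheap, non-blocking event capture. */
static void monitor_event(int step, double energy) {
    int h = atomic_load(&head);
    ring[h % RING] = (event_t){ step, energy };
    atomic_store(&head, h + 1);
}

/* Called from the monitor: drain and analyze captured events. */
static void drain_events(void) {
    int t = atomic_load(&tail), h = atomic_load(&head);
    for (; t < h; t++) {
        event_t e = ring[t % RING];
        printf("step %d: energy = %g\n", e.step, e.energy);
    }
    atomic_store(&tail, t);
}

int main(void) {
    for (int step = 0; step < 3; step++)
        monitor_event(step, 1.0 / (step + 1));  /* simulated MD output */
    drain_events();
    return 0;
}
```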


Design, Automation, and Test in Europe | 2015

DESTINY: a tool for modeling emerging 3D NVM and eDRAM caches

Matt Poremba; Sparsh Mittal; Dong Li; Jeffrey S. Vetter; Yuan Xie

The continuous drive for performance has pushed researchers to explore novel memory technologies (e.g., nonvolatile memory) and novel fabrication approaches (e.g., 3D stacking) in the design of caches. However, a comprehensive tool that models both conventional and emerging memory technologies for both 2D and 3D designs has been lacking. We present DESTINY, a microarchitecture-level tool for modeling 3D (and 2D) cache designs using SRAM, embedded DRAM (eDRAM), spin-transfer torque RAM (STT-RAM), resistive RAM (ReRAM), and phase-change RAM (PCM). DESTINY facilitates design-space exploration across several dimensions, such as optimizing for a target (e.g., latency or area) for a given memory technology, or choosing the suitable memory technology or fabrication method (i.e., 2D vs. 3D) for a desired optimization target. DESTINY has been validated against industrial cache prototypes. We believe that DESTINY will drive architecture- and system-level studies and will be useful for researchers and designers.
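
As a toy illustration of the design-space exploration DESTINY automates (with placeholder numbers, not DESTINY's validated models), the sketch below scans a table of candidate technology/fabrication designs and selects the one that best meets an optimization target:

```c
/* Sketch of design-space exploration in the spirit of DESTINY: given
 * per-design estimates, pick the design that best meets a target
 * (here, minimum read latency). All numbers are placeholders. */
#include <stdio.h>

typedef struct {
    const char *tech;
    int layers;          /* 1 = 2D, >1 = 3D-stacked */
    double latency_ns;   /* hypothetical read latency */
    double area_mm2;     /* hypothetical array area   */
} design_t;

int main(void) {
    design_t designs[] = {
        { "SRAM",    1, 1.2, 4.0 },
        { "eDRAM",   1, 2.0, 2.0 },
        { "STT-RAM", 1, 3.5, 1.2 },
        { "ReRAM",   4, 5.0, 0.6 },  /* 3D-stacked variant */
    };
    int n = sizeof designs / sizeof designs[0], best = 0;

    for (int i = 1; i < n; i++)      /* optimization target: latency */
        if (designs[i].latency_ns < designs[best].latency_ns)
            best = i;

    printf("best for latency: %s (%dD, %.1f ns, %.1f mm^2)\n",
           designs[best].tech, designs[best].layers > 1 ? 3 : 2,
           designs[best].latency_ns, designs[best].area_mm2);
    return 0;
}
```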

Collaboration


Dive into Jeffrey S. Vetter's collaborations.

Top Co-Authors

Karsten Schwan
Georgia Institute of Technology

Dong Li
Oak Ridge National Laboratory

Greg Eisenhauer
Georgia Institute of Technology

Jeremy S. Meredith
Oak Ridge National Laboratory

Seyong Lee
Oak Ridge National Laboratory

Weiming Gu
Georgia Institute of Technology

Collin McCurdy
Oak Ridge National Laboratory

Gabriel Marin
Oak Ridge National Laboratory

Kyle Spafford
Oak Ridge National Laboratory