Publication


Featured research published by Greg Eisenhauer.


symposium on frontiers of massively parallel computation | 1995

Falcon: on-line monitoring and steering of large-scale parallel programs

Weiming Gu; Greg Eisenhauer; Eileen Kraemer; Karsten Schwan; John T. Stasko; Jeffrey S. Vetter; Nirupama Mallavarupu

Falcon is a system for on-line monitoring and steering of large-scale parallel programs. The purpose of such program steering is to improve the application's performance or to affect its execution behavior. This paper presents the framework of the Falcon system and its implementation, and then evaluates the performance of the system. A complex sample application, a molecular dynamics simulation program (MD), is used to motivate the research as well as to measure the performance of the Falcon system.
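The monitoring-and-steering pattern the abstract describes can be sketched in a few lines. This is a toy illustration, not Falcon's actual API; the names (`SteerableApp`, `steering_policy`) are hypothetical. Sensors in the running program enqueue events, and a monitor consumes them and adjusts a steerable parameter on-line.

```python
import queue
import threading

class SteerableApp:
    """Toy stand-in for a parallel simulation with one tunable parameter."""
    def __init__(self):
        self.timestep = 1.0          # steerable parameter
        self.events = queue.Queue()  # sensor events flow through here

    def run_iteration(self, i):
        # A "sensor" records a monitoring event each iteration.
        load = 0.5 + 0.1 * (i % 3)   # synthetic load metric
        self.events.put({"iter": i, "load": load})

def steering_policy(app, event):
    # Steering logic: shrink the timestep when observed load is high.
    if event["load"] > 0.6:
        app.timestep *= 0.9

def monitor(app, n_iters):
    # Consume exactly n_iters events, steering after each one.
    for _ in range(n_iters):
        steering_policy(app, app.events.get())
    return app.timestep

app = SteerableApp()
t = threading.Thread(target=lambda: [app.run_iteration(i) for i in range(6)])
t.start()
final = monitor(app, 6)
t.join()
print(round(final, 3))
```

The separation mirrors the paper's division of labor: the application only emits events, while all steering decisions live in the monitor, so policies can change without touching application code.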


international conference on autonomic computing | 2011

A flexible architecture integrating monitoring and analytics for managing large-scale data centers

Chengwei Wang; Karsten Schwan; Vanish Talwar; Greg Eisenhauer; Liting Hu; Matthew Wolf

To effectively manage large-scale data centers and utility clouds, operators must understand current system and application behaviors. This requires continuous, real-time monitoring along with on-line analysis of the data captured by the monitoring system, i.e., integrated monitoring and analytics -- Monalytics [28]. A key challenge with such integration is to balance the costs and delays incurred against the benefits attained from identifying, and reacting in a timely fashion to, undesirable or non-performing system states. This paper presents a novel, flexible architecture for Monalytics in which such trade-offs are easily made by dynamically constructing software overlays called Distributed Computation Graphs (DCGs) to implement desired analytics functions. The prototype of Monalytics implementing this flexible architecture is evaluated with motivating use cases in small-scale data center experiments, and a series of analytical models is used to understand the above trade-offs at large scales. Results show that the approach provides the flexibility needed to meet the demands of autonomic management at large scale with considerably better performance/cost than traditional and brute-force solutions.
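The core idea of a Distributed Computation Graph can be approximated compactly. The sketch below is a hypothetical illustration of composing analytics functions into a dataflow graph, not the Monalytics implementation: nodes are analytics functions, and edges feed one node's output into the next.

```python
class DCG:
    """Minimal computation graph: nodes are analytics functions,
    edges feed one node's output into downstream nodes."""
    def __init__(self):
        self.nodes = {}  # name -> (fn, list of upstream names)

    def add(self, name, fn, inputs=()):
        self.nodes[name] = (fn, list(inputs))
        return self

    def evaluate(self, sources):
        # sources: name -> raw monitoring data for source nodes
        results = dict(sources)
        def resolve(name):
            if name not in results:
                fn, inputs = self.nodes[name]
                results[name] = fn(*[resolve(i) for i in inputs])
            return results[name]
        for name in self.nodes:
            resolve(name)
        return results

# Aggregate per-node CPU samples, then flag an anomalous average.
dcg = (DCG()
       .add("avg", lambda xs: sum(xs) / len(xs), inputs=["cpu"])
       .add("alert", lambda a: a > 0.8, inputs=["avg"]))
out = dcg.evaluate({"cpu": [0.7, 0.9, 0.95]})
print(out["avg"], out["alert"])
```

Because the graph is data, a deployment could rewire it at runtime (adding or relocating nodes), which is the flexibility the paper's cost/benefit trade-offs depend on.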


measurement and modeling of computer systems | 1998

An object-based infrastructure for program monitoring and steering

Greg Eisenhauer; Karsten Schwan

Program monitoring and steering systems can provide invaluable insight into the behavior of complex parallel and distributed applications. But the traditional event-stream-based approach to program monitoring does not scale well with increasing complexity. This paper introduces the Mirror Object Model, a new approach for program monitoring and steering systems. This approach provides a higher-level object-based abstraction that links the producer and the consumer of data and provides a seamless model which integrates monitoring and steering computation. We also introduce the Mirror Object Steering System (MOSS), an implementation of the Mirror Object Model based on CORBA-style objects. This paper demonstrates the advantages of MOSS over traditional event-stream-based monitoring systems in handling complex situations. Additionally, we show that the additional functionality of MOSS can be achieved without significant performance penalty.
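The mirror-object idea can be illustrated with a toy pair of classes. This is an assumption-laden sketch of the general pattern, not the MOSS API: state changes on the application object propagate to its mirror (the monitoring direction), and steering calls on the mirror are forwarded back to the application object.

```python
class AppObject:
    """Application-side object whose state is monitored and steered."""
    def __init__(self):
        self.temperature = 300.0
        self._mirrors = []

    def attach(self, mirror):
        self._mirrors.append(mirror)
        mirror._target = self

    def set_temperature(self, value):
        self.temperature = value
        # Monitoring direction: push the update to every mirror.
        for m in self._mirrors:
            m.state["temperature"] = value

class MirrorObject:
    """Consumer-side mirror: reflects state, forwards steering calls."""
    def __init__(self):
        self.state = {}
        self._target = None

    def steer_temperature(self, value):
        # Steering direction: invoke the real object (remotely, in MOSS).
        self._target.set_temperature(value)

app = AppObject()
mirror = MirrorObject()
app.attach(mirror)
app.set_temperature(310.0)       # monitoring update reaches the mirror
mirror.steer_temperature(290.0)  # steering call reaches the application
print(mirror.state["temperature"], app.temperature)
```

The consumer sees an object, not a stream of events, which is what gives the model its higher-level abstraction over event-stream monitoring.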


Concurrency and Computation: Practice and Experience | 1998

Falcon: On‐line monitoring for steering parallel programs

Weiming Gu; Greg Eisenhauer; Karsten Schwan; Jeffrey S. Vetter

Advances in high performance computing, communications and user interfaces enable developers to construct increasingly interactive high performance applications. The Falcon system presented in this paper supports such interactivity by providing runtime libraries, tools and user interfaces that permit the on-line monitoring and steering of large-scale parallel codes. The principal aspects of Falcon described in this paper are its abstractions and tools for capture and analysis of application-specific program information, performed on-line, with controlled latencies and scalable to parallel machines of substantial size. In addition, Falcon provides support for the on-line graphical display of monitoring information, and it allows programs to be steered during their execution, by human users or algorithmically. This paper presents our basic research motivation, outlines the Falcon system's functionality, and includes a detailed evaluation of its performance characteristics in light of its principal contributions. Falcon's functionality and performance evaluation are driven by our experiences with large-scale parallel applications being developed with end users in physics and in atmospheric sciences. The sample application highlighted in this paper is a molecular dynamics simulation program (MD) used by physicists to study the statistical mechanics of liquids.


international parallel and distributed processing symposium | 2013

FlexIO: I/O Middleware for Location-Flexible Scientific Data Analytics

Fang Zheng; Hongbo Zou; Greg Eisenhauer; Karsten Schwan; Matthew Wolf; Jai Dayal; Tuan-Anh Nguyen; Jianting Cao; Hasan Abbasi; Scott Klasky; Norbert Podhorszki; Hongfeng Yu

Increasingly severe I/O bottlenecks on High-End Computing machines are prompting scientists to process simulation output data online while simulations are running and before storing data on disk. There are several options to place data analytics along the I/O path: on compute nodes, on separate nodes dedicated to analytics, or after data is stored on persistent storage. Since different placements have different impact on performance and cost, there is a consequent need for flexibility in the location of data analytics. The FlexIO middleware described in this paper makes it easy for scientists to obtain such flexibility, by offering simple abstractions and diverse data movement methods to couple simulation with analytics. Various placement policies can be built on top of FlexIO to exploit the trade-offs in performing analytics at different levels of the I/O hierarchy. Experimental results demonstrate that FlexIO can support a variety of simulation and analytics workloads at large scale through flexible placement options, efficient data movement, and dynamic deployment of data manipulation functionalities.
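The placement trade-off FlexIO exposes can be illustrated with a toy cost model. All names and numbers below are hypothetical, not FlexIO's API; the point is only that each location along the I/O path carries a different dominant cost, so the cheapest placement depends on the workload.

```python
def choose_placement(compute_slowdown, network_cost, storage_cost):
    """Toy cost model for locating analytics along the I/O path.
    compute_slowdown: extra simulation time if analytics runs inline
                      on the compute nodes
    network_cost:     cost of moving data to dedicated staging nodes
    storage_cost:     cost of writing to disk and analyzing post hoc
    """
    options = {
        "inline":  compute_slowdown,
        "staging": network_cost,
        "offline": storage_cost,
    }
    return min(options, key=options.get)

# Cheap network, expensive disk: dedicated staging nodes win.
print(choose_placement(compute_slowdown=5.0, network_cost=2.0, storage_cost=9.0))
```

A placement policy layered on middleware like FlexIO would evaluate such a model per analytics function and redeploy the function to whichever tier the model favors.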


ieee international conference on high performance computing data and analytics | 2013

GoldRush: resource efficient in situ scientific data analytics using fine-grained interference aware execution

Fang Zheng; Hongfeng Yu; Can Hantaş; Matthew Wolf; Greg Eisenhauer; Karsten Schwan; Hasan Abbasi; Scott Klasky

Severe I/O bottlenecks on High End Computing platforms call for running data analytics in situ. Observing that typical high-end scientific simulations leave considerable compute-node resources unused, we leverage this fact by creating an agile runtime, termed GoldRush, that can harvest those otherwise wasted, idle resources to efficiently run in situ data analytics. GoldRush uses fine-grained scheduling to "steal" idle resources, in ways that minimize interference between the simulation and in situ analytics. This involves recognizing the potential causes of on-node resource contention and then using scheduling methods that prevent them. Experiments with representative science applications at large scales show that resources harvested on compute nodes can be leveraged to perform useful analytics, significantly improving resource efficiency, reducing data movement costs incurred by alternate solutions, and posing negligible impact on scientific simulations.
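The fine-grained harvesting idea can be sketched as a scheduler that runs analytics only inside the simulation's idle phases, and only when a task fits in the available slack. This is a hypothetical illustration of the scheduling principle, not GoldRush's implementation; phase names and task lengths are made up.

```python
def harvest(sim_phases, analytics_tasks, task_len):
    """Run analytics tasks only inside idle phases of the simulation
    (e.g. while it waits on communication), and only tasks short
    enough to fit the slack, to avoid interfering with compute."""
    done = []
    pending = list(analytics_tasks)
    for phase, length in sim_phases:
        if phase != "idle":
            continue  # never steal cycles from compute phases
        budget = length
        while pending and task_len[pending[0]] <= budget:
            task = pending.pop(0)
            budget -= task_len[task]
            done.append(task)
    return done

phases = [("compute", 10), ("idle", 3), ("compute", 10), ("idle", 5)]
tasks = ["histogram", "minmax", "render"]
lengths = {"histogram": 2, "minmax": 1, "render": 6}
print(harvest(phases, tasks, lengths))
```

Note that "render" is never scheduled: it exceeds every idle window, so a real runtime would defer it or ship it off-node rather than delay the simulation.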


IEEE Concurrency | 1998

From interactive applications to distributed laboratories

Beth Plale; Greg Eisenhauer; Karsten Schwan; Jeremy M. Heiner; Vernard Martin; Jeffrey S. Vetter

Distributed laboratories let scientists and engineers access interactive visualization tools and simulation computations so they can collaborate online, regardless of their geographic location. This article reports on efforts at Georgia Tech to improve the technologies that make this possible.


IEEE Transactions on Parallel and Distributed Systems | 2002

Native data representation: An efficient wire format for high-performance distributed computing

Greg Eisenhauer; Fabián E. Bustamante; Karsten Schwan

New trends in high-performance software development such as tool- and component-based approaches have increased the need for flexible and high-performance communication systems. When trying to reap the well-known benefits of these approaches, the question of what communication infrastructure should be used to link the various components arises. In this context, flexibility and high-performance seem to be incompatible goals. Traditional HPC-style communication libraries, such as MPI, offer good performance, but are not intended for loosely-coupled systems. Object- and metadata-based approaches like XML offer the needed plug-and-play flexibility, but with significantly lower performance. We observe that the flexibility and baseline performance of data exchange systems are strongly determined by their wire formats, or by how they represent data for transmission in heterogeneous environments. After examining the performance implications of using a number of different wire formats, we propose an alternative approach for flexible high-performance data exchange, Native Data Representation, and evaluate its current implementation in the portable binary I/O library.
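The performance argument can be illustrated by encoding one record both ways. The sketch below uses Python's standard `struct` module and is not the actual PBIO/NDR implementation; it only shows why transmitting data in its native binary layout beats re-encoding it as self-describing text.

```python
import struct

# A record in a native, fixed binary layout: the sender can transmit
# the bytes essentially as they sit in memory, and a receiver with a
# compatible layout needs no decoding at all.
record = {"step": 1024, "energy": -3.5}
native = struct.pack("<id", record["step"], record["energy"])

# The same record as self-describing XML text: flexible and
# plug-and-play, but every field must be formatted on send and
# parsed on receive.
xml = "<r><step>{}</step><energy>{}</energy></r>".format(
    record["step"], record["energy"])

print(len(native), len(xml.encode()))  # the native form is far smaller
step, energy = struct.unpack("<id", native)
```

NDR's contribution, per the abstract, is getting the text format's flexibility in heterogeneous environments while keeping costs close to this native baseline, by converting only when the receiver's layout actually differs.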


acm ifip usenix international conference on middleware | 2007

iManage: policy-driven self-management for enterprise-scale systems

Vibhore Kumar; Brian F. Cooper; Greg Eisenhauer; Karsten Schwan

It is obvious that big, complex enterprise systems are hard to manage. What is not obvious is how to make them more manageable. Although there is a growing body of research into system self-management, many techniques are either too narrow, focusing on a single component rather than the entire system, or not robust enough, failing to scale or respond to the full range of an administrator's needs. In our iManage system we have developed a policy-driven system modeling framework that aims to bridge the gap between manageable components and manageable systems. In particular, iManage provides: (1) system state-space partitioning, which divides a large system state-space into partitions that are more amenable to constructing system models and developing policies, (2) online model and policy adaptation to allow the self-management infrastructure to deal gracefully with changes in operating environment, system configuration, and workload, and (3) tractability and trust, where tractability allows an administrator to understand why the system chose a particular policy and also influence that decision, and trust allows an administrator to understand the system's confidence in a proposed, automated action. Simulations driven by scenarios given to us by our industrial collaborators demonstrate that iManage is effective both at constructing useful system models and in using those models to drive automated system management.
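State-space partitioning, the first of the three mechanisms, can be sketched as bucketing each monitored metric and attaching a policy to each bucket combination. Everything here (metric names, boundaries, policies) is hypothetical, illustrating the idea rather than iManage itself.

```python
import bisect

def partition_id(state, boundaries):
    """Bucket each metric at its boundaries; the tuple of bucket
    indices names the partition of the system state-space."""
    return tuple(bisect.bisect(boundaries[k], v)
                 for k, v in sorted(state.items()))

# Per-partition policies; in a real system each would come from a
# model fit to that partition's (smaller, more homogeneous) data.
policies = {
    (0, 0): "no-op",
    (1, 0): "add capacity",
    (1, 1): "throttle and add capacity",
}

bounds = {"cpu": [0.8], "latency_ms": [200]}
state = {"cpu": 0.9, "latency_ms": 150}
print(policies[partition_id(state, bounds)])
```

Partitioning keeps each model small and local, which is also what makes a chosen policy explainable to an administrator: the decision can be traced to one partition and its model.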


high performance distributed computing | 1994

Falcon-toward interactive parallel programs: the on-line steering of a molecular dynamics application

Greg Eisenhauer; Karsten Schwan; Weiming Gu; Niru Mallavarupu

The paper focuses on the opportunities and costs of online steering as applied to a substantial parallel application. We demonstrate potential performance improvements through the use of the Falcon system, an experimental system for the online monitoring and steering of parallel programs. The visual presentation of program output along with animated displays of program performance information via Falcon's monitoring system enables the online capture, analysis, and display of program information required for program steering. Falcon also provides the mechanisms for the manipulation of program state that accomplish this online steering.

Collaboration


Dive into Greg Eisenhauer's collaborations.

Top Co-Authors

Karsten Schwan (Georgia Institute of Technology)
Matthew Wolf (Georgia Institute of Technology)
Scott Klasky (Oak Ridge National Laboratory)
Hasan Abbasi (Georgia Institute of Technology)
Norbert Podhorszki (Oak Ridge National Laboratory)
Fang Zheng (Georgia Institute of Technology)
Jai Dayal (Georgia Institute of Technology)
Zhongtang Cai (Georgia Institute of Technology)
Ada Gavrilovska (Georgia Institute of Technology)