Network


Latest external collaborations at the country level. Click a dot to dive into the details.

Hotspot


Dive into the research topics where Chandra Krintz is active.

Publication


Featured research published by Chandra Krintz.


Architectural Support for Programming Languages and Operating Systems | 1998

Cache-conscious data placement

Brad Calder; Chandra Krintz; Simmi John; Todd M. Austin

As the gap between memory and processor speeds continues to widen, cache efficiency is an increasingly important component of processor performance. Compiler techniques have been used to improve instruction cache performance by mapping code with temporal locality to different cache blocks in the virtual address space, eliminating cache conflicts. These code placement techniques can be applied directly to the problem of placing data for improved data cache performance. In this paper we present a general framework for Cache Conscious Data Placement. This is a compiler-directed approach that creates an address placement for the stack (local variables), global variables, heap objects, and constants in order to reduce data cache misses. The placement of data objects is guided by a temporal relationship graph between objects generated via profiling. Our results show that profile-driven data placement significantly reduces the data miss rate by 24% on average.
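The profile-guided idea behind cache-conscious data placement can be sketched in a few lines of Python. The trace, window size, and greedy cache-set assignment below are illustrative assumptions for exposition, not the paper's actual compiler implementation:

```python
from collections import Counter

def temporal_graph(trace, window=4):
    """Weight each object pair by how often the two objects are
    accessed close together in a profiled reference trace."""
    graph = Counter()
    for i, obj in enumerate(trace):
        for other in trace[max(0, i - window):i]:
            if other != obj:
                graph[frozenset((obj, other))] += 1
    return graph

def place(trace, num_sets=2, window=4):
    """Greedily assign each object a cache set, preferring the set
    that conflicts least with temporally related, already-placed objects."""
    graph = temporal_graph(trace, window)
    placement = {}
    # Place the most frequently referenced objects first.
    for obj, _ in Counter(trace).most_common():
        cost = [0] * num_sets
        for placed, s in placement.items():
            cost[s] += graph[frozenset((obj, placed))]
        placement[obj] = cost.index(min(cost))
    return placement

# Objects A and B are interleaved (temporally related), so the greedy
# placement maps them to different cache sets to avoid conflict misses.
layout = place(["A", "B", "A", "B", "A", "B", "C"])
print(layout["A"] != layout["B"])
```

The key design point the abstract describes is that the temporal relationship graph, not static layout order, drives the placement decision.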


International Conference on Cloud Computing | 2009

AppScale: Scalable and Open AppEngine Application Development and Deployment

Navraj Chohan; Chris Bunch; Sydney Pang; Chandra Krintz; Nagy Mostafa; Sunil Soman; Richard Wolski

We present the design and implementation of AppScale, an open source extension to the Google AppEngine (GAE) Platform-as-a-Service (PaaS) cloud technology. Our extensions build upon the GAE SDK to facilitate distributed execution of GAE applications over virtualized cluster resources, including Infrastructure-as-a-Service (IaaS) cloud systems such as Amazon's AWS/EC2 and Eucalyptus. AppScale provides a framework with which researchers can investigate the interaction between PaaS and IaaS systems as well as the inner workings of, and new technologies for, PaaS cloud technologies using real GAE applications.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2006

Paravirtualization for HPC systems

Lamia Youseff; Richard Wolski; Brent C. Gorda; Chandra Krintz

We present the design and implementation of a system for the paravirtualization of HPC cluster resources. In this work, we investigate the efficacy of using paravirtualizing software for performance-critical HPC kernels and applications. We present a comprehensive performance evaluation of Xen, a low-overhead, Linux-based, virtual machine monitor, for paravirtualization of HPC cluster systems at LLNL. We investigate subsystem and overall performance using a wide range of benchmarks and applications. We employ statistically sound methods to compare the performance of a paravirtualized kernel against three Linux operating systems: RedHat Enterprise 4 for build versions 2.6.9 and 2.6.12 and the LLNL CHAOS kernel. Our results indicate that Xen is very efficient and practical for HPC systems.


Programming Language Design and Implementation | 2001

Using annotations to reduce dynamic optimization time

Chandra Krintz; Brad Calder

Dynamic compilation and optimization are widely used in heterogeneous computing environments, in which an intermediate form of the code is compiled to native code during execution. An important trade-off exists between the amount of time spent dynamically optimizing the program and the running time of the program. The time to perform dynamic optimizations can cause significant delays during execution and also prohibit performance gains that result from more complex optimization. In this research, we present an annotation framework that substantially reduces compilation overhead of Java programs. Annotations consist of analysis information collected off-line and are incorporated into Java programs. The annotations are then used by dynamic compilers to guide optimization. The annotations we present reduce compilation overhead incurred at all stages of compilation and optimization as well as enable complex optimizations to be performed dynamically. On average, our annotation optimizations reduce optimized compilation overhead by 78% and enable speedups of 7% on average for the programs examined.


First International Workshop on Virtualization Technology in Distributed Computing (VTDC 2006) | 2006

Evaluating the Performance Impact of Xen on MPI and Process Execution For HPC Systems

Lamia Youseff; Rich Wolski; Brent C. Gorda; Chandra Krintz

Virtualization has become increasingly popular for enabling full system isolation, load balancing, and hardware multiplexing for high-end server systems. Virtualizing software has the potential to benefit HPC systems similarly by facilitating efficient cluster management, application isolation, full-system customization, and process migration. However, virtualizing software is not currently employed in HPC environments due to its perceived overhead. In this work, we investigate the overhead imposed by the popular, open-source, Xen virtualization system on performance-critical HPC kernels and applications. We empirically evaluate the impact of Xen on both communication and computation and compare its use to that of a customized kernel using HPC cluster resources at Lawrence Livermore National Lab (LLNL). We also employ statistically sound methods to compare the performance of a paravirtualized kernel against three popular Linux operating systems: RedHat Enterprise 4 (RHEL4) for build versions 2.6.9 and 2.6.12 and the LLNL CHAOS kernel, a specialized version of RHEL4. Our results indicate that Xen is very efficient and practical for HPC systems.


Symposium on Code Generation and Optimization | 2006

Online Phase Detection Algorithms

Priya Nagpurkar; Chandra Krintz; Michael Hind; Peter F. Sweeney; V. T. Rajan

Today's virtual machines (VMs) dynamically optimize an application as it is executing, often employing optimizations that are specialized for the current execution profile. An online phase detector determines when an executing program is in a stable period of program execution (a phase) or is in transition. A VM using an online phase detector can apply specialized optimizations during a phase or reconsider optimization decisions between phases. Unfortunately, extant approaches to detecting phase behavior rely on offline profiling or hardware support, or are targeted toward a particular optimization. In this work, we focus on the enabling technology of online phase detection. More specifically, we contribute (a) a novel framework for online phase detection, (b) multiple instantiations of the framework that produce novel online phase detection algorithms, (c) a novel client- and machine-independent baseline methodology for evaluating the accuracy of an online phase detector, (d) a metric to compare online detectors to this baseline, and (e) a detailed empirical evaluation, using Java applications, of the accuracy of the numerous phase detectors.
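One simple way to instantiate an online phase detector is to compare consecutive windows of profile samples for similarity. The sketch below is a toy illustration under assumed parameters (window size, similarity threshold, Jaccard-style overlap), not one of the paper's actual algorithms:

```python
from collections import Counter

def similarity(a, b):
    """Jaccard-style overlap between two profile windows
    (multisets of sampled method IDs)."""
    inter = sum((Counter(a) & Counter(b)).values())
    union = sum((Counter(a) | Counter(b)).values())
    return inter / union if union else 1.0

def detect_phases(samples, window=4, threshold=0.5):
    """Flag each step as in-phase (True) when the current window of
    profile samples resembles the previous one, else as a transition."""
    flags = []
    for i in range(window, len(samples) - window + 1):
        prev = samples[i - window:i]
        cur = samples[i:i + window]
        flags.append(similarity(prev, cur) >= threshold)
    return flags

# A steady prefix of repeated methods registers as a phase;
# the shift to new methods at the end registers as a transition.
flags = detect_phases(["a", "b", "a", "b", "a", "b", "a", "b",
                       "c", "d", "e", "f"])
print(flags[0], flags[-1])
```

A VM would consult such a flag stream to decide when specialized optimizations are safe to apply and when they should be reconsidered.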


International Symposium on Memory Management | 2004

Dynamic selection of application-specific garbage collectors

Sunil Soman; Chandra Krintz; David F. Bacon

Much prior work has shown that the performance enabled by garbage collection (GC) systems is highly dependent upon the behavior of the application as well as on the available resources. That is, no single GC enables the best performance for all programs and all heap sizes. To address this limitation, we present the design, implementation, and empirical evaluation of a novel Java Virtual Machine (JVM) extension that facilitates dynamic switching between a number of very different and popular garbage collectors. We also show how to exploit this functionality using annotation-guided GC selection and evaluate the system using a large number of benchmarks. In addition, we implement and evaluate a simple heuristic to investigate the efficacy of switching automatically. Our results show that, on average, our annotation-guided system introduces less than 4% overhead and improves performance by 24% over the worst-performing GC (across heap sizes) and by 7% over always using the popular Generational/Mark-Sweep hybrid.
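The automatic-switching heuristic for application-specific garbage collection can be illustrated with a toy model. The collector names, capacity, and occupancy threshold below are hypothetical placeholders, not the JVM extension's real policy:

```python
class SwitchingHeap:
    """Toy model of a VM that swaps collectors at runtime: a copying
    collector while occupancy is low, mark-sweep once the live set
    approaches heap capacity (copying needs a free semispace reserve)."""

    def __init__(self, capacity, threshold=0.5):
        self.capacity = capacity
        self.threshold = threshold
        self.live = 0
        self.collector = "semispace-copying"

    def allocate(self, size):
        self.live += size
        if self.live / self.capacity > self.threshold:
            # No room left for a copy reserve: switch collectors.
            self.collector = "mark-sweep"

heap = SwitchingHeap(capacity=100)
heap.allocate(30)
print(heap.collector)   # still copying at low occupancy
heap.allocate(40)
print(heap.collector)   # switched once live data exceeds half the heap
```

The point of the sketch is the mechanism, not the policy: the heap consults a runtime condition (or an annotation) and changes collectors without restarting the application.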


Software: Practice and Experience | 2001

Reducing the Overhead of Dynamic Compilation

Chandra Krintz; David Grove; Vivek Sarkar; Brad Calder

The execution model for mobile, dynamically-linked, object-oriented programs has evolved from fast interpretation to a mix of interpreted and dynamically compiled execution. The primary motivation for dynamic compilation is that compiled code executes significantly faster than interpreted code. However, dynamic compilation, which is performed while the application is running, introduces execution delay. In this paper we present two dynamic compilation techniques that enable high performance execution while reducing the effect of this compilation overhead. These techniques can be classified as (1) decreasing the amount of compilation performed, and (2) overlapping compilation with execution.


Conference on Object-Oriented Programming, Systems, Languages, and Applications | 1999

Reducing transfer delay using Java class file splitting and prefetching

Brad Calder; Chandra Krintz; Urs Hölzle

The proliferation of the Internet is fueling the development of mobile computing environments in which mobile code is executed on remote sites. In such environments, the end user must often wait while the mobile program is transferred from the server to the client where it executes. This downloading can create significant delays, hurting the interactive experience of users. We propose Java class file splitting and class file prefetching optimizations in order to reduce transfer delay. Class file splitting moves the infrequently used part of a class file into a corresponding cold class file to reduce the number of bytes transferred. Java class file prefetching is used to overlap program transfer delays with program execution. Our splitting and prefetching compiler optimizations do not require any change to the Java Virtual Machine, and thus can be used with existing Java implementations. Class file splitting reduces the startup time for Java programs by 10% on average, and class file splitting used with prefetching reduces the overall transfer delay encountered during a mobile program's execution by 25% to 30% on average.
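The hot/cold partitioning step of class file splitting can be sketched as a simple profile-driven filter. The method names, profile counts, and threshold below are hypothetical examples, not output from the paper's compiler:

```python
def split_class(methods, profile, hot_threshold=1):
    """Partition a class's methods into a 'hot' file (transferred first)
    and a 'cold' file (fetched on demand), based on profiled call counts."""
    hot = {m: code for m, code in methods.items()
           if profile.get(m, 0) >= hot_threshold}
    cold = {m: code for m, code in methods.items() if m not in hot}
    return hot, cold

# debugDump was never called during profiling, so it is moved to the
# cold class file and need not be transferred at startup.
methods = {"init": "...", "render": "...", "debugDump": "..."}
profile = {"init": 12, "render": 40}
hot, cold = split_class(methods, profile)
print(sorted(hot), sorted(cold))
```

Prefetching then complements splitting: while the hot code executes, the runtime can request cold class files in the background so the transfer overlaps with execution.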


International Conference on Mobile Systems, Applications, and Services | 2004

NWSLite: a light-weight prediction utility for mobile devices

Selim Gurun; Chandra Krintz; Richard Wolski


Collaboration


Dive into Chandra Krintz's collaborations.

Top Co-Authors

- Rich Wolski, University of California
- Richard Wolski, University of California
- Navraj Chohan, University of California
- Chris Bunch, University of California
- Selim Gurun, University of California
- Sunil Soman, University of California
- Brad Calder, University of California
- Lingli Zhang, University of California