Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Andy Georges is active.

Publication


Featured research published by Andy Georges.


Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA) | 2007

Statistically rigorous Java performance evaluation

Andy Georges; Dries Buytaert; Lieven Eeckhout

Java performance is far from trivial to benchmark because it is affected by various factors such as the Java application, its input, the virtual machine, the garbage collector, the heap size, etc. In addition, non-determinism at run-time causes the execution time of a Java program to differ from run to run. There are a number of sources of non-determinism, such as Just-In-Time (JIT) compilation and optimization in the virtual machine (VM) driven by timer-based method sampling, thread scheduling, garbage collection, and various system effects. There exist a wide variety of Java performance evaluation methodologies used by researchers and benchmarkers. These methodologies differ from each other in a number of ways. Some report average performance over a number of runs of the same experiment; others report the best or second best performance observed; yet others report the worst. Some iterate the benchmark multiple times within a single VM invocation; others consider multiple VM invocations and iterate a single benchmark execution; yet others consider multiple VM invocations and iterate the benchmark multiple times. This paper shows that prevalent methodologies can be misleading, and can even lead to incorrect conclusions. The reason is that the data analysis is not statistically rigorous. In this paper, we present a survey of existing Java performance evaluation methodologies and discuss the importance of statistically rigorous data analysis for dealing with non-determinism. We advocate approaches to quantify startup as well as steady-state performance, and, in addition, we provide the JavaStats software to automatically obtain performance numbers in a rigorous manner. Although this paper focuses on Java performance evaluation, many of the issues addressed also apply to other programming languages and systems that build on a managed runtime system.
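
As a rough illustration of the kind of analysis the paper advocates (JavaStats itself is not shown here), the following Python sketch reports a mean with a 95% confidence interval over multiple VM invocations; the run counts and timings are placeholders:

```python
# Minimal sketch of statistically rigorous reporting: run the benchmark in
# several separate VM invocations, then report a mean with a 95% confidence
# interval instead of a single best/worst/average number.
import math
import statistics

def confidence_interval(times, z=1.96):
    """Mean and 95% confidence interval for a list of execution times."""
    n = len(times)
    mean = statistics.mean(times)
    # Standard error of the mean; the normal approximation assumes a
    # sufficient number of runs (e.g., n >= 30).
    sem = statistics.stdev(times) / math.sqrt(n)
    return mean, mean - z * sem, mean + z * sem

# Placeholder data: execution times (seconds) from 30 VM invocations.
times = [4.21, 4.35, 4.19, 4.40, 4.28] * 6
mean, lo, hi = confidence_interval(times)
print(f"mean {mean:.2f}s, 95% CI [{lo:.2f}, {hi:.2f}]")

# Two alternatives are then considered statistically different only if
# their confidence intervals do not overlap.
```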


European Symposium on Ambient Intelligence (EUSAI) | 2004

Towards an Extensible Context Ontology for Ambient Intelligence

Davy Preuveneers; Jan Van den Bergh; Dennis Wagelaar; Andy Georges; Peter Rigole; Tim Clerckx; Yolande Berbers; Karin Coninx; Viviane Jonckers; Koen De Bosschere

To realise an Ambient Intelligence environment, it is paramount that applications have information at their disposal about the context in which they operate, preferably in a very general manner. For this purpose, various types of information should be assembled to form a representation of the context of the device on which the aforementioned applications run. To allow interoperability in an Ambient Intelligence environment, it is necessary that the context terminology is commonly understood by all participating devices. In this paper, we propose an adaptable and extensible context ontology for creating context-aware computing infrastructures, ranging from small embedded devices to high-end service platforms. The ontology has been designed to solve several key challenges in Ambient Intelligence, such as application adaptation, automatic code generation and code mobility, and generation of device-specific user interfaces.
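
The ontology itself is expressed in OWL; purely as an illustration of the idea of a shared, extensible context vocabulary, a hypothetical Python sketch might model the core concepts like this:

```python
# Illustrative only: these hypothetical classes mimic the idea of a common,
# extensible context vocabulary (platform, environment, user) that all
# participating devices interpret the same way. The real ontology is OWL.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ContextEntity:
    """Base concept; devices extend it rather than invent new vocabularies."""
    name: str

@dataclass
class Platform(ContextEntity):
    cpu_mhz: int = 0
    memory_mb: int = 0

@dataclass
class Environment(ContextEntity):
    location: str = "unknown"
    temperature_c: Optional[float] = None

@dataclass
class User(ContextEntity):
    preferences: dict = field(default_factory=dict)

# A device assembles its context from these shared concepts, so another
# device that knows the same ontology can interpret it without negotiation.
ctx = [Platform("pda", cpu_mhz=400, memory_mb=64),
       Environment("office", location="Leuven")]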


International Conference on Parallel Architectures and Compilation Techniques (PACT) | 2006

Performance prediction based on inherent program similarity

Kenneth Hoste; Aashish Phansalkar; Lieven Eeckhout; Andy Georges; Lizy Kurian John; Koen De Bosschere

A key challenge in benchmarking is to predict the performance of an application of interest on a number of platforms in order to determine which platform yields the best performance. This paper proposes an approach for doing this. We measure a number of microarchitecture-independent characteristics from the application of interest, and relate these characteristics to the characteristics of the programs from a previously profiled benchmark suite. Based on the similarity of the application of interest with programs in the benchmark suite, we make a performance prediction of the application of interest. We propose and evaluate three approaches (normalization, principal components analysis and genetic algorithm) to transform the raw data set of microarchitecture-independent characteristics into a benchmark space in which the relative distance is a measure for the relative performance differences. We evaluate our approach using all of the SPEC CPU2000 benchmarks and real hardware performance numbers from the SPEC website. Our framework estimates per-benchmark machine ranks with a 0.89 average and a 0.80 worst case rank correlation coefficient.
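
A hedged sketch of the overall recipe with the PCA variant (the data, component count, and distance weighting below are illustrative, not the authors' setup):

```python
# Sketch: place programs in a benchmark space built from
# microarchitecture-independent characteristics, then predict performance
# of a new application from its nearest neighbours in that space.
import numpy as np

# rows = benchmarks, cols = microarchitecture-independent characteristics
suite = np.random.rand(20, 8)        # previously profiled benchmark suite
suite_perf = np.random.rand(20)      # measured performance on one platform
app = np.random.rand(8)              # application of interest

# 1. Normalize each characteristic to zero mean, unit variance.
mu, sigma = suite.mean(axis=0), suite.std(axis=0)
suite_n = (suite - mu) / sigma
app_n = (app - mu) / sigma

# 2. PCA: project onto the suite's principal components.
_, _, vt = np.linalg.svd(suite_n, full_matrices=False)
k = 4                                # retained components (illustrative)
suite_p, app_p = suite_n @ vt[:k].T, app_n @ vt[:k].T

# 3. Predict: distance-weighted average over the nearest benchmarks.
d = np.linalg.norm(suite_p - app_p, axis=1)
near = np.argsort(d)[:3]
weights = 1.0 / (d[near] + 1e-9)
print("predicted performance:", np.average(suite_perf[near], weights=weights))
```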


Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA) | 2003

How Java programs interact with virtual machines at the microarchitectural level

Lieven Eeckhout; Andy Georges; Koenraad De Bosschere

Java workloads are becoming increasingly prominent on various platforms, ranging from embedded systems over general-purpose computers to high-end servers. Understanding the implications of all the aspects involved when running Java workloads is thus extremely important during the design of a system that will run such workloads. In other words, understanding the interaction between the Java application, its input and the virtual machine it runs on is key to a successful design. The goal of this paper is to study this complex interaction at the microarchitectural level, e.g., by analyzing the branch behavior, the cache behavior, etc. This is done by measuring a large number of performance characteristics using performance counters on an AMD K7 Duron microprocessor. These performance characteristics are measured for seven virtual machine configurations, and a collection of Java benchmarks with corresponding inputs coming from the SPECjvm98 benchmark suite, the SPECjbb2000 benchmark suite, the Java Grande Forum benchmark suite and an open-source raytracer, called Raja, with 19 scene descriptions. This large amount of data is further analyzed using statistical data analysis techniques, namely principal components analysis and cluster analysis. These techniques provide useful insights in an understandable way. From our experiments, we conclude that (i) the behavior observed at the microarchitectural level is primarily determined by the virtual machine for small input sets, e.g., the SPECjvm98 s1 input set; (ii) the behavior can be quite different for various input sets, e.g., short-running versus long-running benchmarks; (iii) for long-running benchmarks with few hot spots, the behavior can be primarily determined by the Java program and not the virtual machine, i.e., all the virtual machines optimize the hot spots to similarly behaving native code; and (iv) in general, the behavior of a Java application running on one virtual machine can be significantly different from running on another virtual machine. These conclusions warn researchers working on Java workloads to be careful when using a limited number of Java benchmarks or virtual machines, since this might lead to biased conclusions.
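
As a rough sketch of the statistical pipeline, assuming synthetic counter data rather than the paper's measurements, one might combine PCA and hierarchical cluster analysis like this:

```python
# Sketch of the analysis applied to hardware performance counter data:
# PCA to reduce dimensionality, then cluster analysis to see which
# (VM, benchmark, input) combinations behave alike. Data is synthetic.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
# rows = (VM, benchmark, input) combinations; cols = counter-derived
# characteristics (branch misprediction rate, cache miss rates, IPC, ...)
X = rng.random((30, 12))

# PCA via SVD on the standardized data.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
_, s, vt = np.linalg.svd(Xs, full_matrices=False)
scores = Xs @ vt[:4].T            # keep 4 principal components (illustrative)

# Hierarchical cluster analysis on the PCA scores; combinations that land
# in the same cluster exhibit similar microarchitectural behavior.
clusters = fcluster(linkage(scores, method="ward"), t=5, criterion="maxclust")
print(clusters)
```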


Software: Practice and Experience | 2004

JaRec: a portable record/replay environment for multi-threaded Java applications

Andy Georges; Mark Christiaens; Michiel Ronsse; Koen De Bosschere

This paper describes JaRec, a portable record/replay system for Java. It correctly replays multi-threaded, data-race free Java applications by recording the order of synchronization operations, and by executing them in the same order during replay. The record/replay infrastructure is developed in Java, and does not require a modification of the Java Virtual Machine (JVM) if it provides the JVM Profiler Interface (JVMPI). If the JVM does not support JVMPI, which is used for intercepting the loaded classes, only a minor modification to the JVM is required in order to run the system. On systems with limited memory resources, JaRec can be executed in a distributed fashion. This also makes it suitable to aid debugging of multi-threaded applications on embedded systems.
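
A minimal sketch of the core idea, not JaRec's actual implementation: record a global order of lock acquisitions, then force that same order during replay.

```python
# Simplified sketch: one global order over all lock acquisitions. A real
# system (like JaRec) intercepts JVM synchronization operations instead.
import threading

class OrderedLock:
    order_log = []                      # filled during a record run
    replay_plan = []                    # set to a recorded log before replay
    _turn = threading.Condition()
    _next = 0

    def __init__(self, mode="record"):
        self.mode = mode
        self._lock = threading.Lock()

    def acquire(self):
        if self.mode == "record":
            self._lock.acquire()
            with OrderedLock._turn:     # record a global acquisition order
                OrderedLock.order_log.append(threading.current_thread().name)
        else:                           # replay: block until it is our turn
            me = threading.current_thread().name
            with OrderedLock._turn:
                while OrderedLock.replay_plan[OrderedLock._next] != me:
                    OrderedLock._turn.wait()
            self._lock.acquire()
            with OrderedLock._turn:     # hand the turn to the next thread
                OrderedLock._next += 1
                OrderedLock._turn.notify_all()

    def release(self):
        self._lock.release()
```

Because the application is data-race free, ordering only the synchronization operations is enough to reproduce the original execution.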


Workshop on Fault Diagnosis and Tolerance in Cryptography (FDTC) | 2008

Exploiting Hardware Performance Counters

Leif Uhsadel; Andy Georges; Ingrid Verbauwhede

We introduce the usage of hardware performance counters (HPCs) as a new method that allows very precise access to known side channels, and also allows access to many new side channels. Many current architectures provide hardware performance counters, which allow the profiling of software during runtime. Though they allow detailed profiling, they are noisy by their very nature: HPC hardware is not validated along with the rest of the microprocessor. They are meant to serve as a relative measure and are most commonly used for profiling software projects or operating systems. Furthermore, they are only accessible in privileged mode, and can thus only be accessed by the operating system. We discuss this security model and we show first implementation results, which confirm that HPCs can be used to profile relatively short sequences of instructions with high precision. We focus on cache profiling and confirm our results by rerunning a recently published time-based cache attack, in which we replaced the time profiling function by HPCs.
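
For a coarse illustration only (the paper accesses HPCs directly and far more precisely), the Linux perf CLI can count cache events for a victim command; the victim workload below is hypothetical:

```python
# Illustrative sketch: count cache misses for a command via `perf stat`.
# This is much coarser than the paper's per-instruction-sequence profiling,
# and requires perf to be installed with sufficient permissions.
import subprocess

victim = ["openssl", "speed", "aes"]   # hypothetical victim workload
result = subprocess.run(
    ["perf", "stat", "-e", "cache-misses,cache-references", "--"] + victim,
    capture_output=True, text=True)

# perf prints its counter summary on stderr.
for line in result.stderr.splitlines():
    if "cache-misses" in line or "cache-references" in line:
        print(line.strip())
```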


Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA) | 2008

Java performance evaluation through rigorous replay compilation

Andy Georges; Lieven Eeckhout; Dries Buytaert

A managed runtime environment, such as the Java virtual machine, is non-trivial to benchmark. Java performance is affected in various complex ways by the application and its input, as well as by the virtual machine (JIT optimizer, garbage collector, thread scheduler, etc.). In addition, non-determinism due to timer-based sampling for JIT optimization, thread scheduling, and various system effects further complicate the Java performance benchmarking process. Replay compilation is a recently introduced Java performance analysis methodology that aims at controlling non-determinism to improve experimental repeatability. The key idea of replay compilation is to control the compilation load during experimentation by inducing a pre-recorded compilation plan at replay time. Replay compilation also enables teasing apart performance effects of the application versus the virtual machine. This paper argues that in contrast to current practice which uses a single compilation plan at replay time, multiple compilation plans add statistical rigor to the replay compilation methodology. By doing so, replay compilation better accounts for the variability observed in compilation load across compilation plans. In addition, we propose matched-pair comparison for statistical data analysis. Matched-pair comparison considers the performance measurements per compilation plan before and after an innovation of interest as a pair, which enables limiting the number of compilation plans needed for accurate performance analysis compared to statistical analysis assuming unpaired measurements.
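
A small sketch of matched-pair analysis with placeholder timings: pair the before/after measurements per compilation plan and build a confidence interval on the paired differences.

```python
# Matched-pair comparison sketch: per compilation plan, pair the measurement
# before and after the innovation of interest, then build a confidence
# interval on the mean paired difference. All numbers are placeholders.
import math
import statistics

before = [4.02, 4.15, 3.98, 4.20, 4.11, 4.05]  # per-plan timings, baseline
after  = [3.91, 4.02, 3.90, 4.12, 4.00, 3.97]  # same plans, with innovation

diffs = [b - a for b, a in zip(before, after)]
n = len(diffs)
mean = statistics.mean(diffs)
sem = statistics.stdev(diffs) / math.sqrt(n)
t = 2.571                    # t-value for a 95% CI with n-1 = 5 dof
lo, hi = mean - t * sem, mean + t * sem
print(f"mean improvement {mean:.3f}s, 95% CI [{lo:.3f}, {hi:.3f}]")

# If the interval excludes zero, the effect is statistically significant;
# pairing per plan factors out plan-to-plan compilation-load variability.
```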


Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA) | 2007

Using HPM-sampling to drive dynamic compilation

Dries Buytaert; Andy Georges; Michael Hind; Matthew Arnold; Lieven Eeckhout; Koen De Bosschere

All high-performance production JVMs employ an adaptive strategy for program execution. Methods are first executed unoptimized and then an online profiling mechanism is used to find a subset of methods that should be optimized during the same execution. This paper empirically evaluates the design space of several profilers for initiating dynamic compilation and shows that existing online profiling schemes suffer from several limitations. They provide an insufficient number of samples, are untimely, and have limited accuracy at determining the frequently executed methods. We describe and comprehensively evaluate HPM-sampling, a simple but effective profiling scheme for finding optimization candidates using hardware performance monitors (HPMs) that addresses the aforementioned limitations. We show that HPM-sampling is more accurate; has low overhead; and improves performance by 5.7% on average and up to 18.3% when compared to the default system in Jikes RVM, without changing the compiler.
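
A schematic sketch, not Jikes RVM's code: counter-overflow samples feed a per-method counter, and methods crossing a threshold are promoted; names and thresholds are illustrative.

```python
# Schematic sketch of sampling-driven compilation: a hardware performance
# monitor overflow periodically yields the currently executing method, and
# methods whose sample counts cross a threshold are recompiled at a higher
# optimization level. Threshold and method names are illustrative.
from collections import Counter

class SamplingProfiler:
    def __init__(self, threshold=50):
        self.samples = Counter()
        self.threshold = threshold
        self.optimized = set()

    def on_hpm_overflow(self, method):
        """Called on each hardware-counter overflow with the sampled method."""
        self.samples[method] += 1
        if (method not in self.optimized
                and self.samples[method] >= self.threshold):
            self.optimized.add(method)
            self.recompile(method)

    def recompile(self, method):
        print(f"promoting {method} to a higher optimization level")

profiler = SamplingProfiler()
for m in ["A.run"] * 60 + ["B.init"] * 10:   # simulated sample stream
    profiler.on_hpm_overflow(m)
```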


International Symposium on Code Generation and Optimization (CGO) | 2010

Automated just-in-time compiler tuning

Kenneth Hoste; Andy Georges; Lieven Eeckhout

Managed runtime systems, such as a Java virtual machine (JVM), are complex pieces of software with many interacting components. The Just-In-Time (JIT) compiler is at the core of the virtual machine; however, tuning the compiler for optimum performance is a challenging task: (i) there are many compiler optimizations and options; (ii) there may be multiple optimization levels (e.g., -O0, -O1, -O2), each with a specific optimization plan consisting of a collection of optimizations; (iii) the Adaptive Optimization System (AOS) that decides which method to optimize to which optimization level requires fine-tuning; and (iv) the effectiveness of the optimizations depends on the application as well as on the hardware platform. Current practice is to manually tune the JIT compiler, which is both tedious and very time-consuming, and which in addition may lead to suboptimal performance. This paper proposes automated tuning of the JIT compiler through multi-objective evolutionary search. The proposed framework (i) identifies optimization plans that are Pareto-optimal in terms of compilation time and code quality, (ii) assigns these plans to optimization levels, and (iii) fine-tunes the AOS accordingly. The key benefit of our framework is that it automates the entire exploration process, which enables tuning the JIT compiler for a given hardware platform and/or application at very low cost. By automatically tuning Jikes RVM using our framework for average performance across the DaCapo and SPECjvm98 benchmark suites, we achieve similar performance to the hand-tuned default Jikes RVM. When optimizing the JIT compiler for individual benchmarks, we achieve statistically significant speedups for most benchmarks, up to 40% for startup and up to 19% for steady-state performance. We also show that tuning the JIT compiler for a new hardware platform can yield significantly better performance compared to using a JIT compiler that was tuned for another platform.
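
As an illustration of the Pareto step only (the full framework uses evolutionary search), a plan is kept if no other plan is at least as good on both objectives and strictly better on one; the scores below are made up.

```python
# Pareto-front sketch: candidate optimization plans are scored by
# compilation time (lower is better) and code quality (higher is better);
# keep the plans that no other plan dominates. Scores are illustrative.
def pareto_front(plans):
    """plans: list of (name, compile_time, code_quality) tuples."""
    front = []
    for p in plans:
        dominated = any(
            q[1] <= p[1] and q[2] >= p[2] and (q[1] < p[1] or q[2] > p[2])
            for q in plans)
        if not dominated:
            front.append(p)
    return front

plans = [("plan-a", 1.0, 0.70), ("plan-b", 2.5, 0.90),
         ("plan-c", 2.0, 0.65), ("plan-d", 4.0, 0.95)]
print(pareto_front(plans))   # plan-c drops out: plan-a is faster and better
```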


IEEE International Conference on High Performance Computing, Data, and Analytics | 2012

EasyBuild: Building Software with Ease

Kenneth Hoste; Jens Timmerman; Andy Georges; Stijn De Weirdt

Maintaining a collection of software installations for a diverse user base can be a tedious, repetitive, error-prone and time-consuming task. Because most end-user software packages for an HPC environment are not readily available in existing OS package managers, they require significant extra effort from the user support team. Reducing this effort would free up a large amount of time for tackling more urgent tasks. In this work, we present EasyBuild, a software installation framework written in Python that aims to support the various installation procedures used by the vast collection of software packages that are typically installed in an HPC environment - catering to widely different user profiles. It is built on top of existing tools, and provides support for well-established installation procedures. Supporting customised installation procedures requires little effort, and sharing implementations of installation procedures becomes very easy. Installing software packages that are supported can be done by issuing a single command, even if dependencies are not available yet. Hence, it simplifies the task of HPC site support teams, and even allows end-users to keep their software installations consistent and up to date.
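
EasyBuild describes each installation in an easyconfig file written in Python syntax. A hypothetical minimal example (names, versions, and URLs are illustrative) might look like this:

```python
# Hypothetical minimal easyconfig (EasyBuild's Python-syntax spec format);
# the package, versions, and URLs here are illustrative.
easyblock = 'ConfigureMake'          # reuse a well-established build procedure

name = 'zlib'
version = '1.2.11'

homepage = 'https://zlib.net/'
description = "zlib data-compression library"

toolchain = {'name': 'GCC', 'version': '10.2.0'}

source_urls = ['https://zlib.net/fossils/']
sources = ['%(name)s-%(version)s.tar.gz']

moduleclass = 'lib'
```

Installing it, including any dependencies that are not yet in place, is then a single command along the lines of `eb zlib-1.2.11-GCC-10.2.0.eb --robot`, where `--robot` enables automatic dependency resolution.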

Collaboration


Dive into Andy Georges's collaborations.

Top Co-Authors

Lieven Eeckhout (Ghent University)

Beau Piccart (Katholieke Universiteit Leuven)

Hendrik Blockeel (Katholieke Universiteit Leuven)