Publication


Featured research published by Clark Verbrugge.


Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA) | 2003

Dynamic metrics for Java

Bruno Dufour; Karel Driesen; Laurie J. Hendren; Clark Verbrugge

In order to perform meaningful experiments in optimizing compilation and run-time system design, researchers usually rely on a suite of benchmark programs of interest to the optimization technique under consideration. Programs are described as numeric, memory-intensive, concurrent, or object-oriented, based on a qualitative appraisal, in some cases with little justification. We believe it is beneficial to quantify the behaviour of programs with a concise and precisely defined set of metrics, in order to make these intuitive notions of program behaviour more concrete and subject to experimental validation. We therefore define and measure a set of unambiguous, dynamic, robust and architecture-independent metrics that can be used to categorize programs according to their dynamic behaviour in five areas: size, data structure, memory use, concurrency, and polymorphism. A framework computing some of these metrics for Java programs is presented along with specific results demonstrating how to use metric data to understand a program's behaviour, and both guide and evaluate compiler optimizations.
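The idea of a concise, architecture-independent dynamic metric can be illustrated with a small sketch (the class and method names here are invented for illustration, not the paper's definitions): a polymorphism metric that records the receiver types observed at each virtual call site and reports the fraction of sites that are actually polymorphic at run time.

```java
import java.util.*;

// Illustrative sketch only: a dynamic polymorphism metric that counts,
// per virtual call site, how many distinct receiver types were observed
// during execution. Names are hypothetical, not the paper's tool.
public class DynamicMetrics {
    // call-site id -> set of receiver types seen at run time
    private final Map<String, Set<String>> receivers = new HashMap<>();

    // Record one dynamic dispatch event from an execution trace.
    public void record(String siteId, String receiverType) {
        receivers.computeIfAbsent(siteId, k -> new HashSet<>()).add(receiverType);
    }

    // Fraction of call sites that dispatched to more than one receiver
    // type: a concise summary of the program's dynamic polymorphism.
    public double polymorphicSiteRatio() {
        if (receivers.isEmpty()) return 0.0;
        long poly = receivers.values().stream().filter(s -> s.size() > 1).count();
        return (double) poly / receivers.size();
    }
}
```

A metric of this shape is robust in the paper's sense: it depends only on the program's dynamic behaviour, not on the hardware or virtual machine used to collect the trace.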


Network and System Support for Games (NetGames) | 2006

Comparing interest management algorithms for massively multiplayer games

Jean-Sébastien Boulanger; Jörg Kienzle; Clark Verbrugge

Broadcasting all state changes to every player of a massively multiplayer game is not a viable solution. To successfully overcome the challenge of scale, massively multiplayer games have to employ sophisticated interest management techniques that only send relevant state changes to each player. This paper compares the performance of different interest management algorithms based on measurements obtained in a real massively multiplayer game using human and computer-generated player actions. We show that interest management algorithms that take into account obstacles in the world reduce the number of update messages between players by up to a factor of 6, and that some computationally inexpensive tile-based interest management algorithms can approximate ideal visibility-based interest management at very low cost. The experiments also show that measurements obtained with computer-controlled players performing random actions can approximate measurements of games played by real humans, provided that the starting positions of the random players are chosen adequately. As the size of the world and the number of players of massively multiplayer games increase, adaptive interest management techniques such as the ones studied in this paper will become increasingly important.
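A tile-based scheme of the kind compared in the paper can be sketched in a few lines (a hypothetical minimal version that ignores obstacles): the world is divided into square tiles, and a player's updates are sent only to players in the same or an adjacent tile.

```java
// Hypothetical sketch of a tile-based interest manager: two players are
// mutually interested iff their tiles are neighbours (Chebyshev distance
// at most 1). This is the cheap approximation that the visibility-based
// and obstacle-aware schemes are compared against.
public class TileInterestManager {
    private final double tileSize;

    public TileInterestManager(double tileSize) { this.tileSize = tileSize; }

    private int tile(double coord) { return (int) Math.floor(coord / tileSize); }

    // Should updates flow between players at (x1,y1) and (x2,y2)?
    public boolean interested(double x1, double y1, double x2, double y2) {
        return Math.abs(tile(x1) - tile(x2)) <= 1
            && Math.abs(tile(y1) - tile(y2)) <= 1;
    }
}
```

The appeal of this design is that the interest test is O(1) per player pair and requires no geometry beyond a division, which is why such schemes can approximate visibility-based interest management at very low cost.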


Compiler Construction (CC) | 2001

A Framework for Optimizing Java Using Attributes

Patrice Pominville; Feng Qian; Raja Vallée-Rai; Laurie J. Hendren; Clark Verbrugge

This paper presents a framework for supporting the optimization of Java programs using attributes in Java class files. We show how class file attributes may be used to convey both optimization opportunities and profile information to a variety of Java virtual machines including ahead-of-time compilers and just-in-time compilers. We present our work in the context of Soot, a framework that supports the analysis and transformation of Java bytecode (class files) [21,25,26]. We demonstrate the framework with attributes for elimination of array bounds and null pointer checks, and we provide experimental results for the Kaffe just-in-time compiler, and IBM's High Performance Compiler for Java, an ahead-of-time compiler.


Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA) | 2004

Measuring the dynamic behaviour of AspectJ programs

Bruno Dufour; Christopher Goard; Laurie J. Hendren; Oege de Moor; Ganesh Sittampalam; Clark Verbrugge

This paper proposes and implements a rigorous method for studying the dynamic behaviour of AspectJ programs. As part of this methodology several new metrics specific to AspectJ programs are proposed and tools for collecting the relevant metrics are presented. The major tools consist of: (1) a modified version of the AspectJ compiler that tags bytecode instructions with an indication of the cause of their generation, such as a particular feature of AspectJ; and (2) a modified version of the *J dynamic metrics collection tool which is composed of a JVMPI-based trace generator and an analyzer which propagates tags and computes the proposed metrics. This dynamic propagation is essential, and thus this paper contributes not only new metrics, but also non-trivial ways of computing them. We furthermore present a set of benchmarks that exercise a wide range of AspectJ's features, and the metrics that we measured on these benchmarks. The results provide guidance to AspectJ users on how to avoid efficiency pitfalls, to AspectJ implementors on promising areas for future optimization, and to tool builders on ways to understand the runtime behaviour of AspectJ.


International Conference on Parallel Architectures and Compilation Techniques (PACT) | 2007

Component-Based Lock Allocation

Richard L. Halpert; Christopher J. F. Pickett; Clark Verbrugge

The allocation of lock objects to critical sections in concurrent programs affects both performance and correctness. Recent work explores automatic lock allocation, aiming primarily to minimize conflicts and maximize parallelism by allocating locks to individual critical section interferences. We investigate component-based lock allocation, which allocates locks to entire groups of interfering critical sections. Our allocator depends on a thread-based side effect analysis, and benefits from precise points-to and may-happen-in-parallel information. Thread-local object information has a small impact, and dynamic locks do not improve significantly on static locks. We experiment with a range of small and large Java benchmarks on 2-way, 4-way, and 8-way machines, and find that a single static lock is sufficient for mtrt, that performance degrades by 10% for hsqldb, that jbb2000 becomes mostly serialized, and that for lusearch, xalan, and jbb2005, component-based lock allocation recovers the performance of the original program.
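At its core, the component-based idea reduces to merging transitively interfering critical sections and giving each resulting group one lock; a minimal union-find sketch (hypothetical names, not the authors' allocator):

```java
// Illustrative sketch: critical sections that (transitively) interfere are
// merged into one component via union-find, and every section in a
// component shares a single lock. Names are invented for illustration.
public class LockAllocator {
    private final int[] parent;

    public LockAllocator(int numCriticalSections) {
        parent = new int[numCriticalSections];
        for (int i = 0; i < parent.length; i++) parent[i] = i;
    }

    // Find with path halving.
    private int find(int x) {
        while (parent[x] != x) { parent[x] = parent[parent[x]]; x = parent[x]; }
        return x;
    }

    // Record that two critical sections interfere (e.g. they may touch the
    // same shared state and may happen in parallel): merge their components.
    public void interfere(int a, int b) { parent[find(a)] = find(b); }

    // The lock for a section is the representative of its component.
    public int lockFor(int section) { return find(section); }
}
```

In a real allocator the interference edges would come from the side effect, points-to, and may-happen-in-parallel analyses the abstract mentions; the sketch only shows how grouping, rather than per-interference allocation, determines the final lock assignment.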


Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA) | 2003

*J: a tool for dynamic analysis of Java programs

Bruno Dufour; Laurie J. Hendren; Clark Verbrugge

We describe a complete system for gathering, computing and presenting dynamic metrics from Java programs. The system itself was motivated from our real goals in understanding program behaviour as compiler/runtime developers, and so solves a number of practical and difficult problems related to metric gathering and analysis.


IEEE International Conference on High Performance Computing (HiPC) | 2004

A practical MHP information analysis for concurrent Java programs

Lin Li; Clark Verbrugge

In this paper we present an implementation of May Happen in Parallel analysis for Java that attempts to address some of the practical implementation concerns of the original work. We describe a design that incorporates techniques for aiding a feasible implementation and expanding the range of acceptable inputs. We provide experimental results showing the utility and impact of our approach and optimizations using a variety of concurrent benchmarks.
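For intuition only (this toy predicate is not the analysis the paper implements), the may-happen-in-parallel property in a simple fork/join structure: a child thread's statements may happen in parallel with exactly those parent statements that lie between the start() and the matching join().

```java
// Toy illustration of the MHP property for one parent and one child
// thread. Parent program points are numbered in execution order; the
// child runs between startPoint (where start() occurs) and joinPoint
// (where join() occurs) in the parent.
public class SimpleMHP {
    public static boolean mayHappenInParallel(int parentPoint,
                                              int startPoint, int joinPoint) {
        // Before start() the child does not exist; after join() it has
        // terminated; only in between can the two threads overlap.
        return parentPoint > startPoint && parentPoint < joinPoint;
    }
}
```

A practical analysis must handle unstructured thread creation, aliasing of thread objects, and loops, which is where the implementation concerns addressed in the paper arise.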


Compiler Construction (CC) | 2002

A Comprehensive Approach to Array Bounds Check Elimination for Java

Feng Qian; Laurie J. Hendren; Clark Verbrugge

This paper reports on a comprehensive approach to eliminating array bounds checks in Java. Our approach is based upon three analyses. The first analysis is a flow-sensitive intraprocedural analysis called variable constraint analysis (VCA). This analysis builds a small constraint graph for each important point in a method and then uses the information encoded in the graph to infer the relationship between array index expressions and the bounds of the array. Using VCA as the base analysis, we also show how two further analyses can improve the results of VCA. Array field analysis is applied on each class and provides information about some arrays stored in fields, while rectangular array analysis is an interprocedural analysis to approximate the shape of arrays, and is useful for finding rectangular (non-ragged) arrays. We have implemented all three analyses using the Soot bytecode optimization/annotation framework and we transmit the results of the analysis to virtual machines using class file attributes. We have modified the Kaffe JIT and IBM's High Performance Compiler for Java (HPCJ) to make use of these attributes and we demonstrate significant speedups.
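The constraint-graph style of reasoning can be illustrated as a difference-constraint system: each fact of the form u - v <= w becomes a weighted edge, and a shortest-path computation bounds index expressions against array lengths. A simplified sketch under that framing (invented names, not the Soot implementation):

```java
import java.util.*;

// Illustrative difference-constraint solver: a constraint u - v <= w is
// stored as an edge v -> u with weight w, and the tightest derivable
// bound on (to - from) is the shortest path from 'from' to 'to'.
// A check "i < a.length" is provably safe if i - length <= -1.
public class ConstraintGraph {
    private final Map<String, Map<String, Integer>> edges = new HashMap<>();

    // Add constraint: u - v <= w.
    public void addConstraint(String u, String v, int w) {
        edges.computeIfAbsent(v, k -> new HashMap<>()).merge(u, w, Math::min);
    }

    // Bellman-Ford over the small per-program-point graph; returns an
    // upper bound on (to - from), or null if no bound is derivable.
    public Integer bound(String from, String to) {
        Map<String, Integer> dist = new HashMap<>();
        dist.put(from, 0);
        int rounds = edges.size() + 2;
        for (int i = 0; i < rounds; i++) {
            for (var e : edges.entrySet()) {
                Integer d = dist.get(e.getKey());
                if (d == null) continue;
                for (var t : e.getValue().entrySet()) {
                    int nd = d + t.getValue();
                    Integer old = dist.get(t.getKey());
                    if (old == null || nd < old) dist.put(t.getKey(), nd);
                }
            }
        }
        return dist.get(to);
    }

    public boolean checkRemovable(String index, String length) {
        Integer b = bound(length, index);  // bounds index - length
        return b != null && b <= -1;
    }
}
```

For example, from i - j <= 0 and j - length <= -1 the solver derives i - length <= -1, so the bounds check on a[i] is removable at that point.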


Foundations of Digital Games (FDG) | 2009

Mammoth: a massively multiplayer game research framework

Jörg Kienzle; Clark Verbrugge; Bettina Kemme; Alexandre Denault; Michael Hawker

This paper presents Mammoth, a massively multiplayer game research framework designed for experimentation in an academic setting. Mammoth provides a modular architecture where different components, such as the network engine, the replication engine, or interest management, can easily be replaced. Subgames allow a researcher to define different game goals, for instance, in order to evaluate the effects of different team-play tactics on the game performance. Mammoth also offers a modular and flexible infrastructure for the definition of non-player characters with behavior controlled by complex artificial intelligence algorithms. This paper focuses on the Mammoth architecture, demonstrating how good design practices can be used to create a modular framework where researchers from different research domains can conduct their experiments. The effectiveness of the architecture is demonstrated by several successful research projects accomplished using the Mammoth framework.


Compiler Construction (CC) | 2010

Optimizing MATLAB through just-in-time specialization

Maxime Chevalier-Boisvert; Laurie J. Hendren; Clark Verbrugge

Scientists are increasingly using dynamic programming languages like Matlab for prototyping and implementation. Effectively compiling Matlab raises many challenges due to the dynamic and complex nature of Matlab types. This paper presents a new JIT-based approach which specializes and optimizes functions on-the-fly based on the current types of function arguments. A key component of our approach is a new type inference algorithm which uses the run-time argument types to infer further type and shape information, which in turn provides new optimization opportunities. These techniques are implemented in McVM, our open implementation of a Matlab virtual machine. As this is the first paper reporting on McVM, a brief introduction to McVM is also given. We have experimented with our implementation and compared it to several other Matlab implementations, including the MathWorks proprietary system, McVM without specialization, the Octave open-source interpreter and the McFor static compiler. The results are quite encouraging and indicate that specialization is an effective optimization: McVM with specialization outperforms Octave by a large margin and also sometimes outperforms the MathWorks implementation.
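The core mechanism, caching compiled code per run-time argument-type signature, can be sketched as follows (invented names and a stand-in "compiled" body; McVM's actual machinery, including its type inference, is far more involved):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Illustrative sketch of call-site specialization: each distinct
// argument-type signature triggers one "compilation", and later calls
// with the same signature reuse the cached specialized version.
public class SpecializingJit {
    private final Map<String, Function<Object[], Object>> cache = new HashMap<>();
    private int compilations = 0;

    // Build the specialization key from the run-time argument types.
    private String signature(Object[] args) {
        StringBuilder sb = new StringBuilder();
        for (Object a : args) sb.append(a.getClass().getSimpleName()).append(',');
        return sb.toString();
    }

    // Stand-in for compilation: the "specialized" body dispatches on the
    // first argument's type (double doubling vs. string duplication).
    private Function<Object[], Object> compileFor(String sig) {
        compilations++;
        return args -> args[0] instanceof Double
                ? (Object) (((Double) args[0]) * 2.0)
                : (Object) (args[0].toString() + args[0]);
    }

    public Object call(Object... args) {
        return cache.computeIfAbsent(signature(args), this::compileFor).apply(args);
    }

    public int compilationCount() { return compilations; }
}
```

The payoff mirrors the paper's observation: once argument types are fixed, the specialized body can be optimized for exactly those types, and repeated calls pay no further compilation cost.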
