Kris Venstermans
Ghent University
Publications
Featured research published by Kris Venstermans.
High Performance Embedded Architectures and Compilers | 2005
Dries Buytaert; Kris Venstermans; Lieven Eeckhout; Koen De Bosschere
This paper shows that Appel-style garbage collectors often make suboptimal decisions both in terms of when and how to collect. We argue that garbage collection should be done when the amount of live bytes is low (in order to minimize the collection cost) and when the amount of dead objects is high (in order to maximize the available heap size after collection). In addition, we observe that Appel-style collectors sometimes trigger a nursery collection in cases where a full-heap collection would have been better. Based on these observations, we propose garbage collection hints (GCH), a profile-directed method for guiding garbage collection. Offline profiling is used to identify favorable collection points in the program code. At those favorable collection points, the VM dynamically chooses between nursery and full-heap collections based on an analytical garbage collector cost-benefit model. By doing so, GCH guides the collector in terms of when and how to collect. Experimental results using the SPECjvm98 benchmarks and two generational garbage collectors show that substantial reductions can be obtained in garbage collection time (up to 30X) and that the overall execution time can be reduced by more than 10%.
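To make the cost-benefit idea concrete, the sketch below shows one way such a decision could look at a profile-identified collection point: compare the estimated cost of processing the surviving bytes against the heap space each kind of collection would free. The class name, method signature, and cost parameters are hypothetical illustrations, not the paper's actual analytical model.

```java
/**
 * Hypothetical sketch of a GCH-style cost-benefit decision at a
 * profile-identified favorable collection point. The cost model below is
 * purely illustrative; the paper's analytical model is more detailed.
 */
public final class GcHintDecision {

    enum GcKind { NONE, NURSERY, FULL_HEAP }

    /** Choose whether and how to collect at this hint point, given profile estimates. */
    static GcKind decide(long nurseryLiveBytes, long matureLiveBytes,
                         long nurseryUsedBytes, long heapSizeBytes,
                         double nurseryCostPerLiveByte, double fullCostPerLiveByte) {
        // The cost of a collection is dominated by processing the bytes that survive it.
        double nurseryCost = nurseryLiveBytes * nurseryCostPerLiveByte;
        double fullCost = (nurseryLiveBytes + matureLiveBytes) * fullCostPerLiveByte;

        // The benefit is the heap space the collection frees up.
        long nurseryFreed = nurseryUsedBytes - nurseryLiveBytes;
        long fullFreed = heapSizeBytes - (nurseryLiveBytes + matureLiveBytes);

        // Only collect here if few bytes are live and many are dead (cheap and worthwhile).
        if (nurseryFreed < nurseryUsedBytes / 2) {
            return GcKind.NONE;
        }
        // Otherwise pick the collection with the better freed-bytes-per-unit-of-cost ratio.
        return (fullFreed / fullCost > nurseryFreed / nurseryCost)
                ? GcKind.FULL_HEAP
                : GcKind.NURSERY;
    }

    public static void main(String[] args) {
        // 2 MB live in a 16 MB nursery, 30 MB live in the mature space, 128 MB heap.
        System.out.println(decide(2_000_000, 30_000_000,
                                  16_000_000, 128_000_000, 1.0, 0.6)); // prints: NURSERY
    }
}
```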
ACM Transactions on Architecture and Code Optimization | 2007
Kris Venstermans; Lieven Eeckhout; Koen De Bosschere
Memory performance is an important design issue for contemporary computer systems given the huge processor/memory speed gap. This paper proposes a space-efficient Java object model for reducing the memory consumption of 64-bit Java virtual machines. We completely eliminate the object header through typed virtual addressing (TVA) or implicit typing. TVA encodes the object type in the object's virtual address by allocating all objects of a given type in a contiguous memory segment. This allows for removing the type information as well as the status field from the object header. Whenever type and status information is needed, masking is applied to the object's virtual address to obtain an offset into type and status information structures. Unlike previous work on implicit typing, we apply TVA to a selected number of frequently allocated object types, hence the name selective TVA (STVA); this limits the amount of memory fragmentation. In addition to applying STVA, we also compress the type information block (TIB) pointers for all objects that do not fall under TVA. We implement the space-efficient Java object model in the 64-bit version of the Jikes RVM on an IBM AIX platform and compare its performance against the traditionally used Java object model using a multitude of Java benchmarks. We conclude that the space-efficient Java object model reduces memory consumption by 15% on average (and up to 45% for some benchmarks). About one-half of the reduction comes from TIB pointer compression; the other half comes from STVA. In terms of performance, the space-efficient object model generally does not affect performance; however, for some benchmarks we observe statistically significant performance speedups, up to 20%.
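As a rough illustration of the masking step described above, the sketch below derives a type index purely from an object's virtual address, assuming a hypothetical layout in which every TVA-allocated type owns a 1 GiB segment above a fixed region base. The constants and helper names are illustrative, not those of the paper's Jikes RVM implementation.

```java
/**
 * Hypothetical sketch of how typed virtual addressing (TVA) can recover type
 * information from an object's address alone. The bit layout (segment size,
 * region base) is an assumption for illustration only.
 */
public final class TypedVirtualAddressing {

    // Assume each TVA type owns a 1 GiB virtual segment inside a dedicated region.
    static final int SEGMENT_SHIFT = 30;
    static final long TVA_REGION_BASE = 0x0000_0100_0000_0000L; // start of the TVA area

    /** Objects outside the TVA area still carry an explicit TIB pointer in their header. */
    static boolean isTvaObject(long address) {
        return address >= TVA_REGION_BASE;
    }

    /** Derive an index into the type-information table by masking/shifting the address. */
    static int typeIndexOf(long address) {
        return (int) ((address - TVA_REGION_BASE) >>> SEGMENT_SHIFT);
    }

    public static void main(String[] args) {
        long addr = TVA_REGION_BASE + (7L << SEGMENT_SHIFT) + 0x40; // an object in type 7's segment
        System.out.println(isTvaObject(addr) + " type=" + typeIndexOf(addr)); // prints: true type=7
    }
}
```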
Software - Practice and Experience | 2006
Kris Venstermans; Lieven Eeckhout; Koen De Bosschere
The Java language is popular because of its platform independence, which makes it useful in a wide range of technologies, from embedded devices to high-performance systems. The platform independence of Java, which is visible at the Java bytecode level, is only made possible by the availability of a Virtual Machine (VM), which needs to be designed specifically for each underlying hardware platform. More specifically, the same Java bytecode should run properly on a 32-bit or a 64-bit VM. In this paper, we compare the behavioral characteristics of 32-bit and 64-bit VMs using a large set of Java benchmarks. This is done using the Jikes Research VM as well as the IBM JDK 1.4.0 production VM on a PowerPC-based IBM machine. By running the PowerPC machine in both 32-bit and 64-bit mode, we are able to compare 32-bit and 64-bit VMs. We conclude that the space an object takes in the heap in 64-bit mode is 39.3% larger on average than in 32-bit mode. We identify three reasons for this: (i) the larger pointer size, (ii) the increased header size, and (iii) the increased alignment. The minimally required heap size is 51.1% larger on average in 64-bit than in 32-bit mode. From our experimental setup using hardware performance monitors, we observe that 64-bit computing typically results in a significantly larger number of data cache misses at all levels of the memory hierarchy. In addition, we observe that when a sufficiently large heap is available, the IBM JDK 1.4.0 VM is 1.7% slower on average in 64-bit mode than in 32-bit mode.
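The three causes listed above (pointer size, header size, alignment) can be pictured with a small back-of-the-envelope calculation. The header and alignment values in the sketch below are assumed for illustration and are not the measured Jikes RVM or IBM JDK layouts.

```java
/**
 * Back-of-the-envelope sketch of why the same object gets bigger in a 64-bit VM:
 * larger header, larger reference fields, and coarser alignment. The header and
 * alignment values are illustrative assumptions, not measurements from the paper.
 */
public final class ObjectFootprint {

    /** Size of an object with the given payload, rounded up to the alignment boundary. */
    static long objectBytes(int headerBytes, int pointerBytes, int alignment,
                            int referenceFields, int primitivePayloadBytes) {
        long raw = headerBytes + (long) referenceFields * pointerBytes + primitivePayloadBytes;
        return ((raw + alignment - 1) / alignment) * alignment; // round up to alignment
    }

    public static void main(String[] args) {
        // Example: an object with 3 reference fields and 8 bytes of primitive data.
        long on32 = objectBytes(8, 4, 4, 3, 8);   // assumed 8-byte header, 4-byte refs, 4-byte alignment
        long on64 = objectBytes(16, 8, 8, 3, 8);  // assumed 16-byte header, 8-byte refs, 8-byte alignment
        System.out.printf("32-bit: %d bytes, 64-bit: %d bytes (+%.1f%%)%n",
                on32, on64, 100.0 * (on64 - on32) / on32);
    }
}
```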
High Performance Embedded Architectures and Compilers | 2007
Dries Buytaert; Kris Venstermans; Lieven Eeckhout; Koen De Bosschere
This paper shows that Appel-style garbage collectors often make suboptimal decisions both in terms of when and how to collect. We argue that garbage collection should be done when the amount of live bytes is low (in order to minimize the collection cost) and when the amount of dead objects is high (in order to maximize the available heap size after collection). In addition, we observe that Appel-style collectors sometimes trigger a nursery collection in cases where a full-heap collection would have been better. Based on these observations, we propose garbage collection hints (GCH), a profile-directed method for guiding garbage collection. Offline profiling is used to identify favorable collection points in the program code. At those favorable collection points, the garbage collector dynamically chooses between nursery and full-heap collections based on an analytical garbage collector cost-benefit model. By doing so, GCH guides the collector in terms of when and how to collect. Experimental results using the SPECjvm98 benchmarks and two generational garbage collectors show that substantial reductions can be obtained in garbage collection time (up to 29X) and that the overall execution time can be reduced by more than 10%. In addition, we also show that GCH reduces the maximum pause times and outperforms user-inserted forced garbage collections.
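As a complement to the runtime decision sketched earlier, the offline profiling step can be pictured as ranking candidate program points by how little live data and how much dead data they see. The sketch below is a hypothetical illustration of that idea; the data structures, ranking criterion, and names are assumptions, not the paper's actual profiling tool.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/**
 * Hypothetical sketch of the offline-profiling side of GCH: sample candidate
 * program points, record how much of the heap is live and dead there, and keep
 * the points where little is live (cheap to collect) and much is dead (big payoff).
 */
public final class GcHintProfiler {

    record Sample(String programPoint, long liveBytes, long deadBytes) {}

    /** Keep the candidate points with the highest dead-to-live ratio. */
    static List<String> favorablePoints(List<Sample> samples, int keep) {
        return samples.stream()
                .sorted(Comparator.comparingDouble(
                        (Sample s) -> (double) s.deadBytes() / Math.max(1, s.liveBytes())).reversed())
                .limit(keep)
                .map(Sample::programPoint)
                .toList();
    }

    public static void main(String[] args) {
        List<Sample> profile = new ArrayList<>();
        profile.add(new Sample("Loop.end#1", 2_000_000, 60_000_000));
        profile.add(new Sample("Parser.done", 40_000_000, 5_000_000));
        profile.add(new Sample("Frame.flush", 3_000_000, 45_000_000));
        System.out.println(favorablePoints(profile, 2)); // prints: [Loop.end#1, Frame.flush]
    }
}
```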
Symposium on Code Generation and Optimization | 2006
Kris Venstermans; Lieven Eeckhout; Koen De Bosschere
Memory performance is an important design issue for contemporary systems given the ever-increasing memory gap. This paper proposes a space-efficient Java object model for reducing the memory consumption of 64-bit Java virtual machines. We propose selective typed virtual addressing (STVA), which uses typed virtual addressing (TVA) or implicit typing to reduce the header of 64-bit Java objects. The idea behind TVA is to encode the object's type in the object's virtual address. In other words, all objects of a given type are allocated in a contiguous memory segment. As such, the type information can be removed from the object's header, which reduces the number of allocated bytes per object. Whenever type information is needed for a given object, masking is applied to the object's virtual address. Unlike previous work on implicit typing, we apply TVA to a selected number of frequently allocated and/or long-lived object types. This limits the amount of memory fragmentation. We implement STVA in the 64-bit version of the Jikes RVM on an IBM AIX platform and compare its performance against a traditional VM implementation without STVA using a multitude of Java benchmarks. We conclude that STVA reduces memory consumption by 15.5% on average (and up to 39% for some benchmarks). In terms of performance, STVA generally does not affect performance; however, for some benchmarks we observe statistically significant performance speedups, up to 24%.
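The allocation side of STVA can be sketched as bump-pointer allocation into a per-type segment, so that the address alone identifies the type. The segment size, region base, and bookkeeping below are assumptions for illustration rather than the Jikes RVM implementation.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Hypothetical sketch of STVA-style allocation: objects of a selected type are
 * bump-allocated inside that type's own contiguous virtual-memory segment, so
 * the address itself encodes the type and no header type field is needed.
 */
public final class StvaAllocator {

    static final long SEGMENT_BYTES = 1L << 30;                 // assume 1 GiB per selected type
    static final long TVA_REGION_BASE = 0x0000_0100_0000_0000L; // start of the TVA area

    private final Map<String, Integer> typeToSegment = new HashMap<>();
    private final Map<Integer, Long> bumpPointer = new HashMap<>();

    /** Reserve the next segment for a type the profiler selected for TVA. */
    void registerType(String typeName) {
        int segment = typeToSegment.size();
        typeToSegment.put(typeName, segment);
        bumpPointer.put(segment, TVA_REGION_BASE + segment * SEGMENT_BYTES);
    }

    /** Return the (simulated) address of a new headerless object of the given type. */
    long allocate(String typeName, long sizeBytes) {
        int segment = typeToSegment.get(typeName);
        long address = bumpPointer.get(segment);
        bumpPointer.put(segment, address + sizeBytes); // bump; a real VM would check segment bounds
        return address;
    }

    public static void main(String[] args) {
        StvaAllocator alloc = new StvaAllocator();
        alloc.registerType("java.lang.String");
        alloc.registerType("char[]");
        System.out.printf("0x%x 0x%x%n",
                alloc.allocate("java.lang.String", 24), alloc.allocate("char[]", 64));
    }
}
```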
European Conference on Object-Oriented Programming | 2007
Kris Venstermans; Lieven Eeckhout; Koen De Bosschere
64-bit address spaces come at the price of pointers requiring twice as much memory as in 32-bit address spaces, resulting in increased memory usage. This paper reduces the memory usage of 64-bit pointers in the context of Java virtual machines through pointer compression, called Object-Relative Addressing (ORA). The idea is to compress 64-bit raw pointers into 32-bit offsets relative to the referencing object's virtual address. Unlike previous work on the subject, which uses a constant base address for compressed pointers, ORA allows pointer compression to be applied to Java programs that allocate more than 4 GB of memory. Our experimental results using Jikes RVM and the SPECjbb and DaCapo benchmarks on an IBM POWER4 machine show that the overhead introduced by ORA is statistically insignificant on average compared to the raw 64-bit pointer representation, while reducing the total memory usage by 10% on average and up to 14.5% for some applications.
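A minimal sketch of ORA-style compression, assuming the offset between the referencing object and its referent fits in a signed 32-bit value: store the difference and add it back when the reference is read. The helper names and the handling of out-of-range offsets are assumptions for illustration; the paper integrates compression and decompression into the VM itself.

```java
/**
 * Hypothetical sketch of object-relative addressing (ORA): a 64-bit reference
 * stored in an object is replaced by a signed 32-bit offset from the referencing
 * (holder) object's own address, and reconstructed by adding that offset back.
 */
public final class ObjectRelativeAddressing {

    /** Compress: store the target as a signed 32-bit offset from the holder object's address. */
    static int compress(long holderAddress, long targetAddress) {
        long delta = targetAddress - holderAddress;
        if (delta != (int) delta) {
            // Offset does not fit in 32 bits; a real VM needs its own policy for this case.
            throw new IllegalStateException("referent more than 2 GiB away from holder");
        }
        return (int) delta;
    }

    /** Decompress: recover the full 64-bit address from the holder's address and the stored offset. */
    static long decompress(long holderAddress, int offset) {
        return holderAddress + offset; // the int offset is sign-extended, so backward references work
    }

    public static void main(String[] args) {
        long holder = 0x0000_7f3a_1234_5678L;
        long target = holder + (3L << 29); // a referent 1.5 GiB past the holder
        int stored = compress(holder, target);
        System.out.println(decompress(holder, stored) == target); // prints: true
    }
}
```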
Lecture Notes in Computer Science | 2007
Kris Venstermans; Lieven Eeckhout; Koen De Bosschere
Fourth FTW PhD Symposium | 2003
Kris Venstermans; Koen De Bosschere
ProRisc 2003 | 2003
Kris Venstermans; Koen De Bosschere