Publication


Featured research published by Motohiro Kawahito.


Conference on Object-Oriented Programming, Systems, Languages, and Applications | 2001

A dynamic optimization framework for a Java just-in-time compiler

Toshio Suganuma; Toshiaki Yasue; Motohiro Kawahito; Hideaki Komatsu; Toshio Nakatani

The high-performance implementation of Java Virtual Machines (JVMs) and just-in-time (JIT) compilers is directed toward adaptive compilation optimizations on the basis of online runtime profile information. This paper describes the design and implementation of a dynamic optimization framework in a production-level Java JIT compiler. Our approach is to employ a mixed-mode interpreter and a three-level optimizing compiler, supporting quick, full, and special optimization, each of which has a different set of tradeoffs between compilation overhead and execution speed. A lightweight sampling profiler operates continuously during the entire program execution. When necessary, detailed information on runtime behavior is collected by dynamically generating instrumentation code, which can be installed into and uninstalled from the specified recompilation target code. Value profiling with this instrumentation mechanism allows fully automatic code specialization to be performed on the basis of specific parameter values or global data at the highest optimization level. The experimental results show that our approach offers high performance and a low code expansion ratio in both program startup and steady-state measurements in comparison to the compile-only approach, and that the code specialization can also contribute a modest performance improvement.
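
The tiered recompilation policy described above can be pictured roughly as follows. This is a minimal, hypothetical Java sketch (the names RecompilationController and recordSample, and the thresholds, are invented for illustration); the paper's actual framework implements this logic inside the JVM and JIT runtime, not as Java application code.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Optimization levels of the assumed tiered system: interpreter plus three compiler levels.
enum OptLevel { INTERPRETED, QUICK, FULL, SPECIAL }

final class RecompilationController {
    // Sample counts per method, updated by a periodic sampling-profiler thread (assumed).
    private final Map<String, Integer> samples = new ConcurrentHashMap<>();
    // Illustrative thresholds; a real system tunes these against compilation cost.
    private static final int QUICK_THRESHOLD = 10;
    private static final int FULL_THRESHOLD = 100;
    private static final int SPECIAL_THRESHOLD = 1000;

    // Called whenever a profiler sample lands in 'method'; returns the level to compile at next.
    OptLevel recordSample(String method, OptLevel current) {
        int n = samples.merge(method, 1, Integer::sum);
        // Promote hot methods to progressively more expensive optimization levels.
        if (n >= SPECIAL_THRESHOLD && current != OptLevel.SPECIAL) return OptLevel.SPECIAL;
        if (n >= FULL_THRESHOLD && current.ordinal() < OptLevel.FULL.ordinal()) return OptLevel.FULL;
        if (n >= QUICK_THRESHOLD && current == OptLevel.INTERPRETED) return OptLevel.QUICK;
        return current; // stay at the current level
    }
}
```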


Conference on Object-Oriented Programming, Systems, Languages, and Applications | 2000

A study of devirtualization techniques for a Java Just-In-Time compiler

Kazuaki Ishizaki; Motohiro Kawahito; Toshiaki Yasue; Hideaki Komatsu; Toshio Nakatani

Many devirtualization techniques have been proposed to reduce the runtime overhead of dynamic method calls for various object-oriented languages; however, most of them are less effective or cannot be applied to Java in a straightforward manner. This is partly because Java is a statically typed language, so transforming a dynamic call to a static one does not yield a tangible performance gain (owing to the low overhead of accessing the method table) unless the call is inlined, and partly because the dynamic class loading feature of Java prevents whole-program analysis and optimizations from being applied. We propose a new technique called direct devirtualization with a code patching mechanism. For a given dynamic call site, our compiler first determines whether the call can be devirtualized by analyzing the current class hierarchy. When the call is devirtualizable and the target method is suitably sized, the compiler generates the inlined code of the method, together with backup code that makes the dynamic call. Only the inlined code is actually executed until our assumption about the devirtualization becomes invalidated, at which time the compiler performs code patching so that the backup code is executed subsequently. Since the new technique prevents some code motions across the merge point between the inlined code and the backup code, we have also implemented recently known analysis techniques, such as type analysis and preexistence analysis, which allow the backup code to be completely eliminated. We performed various experiments using 16 real programs to understand the effectiveness and characteristics of the devirtualization techniques in our Java Just-In-Time (JIT) compiler. In summary, we reduced the number of dynamic calls by 8.9% to 97.3% (average of 40.2%), and we improved execution performance by -1% to 133% (geometric mean of 16%).
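
A source-level picture of direct devirtualization with a backup path may help. The sketch below is illustrative only: the Shape/Circle classes and the assumptionStillValid guard are invented, and the real mechanism patches compiled code rather than testing a flag.

```java
// Illustrative source-level view of direct devirtualization with a backup path.
class Shape {
    double area() { return 0.0; }
}

class Circle extends Shape {
    double r;
    @Override double area() { return Math.PI * r * r; }
}

class Caller {
    // Original virtual call site: s.area() dispatches through the method table.
    static double original(Shape s) {
        return s.area();
    }

    // Conceptually, if class-hierarchy analysis sees that only Circle overrides area(),
    // the compiler emits the inlined body plus a backup virtual call. The guard below
    // stands in for the code patch: initially the inlined path runs unconditionally;
    // when a conflicting class is loaded, the compiler patches the entry so the backup
    // path runs from then on.
    static double devirtualized(Shape s) {
        boolean assumptionStillValid = true; // flipped by code patching in the real system
        if (assumptionStillValid) {
            Circle c = (Circle) s;           // inlined fast path
            return Math.PI * c.r * c.r;
        }
        return s.area();                     // backup dynamic call
    }
}
```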


Proceedings of the ACM 1999 conference on Java Grande | 1999

Design, implementation, and evaluation of optimizations in a just-in-time compiler

Kazuaki Ishizaki; Motohiro Kawahito; Toshiaki Yasue; Mikio Takeuchi; Takeshi Ogasawara; Toshio Suganuma; Tamiya Onodera; Hideaki Komatsu; Toshio Nakatani

The Java language incurs a runtime overhead for exception checks and object accesses without an interior pointer in order to ensure safety. It also requires type inclusion tests, dynamic class loading, and dynamic method calls in order to ensure flexibility. A "Just-In-Time" (JIT) compiler generates native code from Java bytecode at runtime. It must improve the runtime performance without compromising the safety and flexibility of the Java language. We designed and implemented effective optimizations for the JIT compiler, such as exception check elimination, common subexpression elimination, simple type inclusion tests, method inlining, and resolution of dynamic method calls. We evaluate the performance benefits of these optimizations based on various statistics collected using SPECjvm98 and two JavaSoft applications with bytecode sizes ranging from 20,000 to 280,000 bytes. Each optimization contributes to an improvement in the performance of the programs.
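
As a rough illustration of the checks these optimizations remove, consider the hypothetical loop below; every element access carries implicit null and bounds checks that the compiler can prove redundant.

```java
// Illustrative example of redundant exception checks; hypothetical code, not from the paper.
class Checks {
    static int sum(int[] a) {
        int s = 0;
        // The bytecode implies a null check on 'a' and a bounds check on every a[i].
        // Once the compiler knows 'a' is non-null and that 0 <= i < a.length holds
        // throughout the loop, the per-iteration null and bounds checks can be eliminated.
        for (int i = 0; i < a.length; i++) {
            s += a[i];
        }
        return s;
    }
}
```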


Concurrency and Computation: Practice and Experience | 2000

Design, implementation, and evaluation of optimizations in a Java™ Just-In-Time compiler

Kazuaki Ishizaki; Motohiro Kawahito; Toshiaki Yasue; Mikio Takeuchi; Takeshi Ogasawara; Toshio Suganuma; Tamiya Onodera; Hideaki Komatsu; Toshio Nakatani

The Java language incurs a runtime overhead for exception checks and object accesses, which are executed without an interior pointer in order to ensure safety. It also requires type inclusion tests, dynamic class loading, and dynamic method calls in order to ensure flexibility. A "Just-In-Time" (JIT) compiler generates native code from Java bytecode at runtime. It must improve the runtime performance without compromising the safety and flexibility of the Java language. We designed and implemented effective optimizations for a JIT compiler, such as exception check elimination, common subexpression elimination, simple type inclusion tests, method inlining, and devirtualization of dynamic method calls. We evaluate the performance benefits of these optimizations based on various statistics collected using SPECjvm98, its candidates, and two JavaSoft applications with bytecode sizes ranging from 23,000 to 280,000 bytes. Each optimization contributes to an improvement in the performance of the programs.


Conference on Object-Oriented Programming, Systems, Languages, and Applications | 2003

Effectiveness of cross-platform optimizations for a Java just-in-time compiler

Kazuaki Ishizaki; Mikio Takeuchi; Kiyokuni Kawachiya; Toshio Suganuma; Osamu Gohda; Tatsushi Inagaki; Akira Koseki; Kazunori Ogata; Motohiro Kawahito; Toshiaki Yasue; Takeshi Ogasawara; Tamiya Onodera; Hideaki Komatsu; Toshio Nakatani

This paper describes the system overview of our Java Just-In-Time (JIT) compiler, which is the basis for the latest production version of the IBM Java JIT compiler and supports a diversity of processor architectures, including both 32-bit and 64-bit modes and CISC, RISC, and VLIW architectures. In particular, we focus on the design and evaluation of the cross-platform optimizations that are common across different architectures. We studied the effectiveness of each optimization by selectively disabling it in our JIT compiler on three different platforms: IA-32, IA-64, and PowerPC. Our detailed measurements allowed us to rank the optimizations in terms of the greatest performance improvements with the smallest compilation times. The identified set includes method inlining only for tiny methods, exception check elimination using forward dataflow analysis and partial redundancy elimination, scalar replacement for instance and class fields using dataflow analysis, optimizations for type inclusion checks, and the elimination of merge points in the control flow graphs. These optimizations can achieve 90% of the peak performance for two industry-standard benchmark programs on these platforms with only 34% of the compilation time compared to the case of using all of the optimizations.
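
A small hypothetical example of scalar replacement for instance fields, one of the optimizations ranked above; the before/after methods are illustrative and not compiler output.

```java
// Illustrative sketch of scalar replacement for an instance field; hypothetical code.
class Counter {
    int total;
    int[] data;

    // Before: 'this.total' is loaded and stored on every iteration.
    void accumulate() {
        for (int i = 0; i < data.length; i++) {
            total += data[i];
        }
    }

    // After scalar replacement (what the compiler effectively produces): the field
    // lives in a register-allocated local during the loop and is written back once,
    // which is legal here because dataflow analysis shows no aliasing writes to it.
    void accumulateScalarReplaced() {
        int t = total;
        for (int i = 0; i < data.length; i++) {
            t += data[i];
        }
        total = t;
    }
}
```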


Architectural Support for Programming Languages and Operating Systems | 2000

Effective null pointer check elimination utilizing hardware trap

Motohiro Kawahito; Hideaki Komatsu; Toshio Nakatani

We present a new algorithm for eliminating null pointer checks from programs written in Java™. Our algorithm is split into two phases. In the first phase, it moves null checks backward, and this phase is iterated a few times with other optimizations to eliminate redundant null checks and maximize the effectiveness of the other optimizations. In the second phase, it moves null checks forward and converts many null checks to hardware traps in order to minimize the execution cost of the remaining null checks. As a result, it eliminates many null checks effectively and makes maximum use of hardware traps. This algorithm has been implemented in the IBM cross-platform Java Just-in-Time (JIT) compiler. Our experimental results show that our approach improves performance by up to 71% for jBYTEmark and up to 10% for SPECjvm98 over the previously known best algorithm. They also show that it increases JIT compilation time by only 2.3%. Although we implemented our algorithm for Java, it is also applicable to other languages that require null checks.
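
The two phases can be pictured on a small hypothetical method; the comments describe how the implicit null checks would be treated, and the actual transformation happens on compiled code rather than Java source.

```java
// Illustrative view of backward/forward null-check motion; hypothetical code.
class NullChecks {
    static int first(int[] a) {
        // Phase 1 (backward motion): the implicit null checks required for a.length and a[0]
        // are moved toward the method entry, exposing them as redundant copies of one check.
        // Phase 2 (forward motion): the single remaining check is placed just before the first
        // dereference and implemented implicitly: if 'a' is null, the load itself traps and
        // the hardware trap handler raises NullPointerException, so no explicit test is emitted.
        if (a.length == 0) {
            return 0;
        }
        return a[0];
    }
}
```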


Architectural Support for Programming Languages and Operating Systems | 2006

A new idiom recognition framework for exploiting hardware-assist instructions

Motohiro Kawahito; Hideaki Komatsu; Takao Moriyama; Hiroshi Inoue; Toshio Nakatani

Modern processors support hardware-assist instructions (such as the TRT and TROT instructions on IBM zSeries) to accelerate certain functions such as delimiter search and character conversion. Such special instructions have often been used in high-performance libraries, but they have not been exploited well in optimizing compilers except in some limited cases. We propose a new idiom recognition technique derived from a topological embedding algorithm [4] to detect idiom patterns in the input program more aggressively than in previous approaches. Our approach can detect a pattern even if the code segment does not exactly match the idiom. For example, we can detect a code segment that includes additional code within the idiom pattern. We implemented our new idiom recognition approach in the Java Just-In-Time (JIT) compiler that is part of the J9 Java Virtual Machine, and we supported several important idioms for special hardware-assist instructions on the IBM zSeries and on some models of the IBM pSeries. To demonstrate the effectiveness of our technique, we performed two experiments. The first was to see how many more patterns we can detect compared to the previous approach. The second was to see how much performance improvement we can achieve over the previous approach. For the first experiment, we used the Java Compatibility Kit (JCK) API tests. For the second, we used the IBM XML parser, SPECjvm98, and SPECjbb2000. In summary, relative to a baseline implementation using exact pattern matching, our algorithm converted 75% more loops in the JCK tests. We also observed performance improvements of 64% for the XML parser, 1% for SPECjvm98, and 2% for SPECjbb2000 on average on a z990. Finally, we observed that the JIT compilation time increases by only 0.32% to 0.44%.
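
One idiom the recognizer targets is a delimiter-search loop like the hypothetical one below; on zSeries such a loop can be mapped to a single TRT instruction, and the embedding-based matcher can tolerate extra statements inside the loop body.

```java
// Illustrative delimiter-search loop of the kind the idiom recognizer targets; hypothetical code.
class DelimiterSearch {
    // Scans 'buf' for the first byte whose entry in the 256-entry 'table' is non-zero.
    // Conceptually, this whole loop corresponds to the TRT (TRanslate and Test) instruction.
    static int findDelimiter(byte[] buf, byte[] table) {
        for (int i = 0; i < buf.length; i++) {
            if (table[buf[i] & 0xFF] != 0) {
                return i;   // position of the first delimiter
            }
        }
        return -1;          // no delimiter found
    }
}
```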


IBM Journal of Research and Development | 2004

Evolution of a Java just-in-time compiler for IA-32 platforms

Toshio Suganuma; Takeshi Ogasawara; Kiyokuni Kawachiya; Mikio Takeuchi; Kazuaki Ishizaki; Akira Koseki; Tatsushi Inagaki; Toshiaki Yasue; Motohiro Kawahito; Tamiya Onodera; Hideaki Komatsu; Toshio Nakatani

Java™ has gained widespread popularity in industry, and an efficient Java virtual machine (JVM™) and just-in-time (JIT) compiler are crucial in providing high performance for Java applications. This paper describes the design and implementation of our JIT compiler for IA-32 platforms, focusing on the advances achieved over the past several years. We first present the dynamic optimization framework, which focuses the expensive optimization efforts only on performance-critical methods, thus helping to manage the total compilation overhead. We then describe the platform-independent features, which include the conversion from the stack-semantic Java bytecode into our register-based intermediate representation (IR) and a variety of aggressive optimizations applied to the IR. We also present some techniques specific to IA-32 used to improve code quality, especially for the efficient use of the small number of registers on that platform. Using several industry-standard benchmark programs, the experimental results show that our approach offers high performance with low compilation overhead. Most of the techniques presented here are included in the IBM JIT compiler product, integrated into the IBM Development Kit for Microsoft Windows®, Java Technology Edition Version 1.4.0.
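
To illustrate the stack-to-register conversion, the hypothetical method below is shown with its stack-semantic bytecode and a rough register-based form in the comments; the IR mnemonics are invented and do not reflect the compiler's actual representation.

```java
// Illustrative example of lowering stack bytecode to a register-based IR; hypothetical code.
class Lowering {
    static int madd(int a, int b, int c) {
        return a * b + c;
        // Stack-semantic bytecode:        Register-based IR (invented mnemonics):
        //   iload_0                         t1 = mul a, b
        //   iload_1                         t2 = add t1, c
        //   imul                            ret t2
        //   iload_2
        //   iadd
        //   ireturn
    }
}
```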


Software: Practice and Experience | 2004

Partial redundancy elimination for access expressions by speculative code motion

Motohiro Kawahito; Hideaki Komatsu; Toshio Nakatani

We present a new memory access optimization for Java that performs aggressive code motion for speculatively optimizing memory accesses by applying partial redundancy elimination (PRE) techniques. First, to reduce as many barriers as possible and to enhance code motion, we perform alias analysis to identify all the regions in which each object reference is not aliased. Second, we find all the possible barriers. Finally, we perform code motion in three steps. In the first step, we apply a non-speculative PRE algorithm to move load instructions and their following instructions in the backward direction of the control flow graph. In the second step, we apply a speculative PRE algorithm to move some of them aggressively before the conditional branches. In the third step, we apply our modified version of a non-speculative PRE algorithm to move store instructions in the forward direction of the control flow graph and even to move some of them after the merge points. We implemented our new algorithm in our production-level Java just-in-time compiler. Our experimental results show that our speculative algorithm improves the average (maximum) performance by 13.1% (90.7%) for jBYTEmark and 1.4% (4.4%) for SPECjvm98 over the fastest algorithm previously described, while it increases the average (maximum) compilation time by 0.9% (2.9%) for both benchmark suites.
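
A hypothetical example of the speculative load motion described above; the pick/pickSpeculative pair is illustrative, and such motion is only legal when the analyses establish that the hoisted load cannot fault (for instance, when the reference is known to be non-null).

```java
// Illustrative example of speculative PRE hoisting a load above a conditional branch; hypothetical code.
class Node {
    int value;
    Node next;

    // Original: 'n.value' is loaded only on the path where the condition holds.
    static int pick(Node n, boolean cond) {
        if (cond) {
            return n.value;
        }
        return 0;
    }

    // After speculative PRE: the load is hoisted above the branch, so it executes even on
    // paths that did not originally need it. The transformation assumes the compiler has
    // proven 'n' non-null (or guarded the motion), otherwise it would introduce a fault.
    static int pickSpeculative(Node n, boolean cond) {
        int v = n.value;   // speculative load, moved before the conditional branch
        return cond ? v : 0;
    }
}
```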


ACM Transactions on Programming Languages and Systems | 2006

Effective sign extension elimination for Java

Motohiro Kawahito; Hideaki Komatsu; Toshio Nakatani

Computer designs are shifting from 32-bit architectures to 64-bit architectures, while most of the programs available today are still designed for 32-bit architectures. Java, for example, specifies the frequently used “int” as a 32-bit signed integer. If such Java programs are executed on a 64-bit architecture, many 32-bit signed integers must be sign-extended to 64-bit signed integers for correct operations. This causes serious performance overhead. In this article, we present a fast and effective algorithm for eliminating sign extensions. We implemented this algorithm in the IBM Java Just-in-Time (JIT) compiler for IA-64. Our experimental results show that our algorithm effectively eliminates the majority of sign extensions. They also show that it improves performance by 6.9% for jBYTEmark and 3.3% for SPECjvm98 over the previously known best algorithm, while it increases JIT compilation time by only 0.11%.
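
A hypothetical loop showing where sign extensions arise on a 64-bit target; the comments indicate the reasoning that allows the extension to be removed, and the example is illustrative rather than taken from the paper.

```java
// Illustrative example of sign extensions on a 64-bit architecture; hypothetical code.
class SignExt {
    static long sum(int[] a) {
        long s = 0;
        for (int i = 0; i < a.length; i++) {
            // Using the 32-bit index 'i' in the 64-bit address computation for a[i]
            // naively requires a sign extension on every iteration. Because the loop
            // bounds guarantee 0 <= i < a.length, the value is known to be non-negative,
            // so the repeated extension can be eliminated (e.g., by keeping 'i' in a
            // 64-bit register throughout the loop).
            s += a[i];
        }
        return s;
    }
}
```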
