Publication


Featured research published by Toshiaki Yasue.


Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA) | 2001

A dynamic optimization framework for a Java just-in-time compiler

Toshio Suganuma; Toshiaki Yasue; Motohiro Kawahito; Hideaki Komatsu; Toshio Nakatani

The high performance implementation of Java Virtual Machines (JVMs) and just-in-time (JIT) compilers is directed toward adaptive compilation optimizations on the basis of online runtime profile information. This paper describes the design and implementation of a dynamic optimization framework in a production-level Java JIT compiler. Our approach is to employ a mixed-mode interpreter and a three-level optimizing compiler, supporting quick, full, and special optimization, each of which has a different set of tradeoffs between compilation overhead and execution speed. A lightweight sampling profiler operates continuously during the entire program's execution. When necessary, detailed information on runtime behavior is collected by dynamically generating instrumentation code, which can be installed to and uninstalled from the specified recompilation target code. Value profiling with this instrumentation mechanism allows fully automatic code specialization to be performed on the basis of specific parameter values or global data at the highest optimization level. The experimental results show that our approach offers high performance and a low code expansion ratio in both program startup and steady state measurements in comparison to the compile-only approach, and that the code specialization can also contribute a modest performance improvement.
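The tiered recompilation policy described in this abstract can be sketched in a few lines: a sampling profiler counts how often each method is observed running, and counter thresholds promote hot methods from the interpreter through the three optimization levels. The threshold values and class names below are invented for illustration, not the paper's actual tuning.

```java
import java.util.HashMap;
import java.util.Map;

public class TieredPolicy {
    public enum Level { INTERPRETED, QUICK, FULL, SPECIAL }

    // Invented promotion thresholds; the real system tunes these empirically.
    private static final int QUICK_THRESHOLD = 10;
    private static final int FULL_THRESHOLD = 100;
    private static final int SPECIAL_THRESHOLD = 1000;

    private final Map<String, Integer> samples = new HashMap<>();
    private final Map<String, Level> levels = new HashMap<>();

    /** Called by the sampling profiler each time a method is seen on top of the stack. */
    public Level recordSample(String method) {
        int n = samples.merge(method, 1, Integer::sum);
        Level next;
        if (n >= SPECIAL_THRESHOLD)    next = Level.SPECIAL;
        else if (n >= FULL_THRESHOLD)  next = Level.FULL;
        else if (n >= QUICK_THRESHOLD) next = Level.QUICK;
        else                           next = Level.INTERPRETED;
        levels.put(method, next);
        return next;
    }

    public Level levelOf(String method) {
        return levels.getOrDefault(method, Level.INTERPRETED);
    }
}
```

Methods that are never sampled stay interpreted, which is what keeps startup-time compilation overhead low in a mixed-mode system.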


Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA) | 2000

A study of devirtualization techniques for a Java Just-In-Time compiler

Kazuaki Ishizaki; Motohiro Kawahito; Toshiaki Yasue; Hideaki Komatsu; Toshio Nakatani

Many devirtualization techniques have been proposed to reduce the runtime overhead of dynamic method calls for various object-oriented languages; however, most of them are less effective for Java or cannot be applied to it in a straightforward manner. This is partly because Java is a statically-typed language and thus transforming a dynamic call to a static one does not yield a tangible performance gain (owing to the low overhead of accessing the method table) unless it is inlined, and partly because the dynamic class loading feature of Java prevents whole-program analysis and optimizations from being applied. We propose a new technique called direct devirtualization with the code patching mechanism. For a given dynamic call site, our compiler first determines whether the call can be devirtualized, by analyzing the current class hierarchy. When the call is devirtualizable and the target method is suitably sized, the compiler generates the inlined code of the method, together with the backup code of making the dynamic call. Only the inlined code is actually executed until our assumption about the devirtualization becomes invalidated, at which time the compiler performs code patching so that the backup code is executed subsequently. Since the new technique prevents some code motions across the merge point between the inlined code and the backup code, we have also implemented recently-known analysis techniques, such as type analysis and preexistence analysis, which allow the backup code to be completely eliminated. We conducted experiments with 16 real programs to understand the effectiveness and characteristics of the devirtualization techniques in our Java Just-In-Time (JIT) compiler. In summary, we reduced the number of dynamic calls by 8.9% to 97.3% (an average of 40.2%), and improved execution performance by -1% to 133% (a geometric mean of 16%).
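The shape of the transformation can be illustrated at the source level. The real mechanism patches machine code, so the guard below (an explicit class test) is only a source-level analogue of the inlined fast path plus backup call; the `Shape`/`Square` classes are invented for the example.

```java
public class Devirt {
    static class Shape { int area() { return 0; } }

    static class Square extends Shape {
        final int side;
        Square(int side) { this.side = side; }
        @Override int area() { return side * side; }
    }

    // Unoptimized form: every call dispatches through the method table.
    static int areaVirtual(Shape s) { return s.area(); }

    // Devirtualized form: if class-hierarchy analysis says the receiver is
    // currently always a Square, inline Square.area() behind a guard; the
    // virtual call survives only as backup code, taken once class loading
    // invalidates the assumption (code patching flips to it in the real system).
    static int areaDevirtualized(Shape s) {
        if (s.getClass() == Square.class) {
            Square sq = (Square) s;
            return sq.side * sq.side;  // inlined body of Square.area()
        }
        return s.area();               // backup path: the original dynamic call
    }
}
```

The merge point after the `if` is exactly what the abstract says blocks some code motions, motivating the analyses that let the backup path be removed entirely.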


Proceedings of the ACM 1999 Conference on Java Grande | 1999

Design, implementation, and evaluation of optimizations in a just-in-time compiler

Kazuaki Ishizaki; Motohiro Kawahito; Toshiaki Yasue; Mikio Takeuchi; Takeshi Ogasawara; Toshio Suganuma; Tamiya Onodera; Hideaki Komatsu; Toshio Nakatani

The Java language incurs a runtime overhead for exception checks and object accesses without an interior pointer in order to ensure safety. It also requires type inclusion tests, dynamic class loading, and dynamic method calls in order to ensure flexibility. A “Just-In-Time” (JIT) compiler generates native code from Java byte code at runtime. It must improve the runtime performance without compromising the safety and flexibility of the Java language. We designed and implemented effective optimizations for the JIT compiler, such as exception check elimination, common subexpression elimination, simple type inclusion tests, method inlining, and resolution of dynamic method calls. We evaluate the performance benefits of these optimizations based on various statistics collected using SPECjvm98 and two JavaSoft applications with byte code sizes ranging from 20,000 to 280,000 bytes. Each optimization contributes to an improvement in the performance of the programs.
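Two of the listed optimizations, common subexpression elimination and exception check elimination, can be shown together in source form. The compiler performs these on its intermediate representation, and the null/bounds checks are implicit in Java, so this is only an illustration of what gets removed; the method names are invented.

```java
public class OptDemo {
    // Before: a[i] is loaded three times and, conceptually, each load
    // carries its own implicit null check and array bounds check.
    static int before(int[] a, int i) {
        return a[i] * a[i] + a[i];
    }

    // After CSE: the load (and its implicit checks) happens once; the
    // later uses read a temporary, so their checks are provably redundant
    // and the exception check eliminator can drop them.
    static int after(int[] a, int i) {
        int t = a[i];
        return t * t + t;
    }
}
```

Both forms throw the same exceptions for a null array or out-of-range index, which is why the transformation preserves Java's safety semantics.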


Concurrency and Computation: Practice and Experience | 2000

Design, implementation, and evaluation of optimizations in a Java™ Just-In-Time compiler

Kazuaki Ishizaki; Motohiro Kawahito; Toshiaki Yasue; Mikio Takeuchi; Takeshi Ogasawara; Toshio Suganuma; Tamiya Onodera; Hideaki Komatsu; Toshio Nakatani

The Java language incurs a runtime overhead for exception checks and object accesses, which are executed without an interior pointer in order to ensure safety. It also requires type inclusion tests, dynamic class loading, and dynamic method calls in order to ensure flexibility. A ‘Just-In-Time’ (JIT) compiler generates native code from Java byte code at runtime. It must improve the runtime performance without compromising the safety and flexibility of the Java language. We designed and implemented effective optimizations for a JIT compiler, such as exception check elimination, common subexpression elimination, simple type inclusion tests, method inlining, and devirtualization of dynamic method calls. We evaluate the performance benefits of these optimizations based on various statistics collected using SPECjvm98, its candidates, and two JavaSoft applications with byte code sizes ranging from 23,000 to 280,000 bytes. Each optimization contributes to an improvement in the performance of the programs.


Conference on Programming Language Design and Implementation (PLDI) | 2003

A region-based compilation technique for a Java just-in-time compiler

Toshio Suganuma; Toshiaki Yasue; Toshio Nakatani

Method inlining and data flow analysis are two major optimization components for effective program transformations; however, they often suffer from the existence of rarely or never executed code contained in the target method. One major problem lies in the assumption that the compilation unit is partitioned at method boundaries. This paper describes the design and implementation of a region-based compilation technique in our dynamic compilation system, in which the compiled regions are selected as code portions without rarely executed code. The key parts of this technique are region selection, partial inlining, and region exit handling. For region selection, we employ both static heuristics and dynamic profiles to identify rare sections of code. The region selection process and method inlining decisions are interwoven, so that method inlining exposes other targets for region selection, while the region selection in the inline target conserves the inlining budget, leading to more method inlining. Thus the inlining process can be performed for parts of a method, not for the entire body of the method. When the program attempts to exit from a region boundary, we trigger recompilation and then rely on on-stack replacement to continue the execution from the corresponding entry point in the recompiled code. We have implemented these techniques in our Java JIT compiler, and conducted a comprehensive evaluation. The experimental results show that the approach of region-based compilation achieves approximately 5% performance improvement on average, while reducing the compilation overhead by 20% to 30%, in comparison to the traditional function-based compilation techniques.
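The region-selection step can be sketched as a filter over a method's basic blocks: blocks flagged rare by a static heuristic (such as ending in a throw) or by a zero profile count are excluded from the compiled region and become region exits. The block representation and the particular heuristics below are invented for illustration and are much simpler than the paper's.

```java
import java.util.ArrayList;
import java.util.List;

public class RegionSelect {
    static class Block {
        final String name;
        final boolean endsInThrow;  // static rarity heuristic
        final long executionCount;  // dynamic profile count

        Block(String name, boolean endsInThrow, long executionCount) {
            this.name = name;
            this.endsInThrow = endsInThrow;
            this.executionCount = executionCount;
        }

        boolean isRare() { return endsInThrow || executionCount == 0; }
    }

    /** Keep only non-rare blocks in the region; rare blocks become region exits. */
    static List<String> selectRegion(List<Block> method) {
        List<String> region = new ArrayList<>();
        for (Block b : method) {
            if (!b.isRare()) region.add(b.name);
            // A rare block is left uncompiled: if execution reaches it at
            // runtime, the real system triggers recompilation and resumes
            // via on-stack replacement in the new code.
        }
        return region;
    }
}
```

Excluding rare blocks shrinks the unit that inlining and data flow analysis must process, which is where the compile-time savings come from.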


Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA) | 2003

Effectiveness of cross-platform optimizations for a Java just-in-time compiler

Kazuaki Ishizaki; Mikio Takeuchi; Kiyokuni Kawachiya; Toshio Suganuma; Osamu Gohda; Tatsushi Inagaki; Akira Koseki; Kazunori Ogata; Motohiro Kawahito; Toshiaki Yasue; Takeshi Ogasawara; Tamiya Onodera; Hideaki Komatsu; Toshio Nakatani

This paper describes the system overview of our Java Just-In-Time (JIT) compiler, which is the basis for the latest production version of the IBM Java JIT compiler and supports a diversity of processor architectures, including both 32-bit and 64-bit modes and CISC, RISC, and VLIW architectures. In particular, we focus on the design and evaluation of the cross-platform optimizations that are common across different architectures. We studied the effectiveness of each optimization by selectively disabling it in our JIT compiler on three different platforms: IA-32, IA-64, and PowerPC. Our detailed measurements allowed us to rank the optimizations in terms of the greatest performance improvements with the smallest compilation times. The identified set includes method inlining only for tiny methods, exception check elimination using forward dataflow analysis and partial redundancy elimination, scalar replacement for instance and class fields using dataflow analysis, optimizations for type inclusion checks, and the elimination of merge points in the control flow graphs. These optimizations can achieve 90% of the peak performance for two industry-standard benchmark programs on these platforms with only 34% of the compilation time compared to the case for using all of the optimizations.
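One family of optimizations for type inclusion checks is an inline-cache style shortcut: remember the last class that passed the test at a given check site, so repeated checks on the same receiver class skip the hierarchy walk. The sketch below illustrates that general idea; the caching scheme and names are assumptions, not necessarily the mechanism this paper evaluates.

```java
public class TypeCheckCache {
    private Class<?> lastHit;  // per-check-site cache of the last passing class
    long slowPathCalls = 0;    // instrumentation for the demo only

    /** Type inclusion test with a one-entry cache in front of the slow walk. */
    public boolean isInstance(Class<?> target, Object o) {
        Class<?> c = o.getClass();
        if (c == lastHit) return true;  // fast path: a single pointer compare
        boolean ok = slowHierarchyWalk(target, c);
        if (ok) lastHit = c;
        return ok;
    }

    // Stand-in for walking the class hierarchy / interface tables.
    private boolean slowHierarchyWalk(Class<?> target, Class<?> c) {
        slowPathCalls++;
        return target.isAssignableFrom(c);
    }
}
```

This assumes the target type is fixed per check site (as it is for a compiled instanceof), so caching only the receiver class is sound.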


ACM Transactions on Programming Languages and Systems | 2006

A region-based compilation technique for dynamic compilers

Toshio Suganuma; Toshiaki Yasue; Toshio Nakatani

Method inlining and data flow analysis are two major optimization components for effective program transformations, but they often suffer from the existence of rarely or never executed code contained in the target method. One major problem lies in the assumption that the compilation unit is partitioned at method boundaries. This article describes the design and implementation of a region-based compilation technique in our dynamic optimization framework, in which the compiled regions are selected as code portions without rarely executed code. The key parts of this technique are the region selection, partial inlining, and region exit handling. For region selection, we employ both static heuristics and dynamic profiles to identify and eliminate rare sections of code. The region selection process and method inlining decisions are interwoven, so that method inlining exposes other targets for region selection, while the region selection in the inline target conserves the inlining budget, allowing more method inlining to be performed. The inlining process can be performed for parts of a method, not just for the entire body of the method. When the program attempts to exit from a region boundary, we trigger recompilation and then use on-stack replacement to continue the execution from the corresponding entry point in the recompiled code. We have implemented these techniques in our Java JIT compiler, and conducted a comprehensive evaluation. The experimental results show that our region-based compilation approach achieves approximately 4% performance improvement on average, while reducing the compilation overhead by 10% to 30%, in comparison to the traditional method-based compilation techniques.


IBM Journal of Research and Development | 2004

Evolution of a Java just-in-time compiler for IA-32 platforms

Toshio Suganuma; Takeshi Ogasawara; Kiyokuni Kawachiya; Mikio Takeuchi; Kazuaki Ishizaki; Akira Koseki; Tatsushi Inagaki; Toshiaki Yasue; Motohiro Kawahito; Tamiya Onodera; Hideaki Komatsu; Toshio Nakatani

Java™ has gained widespread popularity in the industry, and an efficient Java virtual machine (JVM™) and just-in-time (JIT) compiler are crucial in providing high performance for Java applications. This paper describes the design and implementation of our JIT compiler for IA-32 platforms by focusing on the recent advances achieved in the past several years. We first present the dynamic optimization framework, which focuses the expensive optimization efforts only on performance-critical methods, thus helping to manage the total compilation overhead. We then describe the platform-independent features, which include the conversion from the stack-semantic Java bytecode into our register-based intermediate representation (IR) and a variety of aggressive optimizations applied to the IR. We also present some techniques specific to IA-32 that are used to improve code quality, especially for the efficient use of the small number of registers on that platform. Using several industry-standard benchmark programs, the experimental results show that our approach offers high performance with low compilation overhead. Most of the techniques presented here are included in the IBM JIT compiler product, integrated into the IBM Development Kit for Microsoft Windows®, Java Technology Edition Version 1.4.0.
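The bytecode-to-IR conversion mentioned above can be rendered as a toy translator: stack-semantic operations are turned into register-based three-address instructions by simulating the operand stack at translation time with virtual register names. The tiny instruction subset and naming scheme here are invented for illustration and are far simpler than the compiler's real IR.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class StackToRegister {
    /** Translate ops like "iload 1" and "iadd" into three-address IR strings. */
    static List<String> translate(List<String> bytecode) {
        List<String> ir = new ArrayList<>();
        Deque<String> stack = new ArrayDeque<>();  // abstract operand stack
        int temp = 0;
        for (String op : bytecode) {
            if (op.startsWith("iload ")) {
                // A local-variable load just names a virtual register; no IR emitted.
                stack.push("v" + op.substring(6));
            } else if (op.equals("iadd")) {
                String b = stack.pop(), a = stack.pop();
                String t = "t" + temp++;
                ir.add(t + " = " + a + " + " + b);
                stack.push(t);
            } else if (op.equals("ireturn")) {
                ir.add("return " + stack.pop());
            }
        }
        return ir;
    }
}
```

Because stack slots become named registers, the downstream optimizations listed in the abstract can run as ordinary dataflow passes over the IR.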


International Conference on Parallel Architectures and Compilation Techniques (PACT) | 2003

An efficient online path profiling framework for Java just-in-time compilers

Toshiaki Yasue; Toshio Suganuma; Hideaki Komatsu; Toshio Nakatani

Collecting hot paths is important for restructuring and optimizing the target program effectively. It is, however, challenging for just-in-time (JIT) compilers, which must collect path profiles on the fly at runtime. Here, we propose an efficient online path profiling technique, called structural path profiling (SPP), suitable for JIT compilers. The key idea is to partition the target method into a hierarchy of nested graphs based on the loop structure, and then to profile each graph independently. With SPP, we can collect accurate path profiles efficiently with low overhead. The experimental results show that our technique can collect path profiles with an accuracy of around 90% compared to the offline complete path profiles, while it incurs only 2-3% overhead on average in the active profiling phase.
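The decomposition idea can be sketched concretely: instead of numbering whole-method paths, each loop body is treated as its own small graph, so a path identifier is computed and recorded per iteration. The one-bit-per-branch encoding below is an assumption made to keep the example tiny; it illustrates the per-graph profiling, not the paper's actual instrumentation.

```java
import java.util.HashMap;
import java.util.Map;

public class StructuralPathProfile {
    private final Map<Integer, Long> pathCounts = new HashMap<>();

    // Profiled version of a loop with one branch in its body:
    // for each x, take the "even" arm or the "odd" arm.
    public void run(int[] data) {
        for (int x : data) {
            int pathId = 0;          // reset at the loop-body (inner graph) entry
            if (x % 2 == 0) {
                pathId |= 1;         // set this branch's bit on the even arm
            }
            // Loop-body exit: record which path this iteration took.
            pathCounts.merge(pathId, 1L, Long::sum);
        }
    }

    public long countOf(int pathId) { return pathCounts.getOrDefault(pathId, 0L); }
}
```

Keeping each inner graph small is what bounds both the number of distinct path identifiers and the per-iteration bookkeeping cost.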


conference on object-oriented programming systems, languages, and applications | 2008

Performance pitfalls in large-scale Java applications translated from COBOL

Toshio Suganuma; Toshiaki Yasue; Tamiya Onodera; Toshio Nakatani

There is a growing need to translate large-scale legacy mainframe applications from COBOL to Java. This is to transform the applications into modern Web-based services, without sacrificing the original programming investments. Most often, COBOL-to-Java translators are used first for the base program transformations, and then corrections and fine tuning are applied by hand to the resulting code. However, there are many serious performance problems that frequently appear in those Java programs translated from COBOL, and it is particularly difficult to identify problems hidden deeply in large-scale middleware applications. This paper describes the details of some performance pitfalls that easily slip into large-scale Java applications translated from COBOL using a translator, and that are primarily due to the impedance mismatch between the two languages. We classified those problems into four categories: eager object allocations, exceptions in normal control flows, reflections in common paths, and inappropriate use of the Java class library. Using large-scale production middleware, we present detailed evaluation results, showing how much overhead these problems can cause, both independently and collectively, in real-world scenarios. The work should be a step forward toward understanding the problems and building tools to generate Java programs whose performance is comparable to that of the corresponding COBOL programs.
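One of the four categories, exceptions in normal control flows, commonly shows up in translated code as using an exception to test whether a field is numeric. The example below is illustrative (it is not taken from the paper's production workloads): constructing and filling in a stack trace on every non-numeric record is far more expensive than an explicit character test on the common path.

```java
public class CobolPitfall {
    // Translated-code antipattern: throws NumberFormatException for every
    // non-numeric field, making exceptions part of normal control flow.
    static boolean isNumericViaException(String field) {
        try {
            Integer.parseInt(field);
            return true;
        } catch (NumberFormatException e) {
            return false;
        }
    }

    // Cheaper rewrite: an explicit digit test, no exception on any input.
    static boolean isNumeric(String field) {
        if (field.isEmpty()) return false;
        for (int i = 0; i < field.length(); i++) {
            char c = field.charAt(i);
            if (c < '0' || c > '9') return false;
        }
        return true;
    }
}
```

Note the two methods differ deliberately on signed input like "-5": the rewrite accepts unsigned digit strings only, matching the COBOL-style numeric-field check rather than Java's parseInt semantics.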
