Byung-Sun Yang
Seoul National University
Publication
Featured research published by Byung-Sun Yang.
International Conference on Parallel Architectures and Compilation Techniques | 1999
Byung-Sun Yang; Soo-Mook Moon; Seong-Bae Park; Junpyo Lee; Seungil Lee; Jinpyo Park; Yoo C. Chung; Suhyun Kim; Kemal Ebcioglu; Erik R. Altman
For network computing on desktop machines, fast execution of Java bytecode programs is essential because these machines are expected to run substantial application programs written in Java. Higher Java performance can be achieved by just-in-time (JIT) compilers, which translate the stack-based bytecode into register-based machine code on demand. One crucial problem in Java JIT compilation is how to map and allocate stack entries and local variables into registers efficiently and quickly, so as to improve Java performance. This paper introduces LaTTe, a Java JIT compiler that performs fast and efficient register mapping and allocation for RISC machines. LaTTe first translates the bytecode into pseudo-RISC code with symbolic registers, which is then register allocated while coalescing the copies corresponding to pushes and pops between local variables and the stack. The LaTTe JVM also includes an enhanced object model, a lightweight monitor, a fast mark-and-sweep garbage collector, and an on-demand exception handling mechanism, all of which are closely coordinated with LaTTe's JIT compilation.
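The mapping-and-coalescing step the abstract describes can be sketched at the source level with a trivial method (a hypothetical illustration, not code from LaTTe; the register names in the comments are made up):

```java
// A tiny method whose standard bytecode illustrates the stack-to-register
// mapping described above. For this static method, javac emits:
//     iload_0; iload_1; iadd; ireturn
// A naive translation to pseudo-RISC code with symbolic registers makes
// every stack push/pop an explicit copy:
//     s0 = l0          ; iload_0  (copy local 0 to stack slot 0)
//     s1 = l1          ; iload_1
//     s0 = s0 + s1     ; iadd
//     ret = s0         ; ireturn
// After coalescing those copies, the pushes and pops vanish:
//     ret = l0 + l1
class MappingExample {
    static int add(int a, int b) {
        return a + b;
    }
}
```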
Proceedings of the ACM 2000 Conference on Java Grande | 2000
Seungil Lee; Byung-Sun Yang; Suhyun Kim; Seong-Bae Park; Soo-Mook Moon; Kemal Ebcioglu; Erik R. Altman
The Java language provides exceptions in order to handle errors gracefully. However, the presence of exception handlers complicates the job of a JIT (just-in-time) compiler, including optimizations and register allocation, even though exceptions are rarely used in most programs. This paper describes mechanisms for removing the overheads imposed by the existence of exception handlers, including on-demand translation of exception handlers, which exposes more optimization opportunities in the normal flow. We also minimize the exception handling overhead for frequently thrown exceptions by jumping directly from the exception-throwing point to the exception handler, through a technique called exception handler prediction. Experiments show that with our exception handling mechanisms, the existence of exception handlers indeed does not interfere with the translation of the normal flow. The results also reveal that frequently thrown exceptions are handled efficiently with exception handler prediction.
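A user-level analogue of cheap handling for frequently thrown exceptions can be written in plain Java (the class and method names here are illustrative, not from the paper): a preallocated exception that captures no stack trace makes each throw little more than a transfer of control to the handler, which is the effect exception handler prediction achieves inside the JIT.

```java
// A reusable, stack-trace-free exception: throwing it skips the usual
// stack-trace capture, so the throw behaves like a cheap jump.
class EndOfData extends RuntimeException {
    static final EndOfData INSTANCE = new EndOfData();
    private EndOfData() { super(null, null, false, false); } // no stack trace captured
}

class Reader {
    private final int[] data;
    private int pos = 0;
    Reader(int[] data) { this.data = data; }

    int next() {
        if (pos < data.length) return data[pos++];
        throw EndOfData.INSTANCE; // thrown once per exhausted stream: frequent
    }

    static int sumAll(int[] data) {
        Reader r = new Reader(data);
        int sum = 0;
        try {
            while (true) sum += r.next();
        } catch (EndOfData e) { // the handler control transfers to directly
            return sum;
        }
    }
}
```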
ACM SIGARCH Computer Architecture News | 2000
Junpyo Lee; Byung-Sun Yang; Suhyun Kim; Kemal Ebcioglu; Erik R. Altman; Seungil Lee; Yoo C. Chung; Heungbok Lee; Je-Hyung Lee; Soo-Mook Moon
Java, an object-oriented language, uses virtual methods to support the extension and reuse of classes. Unfortunately, virtual method calls affect performance and thus require an efficient implementation, especially when just-in-time (JIT) compilation is done. Inline caches and type feedback are solutions used by compilers for dynamically typed object-oriented languages such as SELF [1, 2, 3], where virtual call overheads are much more critical to performance than in Java. With an inline cache, a virtual call that would otherwise have been translated into an indirect jump with two loads is translated into a simpler direct jump with a single compare. With type feedback combined with adaptive compilation, virtual methods can be inlined using checking code which verifies whether the target method is equal to the inlined one. This paper evaluates the performance impact of these techniques in an actual Java virtual machine, our new open-source Java VM JIT compiler called LaTTe [4]. We also discuss the engineering issues in implementing these techniques. Our experimental results with the SPECjvm98 benchmarks indicate that while monomorphic inline caches and polymorphic inline caches achieve speedups of as much as a geometric mean of 3% and 9%, respectively, type feedback cannot improve further over polymorphic inline caches and even degrades performance for some programs.
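A monomorphic inline cache can be hand-rolled in library code to show the idea (an illustrative sketch only; a JIT implements this as a compare in generated machine code, and the class and field names below are invented):

```java
import java.util.function.ToDoubleFunction;

abstract class Shape { abstract double area(); }
final class Circle extends Shape {
    final double r;
    Circle(double r) { this.r = r; }
    double area() { return Math.PI * r * r; }
}
final class Square extends Shape {
    final double s;
    Square(double s) { this.s = s; }
    double area() { return s * s; }
}

// One simulated call site with a monomorphic inline cache: remember the
// last receiver class and the target resolved for it.
class CallSite {
    private Class<?> cachedClass;                 // last receiver class seen
    private ToDoubleFunction<Shape> cachedTarget; // target bound for that class
    long hits, misses;

    double area(Shape s) {
        if (s.getClass() == cachedClass) {        // fast path: a single compare
            hits++;
            return cachedTarget.applyAsDouble(s);
        }
        misses++;                                 // slow path: rebind the cache
        cachedClass = s.getClass();
        cachedTarget = Shape::area; // stand-in; a real JIT binds a direct call here
        return cachedTarget.applyAsDouble(s);
    }
}
```

With a monomorphic call site (all receivers the same class), every call after the first takes the single-compare fast path; a receiver of a new class forces a rebind, which is where a polymorphic inline cache would instead grow a chain of compares.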
ACM SIGARCH Computer Architecture News | 1999
Byung-Sun Yang; Junpyo Lee; Jinpyo Park; Soo-Mook Moon; Kemal Ebcioglu; Erik R. Altman
This paper introduces the lightweight monitor in a Java VM that is fast on single-threaded programs as well as on multi-threaded programs with little lock contention. A 32-bit lock is embedded into each object for efficient access, while the lock queue and the wait set are managed through a hash table. The lock manipulation code is highly optimized and inlined by our Java VM JIT compiler called LaTTe wherever the lock is accessed. In most cases, only 9 SPARC instructions are spent for lock acquisition and 5 instructions for lock release. Our experimental results indicate that the lightweight monitor is faster than the monitor in the latest SUN JDK 1.2 Release Candidate 1 by up to 21 times in the absence of lock contention and by up to 7 times in the presence of lock contention.
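The uncontended fast path of such a lightweight lock can be sketched with a single atomic word (an illustrative analogue, not the paper's implementation: the real lock word is embedded in the object header, and contended threads are queued through a hash table rather than spinning):

```java
import java.util.concurrent.atomic.AtomicInteger;

// A thin lock in the spirit of the lightweight monitor: one 32-bit word,
// 0 when unlocked, the owner's thread id when held.
class ThinLock {
    private final AtomicInteger word = new AtomicInteger(0);

    void lock(int threadId) {
        // uncontended acquire: a single compare-and-swap on the lock word
        while (!word.compareAndSet(0, threadId)) {
            Thread.onSpinWait(); // contended case: spin here; the paper queues instead
        }
    }

    void unlock() {
        word.set(0); // uncontended release: one store
    }

    int owner() { return word.get(); }
}
```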
ACM Transactions on Architecture and Code Optimization | 2017
Byung-Sun Yang; Jae-Yun Kim; Soo-Mook Moon
Java virtual machine (JVM) has recently evolved into a general-purpose language runtime environment to execute popular programming languages such as JavaScript, Ruby, Python, and Scala. These languages have complex non-Java features, including dynamic typing and first-class function, so additional language runtimes (engines) are provided on top of the JVM to support them with bytecode extensions. Although there are high-performance JVMs with powerful just-in-time (JIT) compilers, running these languages efficiently on the JVM is still a challenge. This article introduces a simple and novel technique for the JVM JIT compiler called exceptionization to improve the performance of JVM-based language runtimes. We observed that the JVM executing some non-Java languages encounters at least 2 times more branch bytecodes than Java, most of which are highly biased to take only one target. Exceptionization treats such a highly biased branch as some implicit exception-throwing instruction. This allows the JVM JIT compiler to prune the infrequent target of the branch from the frequent control flow, thus compiling the frequent control flow more aggressively with better optimization. If a pruned path were taken, then it would run like a Java exception handler, that is, a catch block. We also devised de-exceptionization, a mechanism to cope with the case when a pruned path is executed more often than expected. Since exceptionization is a generic JVM optimization, independent of any specific language runtime, it would be generally applicable to other language runtimes on the JVM. Our experimental result shows that exceptionization accelerates the performance of several non-Java languages. For example, JavaScript-on-JVM runs faster by as much as 60% and by 6% on average, when experimented with the Octane benchmark suite on Oracle’s latest Nashorn JavaScript engine and HotSpot 1.9 JVM. 
Furthermore, the performance of Ruby-on-JVM improves by as much as 60% and by 6% on average, while Python-on-JVM improves by as much as 6% and by 2% on average. We found that exceptionization is more effective when applied to the branch bytecode of the language runtime itself than to the bytecode of the application code or of the Java class libraries. This implies that the performance benefit of exceptionization comes from better JIT compilation of the language runtime of non-Java languages.
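What exceptionization does to a highly biased branch can be written out at the source level (the real transformation happens on bytecode inside the JIT; the method names below are illustrative):

```java
class Biased {
    // Original form: a branch the JIT observes is almost never taken.
    static int getBranchy(int[] a, int i) {
        if (i >= 0 && i < a.length) return a[i]; // biased: almost always true
        return -1;                               // cold target
    }

    // Exceptionized form: the cold target is pruned from the hot control
    // flow and reached only through the implicit bounds-check exception,
    // running like a Java catch block.
    static int getExceptionized(int[] a, int i) {
        try {
            return a[i];                             // hot path, no explicit branch
        } catch (ArrayIndexOutOfBoundsException e) {
            return -1;                               // cold path
        }
    }
}
```

Freed of the explicit branch, the hot path compiles as straight-line code; de-exceptionization would revert to the branchy form if the cold path turned out to run often.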
Archive | 2001
Heungbok Lee; Byung-Sun Yang; Soo-Mook Moon
In Java, an exception thrown in a try block can be handled in one of the catch blocks given by the programmer. On an exception, local variables must be preserved so they remain usable in the catch block, while the operand stack is flushed. This error-handling mechanism raises an interesting challenge for register allocation during JIT compilation, called the local variable consistency problem: the register allocation for local variables must be consistent between a potentially exception-generating instruction (PEI) in a try block and the catch blocks.
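The constraint shows up already in trivial source code (a hypothetical example; the method name is invented): whatever value a local holds at the PEI must be the value the handler observes, so the allocator cannot keep the local in different places at the two points.

```java
class Consistency {
    static int parseOrDefault(String s) {
        int fallback = -1;
        try {
            fallback = 7;                   // local redefined inside the try block
            return Integer.parseInt(s);     // PEI: may throw NumberFormatException
        } catch (NumberFormatException e) {
            return fallback;                // handler must see 7, not -1 or garbage
        }
    }
}
```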
IEEE Transactions on Parallel and Distributed Systems | 2007
Byung-Sun Yang; Junpyo Lee; Seungil Lee; Seongbae Park; Yoo C. Chung; Suhyun Kim; Kemal Ebcioglu; Erik R. Altman; Soo-Mook Moon
Archive | 1999
Seungil Lee; Byung-Sun Yang; Kye-Sung Kim; Seongbae Park
Software - Practice and Experience | 2005
Byung-Sun Yang; Soo-Mook Moon; Kemal Ebcioglu
International Conference on Human-Computer Interaction | 1998
Byung-Sun Yang; Junpyo Lee; Kemal Ebcioglu; Jinpyo Park; Soo-Mook Moon