Sangyong Han
Seoul National University
Publications
Featured research published by Sangyong Han.
international conference on parallel processing | 1994
Eunha Rho; Sangho Ha; Sangyong Han; Heunghwan Kim; Dae-Joon Hwang
Multithreading has attracted much attention as one of the strongest parallel instruction execution paradigms for massively parallel processing. In this paper, we describe the compilation process of the parallel programming language Id⁻ for the multithreaded architecture DAVRID (DAtaflow Von Neumann RISC hybriD). Two fundamental issues in parallel processing, tolerance to communication latency and inexpensive synchronization, are addressed by compiler-controlled multithreading. Our compiler features a simple mechanism for handling closures and a loop unfolding technique that handles parallel and sequential loops separately, which greatly contributes to the parallel execution of loops.
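The abstract does not spell out the loop unfolding technique itself; as a rough illustration of the general idea, the sketch below unfolds a dependence-free loop into worker threads while a loop with a cross-iteration dependence stays sequential. It assumes a POSIX threads runtime rather than DAVRID's actual thread substrate, and all names (unfold_parallel_loop, NUM_THREADS) are invented for illustration.

    /* Illustrative only: a parallel loop whose iterations are independent
     * is split ("unfolded") into worker threads, while a loop with a
     * cross-iteration dependence is left as a single sequential thread. */
    #include <pthread.h>
    #include <stdio.h>

    #define N 1024
    #define NUM_THREADS 4

    static double a[N], b[N];

    typedef struct { int lo, hi; } chunk_t;

    static void *body(void *arg) {
        chunk_t *c = (chunk_t *)arg;
        for (int i = c->lo; i < c->hi; i++)
            a[i] = 2.0 * b[i];          /* independent iterations */
        return NULL;
    }

    static void unfold_parallel_loop(void) {
        pthread_t tid[NUM_THREADS];
        chunk_t chunks[NUM_THREADS];
        int step = N / NUM_THREADS;
        for (int t = 0; t < NUM_THREADS; t++) {
            chunks[t].lo = t * step;
            chunks[t].hi = (t == NUM_THREADS - 1) ? N : (t + 1) * step;
            pthread_create(&tid[t], NULL, body, &chunks[t]);
        }
        for (int t = 0; t < NUM_THREADS; t++)
            pthread_join(tid[t], NULL);
    }

    int main(void) {
        for (int i = 0; i < N; i++) b[i] = i;
        unfold_parallel_loop();
        /* Sequential loop (sum depends on the previous iteration):
         * kept as one thread rather than unfolded. */
        double sum = 0.0;
        for (int i = 0; i < N; i++) sum += a[i];
        printf("%f\n", sum);
        return 0;
    }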
international conference on computer design | 1994
Sangho Ha; Junghwan Kim; Eunha Rho; Yoonhee Nah; Sangyong Han; Dae-Joon Hwang; Heunghwan Kim; Seung Ho Cho
MPAs (massively parallel architectures) must address two fundamental issues to be scalable: synchronization and communication latency. Dataflow architectures can exploit the massive parallelism inherent in programs, but they incur excessive synchronization costs and execute sequential programs inefficiently. In contrast, MPAs based on the von Neumann computational model may suffer from inefficient synchronization mechanisms and communication latency. DAVRID (DAtaflow Von Neumann RISC hybriD) is a massively parallel multithreaded architecture. By combining the advantages of the von Neumann and dataflow models, DAVRID preserves good single-thread performance while tolerating latency and synchronization costs. We describe the DAVRID architecture and evaluate it through simulation results over several benchmarks.
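Hybrid dataflow/von Neumann machines of this kind typically enable a thread for sequential execution once all of its inputs have arrived, tracked by a per-thread synchronization counter. The sketch below illustrates that general mechanism in C; it is not DAVRID's actual hardware structure, and all names (thread_frame, token_arrival) are invented.

    /* Sketch of dataflow-style synchronization with von Neumann-style
     * execution: each thread frame holds a join counter for its pending
     * inputs; an arriving token decrements it, and the thread is enqueued
     * to run sequentially to completion once the counter reaches zero. */
    #include <stdio.h>

    #define MAX_THREADS 8

    typedef struct {
        int sync_count;              /* inputs still outstanding */
        void (*code)(void);          /* thread body, run sequentially */
    } thread_frame;

    static thread_frame frames[MAX_THREADS];
    static int ready_queue[MAX_THREADS];
    static int head = 0, tail = 0;

    static void token_arrival(int t) {
        if (--frames[t].sync_count == 0)
            ready_queue[tail++] = t;     /* thread is now enabled */
    }

    static void scheduler(void) {
        while (head < tail) {
            thread_frame *f = &frames[ready_queue[head++]];
            f->code();                   /* run to completion, no blocking */
        }
    }

    static void hello(void) { puts("thread enabled"); }

    int main(void) {
        frames[0].sync_count = 2;        /* waits for two inputs */
        frames[0].code = hello;
        token_arrival(0);                /* first operand arrives */
        token_arrival(0);                /* second operand arrives */
        scheduler();
        return 0;
    }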
international conference on parallel processing | 1993
Dae-Joon Hwang; Seung Ho Cho; Y. D. Kim; Sangyong Han
In most multithreaded node architectures motivated by the dataflow computational model, spatial parallelism cannot be exploited at the thread level due to the resource deficit incurred by their internal organization. We therefore propose a node architecture that exploits both the spatial and temporal parallelism of a program. A multi-port non-blocking data cache is incorporated into our design to cope with the excessive data bandwidth required by the parallel execution of multiple threads. The proposed node architecture may greatly reduce communication latency through the interconnection network. Simulation results show that parallel loops can be executed on this architecture more efficiently than on other competitive ones.
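A non-blocking data cache is conventionally built around miss-status holding registers (MSHRs), which record an outstanding miss so the processor can keep issuing accesses from other threads instead of stalling. The sketch below shows that standard mechanism, not the paper's exact multi-port design; all names are illustrative.

    /* Non-blocking cache sketch: a miss allocates an MSHR and returns
     * immediately; the pipeline stalls only when every MSHR is in use. */
    #include <stdbool.h>
    #include <stdio.h>

    #define LINES 16
    #define MSHRS 4

    typedef struct { bool valid; unsigned tag; } cache_line;
    typedef struct { bool busy; unsigned addr; } mshr;

    static cache_line cache[LINES];
    static mshr pending[MSHRS];

    /* Returns true on a hit; on a miss, tracks it without blocking. */
    static bool access_cache(unsigned addr) {
        unsigned idx = addr % LINES, tag = addr / LINES;
        if (cache[idx].valid && cache[idx].tag == tag)
            return true;                   /* hit: data available now */
        for (int m = 0; m < MSHRS; m++) {
            if (!pending[m].busy) {
                pending[m].busy = true;    /* record outstanding miss */
                pending[m].addr = addr;
                return false;              /* caller continues other work */
            }
        }
        return false;                      /* all MSHRs busy: must stall */
    }

    /* Called when memory responds: fill the line and free the MSHR. */
    static void miss_complete(int m) {
        unsigned addr = pending[m].addr;
        cache[addr % LINES].valid = true;
        cache[addr % LINES].tag = addr / LINES;
        pending[m].busy = false;
    }

    int main(void) {
        printf("hit=%d\n", access_cache(42));  /* miss, tracked in MSHR 0 */
        miss_complete(0);
        printf("hit=%d\n", access_cache(42));  /* now a hit */
        return 0;
    }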
international symposium on parallel architectures algorithms and networks | 1999
Daesuk Kwon; Sangyong Han; Heunghwan Kim
Many naive parallel processing schemes were not as successful as researchers expected because of the heavy communication and synchronization costs that parallelization introduces. In this paper, we identify the reasons for this poor performance and the compiler requirements for improving it. We observe that parallelization decisions should be driven by overhead information, and we add this capability to the automatic parallelizing compiler SUIF. We replaced SUIF's original backend with our own MPI-based backend, which validates parallelization decisions against overhead parameters. This backend converts shared-memory parallel programs into distributed-memory parallel programs with MPI function calls, avoiding the excessive parallelization that degrades performance.
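The abstract suggests generated code that distributes a loop only when the overhead parameters justify it. A minimal sketch of what such MPI output might look like, with an invented cost model and thresholds (this is not SUIF's actual backend output); compile with mpicc:

    /* Overhead-guarded parallel loop: distribute across MPI processes only
     * if estimated per-process work outweighs communication cost. */
    #include <mpi.h>
    #include <stdio.h>

    #define N 100000

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        static double a[N];
        double local = 0.0, total = 0.0;

        /* Crude cost model, invented for illustration: parallelize only
         * when per-process work exceeds communication overhead by a
         * safety factor. */
        double comp_cost = (double)N / size;    /* work per process */
        double comm_cost = 2.0 * size * 100.0;  /* startup + reduction */
        int parallelize = comp_cost > 4.0 * comm_cost;

        if (parallelize) {
            int lo = rank * (N / size);
            int hi = (rank == size - 1) ? N : lo + N / size;
            for (int i = lo; i < hi; i++)
                local += (a[i] = i) * 0.5;
            MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0,
                       MPI_COMM_WORLD);
        } else if (rank == 0) {
            /* Overhead dominates: fall back to sequential execution. */
            for (int i = 0; i < N; i++)
                total += (a[i] = i) * 0.5;
        }

        if (rank == 0) printf("total=%f (parallel=%d)\n", total, parallelize);
        MPI_Finalize();
        return 0;
    }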
hawaii international conference on system sciences | 1995
Sangho Ha; Sangyong Han; Heunghwan Kim
Multithreading is attractive in a large-scale parallel system since it allows split-phase memory operations and fast context switching between computations without blocking the processor. The performance of multithreaded architectures depends significantly on the quality of the multithreaded code. In this paper, we describe an enhanced thread formation scheme that produces efficient sequential threads from programs written in Id⁻, a lenient parallel language. The scheme features graph partitioning based only on long-latency instructions, a generalized switch-and-merge that combines multiple switches and merges, thread merging, and redundant arc elimination using thread precedence relations. Simulation results show that our scheme effectively reduces control and branch instructions.
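Of the scheme's features, partitioning at long-latency instructions is the easiest to sketch: a thread boundary is placed after each split-phase operation so the consumer of its result waits in a new thread. The toy C version below uses invented instruction structures, not the compiler's actual graph representation.

    /* Toy thread formation: assign a thread id to each instruction in
     * topologically ordered code, cutting only at long-latency operations
     * (e.g. split-phase remote loads), the criterion the abstract names. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        const char *op;
        bool long_latency;   /* split-phase remote memory access, etc. */
    } insn;

    static void form_threads(insn *code, int n, int *thread_of) {
        int tid = 0;
        for (int i = 0; i < n; i++) {
            thread_of[i] = tid;
            if (code[i].long_latency)
                tid++;       /* cut: consumers wait in a new thread */
        }
    }

    int main(void) {
        insn code[] = {
            {"add",         false},
            {"remote_load", true},   /* long latency: thread boundary */
            {"mul",         false},
            {"store",       false},
        };
        int thread_of[4];
        form_threads(code, 4, thread_of);
        for (int i = 0; i < 4; i++)
            printf("%-12s -> thread %d\n", code[i].op, thread_of[i]);
        return 0;
    }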
Journal of electrical engineering and information science | 1999
Eunha Rho; Sangyong Han; Heunghwan Kim
Journal of electrical engineering and information science | 2000
Eunha Rho; Sangyong Han; Heunghwan Kim
Journal of KIISE:Computer Systems and Theory | 2000
Junghwan Kim; Sangyong Han; Seung Ho Cho; Heunghwan Kim
Journal of KIISE:Computer Systems and Theory | 2000
Daesuk Kwon; Heunghwan Kim; Sangyong Han
Journal of electrical engineering and information science | 1996
Sangho Ha; Junghwan Kim; Eunha Rho; Yoonhee Nah; Sangyong Han; Dae-Joon Hwang; Heunghwan Kim; Seung Ho Cho