Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sangyong Han is active.

Publication


Featured research published by Sangyong Han.


international conference on parallel processing | 1994

Compilation of a Functional Language for the Multithreaded Architecture: DAVRID

Eunha Rho; Sangho Ha; Sangyong Han; Heunghwan Kim; Dae-Joon Hwang

Multithreading has attracted much attention as one of the strongest parallel instruction execution paradigms for massively parallel processing. In this paper, we describe the compilation process of the parallel programming language Id⁻ for the multithreaded architecture DAVRID (DAtaflow Von Neumann RISC hybrID). Two fundamental issues in parallel processing, tolerance to communication latency and inexpensive synchronization, are addressed by compiler-controlled multithreading. Our compiler features a simple mechanism for handling closures and a loop unfolding technique that handles parallel and sequential loops separately, which contributes greatly to the parallel execution of loops.
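The loop unfolding idea described above can be illustrated with a small sketch (hypothetical code, not the compiler's actual algorithm): a loop with no cross-iteration dependence is unfolded into independent fixed-size thread bodies, while a loop with a cross-iteration dependence stays as a single sequential thread.

```python
# Illustrative sketch only: "unfold" a parallel loop into fixed-size
# thread bodies; a sequential loop remains one thread run in order.

def unfold_loop(n_iters, has_cross_iteration_dep, iters_per_thread=4):
    """Return the iteration ranges assigned to each thread."""
    if has_cross_iteration_dep:
        # Sequential loop: one thread executes all iterations in order.
        return [range(0, n_iters)]
    # Parallel loop: unfold into independent fixed-size thread bodies.
    return [range(i, min(i + iters_per_thread, n_iters))
            for i in range(0, n_iters, iters_per_thread)]

parallel = unfold_loop(10, has_cross_iteration_dep=False)
sequential = unfold_loop(10, has_cross_iteration_dep=True)
```

Separating the two cases lets the parallel threads be scheduled freely while the sequential thread preserves its dependence chain.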


international conference on computer design | 1994

A massively parallel multithreaded architecture: DAVRID

Sangho Ha; Junghwan Kim; Eunha Rho; Yoonhee Nah; Sangyong Han; Dae-Joon Hwang; Heunghwan Kim; Seung Ho Cho

MPAs (massively parallel architectures) must address two fundamental issues to be scalable: synchronization and communication latency. Dataflow architectures offer the ability to exploit the massive parallelism inherent in programs, but they incur excessive synchronization costs and execute sequential code inefficiently. In contrast, MPAs based on the von Neumann computational model may suffer from inefficient synchronization mechanisms and communication latency. DAVRID (DAtaflow Von Neumann RISC hybrID) is a massively parallel multithreaded architecture. By combining the advantages of the von Neumann and dataflow models, DAVRID preserves good single-thread performance while tolerating latency and synchronization costs. We describe the DAVRID architecture and evaluate it through simulation results over several benchmarks.
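The hybrid execution model can be sketched roughly as follows (a toy illustration under assumed semantics, not the DAVRID design): threads are scheduled dataflow-style, firing only once all of their input tokens have arrived, while each thread body then executes sequentially, von Neumann-style.

```python
# Toy hybrid scheduler: dataflow firing rule between threads,
# sequential (von Neumann) execution inside each thread body.
from collections import deque

def run(threads, initial_tokens):
    """threads: name -> (needed_inputs, body, produced_tokens)."""
    arrived = set(initial_tokens)
    done, ready = [], deque()
    pending = dict(threads)
    while True:
        # Dataflow scheduling: enable every thread whose inputs are present.
        for name, (needs, body, outs) in list(pending.items()):
            if set(needs) <= arrived:
                ready.append(name)
                del pending[name]
        if not ready:
            return done
        name = ready.popleft()
        needs, body, outs = threads[name]
        body()                      # von Neumann execution of the body
        done.append(name)
        arrived |= set(outs)        # outputs become tokens for successors

trace = []
threads = {
    "A": ([], lambda: trace.append("A"), ["x"]),
    "B": (["x"], lambda: trace.append("B"), ["y"]),
    "C": (["x", "y"], lambda: trace.append("C"), []),
}
order = run(threads, initial_tokens=[])
```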


international conference on parallel processing | 1993

Exploiting Spatial and Temporal Parallelism in the Multithreaded Node Architecture Implemented on Superscalar RISC Processors

Dae-Joon Hwang; Seung Ho Cho; Y. D. Kim; Sangyong Han

In most multithreaded node architectures motivated by the dataflow computational model, spatial parallelism could not be exploited at the thread level due to the resource deficit incurred by their internal organization. We therefore propose a node architecture that exploits both the spatial and temporal parallelism of a program. A multi-port non-blocking data cache is incorporated into our design to cope with the excessive data bandwidth required by the parallel execution of multiple threads. The proposed node architecture may greatly reduce communication latency through the interconnection network. Simulation results show that parallel loops can be executed more efficiently on this architecture than on competitive ones.
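A toy scheduler can illustrate the two kinds of parallelism the abstract distinguishes (hypothetical code, not the proposed node architecture): issuing instructions from several ready threads in the same cycle exploits spatial parallelism across functional units, while spreading a thread's instructions over successive cycles, interleaved with other threads, exploits temporal parallelism.

```python
# Toy issue model: up to n_units instructions per cycle, drawn
# round-robin from whichever threads still have work.

def schedule(threads, n_units):
    """threads: list of per-thread instruction queues.
    Returns the list of instructions issued in each cycle."""
    cycles = []
    while any(threads):
        issued = []
        for t in threads:                 # spatial: fill the units this cycle
            if t and len(issued) < n_units:
                issued.append(t.pop(0))   # temporal: threads interleave over cycles
        cycles.append(issued)
    return cycles

cycles = schedule([["a1", "a2"], ["b1"], ["c1", "c2"]], n_units=2)
```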


international symposium on parallel architectures algorithms and networks | 1999

MPI backend for an automatic parallelizing compiler

Daesuk Kwon; Sangyong Han; Heunghwan Kim

Many naive parallel processing schemes were not as successful as researchers expected because of the heavy communication and synchronization costs introduced by parallelization. In this paper, we identify the reasons for this poor performance and the compiler requirements for improving it. We found that parallelizing decisions should be driven by overhead information. We added this idea to the automatic parallelizing compiler SUIF, replacing its original backend with one based on MPI that validates parallelization decisions against overhead parameters. This backend converts shared-memory parallel programs into distributed-memory parallel programs with MPI function calls, avoiding excessive parallelization that would degrade performance.
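The overhead-driven decision can be sketched with a simple cost model (an illustration with made-up parameters, not SUIF's or this backend's actual model): a loop is parallelized only when the estimated parallel time, including startup and per-process communication overhead, beats the serial time.

```python
# Hypothetical cost model: parallelize only when it actually pays off.

def should_parallelize(iterations, cost_per_iter, n_procs,
                       startup_overhead, comm_per_proc):
    serial = iterations * cost_per_iter
    parallel = (iterations * cost_per_iter) / n_procs \
               + startup_overhead + n_procs * comm_per_proc
    return parallel < serial

# A tiny loop is left serial; a large one is worth distributing.
small = should_parallelize(10, 1.0, 8, startup_overhead=50, comm_per_proc=5)
large = should_parallelize(100_000, 1.0, 8, startup_overhead=50, comm_per_proc=5)
```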


hawaii international conference on system sciences | 1995

Partitioning a lenient parallel language into sequential threads

Sangho Ha; Sangyong Han; Heunghwan Kim

Multithreading is attractive in a large-scale parallel system since it allows split-phase memory operations and fast context switching between computations without blocking the processor. The performance of multithreaded architectures depends significantly on the quality of the multithreaded code. In this paper, we describe an enhanced thread formation scheme that produces efficient sequential threads from programs written in Id⁻, a lenient parallel language. The scheme features graph partitioning based only on long-latency instructions, the combination of multiple switches and merges into a generalized switch-and-merge, thread merging, and redundant arc elimination using thread precedence relations. Simulation results show that our scheme effectively reduces control and branch instructions.
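The core partitioning idea, cutting threads only at long-latency instructions, can be sketched as follows (hypothetical instruction names, not the paper's partitioner): a thread ends at each split-phase operation, and the arriving response enables the next thread.

```python
# Illustrative partitioner: a new sequential thread starts after every
# long-latency (split-phase) operation, e.g. a remote I-structure fetch.

LONG_LATENCY = {"i_fetch", "remote_load"}

def partition(instructions):
    threads, current = [], []
    for op in instructions:
        current.append(op)
        if op in LONG_LATENCY:       # thread ends at the split-phase op;
            threads.append(current)  # its response enables the next thread
            current = []
    if current:
        threads.append(current)
    return threads

code = ["add", "i_fetch", "mul", "remote_load", "store"]
threads = partition(code)
```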


Journal of electrical engineering and information science | 1999

Performance Enhancement in Distributed Multithreaded Loop Execution with Shared I-Structure Access

Eunha Rho; Sangyong Han; Heunghwan Kim


Journal of electrical engineering and information science | 2000

A Non-Strict Data Access Model for Multithreaded Loop Execution

Eunha Rho; Sangyong Han; Heunghwan Kim


Journal of KIISE:Computer Systems and Theory | 2000

A Communication and Computation Overlapping Model through Loop Sub-partitioning and Dynamic Scheduling in Data Parallel Programs

Junghwan Kim; Sangyong Han; Seung Ho Cho; Heunghwan Kim


Journal of KIISE:Computer Systems and Theory | 2000

Backend of a Parallelizing Compiler for a Heterogeneous Parallel System

Daesuk Kwon; Heunghwan Kim; Sangyong Han


Journal of electrical engineering and information science | 1996

Design and Implementation of a Massively Parallel Multithreaded Architecture: DAVRID

Sangho Ha; Junghwan Kim; Eunha Rho; Yoonhee Nah; Sangyong Han; Dae-Joon Hwang; Heunghwan Kim; Seung Ho Cho

Collaboration


Dive into Sangyong Han's collaborations.

Top Co-Authors

Heunghwan Kim, Seoul National University
Eunha Rho, Seoul National University
Sangho Ha, Soonchunhyang University
Seung Ho Cho, Seoul National University
Junghwan Kim, Seoul National University
Yoonhee Nah, Seoul National University
Daesuk Kwon, Seoul National University
Y. D. Kim, Seoul National University