Dong-kwan Suh
Samsung
Publications
Featured research published by Dong-kwan Suh.
Field-Programmable Technology | 2012
Dong-kwan Suh; Ki-seok Kwon; Suk-Jin Kim; Soojung Ryu; Jeongwook Kim
Coarse-Grained Reconfigurable Architectures (CGRAs) have played a key role in the area of domain-specific processors due to their programmability and runtime reconfigurability. The Coarse Grained Array (CGA) structure enables target designs to achieve high performance, but it is easy to fall into over-design in terms of area. Moreover, the network overhead between the function units (FUs) seriously degrades the clock speed. In this paper, we propose a high-performance CGRA that facilitates design space exploration (DSE) to reduce these overheads. It employs building blocks, named mini cores, to mitigate the overhead involved in DSE aimed at achieving high clock speed and small area in the target design. The proposed approach reduces the design time by more than 100 times compared with the previous design. Experimental results show that the implemented architecture reduces logic area by 14.38% and improves clock frequency by 59.34% without performance loss.
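The abstract gives no implementation details, but the kind of design space exploration it describes can be pictured as a small search loop: enumerate candidate mini-core configurations, score each with an area/clock model, and keep the cheapest design that still meets a performance target. The parameter ranges and the cost model in this sketch are invented placeholders, not figures from the paper.

```python
# Hypothetical DSE sketch: pick the smallest mini-core configuration that
# still meets a throughput target. All numbers are illustrative only.
from itertools import product

MINI_CORE_COUNTS = [2, 4, 8]   # candidate numbers of mini cores (assumed)
FUS_PER_MINI_CORE = [2, 4]     # candidate FUs inside each mini core (assumed)

def estimate(mini_cores, fus):
    """Toy analytical model: area grows with the FU count, and the clock
    degrades as the FU-to-FU network gets larger."""
    total_fus = mini_cores * fus
    area = total_fus * 1.0 + mini_cores * 0.5         # arbitrary area units
    clock_mhz = 800 - 4 * total_fus - 2 * mini_cores  # network overhead penalty
    throughput = total_fus * clock_mhz                # rough ops per microsecond
    return area, clock_mhz, throughput

def explore(min_throughput):
    best = None
    for mc, fu in product(MINI_CORE_COUNTS, FUS_PER_MINI_CORE):
        area, clock, thr = estimate(mc, fu)
        if thr < min_throughput:
            continue                  # configuration misses the performance target
        if best is None or area < best[0]:
            best = (area, clock, mc, fu)
    return best

if __name__ == "__main__":
    print(explore(min_throughput=10_000))   # smallest qualifying design
```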
International Conference on Consumer Electronics | 2014
Tai-song Jin; Min-wook Ahn; Dong-hoon Yoo; Dong-kwan Suh; Yoonseo Choi; Do-Hyung Kim; Shihwa Lee
VLIW (Very Long Instruction Word) is one of the most popular architectures in embedded systems because it offers low power consumption and low hardware cost. Due to characteristics of the VLIW architecture such as bundled instructions and large register files, VLIW processors run with large instruction code sizes at relatively low clock frequencies. However, compact instruction size and high clock frequency are among the most important requirements of modern embedded consumer electronics. In this paper, we propose a novel instruction compression scheme to address this problem. Experiments show that the proposed scheme reduces instruction size by 23% and improves clock frequency by 25% on average compared with conventional compression schemes.
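The abstract does not spell out the proposed scheme, so the sketch below only illustrates a common idea behind VLIW code compression: NOP slots in a bundle are elided and a per-bundle presence mask records which slots carry real operations. The 4-slot width and 32-bit operation words are assumptions, and this is not necessarily the scheme proposed in the paper.

```python
# Generic NOP-elision sketch for VLIW bundles (illustration, not the paper's scheme).
NOP = 0x00000000
SLOTS = 4  # issue width of a hypothetical VLIW machine

def compress(bundle):
    """bundle: list of SLOTS 32-bit operation words; returns (mask, ops)."""
    mask, ops = 0, []
    for i, op in enumerate(bundle):
        if op != NOP:
            mask |= 1 << i      # mark the slot as present
            ops.append(op)      # store only the useful operation
    return mask, ops

def decompress(mask, ops):
    it = iter(ops)
    return [next(it) if mask & (1 << i) else NOP for i in range(SLOTS)]

if __name__ == "__main__":
    bundle = [0xDEADBEEF, NOP, 0x12345678, NOP]
    mask, ops = compress(bundle)
    assert decompress(mask, ops) == bundle
    raw_bits = SLOTS * 32
    packed_bits = SLOTS + len(ops) * 32   # mask bits plus stored operations
    print(f"{raw_bits} bits -> {packed_bits} bits")
```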
ACM Transactions on Architecture and Code Optimization | 2018
Hochan Lee; Mansureh S. Moghaddam; Dong-kwan Suh; Bernhard Egger
Modulo-scheduled coarse-grained reconfigurable array (CGRA) processors excel at exploiting loop-level parallelism with a high performance-per-watt ratio. The frequent reconfiguration of the array, however, causes between 25% and 45% of the consumed chip energy to be spent on the instruction memory and fetches therefrom. This article presents a hardware/software codesign methodology for such architectures that reduces both the size required to store the modulo-scheduled loops and the energy consumed by the instruction decode logic. The hardware modifications improve the spatial organization of a CGRA's execution plan by reorganizing the configuration memory into separate partitions based on a statistical analysis of the code. A compiler technique optimizes the generated code in the temporal dimension by minimizing the number of signal changes. On average, the optimizations achieve a reduction of more than 63% in code size and 70% in the energy consumed by the instruction decode logic across a wide variety of application domains. Decompression of the compressed loops can be performed in hardware with no additional latency, rendering the presented method ideal for low-power CGRAs running at high frequencies. The presented technique is orthogonal to dictionary-based compression schemes and can be combined with them to achieve a further reduction in code size.
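The "minimizing the number of signal changes" step can be pictured with a small sketch: bits of a configuration word that are don't-cares in a given cycle can simply repeat their value from the previous cycle, so consecutive words differ in fewer bit positions and the decode logic toggles less. The word and mask encoding below is invented for illustration and is not the paper's compiler algorithm.

```python
# Sketch: fill don't-care bits of modulo-scheduled configuration words with the
# previous cycle's values to reduce toggles seen by the decode logic.
def fill_dont_cares(words, care_masks):
    """words[i]: configuration word for cycle i; care_masks[i]: 1-bits are used."""
    out, prev = [], 0
    for word, care in zip(words, care_masks):
        filled = (word & care) | (prev & ~care)  # keep cared bits, reuse the rest
        out.append(filled)
        prev = filled
    return out

def toggles(words):
    """Total bit flips between consecutive configuration words."""
    return sum(bin(a ^ b).count("1") for a, b in zip(words, words[1:]))

if __name__ == "__main__":
    words      = [0b1010_1100, 0b0101_0011, 0b1010_1100]
    care_masks = [0b1111_0000, 0b0000_1111, 0b1111_0000]  # which bits matter each cycle
    print("toggles before:", toggles([w & m for w, m in zip(words, care_masks)]))
    print("toggles after :", toggles(fill_dont_cares(words, care_masks)))
```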
Archive | 2011
Dong-kwan Suh; Hyeong-Seok Yu; Suk-Jin Kim
Archive | 2010
Tai-song Jin; Dong-kwan Suh; Suk-Jin Kim
Archive | 2013
Dong-kwan Suh; Suk-Jin Kim; Hyeong-Seok Yu; Ki-seok Kwon; Jae-un Park
Archive | 2011
Dong-kwan Suh; Hyeong-Seok Yu; Suk-Jin Kim
Archive | 2017
Suk-Jin Kim; Dong-kwan Suh
Archive | 2017
Dong-kwan Suh; Suk-Jin Kim; Young-Hwan Park
Archive | 2015
Ki-seok Kwon; Min-wook Ahn; Dong-kwan Suh; Suk-Jin Kim