
Publication


Featured research published by Jang-Soo Lee.


international conference on computer design | 1999

Design and evaluation of a selective compressed memory system

Jang-Soo Lee; Won-Kee Hong; Shin-Dug Kim

This research explores the potential of on-chip cache compression, which can reduce not only the cache miss ratio but also the miss penalty when main memory is also managed in compressed form. However, decompression time has a critical effect on memory access time, and variable-sized compressed blocks tend to increase the design complexity of a compressed cache architecture. This paper suggests several techniques to reduce the decompression overhead and to manage the compressed blocks efficiently, including selective compression, fixed space allocation for compressed blocks, parallel decompression, and the use of a decompression buffer. Moreover, a simple compressed cache architecture based on these techniques and its management method are proposed. Results from trace-driven simulation show that this approach can provide around a 35% decrease in the on-chip cache miss ratio as well as a 53% decrease in data traffic over conventional memory systems. A large amount of the decompression overhead can also be eliminated, reducing the average memory access time by up to 20% relative to conventional memory systems.
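The selective-compression policy with fixed space allocation can be sketched roughly as follows. This is an illustration only: the 64-byte block size, the half-line slot threshold, and the use of zlib as the compressor are assumptions for the sketch, not the paper's actual parameters or algorithm.

```python
import zlib

LINE_SIZE = 64  # assumed uncompressed cache block size in bytes

def store_block(block: bytes, threshold: int = LINE_SIZE // 2) -> tuple[bytes, bool]:
    """Selective compression: keep the compressed form only when it fits
    the fixed half-size slot; otherwise store the block uncompressed.
    Returns (stored bytes, compressed-flag)."""
    assert len(block) == LINE_SIZE
    packed = zlib.compress(block)
    if len(packed) <= threshold:
        return packed, True   # good ratio: fits the fixed slot, store compressed
    return block, False       # poor ratio: store as-is, no decompression cost later

# A highly redundant block compresses well; a high-entropy block is left alone.
redundant = bytes(LINE_SIZE)          # all zeros
varied = bytes(range(LINE_SIZE))      # little redundancy
```

Because every compressed block occupies the same fixed slot, variable-sized allocation (and the design complexity it brings) is avoided, at the cost of leaving poorly compressible blocks uncompressed.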


Journal of Systems Architecture | 2000

An on-chip cache compression technique to reduce decompression overhead and design complexity

Jang-Soo Lee; Won-Kee Hong; Shin-Dug Kim

This research explores a compressed memory hierarchy model that can increase both the effective memory space and the bandwidth at each level of the memory hierarchy. It is well known that decompression time has a critical effect on memory access time and that variable-sized compressed blocks tend to increase the design complexity of compressed memory systems. This paper proposes a selective compressed memory system (SCMS) incorporating a compressed cache architecture and its management method. To reduce or hide decompression overhead, the SCMS employs several effective techniques, including selective compression, parallel decompression, and the use of a decompression buffer. In addition, a fixed memory-space allocation method is used to manage the compressed blocks efficiently. Trace-driven simulation shows that the SCMS approach can not only reduce the on-chip cache miss ratio and data traffic by about 35% and 53%, respectively, but also achieve a 20% reduction in average memory access time (AMAT) over conventional memory systems (CMS). Moreover, with some architectural enhancement, this approach can provide lower memory traffic at a lower cost than CMS. Most importantly, the SCMS is an attractive approach for future computer systems because it offers high performance in cases of long DRAM latency and limited bus bandwidth.
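The decompression buffer's role — paying decompression cost only on buffer misses — can be sketched as a small LRU cache of recently decompressed blocks. The entry count, LRU policy, and zlib compressor here are illustrative guesses, not the paper's design:

```python
import zlib
from collections import OrderedDict

class DecompressionBuffer:
    """A small LRU buffer of recently decompressed blocks; a hit returns
    the plain data and skips the decompressor entirely."""
    def __init__(self, entries: int = 8):
        self.entries = entries
        self.buf: OrderedDict[int, bytes] = OrderedDict()
        self.decompressions = 0   # counts the overhead actually paid

    def read(self, blk_id: int, compressed: bytes) -> bytes:
        if blk_id in self.buf:            # buffer hit: no decompression
            self.buf.move_to_end(blk_id)
            return self.buf[blk_id]
        self.decompressions += 1          # buffer miss: pay the overhead once
        data = zlib.decompress(compressed)
        self.buf[blk_id] = data
        if len(self.buf) > self.entries:  # evict least recently used
            self.buf.popitem(last=False)
        return data
```

Repeated references to the same compressed block then cost one decompression, which is how the buffer hides most of the overhead for hot data.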


Journal of Systems Architecture | 2000

A new cache architecture based on temporal and spatial locality

Jung-Hoon Lee; Jang-Soo Lee; Shin-Dug Kim

A data cache system is designed as a low-power, high-performance cache structure for embedded processors. A direct-mapped cache is a favorite choice for short cycle times but suffers from a high miss rate. The proposed dual data cache improves the miss ratio of a direct-mapped cache without affecting its access time. It exploits temporal and spatial locality effectively by maximizing the effective cache memory space for any given cache size. The system consists of two caches: a direct-mapped cache with a small block size and a fully associative spatial buffer with a large block size. Temporal locality is exploited by selectively caching candidate small blocks in the direct-mapped cache, while spatial locality is exploited aggressively by fetching multiple neighboring small blocks whenever a cache miss occurs. According to the results of comparison and analysis, similar performance can be achieved with a cache four times smaller than a conventional direct-mapped cache, and power consumption of the proposed cache can be reduced by around 4% compared with a victim cache configuration.
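The two-cache interaction can be sketched as a toy simulator: a miss fetches the whole large block into the spatial buffer, and a small block re-referenced there is promoted into the direct-mapped cache. All sizes, the LRU buffer policy, and the promote-on-buffer-hit rule are simplifying assumptions for illustration, not the paper's exact design:

```python
from collections import OrderedDict

SMALL, RATIO = 8, 4          # small block = 8 B; large block = 4 small blocks
DM_SETS, SB_ENTRIES = 16, 4  # tiny assumed sizes for the sketch

dm: dict[int, int] = {}      # direct-mapped cache: set index -> tag (data omitted)
sb: OrderedDict[int, bool] = OrderedDict()  # fully associative spatial buffer (LRU)

def access(addr: int) -> bool:
    """Return True on a hit in either the direct-mapped cache or the buffer."""
    blk = addr // SMALL
    idx, tag = blk % DM_SETS, blk // DM_SETS
    big = addr // (SMALL * RATIO)
    if dm.get(idx) == tag:               # hit in the direct-mapped cache
        return True
    if big in sb:                        # hit in the spatial buffer:
        sb.move_to_end(big)
        dm[idx] = tag                    # promote the re-referenced small block
        return True
    sb[big] = True                       # miss: aggressively fetch the large block
    if len(sb) > SB_ENTRIES:
        sb.popitem(last=False)           # evict least recently used large block
    return False
```

A cold miss on one word thus makes all of its neighbors in the same large block hit, capturing spatial locality, while only re-referenced small blocks consume space in the direct-mapped cache.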


international conference on computer design | 2000

A selective temporal and aggressive spatial cache system based on time interval

Jung-Hoon Lee; Jang-Soo Lee; Shin-Dug Kim

This paper proposes a new cache system that can better exploit temporal and spatial locality using only simple hardware control, without any locality-detection hardware or compiler aid. The proposed cache system consists of two caches with different associativities and block sizes: a direct-mapped cache with a small block size and a fully associative spatial buffer whose large blocks are multiples of the small blocks. Spatial locality is exploited by aggressively fetching the large block containing any missed small block into the buffer, and temporal locality is exploited by selectively storing small blocks that were referenced while in the spatial buffer. To determine which blocks to store in the direct-mapped cache, the proposed cache system uses a time-interval-based selection mechanism. According to the simulation results, similar performance can be achieved with a cache four times smaller than a conventional direct-mapped cache.
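One plausible reading of a time-interval-based selection mechanism is: promote a small block only if it is re-referenced within a bounded window, evidence of temporal locality. The window length, the global access-count clock, and the promote-on-re-reference rule below are guesses for illustration, not the paper's actual mechanism:

```python
INTERVAL = 100            # assumed selection window, measured in accesses
last_ref: dict[int, int] = {}   # small-block id -> time of its last reference
clock = 0

def should_promote(blk: int) -> bool:
    """Select a small block for the direct-mapped cache only if it was
    re-referenced within the last INTERVAL accesses."""
    global clock
    clock += 1
    prev = last_ref.get(blk)
    last_ref[blk] = clock
    return prev is not None and clock - prev <= INTERVAL
```

Blocks touched once and never again, or re-touched only after a long gap, are filtered out, so the small direct-mapped cache holds only blocks with demonstrated temporal locality.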


international conference on computer design | 2001

A banked-promotion TLB for high performance and low power

Jung-Hoon Lee; Jang-Soo Lee; Seh-Woong Jeong; Shin-Dug Kim

This research designs a simple but high-performance TLB (translation lookaside buffer) system with low power consumption. We propose a new TLB structure that supports two page sizes dynamically and selectively, for a high-performance, low-cost design without any operating system support. For high performance, a promotion-TLB is designed to support the two page sizes. To attain low power consumption, a banked-TLB is constructed by dividing one fully associative TLB into two fully associative sub-TLBs. These two structures are integrated into a banked-promotion TLB, a low-power, high-performance TLB structure for embedded processors. According to the results of comparison and analysis, similar performance can be achieved with fewer TLB entries, and energy dissipation can be reduced by around 50% compared with a fully associative TLB.
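A rough sketch of the lookup path: large-page (promoted) entries are checked, then only one of the two small sub-banks, selected by a VPN bit, is probed — which is where the energy saving comes from, since roughly half the entries are activated per lookup. The page sizes, the low-bit bank selection, and the probe order are assumptions for the sketch, not the paper's design:

```python
SMALL_PAGE, LARGE_PAGE = 4096, 64 * 1024  # assumed small and large page sizes

bank: list[dict[int, int]] = [{}, {}]  # two small fully associative sub-TLBs
large_tlb: dict[int, int] = {}         # entries promoted to the large page size

def translate(vaddr: int):
    """Return the physical address, or None on a TLB miss (page-table walk)."""
    lvpn = vaddr // LARGE_PAGE
    if lvpn in large_tlb:              # one promoted entry maps a whole large page
        return large_tlb[lvpn] + vaddr % LARGE_PAGE
    vpn = vaddr // SMALL_PAGE
    b = bank[vpn & 1]                  # bank select: only one sub-TLB is probed
    if vpn in b:
        return b[vpn] + vaddr % SMALL_PAGE
    return None
```

Promotion pays off twice here: a single large-page entry covers many small pages (fewer entries needed), and the banked small-page lookup touches half the CAM cells of an undivided fully associative TLB.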


Microprocessors and Microsystems | 2002

Performance analysis of a selectively compressed memory system

Jang-Soo Lee; Shin-Dug Kim; Charles C. Weems

On-line data compression is an alternative technique for improving memory system performance that can increase both the effective memory space and the bandwidth of memory systems. However, the decompression time incurred when accessing compressed data may offset the benefits of compression. In this paper, a selectively compressed memory system (SCMS) based on a combination of selective compression and hiding of decompression overhead is proposed and analyzed. The architecture of an efficient compressed cache and its management policies are presented. Analytical modeling shows that the performance of the SCMS is influenced by the compression efficiency, the percentage of references to compressed data blocks, and the percentage of references found in the decompression buffer. The decompression buffer plays the most important role in improving the performance of the SCMS: if it can filter more than 70% of the references to compressed blocks, the SCMS can significantly improve performance over conventional memory systems.
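The analytical dependence described above can be captured in a simple AMAT-style model in which decompression cost is paid only on references to compressed blocks that miss the decompression buffer. The functional form and parameter names are my reconstruction in the spirit of the paper's analysis, not its actual equations:

```python
def amat_scms(hit_time: float, miss_ratio: float, miss_penalty: float,
              p_compressed: float, buffer_hit: float, decomp_time: float) -> float:
    """Average memory access time for the SCMS (illustrative model):
    hit_time     -- cache hit latency (cycles)
    miss_ratio   -- cache miss ratio
    miss_penalty -- cycles to service a miss
    p_compressed -- fraction of references that touch compressed blocks
    buffer_hit   -- fraction of those filtered by the decompression buffer
    decomp_time  -- cycles for one decompression"""
    decomp_overhead = p_compressed * (1.0 - buffer_hit) * decomp_time
    return hit_time + miss_ratio * miss_penalty + decomp_overhead
```

With this model, raising the buffer hit rate from 0 toward 1 linearly removes the decompression term, which is consistent with the claim that the buffer is the dominant factor: past a high filter rate (the paper cites 70%), the residual overhead becomes small relative to the bandwidth and capacity gains.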


Proceedings 25th EUROMICRO Conference. Informatics: Theory and Practice for the New Millennium | 1999

A selective compressed memory system by on-line data decompressing

Jang-Soo Lee; Won-Kee Hong; Shin-Dug Kim

The article proposes a selective compressed memory system (SCMS) built around a compressed cache architecture in which only data blocks with good compression efficiency are compressed, and all compressed blocks are stored in a fixed memory space. Selective compression reduces the decompression overhead caused by on-line data decompression, and fixed memory-space allocation allows efficient management of the compressed blocks. Results from a trace-driven simulation show that the SCMS approach can provide around a 35% decrease in the on-chip cache miss ratio as well as a 53% decrease in data traffic over conventional memory systems. Furthermore, a large amount of the decompression overhead can be eliminated, reducing the average memory access time by a maximum of 20% against conventional memory systems.


Focus on Powder Coatings | 2000

The cache memory system for CalmRISC32

Kil-Whan Lee; Jang-Soo Lee; Gi-Ho Park; Jung-Hoon Lee; Tack-Don Han; Shin-Dug Kim; Yong-Chun Kim; Seh-Woong Jung; Kwang-Yup Lee

The cache memory system for CalmRISC32 embedded processor is described in this paper. A dual data cache system structure called a cooperative cache that takes advantage of design flexibilities of a dual cache structure is used as the cache memory system for CalmRISC32 to improve performance and reduce power consumption. The cooperative cache system is applied to both data cache and instruction cache. This paper describes the structure and operational model of the cache memory system for CalmRISC32. The implementation of the cache memory system for CalmRISC32 is also presented.


Korean Journal of Chemical Engineering | 2016

Utilization of automobile shredder residue (ASR) as a reducing agent for the recovery of black copper

Won-Seok Yang; Ji Eun Lee; Yong-Chil Seo; Jang-Soo Lee; Heung-Min Yoo; Jun-Kyung Park; Se-Won Park; Hang Seok Choi; Ki-Bae Lee

The physicochemical characteristics of automobile shredder residue (ASR) and its melting slag were investigated, in particular the applicability of ASR as a reducing agent in the black copper recovery process. ASR is classified into three types after the shredding process: heavy fluff, light fluff, and glass and soil. In this study, the portions of heavy fluff, light fluff, and glass and soil in the ASR were 89.2 wt%, 8.1 wt%, and 2.7 wt%, respectively. Physicochemical analysis revealed that moisture and fixed carbon content were low in the heavy and light fluffs, while combustible content was the highest. The higher heating value (HHV) of light fluff was 6,607 kcal/kg, and that of heavy fluff was 5,312 kcal/kg. Overall, the separation of black copper and discard slag appears to be governed mainly by the melting temperature. Therefore, if basicity and melting temperature are properly controlled, ASR can be used as a reducing agent in the smelting process for black copper recovery. The possibility of black copper recovery from ASR, and the associated heavy-metal contamination, are also evaluated.


midwest symposium on circuits and systems | 2000

A low-power cache system for embedded processors

Gi-Ho Park; Kil-Whan Lee; Jang-Soo Lee; Tack-Don Han; Shin-Dug Kim; Yong-Chun Kim; Seh-Woong Jeong; Kwang-Yup Lee

A low-power cache structure for embedded processors, called a cooperative cache system, is presented in this paper. The cooperative cache system reduces the power consumption of the cache by virtue of its structure, which consists of two separate caches with different associativities and block sizes. The cooperative cache system is adopted as the cache structure for the CalmRISC-32 embedded processor, a prototype chip of which was recently manufactured by Samsung Electronics Co. in a 0.25 μm, 4-metal process.

Collaboration


Dive into Jang-Soo Lee's collaboration.

Top Co-Authors

Jung-Hoon Lee

Gyeongsang National University
