
Publication


Featured research published by Chong Sang Kim.


IEEE Transactions on Computers | 2001

LRFU: a spectrum of policies that subsumes the least recently used and least frequently used policies

Donghee Lee; Jongmoo Choi; Jong-Hun Kim; Sam H. Noh; Sang Lyul Min; Yookun Cho; Chong Sang Kim

Efficient and effective buffering of disk blocks in main memory is critical for better file system performance due to a wide speed gap between main memory and hard disks. In such a buffering system, one of the most important design decisions is the block replacement policy that determines which disk block to replace when the buffer is full. In this paper, we show that there exists a spectrum of block replacement policies that subsumes the two seemingly unrelated and independent Least Recently Used (LRU) and Least Frequently Used (LFU) policies. The spectrum is called the LRFU (Least Recently/Frequently Used) policy and is formed by how much more weight we give to the recent history than to the older history. We also show that there is a spectrum of implementations of the LRFU that again subsumes the LRU and LFU implementations. This spectrum is again dictated by how much weight is given to recent and older histories, and the time complexity of the implementations lies between O(1) (the time complexity of LRU) and O(log_2 n) (the time complexity of LFU), where n is the number of blocks in the buffer. Experimental results from trace-driven simulations show that the performance of the LRFU is at least competitive with that of previously known policies for the workloads we considered.
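
To make the policy concrete, here is a minimal Python sketch of LRFU, assuming the paper's weighting function F(x) = (1/2)^(lam * x); the class name and the O(n) eviction scan are our illustrative simplifications, not the authors' O(log_2 n) implementation.

```python
class LRFUCache:
    """Minimal sketch of the LRFU idea: each block carries a CRF
    (Combined Recency and Frequency) value C(b) = sum of F(now - t_i)
    over its past reference times t_i, with F(x) = (1/2) ** (lam * x).
    lam = 0 behaves like LFU; lam = 1 behaves like LRU."""

    def __init__(self, capacity, lam=0.5):
        self.capacity = capacity
        self.lam = lam
        self.clock = 0
        self.blocks = {}  # block -> (crf_at_last_ref, last_ref_time)

    def _crf_now(self, block):
        # Decay the stored CRF value to the current time.
        crf, last = self.blocks[block]
        return crf * 2.0 ** (-self.lam * (self.clock - last))

    def access(self, block):
        """Reference a block; returns True on a hit, False on a miss."""
        self.clock += 1
        if block in self.blocks:
            # Incremental update: C(b) <- F(0) + F(delta) * C(b).
            self.blocks[block] = (1.0 + self._crf_now(block), self.clock)
            return True
        if len(self.blocks) >= self.capacity:
            # Evict the block with the smallest current CRF value
            # (an O(n) scan; the paper gives an O(log n) heap scheme).
            victim = min(self.blocks, key=self._crf_now)
            del self.blocks[victim]
        self.blocks[block] = (1.0, self.clock)
        return False
```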


IEEE Transactions on Computers | 1998

Analysis of cache-related preemption delay in fixed-priority preemptive scheduling

Chang-Gun Lee; Joosun Hahn; Yangmin Seo; Sang Lyul Min; Rhan Ha; Seongsoo Hong; Chang Yun Park; Minsuk Lee; Chong Sang Kim

We propose a technique for analyzing cache-related preemption delays, which cause unpredictable variation in task execution time, in the context of fixed-priority preemptive scheduling. The proposed technique consists of two steps. The first step performs a per-task analysis to estimate the cache-related preemption cost for each execution point in a given task. The second step computes the worst case response time of each task, including the cache-related preemption delay, using a response time equation and a linear programming technique. This step takes as its input the preemption cost information of tasks obtained in the first step. This paper also compares the proposed approach with previous approaches. The results show that the proposed approach gives a prediction of the worst case cache-related preemption delay that is up to 60 percent tighter than those obtained from the previous approaches.
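
For intuition about the second step, here is a toy sketch of a worst-case response time iteration. Hedged: the paper bounds the preemption-cost term with per-point analysis and linear programming; this sketch folds it into an assumed fixed cost gamma_j per preemption, and all names are illustrative.

```python
import math

def response_time(task, higher_prio, gamma):
    """Toy fixed-point iteration for the worst-case response time
        R = C_i + sum over higher-priority tasks j of
            ceil(R / T_j) * (C_j + gamma_j),
    where gamma_j approximates the cache-related preemption cost per
    preemption. Tasks are (C, T) pairs with deadline = period T;
    returns None if the task is unschedulable."""
    c_i, t_i = task
    r = c_i
    while True:
        r_next = c_i + sum(math.ceil(r / t_j) * (c_j + gamma[j])
                           for j, (c_j, t_j) in enumerate(higher_prio))
        if r_next > t_i:
            return None          # deadline missed
        if r_next == r:
            return r             # fixed point reached
        r = r_next

# Example: task (C=4, T=20) under two higher-priority tasks.
print(response_time((4, 20), [(2, 10), (3, 15)], gamma=[1, 1]))
```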


Measurement and Modeling of Computer Systems | 1999

On the existence of a spectrum of policies that subsumes the least recently used (LRU) and least frequently used (LFU) policies

Donghee Lee; Jongmoo Choi; Jong-Hun Kim; Sam H. Noh; Sang Lyul Min; Yookun Cho; Chong Sang Kim

We show that there exists a spectrum of block replacement policies that subsumes both the Least Recently Used (LRU) and the Least Frequently Used (LFU) policies. The spectrum is formed according to how much more weight we give to the recent history than to the older history, and is referred to as the LRFU (Least Recently/Frequently Used) policy. Unlike many previous policies that use limited history to make block replacement decisions, the LRFU policy uses the complete reference history of blocks recorded during their cache residency. Nevertheless, the LRFU requires only a few words for each block to maintain such history. This paper also describes an implementation of the LRFU that again subsumes the LRU and LFU implementations. The LRFU policy is applied to buffer caching, and results from trace-driven simulations show that the LRFU performs better than previously known policies for the workloads we considered. This point is reinforced by results from our integration of the LRFU into the FreeBSD operating system.
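
The few-words-per-block claim rests on the multiplicative property F(a + b) = F(a) * F(b) of the weighting function, which lets the full-history sum be folded into a single stored value. A small illustrative sketch (our names, not the authors' code):

```python
def crf(ref_times, now, lam):
    """CRF computed from the complete reference history:
    C(b) = sum of (1/2) ** (lam * (now - t)) for each past reference t."""
    return sum(0.5 ** (lam * (now - t)) for t in ref_times)

# Because F(a + b) = F(a) * F(b), the same value can be maintained
# incrementally from just (last CRF, last reference time):
#     C_new = F(0) + F(now - last) * C_old
# which is why LRFU needs only a few words per block despite using
# the complete reference history.
```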


IEEE Transactions on Software Engineering | 2001

Bounding cache-related preemption delay for real-time systems

Chang-Gun Lee; Kwangpo Lee; Joosun Hahn; Yangmin Seo; Sang Lyul Min; Rhan Ha; Seongsoo Hong; Chang Yun Park; Minsuk Lee; Chong Sang Kim

Cache memory is used in almost all computer systems today to bridge the ever-increasing speed gap between the processor and main memory. However, its use in multitasking computer systems introduces additional preemption delay due to the reloading of memory blocks that are replaced during preemption. This cache-related preemption delay poses a serious problem in real-time computing systems where predictability is of utmost importance. We propose an enhanced technique for analyzing and thus bounding the cache-related preemption delay in fixed-priority preemptive scheduling, focusing on instruction caching. The proposed technique improves upon previous techniques in two important ways. First, the technique takes into account the relationship between a preempted task and the set of tasks that execute during the preemption when calculating the cache-related preemption delay. Second, the technique considers the phasing of tasks to eliminate many infeasible task interactions. These two features are expressed as constraints of a linear programming problem whose solution gives a guaranteed upper bound on the cache-related preemption delay. This paper also compares the proposed technique with previous techniques using randomly generated task sets. The results show that the improvement on the worst-case response time prediction by the proposed technique over previous techniques ranges between 5 percent and 18 percent depending on the cache refill time when the task set utilization is 0.6. The results also show that as the cache refill time increases, the improvement increases, which indicates that accurate prediction of cache-related preemption delay by the proposed technique becomes increasingly important if the current trend of widening speed gap between the processor and main memory continues.
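
A toy of the linear programming formulation, purely illustrative: the costs and constraint rows below are invented, whereas the paper derives them from task periods, phasing, and per-task preemption-cost analysis.

```python
from scipy.optimize import linprog

# Decision variables x_j: number of preemptions charged to task j.
# Objective: maximize total CRPD = sum(g_j * x_j), where g_j is the
# cache reload cost per preemption (values below are made up).
g = [8.0, 5.0]
A_ub = [[1, 0],    # x_0 <= 3   (e.g. a ceil(R/T) cap)
        [0, 1],    # x_1 <= 5
        [1, 1]]    # x_0 + x_1 <= 6  (e.g. a phasing constraint)
b_ub = [3, 5, 6]

# linprog minimizes, so negate the objective to maximize it.
res = linprog(c=[-v for v in g], A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * len(g))
crpd_upper_bound = -res.fun
print(crpd_upper_bound)   # 39.0 here: x = (3, 3) is optimal
```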


Real-Time Systems Symposium | 1996

Enhanced analysis of cache-related preemption delay in fixed-priority preemptive scheduling

Chang-Gun Lee; Joosun Hahn; Yangmin Seo; Sang Lyul Min; Rhan Ha; Seongsoo Hong; Chang Yun Park; Minsuk Lee; Chong Sang Kim

We propose an enhanced technique for analyzing, and thus bounding, the cache-related preemption delay in fixed-priority preemptive scheduling, focusing on instruction caching. The proposed technique improves upon previous techniques in two important ways. First, the technique takes into account the relationship between a preempted task and the set of tasks that execute during the preemption when calculating the cache-related preemption delay. Second, the technique considers the phasing of tasks to eliminate many infeasible task interactions. These two features are expressed as constraints of a linear programming problem whose solution gives a guaranteed upper bound on the cache-related preemption delay. The paper also compares the proposed technique with previous techniques. The results show that the proposed technique gives a worst case response time prediction that is up to 60% tighter than those of the previous techniques.


Real-Time Systems Symposium | 1995

Worst case timing analysis of RISC processors: R3000/R3010 case study

Yerang Hur; Young Hyun Bae; Sung-Soo Lim; Sung-Kwan Kim; Byung-Do Rhee; Sang Lyul Min; Chang Yun Park; Minsuk Lee; Heonshik Shin; Chong Sang Kim

This paper presents a case study of worst case timing analysis for a RISC processor. The target machine consists of the R3000 CPU and the R3010 FPA (Floating Point Accelerator) and is typical of a RISC system with pipelined execution units and cache memories. Our methodology is an extension of the existing timing schema. The extended timing schema provides a means to reason about the execution time variation of a program construct caused by its surrounding program constructs, which arises from the pipelined execution and cache memories of RISC processors. The main focus of this paper is on explaining the necessary steps for performing timing analysis of a given target machine within the extended timing schema framework. This paper also gives results from experiments using a timing tool for the target machine that was built based on the extended timing schema approach.
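
For context, here is a minimal sketch of the basic timing schema that the paper extends; the tuple encoding of program constructs is our illustrative choice, not the paper's notation.

```python
def wcet(node):
    """Basic timing schema: worst-case execution times compose
    bottom-up over language constructs. Nodes are tuples:
      ("basic", cycles)
      ("seq", s1, s2)
      ("if", cond, then_part, else_part)
      ("loop", bound, cond, body)"""
    kind = node[0]
    if kind == "basic":
        return node[1]
    if kind == "seq":
        return wcet(node[1]) + wcet(node[2])
    if kind == "if":
        return wcet(node[1]) + max(wcet(node[2]), wcet(node[3]))
    if kind == "loop":
        n, cond, body = node[1], node[2], node[3]
        return (n + 1) * wcet(cond) + n * wcet(body)
    raise ValueError(f"unknown construct: {kind}")

# The paper's extension replaces these scalar times with sets of
# timing abstractions carrying pipeline and cache state, so a
# construct's time can depend on the constructs that surround it.
print(wcet(("loop", 10, ("basic", 2), ("if", ("basic", 1),
            ("basic", 5), ("basic", 3)))))  # 82
```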


Languages, Compilers, and Tools for Embedded Systems | 1998

Limited Preemptible Scheduling to Embrace Cache Memory in Real-Time Systems

Sheayun Lee; Chang-Gun Lee; Minsuk Lee; Sang Lyul Min; Chong Sang Kim

In multi-tasking real-time systems, inter-task cache interference due to preemptions degrades system performance and predictability, complicating system design and analysis. To address this problem, we propose a novel scheduling scheme, called LPS (Limited Preemptible Scheduling), that limits preemptions to predetermined points with small cache-related preemption costs. We also give an accompanying analysis method that determines the schedulability of a given task set under LPS. By limiting preemption points, the proposed LPS scheme reduces preemption costs and thus increases the system throughput. Experimental results show that LPS can increase schedulable utilization by more than 10% and save processor time by up to 44% as compared with a traditional fully preemptible scheduling scheme.
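
A hedged sketch of the underlying trade-off, with illustrative names and data layout: preemption is allowed only at points whose cache-related cost is within a budget, at the price of blocking equal to the longest non-preemptible stretch.

```python
def lps_analysis(points, budget, task_wcet):
    """points: sorted (time_offset, preemption_cost) pairs along one
    task's execution; budget: largest tolerable per-preemption cost.
    Returns the allowed preemption points, the blocking the task
    imposes on higher-priority tasks (longest non-preemptible run),
    and the worst preemption cost that can still occur."""
    allowed = [t for t, cost in points if cost <= budget]
    edges = [0.0] + allowed + [task_wcet]
    blocking = max(b - a for a, b in zip(edges, edges[1:]))
    worst_cost = max((c for _, c in points if c <= budget), default=0)
    return allowed, blocking, worst_cost

# Example: points at 2, 5, 8 (costs 1, 9, 2) in a task with WCET 10;
# a budget of 3 allows preemption at 2 and 8, giving blocking 6.
print(lps_analysis([(2, 1), (5, 9), (8, 2)], budget=3, task_wcet=10))
```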


High Performance Computer Architecture | 1995

U-cache: a cost-effective solution to synonym problem

Jesung Kim; Sang Lyul Min; Sanghoon Jeon; Byoungchu Ahn; Deog-Kyoon Jeong; Chong Sang Kim

This paper proposes a cost-effective solution to the synonym problem. In this proposed solution, a minimal hardware addition guarantees correctness, whereas the software counterpart helps improve performance. The key to this proposed solution is the addition of a small physically-indexed cache called the U-cache. The U-cache maintains the reverse translation information of the cache blocks that belong to unaligned virtual pages only, where aligned means that the lower bits of the virtual page number match those of the corresponding physical page number. A U-cache, even with only one entry, ensures correct handling of synonyms. A simple software optimization in the form of page alignment helps improve the performance. Performance evaluation based on ATUM traces shows that a U-cache with only a few entries performs almost as well as (and in some cases outperforms) a fully-configured hardware-based solution when more than 95% of the pages are aligned.
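
A minimal sketch of the alignment test at the heart of the scheme; the page and cache sizes are assumptions for illustration, not taken from the paper.

```python
PAGE_BITS = 12    # assumed 4 KiB pages
INDEX_BITS = 14   # assumed 16 KiB direct-mapped, virtually indexed cache

def is_aligned(vpn, ppn):
    """A mapping is 'aligned' when the low VPN bits that take part in
    cache indexing equal the corresponding PPN bits. Synonyms of
    aligned pages always land in the same cache set, so only unaligned
    pages need reverse-translation entries in the U-cache."""
    overlap_bits = INDEX_BITS - PAGE_BITS   # VPN bits used by the index
    mask = (1 << overlap_bits) - 1
    return (vpn & mask) == (ppn & mask)

print(is_aligned(0x123, 0x723))  # True: low 2 bits match (3 == 3)
print(is_aligned(0x123, 0x456))  # False: low 2 bits differ (3 != 2)
```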


Proceedings of 11th IEEE Workshop on Real-Time Operating Systems and Software | 1994

Issues of advanced architectural features in the design of a timing tool

Byung-Do Rhee; Sang Lyul Min; Sung-Soo Lim; Heonshik Shin; Chong Sang Kim; Chang Yun Park

This paper describes a timing tool being developed by a real-time research group at Seoul National University. Our focus is on the issues arising from advanced architectural features such as pipelined execution and cache memories found in many modern RISC-style processors. For each architectural feature, we state the issues and explain our approach.


Real-Time Systems | 1999

Cache-Conscious Limited Preemptive Scheduling

Sheayun Lee; Sang Lyul Min; Chong Sang Kim; Chang-Gun Lee; Minsuk Lee

In multi-tasking real-time systems, inter-task cache interference due to preemptions degrades schedulability as well as performance. To address this problem, we propose a novel scheduling scheme, called limited preemptive scheduling (LPS), that limits preemptions to execution points with small cache-related preemption costs. Limiting preemptions decreases the cache-related preemption costs of tasks but increases blocking delay of higher priority tasks. The proposed scheme makes an optimal trade-off between these two factors to maximize the schedulability of a given task set while minimizing cache-related preemption delay of tasks. Experimental results show that the LPS scheme improves the maximum schedulable utilization by up to 40% compared with the traditional fully preemptive scheduling (FPS) scheme. The results also show that up to 20% of processor time is saved by the LPS scheme due to reduction in the cache-related preemption costs. Finally, the results show that both the improvement of schedulability and the saving of processor time by the LPS scheme increase as the speed gap between the processor and main memory widens.

Collaboration


Dive into Chong Sang Kim's collaboration.

Top Co-Authors

Sang Lyul Min (Seoul National University)
Chang-Gun Lee (Seoul National University)
Heonshik Shin (Seoul National University)
Yookun Cho (Seoul National University)
Donghee Lee (Seoul National University)
Jesung Kim (Seoul National University)
Sam H. Noh (Ulsan National Institute of Science and Technology)