Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Sam H. Noh is active.

Publication


Featured research published by Sam H. Noh.


IEEE Transactions on Consumer Electronics | 2002

A space-efficient flash translation layer for CompactFlash systems

Jesung Kim; Jong Min Kim; Sam H. Noh; Sang Lyul Min; Yookun Cho

Flash memory is becoming increasingly important as nonvolatile storage for mobile consumer electronics due to its low power consumption and shock resistance. However, it imposes technical challenges in that a write should be preceded by an erase operation, and that this erase operation can be performed only in a unit much larger than the write unit. To address these technical hurdles, an intermediate software layer called a flash translation layer (FTL) is generally employed to redirect logical addresses from the host system to physical addresses in flash memory. Previous approaches have performed this address translation at the granularity of either a write unit (page) or an erase unit (block). We propose a novel FTL design that combines the two different granularities in address translation. This is motivated by the idea that coarse grain address translation lowers the resources required to maintain translation information, which is crucial in mobile consumer products for cost and power consumption reasons, while fine grain address translation is efficient in handling small size writes. Performance evaluation based on trace-driven simulation shows that the proposed scheme significantly outperforms previously proposed approaches.
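
The combined-granularity lookup is easier to see in a small sketch. The following C fragment illustrates the idea under simple assumptions: a block-level table covers most of the address space, while a handful of page-mapped "log" blocks absorb small writes; all names, sizes, and fields are hypothetical and not taken from the paper.

```c
#include <stdint.h>

#define PAGES_PER_BLOCK 64          /* illustrative flash geometry */
#define NUM_LOG_BLOCKS   8          /* small page-mapped ("fine grain") area */
#define NUM_DATA_BLOCKS 1024        /* block-mapped ("coarse grain") area */
#define INVALID 0xFFFFFFFFu

/* Coarse-grained map: one entry per logical block. */
static uint32_t block_map[NUM_DATA_BLOCKS];         /* logical block -> physical block */

/* Fine-grained map for a few log blocks that absorb small writes.
 * Assumes unused entries have been initialized to INVALID. */
struct log_block {
    uint32_t logical_block;                         /* logical block it shadows */
    uint32_t page_map[PAGES_PER_BLOCK];             /* page offset -> physical page, or INVALID */
};
static struct log_block log_blocks[NUM_LOG_BLOCKS];

/* Translate a logical page number to a physical page number: recent small
 * writes are found in the page-level log map; everything else falls back
 * to the cheap block-level map. */
uint32_t translate(uint32_t lpn)
{
    uint32_t lbn    = lpn / PAGES_PER_BLOCK;        /* logical block number */
    uint32_t offset = lpn % PAGES_PER_BLOCK;        /* page offset in block */

    for (int i = 0; i < NUM_LOG_BLOCKS; i++) {
        if (log_blocks[i].logical_block == lbn &&
            log_blocks[i].page_map[offset] != INVALID)
            return log_blocks[i].page_map[offset];  /* fine-grained hit */
    }
    return block_map[lbn] * PAGES_PER_BLOCK + offset;   /* coarse-grained path */
}
```

The memory saving comes from the table sizes: the block map costs one entry per erase unit, and only the few log blocks pay the per-page cost.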


IEEE Transactions on Computers | 2001

LRFU: a spectrum of policies that subsumes the least recently used and least frequently used policies

Donghee Lee; Jongmoo Choi; Jong-Hun Kim; Sam H. Noh; Sang Lyul Min; Yookun Cho; Chong Sang Kim

Efficient and effective buffering of disk blocks in main memory is critical for better file system performance due to a wide speed gap between main memory and hard disks. In such a buffering system, one of the most important design decisions is the block replacement policy that determines which disk block to replace when the buffer is full. In this paper, we show that there exists a spectrum of block replacement policies that subsumes the two seemingly unrelated and independent Least Recently Used (LRU) and Least Frequently Used (LFU) policies. The spectrum is called the LRFU (Least Recently/Frequently Used) policy and is formed by how much more weight we give to the recent history than to the older history. We also show that there is a spectrum of implementations of the LRFU that again subsumes the LRU and LFU implementations. This spectrum is again dictated by how much weight is given to recent and older histories and the time complexity of the implementations lies between O(1) (the time complexity of LRU) and O(log₂ n) (the time complexity of LFU), where n is the number of blocks in the buffer. Experimental results from trace-driven simulations show that the performance of the LRFU is at least competitive with that of previously known policies for the workloads we considered.
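
As a concrete illustration of how a single spectrum can cover both policies, the sketch below keeps one combined recency/frequency (CRF) value per block and updates it incrementally on each reference. The exponential weighting, the parameter value, and all names are assumptions made for this sketch rather than code from the paper.

```c
#include <math.h>

/* Per-block state: the time of the most recent reference and the CRF
 * (combined recency and frequency) value as of that reference. */
struct block_state {
    double last_ref;
    double crf;
};

static const double LAMBDA = 0.001;   /* illustrative: 0 behaves like LFU, larger values approach LRU */

/* Weight given to a reference that happened x time units ago. */
static double weight(double x)
{
    return pow(0.5, LAMBDA * x);
}

/* Called on every reference to a resident block: decay the stored value
 * to the present, then add the weight of the new reference. */
void on_reference(struct block_state *b, double now)
{
    b->crf = weight(now - b->last_ref) * b->crf + weight(0.0);
    b->last_ref = now;
}

/* Victim selection: evict the block whose CRF, decayed to 'now', is smallest.
 * A linear scan is shown for clarity; a heap over these values gives the
 * O(log n) bound mentioned above. */
int pick_victim(const struct block_state blocks[], int n, double now)
{
    int victim = 0;
    double min_crf = weight(now - blocks[0].last_ref) * blocks[0].crf;
    for (int i = 1; i < n; i++) {
        double crf = weight(now - blocks[i].last_ref) * blocks[i].crf;
        if (crf < min_crf) { min_crf = crf; victim = i; }
    }
    return victim;
}
```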


Measurement and Modeling of Computer Systems | 1999

On the existence of a spectrum of policies that subsumes the least recently used (LRU) and least frequently used (LFU) policies

Donghee Lee; Jongmoo Choi; Jong-Hun Kim; Sam H. Noh; Sang Lyul Min; Yookun Cho; Chong Sang Kim

We show that there exists a spectrum of block replacement policies that subsumes both the Least Recently Used (LRU) and the Least Frequently Used (LFU) policies. The spectrum is formed according to how much more weight we give to the recent history than to the older history, and is referred to as the LRFU (Least Recently/Frequently Used) policy. Unlike many previous policies that use limited history to make block replacement decisions, the LRFU policy uses the complete reference history of blocks recorded during their cache residency. Nevertheless, the LRFU requires only a few words for each block to maintain such history. This paper also describes an implementation of the LRFU that again subsumes the LRU and LFU implementations. The LRFU policy is applied to buffer caching, and results from trace-driven simulations show that the LRFU performs better than previously known policies for the workloads we considered. This point is reinforced by results from our integration of the LRFU into the FreeBSD operating system.
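
For readers who want the mechanics, a common way to formalize such a weighted reference history is the exponential form below; the notation and the exact functional form are given here as an illustrative assumption, not quoted from the paper.

```latex
% CRF of block b at time t, given its reference times t_1 < t_2 < ... < t_k:
% F weighs each past reference by how long ago it occurred.
\[
\mathrm{CRF}_t(b) \;=\; \sum_{i=1}^{k} F\bigl(t - t_i\bigr),
\qquad
F(x) \;=\; \Bigl(\tfrac{1}{2}\Bigr)^{\lambda x},\quad 0 \le \lambda \le 1 .
\]
% Incremental form: only the last reference time and the last CRF value
% need to be stored per block (the "few words" of history noted above).
\[
\mathrm{CRF}_t(b) \;=\; F(0) \;+\; F\bigl(t - t_{\mathrm{last}}\bigr)\,\mathrm{CRF}_{t_{\mathrm{last}}}(b).
\]
% \lambda \to 0 weighs all references equally (LFU-like);
% \lambda \to 1 makes the most recent reference dominate (LRU-like).
```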


Embedded Software | 2009

Disk schedulers for solid state drivers

Jaeho Kim; Yongseok Oh; Eunsam Kim; Jongmoo Choi; Donghee Lee; Sam H. Noh

In embedded systems and laptops, flash memory storage such as SSDs (Solid State Drives) has been gaining popularity due to its low energy consumption and durability. As SSDs are flash memory based devices, their performance behavior differs from that of magnetic disks. However, little attention has been paid to how to exploit SSDs from the viewpoint of the disk scheduling algorithm. In this paper, we first describe behaviors of SSDs that inspire us to design a new disk scheduler for the Linux operating system. Specifically, read service time is almost constant in an SSD, while write service time is not. Moreover, appropriate grouping of write requests eliminates any ordering-related restrictions and also maximizes write performance. From these observations, we propose two disk schedulers: IRBW-FIFO and IRBW-FIFO-RP. Both schedulers arrange write requests into bundles of an appropriate size, while read requests are scheduled independently. The IRBW-FIFO scheduler then provides complete FIFO ordering for each bundle of write requests and each individual read request, while the IRBW-FIFO-RP scheduler gives higher priority to read requests than to the bundles of write requests. We implement these schedulers in Linux 2.6.23, and the results of executing our set of benchmark programs show performance improvements of up to 17% over existing Linux disk schedulers.
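
A minimal sketch of the bundling logic described above, under assumed names and a fixed bundle size: writes accumulate into bundles that are dispatched FIFO as a unit, reads are dispatched individually, and the RP variant always lets pending reads go first. This is an illustration of the scheduling idea, not the actual Linux elevator code.

```c
#include <stdbool.h>
#include <stddef.h>

#define BUNDLE_SIZE 32              /* illustrative bundle size */

struct request { unsigned long sector; size_t len; bool is_write; };

/* Fixed-size queues for brevity; no overflow checks in this sketch. */
struct ssd_sched {
    struct request reads[256];  int nr_reads;    /* individual read requests (FIFO) */
    struct request writes[256]; int nr_writes;   /* writes accumulating into the current bundle */
    bool read_preference;                        /* false: IRBW-FIFO, true: IRBW-FIFO-RP */
};

void enqueue(struct ssd_sched *s, struct request r)
{
    if (r.is_write)
        s->writes[s->nr_writes++] = r;
    else
        s->reads[s->nr_reads++] = r;
}

/* Pick the next request to send to the SSD. Returns false if nothing is pending. */
bool dispatch(struct ssd_sched *s, struct request *out)
{
    bool bundle_ready = s->nr_writes >= BUNDLE_SIZE;

    /* RP variant: reads always go first. Plain variant: reads go first
     * only while no complete write bundle is waiting (an approximation
     * of FIFO ordering between reads and bundles). */
    if (s->nr_reads > 0 && (s->read_preference || !bundle_ready)) {
        *out = s->reads[0];
        for (int i = 1; i < s->nr_reads; i++) s->reads[i - 1] = s->reads[i];
        s->nr_reads--;
        return true;
    }
    if (bundle_ready || (s->nr_reads == 0 && s->nr_writes > 0)) {
        *out = s->writes[0];        /* drain the write bundle in FIFO order */
        for (int i = 1; i < s->nr_writes; i++) s->writes[i - 1] = s->writes[i];
        s->nr_writes--;
        return true;
    }
    return false;
}
```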


Measurement and Modeling of Computer Systems | 2000

Towards application/file-level characterization of block references: a case for fine-grained buffer management

Jongmoo Choi; Sam H. Noh; Sang Lyul Min; Yookun Cho

Two contributions are made in this paper. First, we show that system level characterization of file block references is inadequate for maximizing buffer cache performance. We show that a finer-grained characterization approach is needed. Though application level characterization methods have been proposed, this is the first attempt, to the best of our knowledge, to consider file level characterizations. We propose an Application/File-level Characterization (AFC) scheme where we detect on-line the reference characteristics at the application level and then at the file level, if necessary. The results of this characterization are used to employ appropriate replacement policies in the buffer cache to maximize performance. The second contribution is in proposing an efficient and fair buffer allocation scheme. Application or file level resource management is infeasible unless there exists an allocation scheme that is efficient and fair. We propose the ΔHIT allocation scheme that takes away a block from the application/file where the removal results in the smallest reduction in the number of expected buffer cache hits. Both the AFC and ΔHIT schemes are on-line schemes that detect and allocate as applications execute. Experiments using trace-driven simulations show that substantial performance improvements can be made. For single application executions the hit ratio increased an average of 13 percentage points compared to the LRU policy, with a maximum increase of 59 percentage points, while for multiple application executions, the increase is an average of 12 percentage points, with a maximum of 32 percentage points for the workloads considered.
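
A minimal sketch of the ΔHIT decision, assuming the per-partition marginal-hit estimates have already been computed from the detected reference characteristics (that estimation is the substance of the paper and is stubbed out here):

```c
#include <stddef.h>

/* One buffer partition, serving either an application or one of its files. */
struct partition {
    const char *name;           /* application or file this partition serves */
    int         nblocks;        /* buffer blocks currently allocated to it */
    double      marginal_hits;  /* estimated hits contributed by its least valuable block;
                                   how this is estimated depends on the detected reference
                                   pattern, so a precomputed value is assumed here */
};

/* Choose the partition to shrink by one block: the one whose least valuable
 * block is expected to contribute the fewest future hits. */
struct partition *pick_donor(struct partition parts[], size_t n)
{
    struct partition *donor = NULL;
    for (size_t i = 0; i < n; i++) {
        if (parts[i].nblocks == 0)
            continue;                     /* nothing left to take from this partition */
        if (donor == NULL || parts[i].marginal_hits < donor->marginal_hits)
            donor = &parts[i];
    }
    return donor;                         /* NULL if every partition is empty */
}
```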


IEEE Transactions on Computers | 2014

CLOCK-DWF: A Write-History-Aware Page Replacement Algorithm for Hybrid PCM and DRAM Memory Architectures

Soyoon Lee; Hyokyung Bahn; Sam H. Noh

Phase change memory (PCM) has emerged as one of the most promising technologies to incorporate into the memory hierarchy of future computer systems. However, PCM has two critical weaknesses that keep it from substituting for DRAM entirely. First, the number of write operations allowed to each PCM cell is limited. Second, the write access time of PCM is about 6-10 times slower than that of DRAM. To cope with this situation, hybrid memory architectures that use a small amount of DRAM together with PCM have been suggested. In this paper, we present a new memory management technique for a hybrid PCM and DRAM memory architecture that efficiently hides the slow write performance of PCM. Specifically, we aim to estimate future write references accurately and then absorb frequent memory writes into DRAM. To do this, we analyze the characteristics of memory write references and find two noticeable phenomena. First, using write history alone performs better than using both read and write history in estimating future write references. Second, the frequency characteristic is a better estimator than temporal locality in predicting future memory writes. Based on these two observations, we present a new page replacement algorithm called CLOCK-DWF (CLOCK with Dirty bits and Write Frequency) that significantly reduces the number of write operations that occur on PCM and also increases the lifespan of PCM memory.
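
The abstract does not spell out the bookkeeping, so the following is only an assumed rendering of the idea: a CLOCK hand sweeps DRAM frames, pages with recent or frequent writes stay in DRAM, and write-cold pages are demoted to PCM. Thresholds, fields, and the demotion path are all hypothetical.

```c
#include <stdbool.h>

#define DRAM_FRAMES 1024
#define HOT_WRITES  2              /* illustrative threshold for "write-hot" */

struct frame {
    bool     dirty;                /* written since the hand last passed */
    unsigned write_count;          /* crude write-frequency counter */
    int      page;                 /* resident page id, or -1 if free (assumes initialization) */
};

static struct frame dram[DRAM_FRAMES];
static int hand;                   /* CLOCK hand position */

/* Stub: a real system would copy the page's contents to PCM here. */
static void demote_to_pcm(int page) { (void)page; }

/* Find a DRAM frame for an incoming write-hot page by sweeping the CLOCK
 * hand: pages with recent or frequent writes are kept in DRAM, while
 * write-cold pages are demoted to PCM. */
int find_victim(void)
{
    for (;;) {
        struct frame *f = &dram[hand];
        hand = (hand + 1) % DRAM_FRAMES;

        if (f->page == -1)
            return (int)(f - dram);             /* free frame, use it directly */

        if (f->dirty) {
            /* Recently written: clear the bit, remember the write activity. */
            f->dirty = false;
            if (f->write_count < HOT_WRITES)
                f->write_count++;
        } else if (f->write_count > 0) {
            /* No recent write, but a history of writes: age it instead of evicting. */
            f->write_count--;
        } else {
            /* Neither recent nor frequent writes: demote to PCM and reuse. */
            demote_to_pcm(f->page);
            return (int)(f - dram);
        }
    }
}
```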


Embedded Software | 2007

Exploiting non-volatile RAM to enhance flash file system performance

In Hwan Doh; Jongmoo Choi; Donghee Lee; Sam H. Noh

Non-volatile RAM (NVRAM) such as PRAM (Phase-change RAM), FeRAM (Ferroelectric RAM), and MRAM (Magnetoresistive RAM) has characteristics of both non-volatile storage and random access memory (RAM). These forms of NVRAM are currently being developed by major semiconductor companies and are expected to be an everyday component in the near future. The advent of NVRAM may possibly bring about drastic changes to the system software landscape. In this work, we develop a new Flash memory based file system that exploits NVRAM in order to improve system performance. Specifically, we discuss the initial design and implementation of a file system that stores all metadata in NVRAM, while storing all file data in Flash memory. In so doing, we make two contributions. First, we present a model that analyzes the amount of NVRAM needed for a given Flash memory storage capacity. Experimentally, we verify that this model accurately captures NVRAM usage in a realistic environment. Second, we present quantitative experimental results that show how much performance gain is possible by exploiting NVRAM. Compared to YAFFS, a popular Flash memory based file system, we show that this file system requires only minimal time for mounting and that execution time improves by a maximum of 600% and an average of 437% for the realistic workloads that we considered.
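
The paper's NVRAM-sizing model is not reproduced here; the fragment below is only a back-of-envelope illustration of the kind of question such a model answers, with every constant (page size, per-page and per-file metadata overheads) chosen arbitrarily for the example.

```c
#include <stdio.h>
#include <stdint.h>

/* Illustrative constants; the real values depend on the file system's
 * on-NVRAM metadata layout. */
#define FLASH_PAGE_SIZE 2048u     /* bytes per flash page          */
#define META_PER_PAGE      8u     /* bytes of NVRAM per flash page */
#define META_PER_FILE    256u     /* bytes of NVRAM per file       */

/* Rough NVRAM requirement for a given flash capacity and file count. */
static uint64_t nvram_bytes_needed(uint64_t flash_bytes, uint64_t max_files)
{
    uint64_t pages = flash_bytes / FLASH_PAGE_SIZE;
    return pages * META_PER_PAGE + max_files * META_PER_FILE;
}

int main(void)
{
    /* Example: 1 GiB of flash and up to 10,000 files. */
    uint64_t need = nvram_bytes_needed(1ull << 30, 10000);
    printf("NVRAM needed: %llu KiB\n", (unsigned long long)(need / 1024));
    return 0;
}
```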


Modeling, Analysis, and Simulation of Computer and Telecommunication Systems | 2011

Characterizing Memory Write References for Efficient Management of Hybrid PCM and DRAM Memory

Soyoon Lee; Hyokyung Bahn; Sam H. Noh

In order to reduce the energy dissipation in the main memory of computer systems, phase change memory (PCM) has emerged as one of the most promising technologies to incorporate into the memory hierarchy. However, PCM has two critical weaknesses that keep it from substituting for DRAM entirely. First, the number of write operations allowed to each PCM cell is limited. Second, the write access time of PCM is about 6-10 times slower than that of DRAM. To cope with this situation, hybrid memory architectures that use a small amount of DRAM together with PCM memory have been suggested. In this paper, we present a new memory management technique for a hybrid PCM and DRAM memory architecture that efficiently hides the slow write performance of PCM. Specifically, we aim to estimate future write references accurately and then absorb most memory writes into DRAM. To do this, we analyze the characteristics of memory write references and find two noticeable phenomena. First, using write history alone performs better than using both read and write history in estimating future write references. Second, the frequency characteristic is a better estimator than temporal locality, but combining these two properties appropriately leads to even better results. Based on these two observations, we present a new page replacement algorithm called CLOCK-DWF (CLOCK with Dirty bits and Write Frequency) that significantly reduces the number of write operations that occur on PCM.


Embedded Software | 2007

Block recycling schemes and their cost-based optimization in nand flash memory based storage system

Jongmin Lee; Sunghoon Kim; Hunki Kwon; Choulseung Hyun; Seongjun Ahn; Jongmoo Choi; Donghee Lee; Sam H. Noh

Flash memory has many merits such as light weight, shock resistance, and low power consumption, but it also has limitations such as the erase-before-write property. To overcome these limitations and to use Flash memory efficiently as storage media in mobile systems, Flash memory based storage systems require special address mapping software called the FTL (Flash Translation Layer). Like cleaning in the Log-structured File System (LFS), the FTL often performs a merge operation for block recycling, and its efficiency affects the performance of the storage system. To reduce block recycling costs in NAND Flash memory based storage, we introduce another block recycling scheme that we call migration. Our cost models and experimental results show that cost-based selection of merge or migration for each block recycling can decrease block recycling costs and, therefore, improve the performance of Flash memory based storage systems. Also, we derive the macroscopic optimal migration/merge sequence that minimizes block recycling costs for each migration/merge combination period. Experimental results show that the performance of Flash memory based storage can be improved further by the macroscopic optimization compared to the simple cost-based selection.
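
The cost-based choice can be illustrated with simple page-copy and erase accounting. The expressions below are rough approximations written for this sketch, not the paper's cost models, and the per-operation costs are placeholder constants.

```c
#include <math.h>

#define PAGES_PER_BLOCK 64
#define COST_PAGE_COPY  250.0   /* read + program of one valid page, in microseconds (illustrative) */
#define COST_ERASE     1500.0   /* erase of one flash block, in microseconds (illustrative) */

/* Cost per free page reclaimed by merging a log block with its data block:
 * copy all valid pages of both into a fresh block, then erase both. */
double merge_cost_per_page(int valid_in_data, int valid_in_log)
{
    double cost  = (valid_in_data + valid_in_log) * COST_PAGE_COPY + 2 * COST_ERASE;
    double freed = 2.0 * PAGES_PER_BLOCK - (valid_in_data + valid_in_log);
    return freed > 0 ? cost / freed : HUGE_VAL;
}

/* Cost per free page reclaimed by migrating the log block's valid pages
 * into another log block and erasing it, deferring the full merge. */
double migration_cost_per_page(int valid_in_log)
{
    double cost  = valid_in_log * COST_PAGE_COPY + COST_ERASE;
    double freed = (double)PAGES_PER_BLOCK - valid_in_log;
    return freed > 0 ? cost / freed : HUGE_VAL;
}

/* Cost-based selection for recycling one log block. */
int should_migrate(int valid_in_data, int valid_in_log)
{
    return migration_cost_per_page(valid_in_log)
         < merge_cost_per_page(valid_in_data, valid_in_log);
}
```

Whichever option reclaims free pages at the lower cost is chosen for that recycling step; the macroscopic optimization described above additionally considers the sequence of such choices over a whole migration/merge period.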


Architectural Support for Programming Languages and Operating Systems | 2013

Regularities considered harmful: forcing randomness to memory accesses to reduce row buffer conflicts for multi-core, multi-bank systems

Heekwon Park; Seungjae Baek; Jongmoo Choi; Donghee Lee; Sam H. Noh

We propose a novel kernel-level memory allocator, called M3 (M-cube, Multi-core Multi-bank Memory allocator), that has the following two features. First, it introduces the notion of a memory container, defined as a unit of memory comprising the minimum number of page frames that covers all the banks of the memory organization, and exclusively assigns a container to each core so that every core achieves as much bank parallelism as possible. Second, it orchestrates page frame allocation so that the pages each thread accesses are dispersed randomly across multiple banks, randomizing each thread's access pattern. The development of M3 is based on a tool that we develop to fully understand the architectural characteristics of the underlying memory organization. Using an extension of this tool, we observe that an application that accesses pages in a random manner outperforms the same application accessing pages in a regular pattern such as sequential or identically ordered accesses. This is because randomized accesses reduce inter-thread access interference on the row buffers in memory banks. We implement M3 in Linux kernel version 2.6.32 on an Intel Xeon system with 16 cores and 32 GB of DRAM. Performance evaluation with various workloads shows that M3 improves overall performance for memory-intensive benchmarks by up to 85%, with an average of about 40%.
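
A rough sketch of the two ideas, assuming the bank index can be read from fixed physical-address bits (in practice those bits are what the authors' probing tool would discover): each core owns a container whose free frames are bucketed by bank, and allocation starts at a randomly chosen bank so a thread's pages spread across banks. This illustrates the concept only, not the kernel allocator itself.

```c
#include <stdint.h>
#include <stdlib.h>

#define NUM_BANKS      32     /* assumed: discovered by probing the machine */
#define BANK_SHIFT     13     /* assumed: lowest bit of the bank field in the physical address */
#define PAGES_PER_BANK 16     /* page frames a container keeps per bank */

/* Bank index of a physical address under the assumed address mapping. */
static unsigned bank_of(uint64_t phys_addr)
{
    return (phys_addr >> BANK_SHIFT) & (NUM_BANKS - 1);
}

/* A memory container: enough page frames to cover every bank, owned by one core. */
struct container {
    uint64_t free_frames[NUM_BANKS][PAGES_PER_BANK];  /* frame addresses, bucketed by bank */
    int      nfree[NUM_BANKS];
};

/* Add a free page frame to the container, bucketing it by its bank. */
void container_add(struct container *c, uint64_t frame_addr)
{
    unsigned b = bank_of(frame_addr);
    if (c->nfree[b] < PAGES_PER_BANK)
        c->free_frames[b][c->nfree[b]++] = frame_addr;
}

/* Allocate one page frame, starting the search at a randomly chosen bank so
 * that consecutive allocations by a thread are spread across the banks. */
uint64_t container_alloc(struct container *c)
{
    unsigned start = (unsigned)rand() % NUM_BANKS;
    for (unsigned i = 0; i < NUM_BANKS; i++) {
        unsigned b = (start + i) % NUM_BANKS;
        if (c->nfree[b] > 0)
            return c->free_frames[b][--c->nfree[b]];
    }
    return 0;   /* container exhausted; a real allocator would grab a new container */
}
```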

Collaboration


Dive into Sam H. Noh's collaborations.

Top Co-Authors

Donghee Lee
Seoul National University

Sang Lyul Min
Seoul National University

Yookun Cho
Seoul National University

Jaeho Kim
Seoul National University