
Publication


Featured research published by Young Ik Eom.


Design, Automation and Test in Europe | 2009

KAST: K-Associative Sector Translation for NAND flash memory in real-time systems

Hyunjin Cho; Dongkun Shin; Young Ik Eom

Flash memory is a good candidate for the storage device in real-time systems due to its non-fluctuating performance, low power consumption, and high shock resistance. However, garbage collection of invalid pages in flash memory can cause a long blocking time. Moreover, under current flash management techniques, the worst-case blocking time is significantly longer than the best-case blocking time. In this paper, we propose a novel flash translation layer (FTL), called KAST, in which the user can configure the maximum log block associativity to control the worst-case blocking time. Performance evaluation using simulations shows that KAST outperforms current FTL schemes while guaranteeing that the longest blocking time is shorter than the specified value.
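The key tunable, the maximum log block associativity K, can be illustrated with a toy model (hypothetical names and page counts, not the paper's actual FTL): bounding the number of log blocks attached to a data block bounds the number of blocks that any merge must touch, and hence the worst-case blocking time.

```python
class KAssociativeFTL:
    """Toy sketch of K-associative sector translation (illustrative only)."""
    PAGES_PER_BLOCK = 4   # hypothetical log-block capacity

    def __init__(self, k):
        self.k = k        # maximum number of log blocks per data block
        self.logs = {}    # data block -> list of log blocks (page lists)

    def write(self, data_block, page):
        blocks = self.logs.setdefault(data_block, [])
        if blocks and len(blocks[-1]) < self.PAGES_PER_BLOCK:
            blocks[-1].append(page)       # free slot in the current log block
            return "logged"
        if len(blocks) == self.k:
            # At most K log blocks must be merged with the data block,
            # so the blocking time is bounded by the configured associativity.
            self.logs[data_block] = [[page]]
            return "merged"
        blocks.append([page])             # allocate a new log block
        return "new-log-block"
```

With K = 2 and four pages per block, the ninth write to the same data block triggers a merge of at most two log blocks; a larger K defers merges at the cost of a longer worst-case merge.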


IEEE Transactions on Consumer Electronics | 2014

Reducing excessive journaling overhead with small-sized NVRAM for mobile devices

Jung-Hoon Kim; Changwoo Min; Young Ik Eom

Journaling techniques are widely used to guarantee file system consistency of battery-powered mobile devices such as smartphones and tablets. In a journaling file system, system recovery is facilitated by first writing updated data blocks to a journal area and then periodically writing them to their home locations. However, these duplicated writes degrade the performance and shorten the lifetime of NAND flash storage in mobile devices. In particular, a lightweight database library, which is mainly used to manage application data in mobile devices, is a major cause of excessive journaling because it frequently triggers costly file synchronization to guarantee the atomicity of transactional execution, and thus generates a significant amount of synchronous random write traffic. This paper presents a novel journaling scheme, called Delta Journaling (DJ), to resolve this problem efficiently by using small-sized nonvolatile random access memory (NVRAM). DJ is based on a unique update pattern found in mobile devices, where file system updates are mostly very small. By exploiting the byte-addressable and nonvolatile characteristics of NVRAM, DJ stores a journal block as a compressed delta in the small-sized NVRAM only when the compressed delta is small enough. Experimental results show that DJ outperforms a traditional journaling file system by up to 16.8 times for synthetic workloads. For a real-world workload, it enhances transaction throughput by 25.5% and 29.2% in ordered and journal modes, respectively, with only 16 MB of NVRAM. Also, DJ enhances the lifetime of NAND flash storage by eliminating almost all journal writes without any loss of reliability.
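The central decision in DJ, keeping a journal entry in NVRAM only when its compressed delta is small enough, can be sketched as follows (the XOR delta, zlib compression, and the 64-byte threshold are illustrative assumptions, not the paper's exact design):

```python
import zlib

NVRAM_DELTA_LIMIT = 64   # hypothetical per-entry budget, in bytes

def journal_block(old_block: bytes, new_block: bytes):
    """Decide where a journal entry goes: a compressed delta in NVRAM
    when the update is tiny, or an ordinary block-sized journal write
    otherwise. Sketch only; thresholds and encoding are assumptions."""
    # XOR of old and new contents is mostly zeros for small updates,
    # so it compresses extremely well.
    delta = bytes(a ^ b for a, b in zip(old_block, new_block))
    compressed = zlib.compress(delta)
    if len(compressed) <= NVRAM_DELTA_LIMIT:
        return ("nvram", compressed)      # small delta -> NVRAM journal
    return ("journal", new_block)         # large update -> block journal
```

A one-byte change to a 4 KB block yields a delta of almost all zeros and lands in NVRAM; a rewrite of the whole block falls back to the regular journal path.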


Grid Computing | 2012

VMMB: Virtual Machine Memory Balancing for Unmodified Operating Systems

Changwoo Min; Inhyeok Kim; Taehyoung Kim; Young Ik Eom

Virtualization technology has been widely adopted in Internet hosting centers and cloud-based computing services, since it reduces the total cost of ownership by sharing hardware resources among virtual machines (VMs). In a virtualized system, a virtual machine monitor (VMM) is responsible for allocating physical resources such as CPU and memory to individual VMs. Whereas CPU and I/O devices can be shared among VMs in a time-sharing manner, main memory is not amenable to such multiplexing. Moreover, it is often the primary bottleneck in achieving higher degrees of consolidation. In this paper, we present VMMB (Virtual Machine Memory Balancer), a novel mechanism to dynamically monitor the memory demand and periodically re-balance the memory among the VMs. VMMB accurately measures the memory demand with low overhead and effectively allocates memory based on the memory demand and the QoS requirement of each VM. It is applicable even to guest OSes whose source code is not available, since VMMB does not require modifying the guest kernel. We implemented our mechanism on Linux and evaluated it on synthetic and realistic workloads. Our experiments show that VMMB can improve the performance of VMs that suffer from insufficient memory allocation by up to 3.6 times, with low performance overhead (below 1%) for monitoring memory demand.
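A simplified sketch of demand- and QoS-aware rebalancing in the spirit of VMMB (the allocation policy shown is illustrative, not the paper's actual algorithm): satisfy each VM's estimated demand when the host has enough memory, and otherwise divide memory in proportion to QoS weight.

```python
def rebalance(total_mb, vms):
    """Allocate total_mb of host memory across VMs.

    Each VM is a dict with an estimated 'demand' in MB and a QoS
    'weight'. Hypothetical policy: meet demands and distribute any
    spare by weight; under shortage, split everything by weight.
    """
    demand = sum(vm["demand"] for vm in vms)
    wsum = sum(vm["weight"] for vm in vms)
    if demand <= total_mb:
        spare = total_mb - demand
        # Everyone gets their demand plus a weighted share of the spare.
        return [vm["demand"] + spare * vm["weight"] / wsum for vm in vms]
    # Shortage: allocate purely in proportion to QoS weight.
    return [total_mb * vm["weight"] / wsum for vm in vms]
```

In a real balancer the 'demand' input would come from low-overhead monitoring inside the VMM, which is the part VMMB contributes; the division policy above is only a placeholder.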


IEEE Transactions on Consumer Electronics | 2013

Virtual memory partitioning for enhancing application performance in mobile platforms

Geunsik Lim; Changwoo Min; Young Ik Eom

Recently, the amount of software running on smart mobile devices has been gradually increasing due to the introduction of application stores. An application store is a type of digital distribution platform for application software, provided as a component of the operating system on a smartphone or tablet. Mobile devices have limited memory capacity and, unlike server and desktop systems, their mobility means they lack memory slots for expanding that capacity. The low memory killer (LMK) and out-of-memory killer (OOMK) are widely used memory management solutions in mobile systems; they forcibly terminate applications when the available physical memory becomes insufficient. In addition, before forced termination, the memory shortage incurs thrashing and fragmentation, slowing down application performance. Although the existing page reclamation mechanism is designed to secure available memory, it can seriously degrade user responsiveness due to thrashing. Memory management therefore remains important, especially in mobile devices with small memory capacity. This paper presents a new memory partitioning technique that resolves the deterioration of the existing application life cycle induced by LMK and OOMK. It provides a completely isolated virtual memory node at the operating system level. Evaluation results demonstrate that the proposed method improves application execution time under memory shortage, compared with methods in previous studies.


International Performance Computing and Communications Conference | 2008

Adaptive Access Control Scheme Utilizing Context Awareness in Pervasive Computing Environments

Jung Hwan Choi; Dong Hyun Kang; Hyunsu Jang; Young Ik Eom

In pervasive computing environments, where various types of information are publicly owned and multiple users access networks via various networked devices anytime and anywhere, access control that grants permission only to authorized users is essential for secure information access. Context awareness refers to the idea that computers can both sense and react based on various contexts in their environments. Recently, context awareness has been utilized in many access control schemes to guarantee dynamic access control according to the current context, and various such schemes have been proposed. However, previous studies have difficulty describing conditions for assigning roles and modifying permissions. They also simply consider assigning roles or modifying permissions, rather than providing detailed access control algorithms such as role delegation or role revocation. In this paper, we propose an adaptive access control scheme utilizing context awareness in pervasive computing environments. We design an adaptive access control model based on the traditional RBAC (Role-Based Access Control) model and present an adaptive access control scheme that guarantees dynamic user and permission assignment according to changes of context. In this scheme, we define context requirements in each table, enabling a more detailed description, and we guarantee dynamic access control via various access control algorithms.


International Conference on Parallel Architectures and Compilation Techniques | 2013

DANBI: dynamic scheduling of irregular stream programs for many-core systems

Changwoo Min; Young Ik Eom

The stream programming model has received a lot of interest because it naturally exposes task, data, and pipeline parallelism. However, most prior work has focused on static scheduling of regular stream programs; static scheduling cannot handle irregular applications, and the load imbalance it causes limits scalability on many-core systems. In this paper, we introduce the DANBI programming model, which supports irregular stream programs, and propose dynamic scheduling techniques. Scheduling irregular stream programs is very challenging, and load imbalance becomes a major hurdle to achieving scalability. Our dynamic load-balancing scheduler exploits producer-consumer relationships already expressed in the stream program to achieve scalability. Moreover, it effectively avoids the thundering-herd problem and dynamically adapts to load imbalance in a probabilistic manner. It surpasses prior static stream scheduling approaches, which are vulnerable to load imbalance, as well as prior dynamic stream scheduling approaches, which place many restrictions on supported program types, the scope of dynamic scheduling, and preserving data ordering. Our experimental results on a 40-core server show that DANBI achieves almost linear scalability and outperforms state-of-the-art parallel runtimes by up to 2.8 times.


International Conference on Control, Automation and Systems | 2010

Concurrent Multipath Transfer using SCTP multihoming over heterogeneous network paths

Taehun Kim; Jongwook Lee; Young Ik Eom

The number of multi-homed hosts with multiple network interfaces has been increasing due to cheaper network interface devices and network diversification. Concurrent Multipath Transfer (CMT) using SCTP multihoming is a mechanism that simultaneously transmits data via multiple end-to-end paths, well suited to today's network environment. Previous studies on CMT have assumed that each path uses a homogeneous network. In this work, CMT performance was examined when each path of a multi-homed host used a heterogeneous network. CMT performance was found to decline due to the different bandwidths of the paths. The receive-buffer blocking behind this decline was demonstrated by modeling with a time-line diagram. To prevent degradation in CMT performance, a dynamic CMT-enable algorithm was proposed that transmits data through only one path when CMT performance is low.
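The dynamic CMT-enable decision can be sketched as follows (the bandwidth-ratio test and the threshold value are illustrative assumptions; the paper's algorithm decides based on observed CMT performance, not a fixed ratio):

```python
def choose_paths(paths, disparity_threshold=4.0):
    """Pick the transmission path set for a multi-homed SCTP association.

    When the bandwidth disparity between the fastest and slowest path
    exceeds a threshold, receive-buffer blocking makes multipath transfer
    counterproductive, so fall back to single-path transfer on the
    fastest path. Sketch only; the threshold is a hypothetical stand-in
    for a measured performance criterion.
    """
    bws = [p["bandwidth"] for p in paths]
    if max(bws) / min(bws) > disparity_threshold:
        best = max(paths, key=lambda p: p["bandwidth"])
        return [best]          # CMT disabled: transmit on one path only
    return list(paths)         # CMT enabled: transmit on all paths
```

For example, pairing a 100 Mbps path with a 10 Mbps path trips the fallback, while two paths of comparable bandwidth keep CMT enabled.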


IEEE Conference on Mass Storage Systems and Technologies | 2015

Improving performance by bridging the semantic gap between multi-queue SSD and I/O virtualization framework

Tae Yong Kim; Dong Hyun Kang; Dongwoo Lee; Young Ik Eom

Virtualization has become one of the most helpful techniques, and today it is prevalent in several computing environments including desktops, data centers, and enterprises. However, an I/O scalability issue in virtualized environments still needs to be addressed because I/O layers are implemented to be oblivious to the I/O behaviors of virtual machines (VMs). In particular, when a multi-queue solid state drive (SSD) is used as secondary storage, a semantic gap arises that degrades the overall performance of each VM by up to 74%. This is due to two key problems. First, the multi-queue SSD increases the likelihood of lock contention. Second, even though both the host machine and the multi-queue SSD provide multiple I/O queues for I/O parallelism, the existing Virtio-Blk-Data-Plane supports only one I/O queue, served by a single I/O thread, for submitting all I/O requests. In this paper, we propose a novel approach, including the design of virtual CPU (vCPU)-dedicated queues and I/O threads, which efficiently distributes the lock contention and addresses the parallelism issue of Virtio-Blk-Data-Plane in virtualized environments. Our approach allocates a dedicated queue and an I/O thread for each vCPU to reduce the semantic gap. We implemented our approach on Linux 3.17, modifying both the Virtio-Blk frontend driver of the guest OS and the Virtio-Blk backend driver of Quick Emulator (QEMU) 2.1.2. Our experimental results with various I/O traces clearly show that our design improves I/O operations per second (IOPS) in virtualized environments by up to 167% over existing QEMU.
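The per-vCPU queue and I/O thread design can be illustrated with a small sketch (plain Python threads standing in for QEMU I/O threads; all names are hypothetical): each vCPU submits only to its own queue, served by its own worker, so submissions from different vCPUs never contend on a shared lock.

```python
import queue
import threading

class PerVcpuQueues:
    """Toy model of vCPU-dedicated I/O queues and threads (not QEMU code)."""

    def __init__(self, n_vcpus, submit):
        # One queue and one dedicated worker thread per vCPU.
        self.queues = [queue.Queue() for _ in range(n_vcpus)]
        self.threads = [
            threading.Thread(target=self._serve, args=(q, submit), daemon=True)
            for q in self.queues
        ]
        for t in self.threads:
            t.start()

    def submit(self, vcpu, request):
        # Different vCPUs touch different queues, so they never share a lock.
        self.queues[vcpu].put(request)

    def _serve(self, q, submit):
        # Dedicated I/O thread: drain this vCPU's queue and submit requests.
        while True:
            req = q.get()
            submit(req)
            q.task_done()
```

In the real design each backend queue would also map onto one of the SSD's hardware queues, which is what closes the semantic gap between guest, host, and device.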


IEEE Transactions on Consumer Electronics | 2015

Effective flash-based SSD caching for high performance home cloud server

Dongwoo Lee; Changwoo Min; Young Ik Eom

In the home cloud environment, the storage performance of home cloud servers, which govern connected devices and provide resources with virtualization features, is critical to the end-user experience. To improve the storage performance of virtualized home cloud servers in a cost-effective manner, caching schemes using flash-based solid state drives (SSDs) have been widely studied. Although previous studies successfully narrowed the speed gap between memory and hard disk drives, they focused only on how to manage the cache space and paid less attention to using it efficiently in light of the characteristics of flash-based SSDs. Moreover, the SSD is typically used as a read-only cache due to two well-known limitations: slow writes and limited lifespan. Since storage access in virtual machines is performed in a more complex and costly manner, these limitations affect storage performance more significantly. This paper proposes a novel SSD caching scheme and virtual disk image format, named sequential virtual disk (SVD), for achieving high-performance home cloud environments. The proposed techniques are based on the workload characteristics, in which synchronous random writes dominate, while taking into account the characteristics of flash memory and the storage stack of virtualized systems. Unlike previous studies, the SSD is used as a read-write cache in the proposed scheme to effectively mitigate the performance degradation of synchronous random writes. The prototype was evaluated with realistic workloads, showing that the developed scheme improves storage access performance by 21% to 112% while reducing the number of erasures on the SSD by about 56% on average.


International Conference on Consumer Electronics | 2014

Reducing excessive journaling overhead in mobile devices with small-sized NVRAM

Jung-Hoon Kim; Changwoo Min; Young Ik Eom

Excessive journaling degrades the performance and shortens the lifetime of NAND flash storage in mobile devices. We propose a novel journaling scheme that resolves this problem by using small-sized NVRAM efficiently. Experimental results show that our proposed scheme outperforms EXT4 by up to 16.8 times for synthetic workloads. Also, for the TPC-C SQLite benchmark, it enhances transaction throughput by 20% and reduces the number of journal writes by 58% with only 16 MB of NVRAM.
