
Publication

Featured research published by Hyeong Seog Kim.


ACM Transactions on Computer Systems | 2014

Optimizing the Block I/O Subsystem for Fast Storage Devices

Young Jin Yu; Dong In Shin; Woong Shin; Nae Young Song; Jae Woo Choi; Hyeong Seog Kim; Hyeonsang Eom; Heon Young Yeom

Fast storage devices are an emerging solution to satisfy data-intensive applications. They provide high transaction rates for DBMS, low response times for Web servers, instant on-demand paging for applications with large memory footprints, and many similar advantages for performance-hungry applications. In spite of the benefits promised by fast hardware, modern operating systems are not yet structured to take advantage of the hardware’s full potential. The software overhead caused by an OS, negligible in the past, adversely impacts application performance, lessening the advantage of using such hardware. Our analysis demonstrates that the overheads from the traditional storage-stack design are significant and cannot easily be overcome without modifying the hardware interface and adding new capabilities to the operating system. In this article, we propose six optimizations that enable an OS to fully exploit the performance characteristics of fast storage devices. With the support of new hardware interfaces, our optimizations minimize per-request latency by streamlining the I/O path and amortize per-request latency by maximizing parallelism inside the device. We demonstrate the impact on application performance through well-known storage benchmarks run against a Linux kernel with a customized SSD. We find that eliminating context switches in the I/O path decreases the software overhead of an I/O request from 20 microseconds to 5 microseconds and a new request merge scheme called Temporal Merge enables the OS to achieve 87% to 100% of peak device performance, regardless of request access patterns or types. Although the performance improvement by these optimizations on a standard SATA-based SSD is marginal (because of its limited interface and relatively high response times), our sensitivity analysis suggests that future SSDs with lower response times will benefit from these changes. The effectiveness of our optimizations encourages discussion between the OS community and storage vendors about future device interfaces for fast storage devices.
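
The temporal-merge idea can be illustrated with a small sketch. The following Python snippet shows requests being merged by arrival time rather than by address adjacency; the `TemporalMerger` class, its window size, and the `dispatch` callback are hypothetical illustrations, not the paper's kernel implementation.

```python
import time
from dataclasses import dataclass

@dataclass
class IORequest:
    op: str        # "read" or "write"
    sector: int
    length: int

class TemporalMerger:
    """Batch requests that arrive within a short time window and submit
    them to the device together, so the device can service them in
    parallel regardless of their access pattern or type."""

    def __init__(self, window_us=100):
        self.window = window_us / 1_000_000.0
        self.pending = []
        self.window_start = None

    def submit(self, req, dispatch):
        now = time.monotonic()
        if not self.pending:
            self.window_start = now
        self.pending.append(req)
        # Once the window expires, hand the whole batch to the device
        # as a single submission instead of one request at a time.
        if now - self.window_start >= self.window:
            dispatch(self.pending)
            self.pending = []
```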


ACM Transactions on Storage | 2011

Request Bridging and Interleaving: Improving the Performance of Small Synchronous Updates under Seek-Optimizing Disk Subsystems

Dongin Shin; Youngjin Yu; Hyeong Seog Kim; Hyeonsang Eom; Heon Young Yeom

Write-through caching in modern disk drives protects data in the event of power failures, as well as against certain disk errors, in a way that write-back caching does not. Host systems can achieve these benefits only at the price of significant performance degradation, especially for small disk writes. We present new block-level techniques to address the performance problem of write-through caching disks. Our techniques are strongly motivated by some interesting results observed when disk-level caching is turned off. By extending conventional request merging, request bridging increases the request size and amortizes the inherent delays in the disk drive across more bytes of data. Like sector interleaving, request interleaving rearranges requests to prevent the disk head from narrowly missing the target sector position, and thus reduces disk latency. We have evaluated our block-level approach using a variety of I/O workloads and shown that it increases disk I/O throughput by up to about 50%. For some real-world workloads, disk performance is comparable or even superior to that of using the write-back disk cache. In practice, our simple yet effective solutions achieve a better tradeoff between data reliability and disk performance when applied to write-through caching disks.
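
As a rough illustration of the bridging idea (a sketch under assumed semantics, not the paper's block-level implementation), the snippet below coalesces nearly adjacent requests when the gap between them is small enough that one larger transfer is cheaper than paying the per-request disk overhead twice.

```python
def bridge_requests(requests, max_gap_sectors=8):
    """requests: list of (start_sector, length) tuples, assumed to be
    writes whose gap sectors can be filled (e.g. with their current
    on-disk contents) before issuing the merged request."""
    merged = []
    for start, length in sorted(requests):
        if merged:
            prev_start, prev_len = merged[-1]
            gap = start - (prev_start + prev_len)
            if 0 <= gap <= max_gap_sectors:
                # Bridge the gap: one larger request spanning both.
                merged[-1] = (prev_start, (start + length) - prev_start)
                continue
        merged.append((start, length))
    return merged

# Example: two 8-sector writes separated by a 4-sector gap become one
# 20-sector request.
print(bridge_requests([(0, 8), (12, 8)]))  # [(0, 20)]
```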


Challenges of Large Applications in Distributed Environments | 2005

A user-transparent recoverable file system for distributed computing environment

Hyeong Seog Kim; Heon Young Yeom

In a distributed computing environment, particularly a grid, fault tolerance is one of the core functionalities the system should provide. MPICH-GF is such a resilient system, designed to resist external or internal failures, especially for message-passing applications in the grid environment. However, it does not withstand the loss of a valuable resource: files. In a normal case, users open files and write data into them asynchronously, and checkpointing is initiated with no regard to the state of the process context. Therefore, the checkpointing system should automatically recognize the running process and protect its open files transparently. We have implemented a recoverable file system, named ReFS, which is incorporated into our fault-tolerant system MPICH-GF. ReFS is a versioning-like file system that provides middleware libraries with a system call interface to protect specific files at a given time. This prevents applications from processing their jobs with corrupted data and producing incorrect results in case of failures. We have focused not only on the reliability of the system but also on reducing the inevitable overheads. This paper describes the design and implementation of ReFS and validates its behavior. We have developed ReFS on Linux, based on Ext2.
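
The versioning idea can be sketched at user level as follows; this is an illustrative approximation, not the actual ReFS implementation, and the checkpoint file naming is a made-up convention.

```python
import os
import shutil

class RecoverableFile:
    """Keep a copy of a file taken at checkpoint time so that, after a
    failure, its contents can be rolled back to a state consistent with
    the restored process checkpoint."""

    def __init__(self, path):
        self.path = path
        self.snapshot = None

    def checkpoint(self):
        # Hypothetical naming scheme for the protected version.
        self.snapshot = self.path + ".ckpt"
        shutil.copy2(self.path, self.snapshot)

    def recover(self):
        # Roll the file back so the restarted application does not
        # continue from partially written (potentially corrupt) data.
        if self.snapshot and os.path.exists(self.snapshot):
            shutil.copy2(self.snapshot, self.path)
```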


KSII Transactions on Internet and Information Systems | 2010

ELiSyR: Efficient, Lightweight and Sybil-Resilient File Search in P2P Networks

Hyeong Seog Kim; Eunjin Jung; Heon Young Yeom

Peer-to-peer (P2P) networks consume the most bandwidth in the current Internet, and file sharing accounts for the majority of P2P traffic. Thus it is important for a P2P file sharing application to be efficient in bandwidth consumption. Bandwidth consumption proportional to the downloaded file sizes is inevitable, but the bandwidth spent on file search and on bad downloads, e.g., wrong, corrupted, or malicious file downloads, is overhead. In this paper, we aim to reduce these overheads even in the presence of a high volume of malicious users and their bad files. Sybil attacks are an example of such a hostile environment: a Sybil attacker creates a large number of identities (Sybil nodes) and unfairly influences the system. When a large portion of the system is subverted, either in terms of the number of users or the number of files shared in the system, the overheads due to bad downloads rapidly increase. We propose ELiSyR, a file search protocol that can tolerate such a hostile environment. ELiSyR uses social networks for P2P file search and finds benign files in 71% of searches even when more than half of the users are malicious. Furthermore, ELiSyR provides similar success with less bandwidth than other general defenses against Sybil attacks. We compare our algorithm to SybilGuard, SybilLimit and EigenTrust in terms of bandwidth consumption and the likelihood of bad downloads. Our algorithm shows lower bandwidth consumption, similar chances of bad downloads, and a fairer distribution of computation load than these general defenses. In return, our algorithm takes more rounds of search. However, the time required for search is usually much less than the time required for downloads, so the delay in search is justifiable compared to the cost of bad downloads and subsequent re-search and re-downloads.
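
A bounded search over a social graph can be sketched as below. This only illustrates why social links limit Sybil influence (queries forwarded along friend edges rarely cross the few attack edges into the Sybil region); the walk length, round count, and graph interface (a networkx-style graph with a "files" node attribute) are assumptions, not the actual ELiSyR protocol.

```python
import random
import networkx as nx

def social_search(graph: nx.Graph, start, wanted, rounds=5, walk_len=10):
    """Forward the query along friend (social) edges only, for a bounded
    number of hops per round; return a node that holds the wanted file,
    or None if no round succeeds."""
    for _ in range(rounds):
        node = start
        for _ in range(walk_len):
            if wanted in graph.nodes[node].get("files", set()):
                return node
            neighbors = list(graph.neighbors(node))
            if not neighbors:
                break
            node = random.choice(neighbors)
    return None
```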


Ubiquitous Computing | 2012

Systematic approach of using power save mode for cloud data processing services

Hyeong Seog Kim; Dongin Shin; Youngjin Yu; Hyeonsang Eom; Heon Young Yeom

Energy efficiency is becoming a key issue for IT operators and data centres. To provide a Power Save Mode (PSM) for data processing applications in such environments, powering down nodes as a scaling-down technique is an effective step toward energy proportionality. Existing PSM solutions take a full replication approach and are inefficient in overall energy management. We propose an efficient replica redistribution algorithm, and our experimental results show that our system significantly reduces network usage and elapsed time, with only a slight increase in running time compared with the full replication approach.
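
The gain over full replication can be illustrated with a small sketch: instead of re-replicating every block held by the powered-down nodes, only blocks that would otherwise lose their last live replica are restored. The data-layout representation below is an assumption for illustration, not the paper's algorithm.

```python
def blocks_to_restore(node_blocks, suspended):
    """node_blocks: dict mapping node -> set of block ids it stores.
    suspended: set of nodes entering power save mode.
    Returns the block ids that must be re-replicated because no live
    node would hold a copy once the suspended nodes power down."""
    live_blocks = set()
    for node, blocks in node_blocks.items():
        if node not in suspended:
            live_blocks |= blocks
    lost = set()
    for node in suspended:
        lost |= node_blocks[node] - live_blocks
    return lost
```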


Cluster Computing and the Grid | 2008

A Task Pipelining Framework for e-Science Workflow Management Systems

Hyeong Seog Kim; In Soon Cho; Heon Young Yeom

A workflow manager is a useful tool that brings the power of computational grid resources to the desktop and allows users to conveniently put together and run their own scientific workflows. In existing workflow systems, individual tasks wait for input to become available, perform computation, and produce output. Behind the scenes, the workflow manager automates data movement from the data-generating task to the data-consuming task; this process is referred to as file staging. Generally, stage-in, processing, and stage-out are executed serially, and staging is treated by traditional workflow systems as a trivial step. However, as data sizes grow exponentially and more and more scientific workflows require multiple processing steps to obtain the desired output, we argue that data movement will account for a large portion of the overall running time and that staging will become a challenging step in scientific workflow systems. In this paper, we propose a task pipelining framework for various e-Science workflow systems. Our system is a flexible and efficient tool that helps workflow systems overlap the execution of adjacent tasks by enabling the pipelining of the intermediate data transfer between interconnected tasks.
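
The overlap between adjacent tasks can be sketched with a simple producer/consumer pipeline; the chunked task interfaces below are hypothetical, and the real framework operates on staged files between grid tasks rather than in-process callables.

```python
import queue
import threading

def run_pipelined(producer_task, consumer_task, chunk_size=1 << 20):
    """Let the downstream task start consuming intermediate data as soon
    as the upstream task emits it, instead of serializing the whole
    stage-out / stage-in / process cycle."""
    buffer = queue.Queue(maxsize=16)

    def produce():
        for chunk in producer_task(chunk_size):   # yields output chunks
            buffer.put(chunk)
        buffer.put(None)                          # end-of-stream marker

    worker = threading.Thread(target=produce)
    worker.start()
    while True:
        chunk = buffer.get()
        if chunk is None:
            break
        consumer_task(chunk)                      # consume incrementally
    worker.join()
```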


Journal of Information Science and Engineering | 2011

Backup Metadata As Data: DPC-Tolerance to Commodity File System

Young Jin Yu; Dong In Shin; Hyeong Seog Kim; Hyeonsang Eom; Heon Young Yeom

Backup Metadata As Data (MAD) is a user-level solution that enables commodity file systems to replicate their critical metadata and to recover from disk pointer corruptions. More specifically, it extracts disk pointers from the file system and saves them as user data. When some data blocks become inaccessible due to pointer corruptions, Backup MAD restores access paths to them either by copying the blocks to another file system or by directly updating the on-disk structures of the file system. The latter technique helps Backup MAD restore lost files faster than any other recovery solution because data blocks are not moved during restoration. Also, because the technique relies on disk pointers extracted from a consistent file system state, it can rescue up to 50% more files than a scan-based recovery tool that infers block dependencies from a corrupted partition. We demonstrate the effectiveness of our technique with two real implementations, MAD-NTFS and MAD-ext2. Backup MAD enhances the dependability of a file system by protecting disk pointers on its behalf.
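
The "metadata as data" idea can be sketched as follows: disk pointers extracted from a consistent file-system state are persisted as an ordinary user file, and a lost file is later recovered by copying the blocks those pointers reach. The record format and the `read_block`/`write_file` callbacks are illustrative assumptions, not the MAD-NTFS or MAD-ext2 interfaces.

```python
import json

def backup_pointers(extracted, backup_path):
    """Persist extracted disk pointers (e.g. {inode: [block numbers]})
    as plain user data so they survive later pointer corruption."""
    with open(backup_path, "w") as f:
        json.dump(extracted, f)

def restore_by_copy(backup_path, read_block, write_file):
    """Rebuild each lost file by reading its data blocks, located via
    the backed-up pointers, and writing them to a healthy file system."""
    with open(backup_path) as f:
        pointers = json.load(f)
    for inode, blocks in pointers.items():
        data = b"".join(read_block(block) for block in blocks)
        write_file(inode, data)
```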


International Conference on Principles of Distributed Systems | 2008

Load-Balanced and Sybil-Resilient File Search in P2P Networks

Hyeong Seog Kim; Eunjin Jung; Heon Young Yeom

We present a file search algorithm that is not only Sybil-resilient but also load-balanced among users. We evaluate our algorithm in terms of bandwidth consumption and the likelihood of bad downloads. In both metrics, our algorithm shows balanced overhead among users and a small chance of bad downloads with low bandwidth consumption.


International Green Computing Conference and Workshops | 2011

DASCA: Data Aware Scaling Down to provide power proportionality for distributed data processing frameworks

Hyeong Seog Kim; Dongin Shin; Youngjin Yu; Hyeonsang Eom; Heon Young Yeom

Distributed systems have led to the adoption of cloud computing concepts among countless enterprises. A large number of companies have already benefited from delegating IT services to cloud service providers. At the same time, interest in energy efficiency has dramatically increased, and energy efficiency in large distributed systems is a big concern for system engineers. In addition, the proliferation of distributed data processing frameworks such as MapReduce has led to a vast amount of research and practice. In this paper, we are particularly interested in providing energy proportionality for MapReduce. To this end, we propose Data Aware Scaling Down (DASCA), a scaling-down framework for MapReduce. There are two problems we must address in order to support scaling down for MapReduce. The first is to choose a proper set of nodes to suspend, which we call the candidate set. The second is to minimize the replica redistribution that occurs during the initiation of power save mode. To address both problems, we exploit the data awareness of the MapReduce framework: for the first, we provide two greedy algorithms that exploit this data awareness; for the second, we propose locality-aware replica redistribution to efficiently redistribute the lost replicas while preserving replica availability and the performance of distributed processing.
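
One way to illustrate data-aware candidate-set selection is a greedy heuristic that repeatedly suspends the node whose loss would orphan the fewest blocks. This is a sketch of the general idea under assumed data structures, not the paper's actual algorithms.

```python
def choose_candidate_set(node_blocks, num_to_suspend):
    """node_blocks: dict mapping node -> set of block ids it stores.
    Greedily pick nodes to suspend, preferring nodes whose blocks still
    have replicas elsewhere, so data availability is preserved."""
    active = {node: set(blocks) for node, blocks in node_blocks.items()}
    replica_count = {}
    for blocks in active.values():
        for block in blocks:
            replica_count[block] = replica_count.get(block, 0) + 1

    suspended = []
    for _ in range(num_to_suspend):
        # Cost of suspending a node = number of blocks for which it
        # holds the last remaining replica.
        def cost(node):
            return sum(1 for b in active[node] if replica_count[b] == 1)

        victim = min(active, key=cost)
        suspended.append(victim)
        for block in active.pop(victim):
            replica_count[block] -= 1
    return suspended
```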


Grid Computing | 2005

Dynamic failure management for parallel applications on grids

Hyungsoo Jung; Dongin Shin; Hyeong Seog Kim; Hyuck Han; Inseon Lee; Heon Young Yeom

The computational grid, as it is today, is vulnerable to node failures, and the probability of a node failure grows rapidly as the size of the grid increases. There have been several attempts to provide fault tolerance using checkpointing and message logging in conjunction with the MPI library. However, the grid itself should take an active role in dealing with failures. We propose a dynamically reconfigurable architecture in which applications can regroup in the face of a failure. The proposed architecture removes the single point of failure from computational grids and provides flexibility in terms of grid configuration.

Collaboration

Top co-authors of Hyeong Seog Kim.

Heon Young Yeom, Seoul National University
Hyeonsang Eom, Seoul National University
Dongin Shin, Seoul National University
Youngjin Yu, Seoul National University
Dong In Shin, Seoul National University
Jae Woo Choi, Seoul National University
Young Jin Yu, Seoul National University
Eunsung Kim, Seoul National University