Publication


Featured research published by Hyunchan Park.


International Conference on Consumer Electronics | 2011

Performance enhancement of I/O scheduler for Solid State Devices

Seungyup Kang; Hyunchan Park; Chuck Yoo

Owing to their benefits, solid-state drives (SSDs) have been called a pivotal technology for data storage systems. However, current device-level I/O schedulers are not optimized for SSDs. In this paper, we suggest a new I/O scheduler, called STB, to exploit the performance potential of SSDs. Our STB scheduler categorizes I/O requests into two groups and sets a timer on each request. We implement the STB scheduler in Linux 2.6.30, and a set of benchmark programs shows that STB improves bandwidth by up to 30% while keeping the response time of each request low.
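
The abstract only sketches the mechanism, so the following Python model is offered as an illustration rather than the authors' Linux 2.6.30 code: it shows how an I/O scheduler can categorize requests into two groups and attach a timer to each request so that nothing waits past a deadline. The group split (reads vs. writes), the deadline values, and the dispatch preference are assumptions, not details taken from the paper.

```python
# Illustrative two-group I/O scheduler with a timer (deadline) per request.
# This is NOT the STB implementation; the group split (reads vs. writes), the
# deadline values, and the dispatch preference are assumptions.
import heapq
import itertools

READ_DEADLINE_MS = 50     # assumed timer for the read group
WRITE_DEADLINE_MS = 200   # assumed timer for the write group

class TwoGroupScheduler:
    def __init__(self):
        self._counter = itertools.count()      # tie-breaker for equal deadlines
        self._queues = {"read": [], "write": []}

    def submit(self, op, sector, now_ms):
        """Queue a request in its group and start its timer."""
        deadline = now_ms + (READ_DEADLINE_MS if op == "read" else WRITE_DEADLINE_MS)
        heapq.heappush(self._queues[op], (deadline, next(self._counter), sector))

    def dispatch(self, now_ms):
        """Serve an expired request first; otherwise prefer the read group."""
        for group in ("read", "write"):
            queue = self._queues[group]
            if queue and queue[0][0] <= now_ms:        # timer expired
                return group, heapq.heappop(queue)[2]
        for group in ("read", "write"):                # nothing expired: reads first
            if self._queues[group]:
                return group, heapq.heappop(self._queues[group])[2]
        return None

sched = TwoGroupScheduler()
sched.submit("write", sector=4096, now_ms=0)
sched.submit("read", sector=128, now_ms=0)
print(sched.dispatch(now_ms=10))    # ('read', 128): reads preferred
print(sched.dispatch(now_ms=300))   # ('write', 4096): write timer has expired
```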


IEEE Transactions on Parallel and Distributed Systems | 2016

Storage SLA Guarantee with Novel SSD I/O Scheduler in Virtualized Data Centers

Hyunchan Park; Seehwan Yoo; Cheol Ho Hong; Chuck Yoo

Service level agreements (SLAs) for storage performance in virtualized systems are difficult to guarantee, because different consolidated virtual machines have their own performance requirements. Moreover, hard disk drives (HDDs) in virtualized systems are being replaced by solid-state drives (SSDs). SSDs have higher throughput and lower latency than HDDs; however, they pose new challenges in terms of SLAs. In this paper, we determine that existing I/O schedulers working with SSDs fail to guarantee SLAs among virtual machines and do not effectively utilize the high performance of SSDs. To address this issue, we propose the opportunistic I/O scheduler (OIOS), a novel I/O scheduler for SSDs. OIOS guarantees SLAs and fully utilizes the high performance of SSDs. To support realistic SLAs, OIOS provides diverse SLA support functions, including reservations, limitations, and proportional sharing. In addition, OIOS accepts SLAs that are specified in four measurement types: bandwidth, I/Os per second (IOPS), latency, and utilization. Experimental results show that OIOS increases the aggregated bandwidth of VMs by 80 percent compared to mClock, while achieving a similar level of fairness. In addition, we evaluate the proposed scheduler with realistic benchmarks, such as Filebench and the Yahoo Cloud Serving Benchmark. OIOS successfully guarantees the requirements of diverse SLAs with different metrics.
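
As a rough sketch of how the three SLA support functions named in the abstract (reservations, limitations, and proportional sharing) can be combined when dividing SSD bandwidth among VMs, the snippet below performs a simple water-filling allocation. It is not the OIOS algorithm; the VM names, SLA values, and the allocation strategy itself are assumptions for illustration.

```python
# Illustrative division of SSD bandwidth under three SLA policy types:
# reservation (min), limitation (max), and weight-based proportional sharing.
# This is NOT the OIOS algorithm; VM names and SLA values are assumptions.

def share_bandwidth(total_mb_s, vms):
    """vms: name -> {'min': MB/s, 'max': MB/s, 'weight': int}."""
    alloc = {name: sla["min"] for name, sla in vms.items()}     # reservations first
    remaining = total_mb_s - sum(alloc.values())
    assert remaining >= 0, "reservations exceed device bandwidth"
    active = set(vms)                        # VMs that can still receive more
    while remaining > 1e-9 and active:
        total_weight = sum(vms[n]["weight"] for n in active)
        granted = 0.0
        still_active = set()
        for name in active:
            fair = remaining * vms[name]["weight"] / total_weight
            grant = min(fair, vms[name]["max"] - alloc[name])   # cap at the limit
            alloc[name] += grant
            granted += grant
            if grant >= fair - 1e-9:         # has not hit its limit yet
                still_active.add(name)
        remaining -= granted
        active = still_active
        if granted <= 1e-9:
            break
    return alloc

vms = {
    "db":  {"min": 100, "max": 400, "weight": 3},   # assumed SLAs (MB/s)
    "web": {"min": 50,  "max": 150, "weight": 1},
}
print(share_bandwidth(500, vms))   # {'db': 362.5, 'web': 137.5}
```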


The Journal of Supercomputing | 2016

Synchronization support for parallel applications in virtualized clouds

Cheol Ho Hong; Young Pil Kim; Hyunchan Park; Chuck Yoo

Cloud computing platforms have become very attractive for parallel applications, thanks to the system virtualization technology that allows versatile and pliable computing environments. However, owing to the virtualization overhead, parallel applications can suffer from poor performance when executing synchronization operations. In this paper, we propose sc scheduling, a synchronization-conscious scheduling algorithm that can mitigate this virtualization overhead. For this purpose, the proposed scheduler understands the synchronization phases of each parallel application. Based on this comprehension, it eliminates unnecessary CPU spinning by parallel threads and the waste of valuable CPU time it incurs. In addition, it prevents the threads from being blocked for long periods, which would otherwise cause unfairness between concurrent virtual machines (VMs) and other VMs. We implemented these simple concepts and thoroughly evaluated them in a recent Xen hypervisor release. Our results demonstrate that our approach can significantly speed up concurrent virtual machines compared to the original Credit scheduler in Xen.
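
The abstract describes the idea at a high level: stop a vCPU from spinning uselessly when the lock holder is descheduled, without blocking it so long that fairness suffers. The toy decision rule below illustrates that trade-off only; the threshold and the action names are assumptions and do not reproduce the paper's sc scheduling algorithm.

```python
# Toy decision rule for a synchronization-conscious scheduler. This is NOT the
# paper's sc scheduling algorithm; the threshold and the actions returned are
# assumptions used only to illustrate the spinning-vs-blocking trade-off.

def next_action(lock_holder_running, spin_time_us, spin_threshold_us=50):
    """Decide what to do with a vCPU that is busy-waiting on a guest lock."""
    if lock_holder_running:
        return "keep-running"          # the holder is making progress elsewhere
    if spin_time_us < spin_threshold_us:
        return "keep-running"          # brief spin: cheaper than a context switch
    # The holder is descheduled and we have spun too long: stop wasting CPU
    # time, but boost the holder so the waiter is not blocked for long either.
    return "yield-and-boost-lock-holder"

print(next_action(lock_holder_running=True,  spin_time_us=500))  # keep-running
print(next_action(lock_holder_running=False, spin_time_us=20))   # keep-running
print(next_action(lock_holder_running=False, spin_time_us=200))  # yield-and-boost-lock-holder
```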


European Conference on Parallel Processing | 2014

Performance prediction and evaluation of parallel applications in KVM, Xen, and VMware

Cheol Ho Hong; Beom Joon Kim; Young Pil Kim; Hyunchan Park; Chuck Yoo

Cloud computing platforms are considerably attractive for parallel applications that perform large-scale, computationally intensive tasks. These platforms can provide elastic computing resources to parallel software owing to system virtualization technology. Almost every cloud service provider operates on a pay-per-use basis, and therefore it is important to estimate the performance of parallel applications before deploying them. However, a comprehensive study that can predict the performance of parallel applications in virtualized environments is still lacking and remains an open research topic. In this paper, we provide a theoretical performance model that can predict the performance of parallel applications under different virtual machine scheduling policies, and we evaluate the model on representative hypervisors, including KVM, Xen, and VMware. Through this analysis and evaluation, we show that our performance prediction model is accurate and reliable.
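
The paper's theoretical model is not reproduced in the abstract, so the snippet below shows only the general shape such a prediction can take: an Amdahl-style estimate of parallel runtime, dilated by the fraction of physical CPU the hypervisor actually grants the VM. The formula and every parameter value are assumptions for illustration, not the model that was evaluated on KVM, Xen, and VMware.

```python
# Purely illustrative prediction, NOT the model from the paper: estimate the
# runtime of a parallel job whose vCPUs share physical cores with other VMs.
# The formula and every parameter value below are assumptions.

def predicted_runtime(t_serial, parallel_fraction, n_vcpus, cpu_share):
    """Amdahl-style estimate, dilated by the CPU share the hypervisor grants.

    t_serial          -- single-threaded runtime in seconds
    parallel_fraction -- fraction of the work that parallelizes
    n_vcpus           -- vCPUs assigned to the virtual machine
    cpu_share         -- fraction of each physical core the VM actually receives
    """
    t_ideal = t_serial * ((1 - parallel_fraction) + parallel_fraction / n_vcpus)
    return t_ideal / cpu_share   # runtime stretches while the VM is descheduled

# Example: a 100 s job, 90% parallel, 8 vCPUs, but only half of each core.
print(round(predicted_runtime(100, 0.9, 8, 0.5), 1))   # 42.5
```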


Scientific Programming | 2016

ANCS: Achieving QoS through Dynamic Allocation of Network Resources in Virtualized Clouds

Cheol Ho Hong; K. W. Lee; Hyunchan Park; Chuck Yoo

To meet the various requirements of cloud computing users, research on guaranteeing Quality of Service (QoS) is gaining widespread attention in the field of cloud computing. However, as cloud computing platforms adopt virtualization as an enabling technology, it becomes challenging to distribute system resources to each user according to their diverse requirements. Although ample research has been conducted in order to meet QoS requirements, the proposed solutions lack simultaneous support for multiple policies, degrade the aggregated throughput of network resources, and incur CPU overhead. In this paper, we propose a new mechanism, called ANCS (Advanced Network Credit Scheduler), to guarantee QoS through dynamic allocation of network resources in virtualized environments. To meet the various network demands of cloud users, ANCS aims to concurrently provide multiple performance policies; these include weight-based proportional sharing, minimum bandwidth reservation, and maximum bandwidth limitation. In addition, ANCS employs an efficient work-conserving scheduling method for maximizing network resource utilization. Finally, ANCS achieves low CPU overhead via its lightweight design, which is important for practical deployment.
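
To make the three concurrent policies (weight-based proportional sharing, minimum bandwidth reservation, maximum bandwidth limitation) and the work-conserving step more concrete, the following sketch assigns transmit credits to VMs for one accounting period. It is an illustration under assumed values, not the ANCS implementation inside the hypervisor.

```python
# Illustrative per-period transmit-credit assignment combining weight-based
# proportional sharing, a minimum reservation, a maximum limit, and a
# work-conserving redistribution step. This is NOT the ANCS implementation;
# VM names, weights, and all credit values are assumptions.

LINK_CREDITS = 10_000    # total credits (e.g., KB) per accounting period

def assign_credits(vms):
    """vms: name -> {'weight', 'min', 'max', 'backlog'} (credits per period)."""
    total_weight = sum(v["weight"] for v in vms.values())
    credits = {}
    for name, v in vms.items():
        share = LINK_CREDITS * v["weight"] // total_weight   # proportional share
        share = max(share, v["min"])                         # reservation
        share = min(share, v["max"])                         # limitation
        credits[name] = share
    # Work-conserving step: reclaim credits an idle VM cannot spend and hand
    # them to VMs that still have traffic queued, so the link is not left idle.
    spare = 0
    for name, v in vms.items():
        unused = max(0, credits[name] - v["backlog"])
        credits[name] -= unused
        spare += unused
    for name in sorted(vms, key=lambda n: -vms[n]["weight"]):
        room = min(vms[name]["max"], vms[name]["backlog"]) - credits[name]
        grant = max(0, min(spare, room))
        credits[name] += grant
        spare -= grant
    return credits

vms = {
    "vm1": {"weight": 2, "min": 1000, "max": 8000, "backlog": 9000},
    "vm2": {"weight": 1, "min": 1000, "max": 4000, "backlog": 500},
}
print(assign_credits(vms))   # {'vm1': 8000, 'vm2': 500}
```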


International Conference on Consumer Electronics | 2014

Seamless streaming with intelligent rate determinate algorithm in content centric networks

Jaehwan Kwon; Junghwan Lee; Hyunchan Park; Chuck Yoo

In this paper, we propose the intelligent Rate Determinate Algorithm (iRDA) for the content-centric network (CCN) to provide a seamless video streaming service under fluctuating network bandwidth. iRDA restarts the bit-rate selection for the current segment when its deadline reaches a threshold value. Compared to the standard Dynamic Adaptive Streaming over HTTP (DASH), iRDA reduces the amount of freeze time.
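
The abstract leaves most of the algorithm unstated, so the snippet below is only a loose illustration of deadline-driven bit-rate selection with a restart: if the time left before a segment must play drops below a threshold, the download is restarted at a safer rate. The bit-rate ladder, segment length, threshold, and decision rule are assumptions and are not taken from iRDA.

```python
# Loose illustration of deadline-driven bit-rate selection with a restart.
# This is NOT the iRDA algorithm; the bit-rate ladder, segment length, and
# restart threshold are all assumptions.

BITRATES_KBPS = [500, 1000, 2500, 5000]    # assumed representation ladder

def pick_bitrate(throughput_kbps):
    """Highest representation the measured throughput can sustain."""
    feasible = [r for r in BITRATES_KBPS if r <= throughput_kbps]
    return feasible[-1] if feasible else BITRATES_KBPS[0]

def select_rate(throughput_kbps, time_to_deadline_s, segment_s=2.0,
                restart_threshold=0.5):
    """Restart the selection at a safe rate if the segment risks freezing."""
    if time_to_deadline_s < restart_threshold * segment_s:
        return BITRATES_KBPS[0]        # restart: fetch the lowest rate in time
    return pick_bitrate(throughput_kbps)

print(select_rate(throughput_kbps=3000, time_to_deadline_s=1.5))  # 2500
print(select_rate(throughput_kbps=3000, time_to_deadline_s=0.6))  # 500
```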


International Conference on Consumer Electronics | 2015

Improving I/O performance of Xen hypervisors for solid state drives

Hyunchan Park; Cheol Ho Hong; Younghyun Kim; Seehwan Yoo; Chuck Yoo

Solid state drives (SSDs) are becoming increasingly popular in computing environments that employ virtualization. The Xen hypervisor is the most popular hypervisor for virtualization. In this paper, we discuss the performance of SSDs on the Xen hypervisor, show that the Xen I/O model causes performance degradation in SSDs, and analyze the reasons for the degradation. We then propose a new I/O processing method, clustering of fragmented I/O requests, which improves the I/O performance of SSDs. The evaluations indicate that our proposed method improves I/O performance by as much as 7%.
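
The proposed method clusters fragmented I/O requests before they reach the SSD. As a minimal illustration of that idea (not the Xen implementation), the function below merges requests that are contiguous on disk; the sector numbers and sizes are assumed.

```python
# Minimal illustration of clustering fragmented block requests before dispatch.
# This is NOT the paper's Xen implementation; sector numbers and sizes are
# assumed. Requests are (start_sector, sector_count) pairs.

def cluster_requests(requests):
    """Merge requests that are contiguous on disk into larger ones."""
    merged = []
    for sector, count in sorted(requests):
        if merged and merged[-1][0] + merged[-1][1] == sector:
            merged[-1][1] += count           # extends the previous cluster
        else:
            merged.append([sector, count])   # starts a new cluster
    return [tuple(r) for r in merged]

# Four 4 KB fragments (8 sectors of 512 B each) that form one 16 KB request,
# plus one unrelated request elsewhere on the device.
frags = [(0, 8), (8, 8), (16, 8), (24, 8), (1000, 8)]
print(cluster_requests(frags))   # [(0, 32), (1000, 8)]
```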


IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 2015

SSD-Tailor: Automated Customization System for Solid-State Drives

Hyunchan Park; Hanchan Jo; Cheol Ho Hong; Young Pil Kim; Seehwan Yoo; Chuck Yoo

Enterprise servers require customized solid-state drives (SSDs) to satisfy their specialized I/O performance and reliability requirements. For effective use of SSDs for enterprise purposes, SSDs must be designed considering requirements such as those related to performance, lifetime, and cost constraints. However, SSDs have numerous hardware and software design options, such as flash memory types and block allocation methods, on which SSD performance depends but which have not been well analyzed yet. Furthermore, there is no methodology for determining the optimal design for a particular I/O workload. This paper proposes SSD-Tailor, a customization tool for SSDs. SSD-Tailor determines a near-optimal set of design options for a given workload. SSD designers can use SSD-Tailor to customize SSDs in the early design stage to meet the customer requirements. We evaluate SSD-Tailor with nine I/O workload traces collected from real-world enterprise servers. We observe that SSD-Tailor finds near-optimal SSD designs for these workloads by exploring only about 1% of the entire set of design candidates. We also show that the near-optimal designs increase the average I/O operations per second by up to 17% and improve the average response time by up to 163% as compared to an SSD with a general design.


International Conference on Consumer Electronics | 2014

Tailor-made SSD using a genetic algorithm

Hanchan Jo; Hyunchan Park; Chuck Yoo

We present a design system for SSDs that utilizes a genetic algorithm. To maximize the performance in a specific system with a workload trace, the technique efficiently determines customized architectural parameters for the SSD. Compared to general SSDs, our proposed scheme reduces the average response time by up to 30%.
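
As an illustration of how a genetic algorithm can search SSD design parameters, the toy example below evolves a small population of (channel count, page size, over-provisioning) designs against a synthetic fitness function. The parameter set, fitness function, and GA settings are all assumptions; the actual system scores candidate designs by replaying a workload trace, which is not modeled here.

```python
# Toy genetic algorithm over SSD design parameters, for illustration only.
# The parameter set, fitness function, and GA settings are all assumptions;
# a real system would replay a workload trace on an SSD simulator instead.
import random

random.seed(42)

CHANNELS = [2, 4, 8, 16]         # candidate values for each design parameter
PAGE_KB  = [2, 4, 8, 16]
OVERPROV = [0.07, 0.15, 0.25]

def random_design():
    return (random.choice(CHANNELS), random.choice(PAGE_KB), random.choice(OVERPROV))

def fitness(design):
    """Synthetic score standing in for simulated IOPS on a workload trace."""
    channels, page_kb, op = design
    return channels * 100 - abs(page_kb - 4) * 50 + op * 1000

def crossover(a, b):
    return tuple(random.choice(pair) for pair in zip(a, b))

def mutate(design, rate=0.2):
    pools = (CHANNELS, PAGE_KB, OVERPROV)
    return tuple(random.choice(pool) if random.random() < rate else gene
                 for gene, pool in zip(design, pools))

population = [random_design() for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                          # keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

best = max(population, key=fitness)
print("best design (channels, page KB, over-provisioning):", best)
```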


Journal of Systems and Software | 2014

O1FS: Flash file system with O(1) crash recovery time

Hyunchan Park; Sam H. Noh; Chuck Yoo

The crash recovery time of NAND flash file systems increases with flash memory capacity. Crash recovery usually takes several minutes for a gigabyte of flash memory and becomes a serious problem for mobile devices. To address this problem, we propose a new flash file system, O1FS. A key concept of our system is that a small number of blocks are modified exclusively until we change the blocks explicitly. To recover from crashes, O1FS accesses only the most recently modified blocks rather than the entire flash memory. Therefore, the crash recovery time is bounded by the size of these blocks. We develop mathematical models of crash recovery techniques and prove that the time complexity of O1FS is O(1), whereas that of other methods is proportional to the number of blocks in the flash memory. Our evaluation shows that the crash recovery of O1FS is about 18.5 times faster than that of a state-of-the-art method.
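
The key property claimed in the abstract is that recovery touches only the recently modified blocks, so its cost does not grow with device capacity. The sketch below illustrates that bound with an assumed checkpoint format and working-set size; it is not the O1FS recovery code.

```python
# Illustration of recovery cost bounded by the working set, not device size.
# This is NOT the O1FS code; the checkpoint layout, metadata, and working-set
# size are assumptions.

def recover(checkpoint_blocks, flash):
    """Scan only the blocks that were open for writing at the last checkpoint."""
    return {block_no: flash[block_no] for block_no in checkpoint_blocks}

# A 100,000-block device, but only the 4 working-set blocks are inspected,
# so recovery work stays constant as capacity grows.
flash = {block_no: {"dirty_pages": 0} for block_no in range(100_000)}
checkpoint_blocks = [42, 43, 44, 45]
recovered = recover(checkpoint_blocks, flash)
print(len(recovered), "blocks scanned instead of", len(flash))
```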
