Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Chanik Park is active.

Publication


Featured research published by Chanik Park.


International Conference on Parallel and Distributed Systems | 2001

An optimal scheduling algorithm based on task duplication

Chanik Park; Tae-Young Choe

The task scheduling problem in distributed memory machines is to allocate the tasks of an application to processors so as to minimize the total execution time; it is known to be NP-complete. Under the condition that the communication time is relatively short compared with the computation time of every task, the task duplication based scheduling (TDS) algorithm proposed by Darbha and Agrawal (1998) generates an optimal schedule. We propose an extended TDS algorithm whose optimality condition is less restrictive than that of the TDS algorithm. Given a DAG in which the condition is met, our algorithm has time complexity O(|V|²d²), where |V| is the number of tasks and d is the maximum degree of the tasks.
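
As a rough illustration of the task-duplication idea (not the paper's extended TDS algorithm or its optimality condition), the sketch below clusters each task with its critical parent, the parent whose message arrives last, so the communication cost on that edge disappears; the DAG and all costs are hypothetical.

```python
# Minimal task-duplication sketch: each task is scheduled with its critical
# parent duplicated onto the same processor, hiding that edge's communication
# cost.  Illustrative only; not the extended TDS algorithm from the paper.

from functools import lru_cache

# Hypothetical DAG: task -> (computation cost, {parent: communication cost})
DAG = {
    "A": (2, {}),
    "B": (3, {"A": 4}),
    "C": (2, {"A": 1}),
    "D": (4, {"B": 2, "C": 5}),
}

@lru_cache(maxsize=None)
def earliest_completion(task):
    """Earliest completion time when the critical parent is duplicated locally."""
    cost, parents = DAG[task]
    if not parents:
        return cost
    # Data arrival time from each parent if it stays on a remote processor.
    arrival = {p: earliest_completion(p) + c for p, c in parents.items()}
    critical = max(arrival, key=arrival.get)        # parent whose data arrives last
    others = [t for p, t in arrival.items() if p != critical]
    # Duplicating the critical parent removes its edge cost: the task starts
    # after the duplicated parent finishes locally, but no earlier than the
    # latest remote arrival from the remaining parents.
    start = max([earliest_completion(critical)] + others)
    return start + cost

for t in DAG:
    print(t, earliest_completion(t))
```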


International Conference on Parallel and Distributed Systems | 2006

Real-time scheduling in heterogeneous dual-core architectures

Kwangsik Kim; Dohun Kim; Chanik Park

As applications become more computationally demanding, embedded systems are growing more complex. To achieve high performance in the midst of this increased complexity, dual-core SoCs (systems-on-chip) are used. Among the many dual-core architectures, the combination of a general-purpose CPU and a DSP is the most widely used, yet only a few scheduling policies exist for this heterogeneous architecture that guarantee real-time behavior. This paper discusses scheduling policies for heterogeneous dual-core architectures. We examine the problems of a previous scheduling policy (Gai et al., 2002) based on DPCP (distributed priority ceiling protocol) (Rajkumar et al., 1988; Saewong et al., 1999; Sha et al., 1990) and provide a solution based on a strict schedulability bound model.
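
The abstract does not reproduce the paper's strict schedulability bound model; as a loose illustration of the kind of per-processor test such analyses build on, the sketch below applies the classic Liu and Layland utilization bound with a blocking term to hypothetical task sets pinned to the CPU and the DSP.

```python
# Rough per-processor schedulability check with a blocking term, in the spirit
# of DPCP-style analyses for a CPU + DSP pair.  The bound used is the classic
# Liu & Layland utilization bound; the paper's bound model is not shown here.

def ll_bound(n):
    """Liu & Layland utilization bound for n periodic tasks under RM."""
    return n * (2 ** (1.0 / n) - 1)

def schedulable(tasks):
    """tasks: list of (wcet, period, blocking) tuples bound to one processor.
    For each task i in rate-monotonic order, check that the utilization of the
    i highest-priority tasks plus task i's normalized blocking time stays
    within the bound for i tasks."""
    tasks = sorted(tasks, key=lambda t: t[1])          # rate-monotonic order
    for i, (wcet, period, blocking) in enumerate(tasks, start=1):
        util = sum(c / p for c, p, _ in tasks[:i]) + blocking / period
        if util > ll_bound(i):
            return False
    return True

# Hypothetical task sets for each core of the dual-core SoC.
cpu_tasks = [(1.0, 10.0, 0.5), (2.0, 20.0, 0.0)]
dsp_tasks = [(3.0, 15.0, 1.0)]
print("CPU ok:", schedulable(cpu_tasks), "DSP ok:", schedulable(dsp_tasks))
```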


Pacific Rim International Symposium on Dependable Computing | 2002

An adaptive high-low water mark destage algorithm for cached RAID5

Young Jin Nam; Chanik Park

The high-low water mark destage (HLWM) algorithm is widely used to let a cached RAID5 flush dirty data from its write cache to disks. It activates and deactivates a destaging process based on two time-invariant thresholds determined by cache occupancy levels. However, the opportunity exists to improve I/O throughput by adaptively changing the thresholds. This paper proposes an adaptive HLWM algorithm that dynamically changes its thresholds according to a varying I/O workload. The two thresholds are defined as the product of the rate of change of the cache occupancy level and the time required to fill or empty the cache. Performance evaluations with a cached RAID5 simulator reveal that the proposed algorithm outperforms the HLWM algorithm in terms of read response time, write cache hit ratio, and disk utilization.
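
The abstract only states that each threshold is derived from the rate of change of cache occupancy and the time to fill or empty the cache, so the concrete formulation below, including the window parameter, is an assumption made for illustration rather than the paper's algorithm.

```python
# Sketch of adaptively derived high/low water marks for write-cache destaging.
# The exact formulas and the `window` parameter are assumptions; only the idea
# of deriving both thresholds from observed fill/drain rates comes from the
# abstract.

def adaptive_water_marks(cache_size, fill_rate, drain_rate, window):
    """
    cache_size : write-cache capacity in blocks
    fill_rate  : observed blocks/s of incoming dirty data
    drain_rate : observed blocks/s the destage process can flush
    window     : seconds over which the thresholds should remain safe
    """
    # High mark: leave enough headroom so incoming writes over `window`
    # seconds do not overflow the cache before destaging catches up.
    high = max(0.0, cache_size - fill_rate * window)
    # Low mark: keep enough dirty blocks queued to keep the disks busy
    # for `window` seconds once destaging is deactivated.
    low = min(high, drain_rate * window)
    return low, high

class Destager:
    """Hysteresis: start flushing above the high mark, stop below the low mark."""
    def __init__(self, cache_size, window=0.5):
        self.cache_size, self.window = cache_size, window
        self.active = False

    def tick(self, occupancy, fill_rate, drain_rate):
        low, high = adaptive_water_marks(self.cache_size, fill_rate,
                                         drain_rate, self.window)
        if occupancy >= high:
            self.active = True
        elif occupancy <= low:
            self.active = False
        return self.active

d = Destager(cache_size=1024)
print(d.tick(occupancy=950, fill_rate=200, drain_rate=150))   # True: above the high mark
```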


IEEE International Conference on Cloud Computing Technology and Science | 2012

DFCloud: A TPM-based secure data access control method of cloud storage in mobile devices

Jaebok Shin; Yungu Kim; Wooram Park; Chanik Park

With cloud storage services, users can access their data at any time, from any place, and with any computing device, including mobile devices. Although these properties provide flexibility and scalability in handling data, security issues must be addressed, especially when mobile devices access data stored in cloud storage. Currently, a typical cloud storage service, Dropbox, offers server-side data encryption for security purposes. However, we argue that such a method is not secure enough, because all the encryption keys are managed by software and there is no attestation of the client software's integrity. Moreover, simple user identification based on a user ID and password is easy to compromise. Data sharing, which is critical in enterprise environments, is also significantly restricted because it is not easy to share encryption keys among users. In this paper, we propose DFCloud, a secure data access control method for cloud storage services that addresses these problems found in the typical cloud storage service Dropbox. DFCloud relies on a Trusted Platform Module (TPM) [1] to manage all encryption keys and defines a key-sharing protocol among legitimate users. We assume that each client is a mobile device using ARM TrustZone [2] technology. The DFCloud server prototype is implemented using ARM Fastmodel 7.1 and the Open Virtualization software stack for ARM TrustZone. For the DFCloud client, TPM functions are implemented in the secure domain of ARM TrustZone, because most ARM-based mobile devices are not equipped with a TPM chip. The DFCloud framework defines TPM-based secure channel setup, TPM-based key management, remote client attestation, and a secure key-sharing protocol across multiple users and devices. A prototype implementation shows that our concept works correctly.
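
A minimal sketch of the envelope-style key sharing that a design like DFCloud implies: each file is encrypted under its own data key, and that data key is wrapped separately for every authorized user. Plain Fernet keys stand in for TPM-protected user keys and the attestation step is omitted; both are simplifications, not the paper's protocol.

```python
# Envelope-style key sharing sketch: file -> data key -> per-user wrapped key.
# In DFCloud the user keys would be sealed by each device's TPM / TrustZone
# secure world; keeping them in a dict here is purely illustrative.

from cryptography.fernet import Fernet

user_keys = {"alice": Fernet.generate_key(), "bob": Fernet.generate_key()}

def store(plaintext, authorized):
    """Encrypt data under a fresh data key and wrap the key for each user."""
    data_key = Fernet.generate_key()
    ciphertext = Fernet(data_key).encrypt(plaintext)
    wrapped = {u: Fernet(user_keys[u]).encrypt(data_key) for u in authorized}
    return ciphertext, wrapped           # both can live on untrusted storage

def fetch(user, ciphertext, wrapped):
    """Unwrap the data key with the user's key, then decrypt the file."""
    data_key = Fernet(user_keys[user]).decrypt(wrapped[user])
    return Fernet(data_key).decrypt(ciphertext)

ct, keys = store(b"quarterly report", authorized=["alice", "bob"])
print(fetch("bob", ct, keys))            # b'quarterly report'
```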


International Conference on Parallel Processing | 2002

A task duplication based scheduling algorithm with optimality condition in heterogeneous systems

Tae-Young Choe; Chanik Park

The task scheduling problem is NP-hard in heterogeneous systems. We propose a task scheduling algorithm based on task duplication, with an optimality condition that determines whether the resulting schedule has the shortest possible schedule length. The optimality condition is that, for any join task, the completion time of one parent task is longer than the maximum message arrival time from the other parent tasks. An illustrative example shows how our algorithm differs from existing algorithms.
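
The sketch below checks the quoted condition for a single join task: it looks for a parent whose completion time exceeds the latest message arrival time from the remaining parents, meaning that parent can be duplicated onto the join task's processor; the data layout is illustrative.

```python
# Check of the quoted optimality condition at one join task: some parent's
# completion time must exceed the latest message arrival time from the other
# parents, so duplicating that parent locally hides all other communication.

def condition_holds(parents):
    """
    parents: list of (completion_time, message_delay) pairs for one join task,
             where message_delay is the edge cost if the parent stays remote.
    """
    for i, (finish_i, _) in enumerate(parents):
        other_arrivals = [f + d for j, (f, d) in enumerate(parents) if j != i]
        if finish_i > max(other_arrivals):
            return True          # parent i can be duplicated with the join task
    return False

print(condition_holds([(9, 2), (5, 3), (6, 1)]))    # True: parent 0 dominates
print(condition_holds([(4, 5), (4, 5)]))            # False: neither parent does
```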


International Conference on Parallel and Distributed Systems | 2001

Design and implementation of a fibre channel network driver for SAN-attached RAID controllers

Jae-Chang Namgoong; Chanik Park

A fibre channel SAN (storage area network) is considered a promising solution to the storage problems caused by the sheer volume of data and its management. To adopt this new storage environment, we design and implement a high-performance fibre channel network driver for SAN-attached RAID controllers in a real-time operating system. We describe the architecture of the fibre channel driver, which consists of two modes: a target mode and an initiator mode. An exception handling mechanism for tolerating a disk failure is also given. Finally, we measure the performance of the fibre channel driver; the test results reveal moderately successful performance.


Future Generation Computer Systems | 2006

Design and evaluation of an efficient proportional-share disk scheduling algorithm

Young Jin Nam; Chanik Park

Proportional-share algorithms are designed to allocate an available resource, such as a network, processor, or disk, among a set of competing applications in proportion to the resource weight allotted to each. While many proportional-share algorithms have been developed for network and processor resources, little research has been conducted on disk resources, which exhibit non-linear performance characteristics attributable to disk head movements. This paper proposes a new proportional-share disk-scheduling algorithm that accounts for the overhead caused by disk head movements and for QoS guarantees in an integrated manner. Performance evaluations via simulation reveal that the proposed algorithm improves I/O throughput by 11-19% with only 1-2% QoS deterioration.
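
As a loose illustration, the sketch below implements a virtual-time (start-time fair queuing style) proportional-share scheduler in which each request is charged its estimated service time plus a seek-overhead term, divided by its flow's weight; it shows the general idea of weight-proportional disk sharing that accounts for head movement, not the algorithm proposed in the paper.

```python
# Virtual-time proportional-share scheduler sketch: each request is charged
# (service + seek) / weight in virtual time; requests are dispatched in order
# of their start tags.  Illustrative only.

import heapq

class ProportionalShareDisk:
    def __init__(self, weights):
        self.weights = weights            # flow name -> weight
        self.vtime = 0.0                  # global virtual time
        self.flow_vtime = {f: 0.0 for f in weights}
        self.queue = []                   # (start_tag, seq, flow, request)
        self.seq = 0

    def submit(self, flow, request, service_time, seek_time):
        """Charge (service + seek) / weight in virtual time and enqueue."""
        start = max(self.vtime, self.flow_vtime[flow])
        cost = (service_time + seek_time) / self.weights[flow]
        self.flow_vtime[flow] = start + cost
        heapq.heappush(self.queue, (start, self.seq, flow, request))
        self.seq += 1

    def dispatch(self):
        """Dispatch the queued request with the smallest start tag."""
        start, _, flow, request = heapq.heappop(self.queue)
        self.vtime = max(self.vtime, start)
        return flow, request

sched = ProportionalShareDisk({"db": 3, "backup": 1})
sched.submit("db", "read lba 100", service_time=1.0, seek_time=0.4)
sched.submit("backup", "read lba 90000", service_time=1.0, seek_time=4.0)
sched.submit("db", "read lba 104", service_time=1.0, seek_time=0.1)
for _ in range(3):
    print(sched.dispatch())
```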


International Conference on Parallel Processing | 2002

Enhancing write I/O performance of disk array RM2 tolerating double disk failures

Young Jin Nam; Dae Woong Kim; Tae-Young Choe; Chanik Park

With a large number of internal disks and the rapid growth of disk capacity, storage systems are becoming more susceptible to double disk failures. Thus, the need for reliable storage architectures such as RAID6 is expected to grow in importance. However, RAID6 architectures such as RM2, P+Q, EVEN-ODD, and DATUM traditionally suffer from low write I/O performance caused by updating the two distinct parity blocks associated with each user data block. To overcome this low write I/O performance, we propose an enhanced RM2 architecture that combines RM2, one of the well-known RAID6 architectures, with a Lazy Parity Update (LPU) technique. Extensive performance evaluations reveal that the write I/O performance of the proposed architecture is about two times higher than that of RM2 under various I/O workloads, with little degradation in reliability.
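
A minimal sketch of the lazy-parity-update idea for a double-parity stripe, assuming a simplified XOR layout rather than RM2's actual redundancy scheme: a small write updates the data block and the first parity immediately, while the second parity update is queued and applied during idle time.

```python
# Lazy parity update sketch for a double-parity stripe: the first parity is
# kept up to date eagerly, the second is brought up to date during idle time.
# Plain XOR parity is a stand-in for RM2's actual redundancy layout.

from collections import deque

class LazyDoubleParityStripe:
    def __init__(self, data_blocks):
        self.data = list(data_blocks)
        self.p = self._xor(self.data)        # first parity, kept up to date
        self.q = self._xor(self.data)        # second parity, updated lazily
        self.pending = deque()               # deferred (index, old, new) updates

    @staticmethod
    def _xor(blocks):
        out = 0
        for b in blocks:
            out ^= b
        return out

    def write(self, index, value):
        old = self.data[index]
        self.data[index] = value
        self.p ^= old ^ value                      # eager: first parity
        self.pending.append((index, old, value))   # lazy: second parity

    def idle(self):
        """Apply deferred second-parity updates when the array is idle."""
        while self.pending:
            _, old, new = self.pending.popleft()
            self.q ^= old ^ new

stripe = LazyDoubleParityStripe([0b1010, 0b0110, 0b0001])
stripe.write(1, 0b1111)
stripe.idle()
assert stripe.p == stripe.q == stripe._xor(stripe.data)
print("parities consistent after idle flush")
```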


International Conference on Embedded Software and Systems | 2007

Fast Initialization and Memory Management Techniques for Log-Based Flash Memory File Systems

Junkil Ryu; Chanik Park

Flash memory adoption in mobile devices is increasing for multimedia services such as audio, video, and games. While traditional research issues such as out-of-place update, garbage collection, and wear-leveling remain important, the fast initialization and response time of flash memory file systems are becoming more important than ever because flash memory capacity is increasing rapidly. In this paper, we propose a fast initialization technique and an efficient memory management technique for fast response times in log-based flash memory file systems. Our prototype is implemented on top of YAFFS2, a well-known log-based flash memory file system, and its performance is compared against YAFFS2. The experimental results show that the proposed initialization technique reduces the initialization time of the log-based flash memory file system regardless of whether the file system was unmounted properly. Moreover, our prototype outperforms YAFFS2 on read I/O operations and forward/backward seek I/O operations thanks to the proposed memory management technique, which can also be used to control the memory size required for address mapping in flash memory file systems.
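
As a rough sketch of checkpoint-assisted mounting, one common way to obtain fast initialization in log-structured flash file systems, the code below records the in-memory index at clean unmount and, on the next mount, replays only log entries appended after the checkpoint; the data structures mirror neither YAFFS2 nor the paper's exact technique.

```python
# Checkpoint-assisted mount sketch: a clean unmount persists the in-memory
# index, so the next mount restores it and replays only the tail of the log
# instead of scanning the whole device.  Illustrative structures only.

import json

class FlashLogFS:
    def __init__(self):
        self.log = []          # append-only log of (file_id, chunk_no, data)
        self.checkpoint = None # (index snapshot, log position), set on unmount
        self.index = {}        # (file_id, chunk_no) -> log position

    def write(self, file_id, chunk_no, data):
        self.log.append((file_id, chunk_no, data))
        self.index[(file_id, chunk_no)] = len(self.log) - 1

    def unmount(self):
        # Persist the index plus the log length it covers.
        snap = {f"{f}:{c}": pos for (f, c), pos in self.index.items()}
        self.checkpoint = (json.dumps(snap), len(self.log))

    def mount(self):
        if self.checkpoint:
            snap, covered = self.checkpoint
            self.index = {tuple(map(int, k.split(":"))): v
                          for k, v in json.loads(snap).items()}
        else:
            covered = 0        # unclean unmount: fall back to a full scan
            self.index = {}
        # Replay only entries written after the checkpoint.
        for pos in range(covered, len(self.log)):
            f, c, _ = self.log[pos]
            self.index[(f, c)] = pos

fs = FlashLogFS()
fs.write(1, 0, "hello"); fs.write(1, 1, "world")
fs.unmount()
fs.write(2, 0, "new")            # written after the checkpoint
fs.mount()
print(sorted(fs.index))          # [(1, 0), (1, 1), (2, 0)]
```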


International Conference on Computational Science | 2004

A k-way Graph Partitioning Algorithm Based on Clustering by Eigenvector

Tae-Young Choe; Chanik Park

Recursive spectral bisection for the k-way graph partitioning problem has been underestimated because it tries to balance each bipartition strictly. However, by loosening the balancing constraint, spectral bisection can identify clusters efficiently. We propose a k-way graph partitioning algorithm based on clustering using recursive spectral bisection. After the graph is divided into clusters, the partition is adjusted to meet the balancing constraint. Experimental results show that the clustering-based k-way partitioning generates partitions whose cut sizes are 83.8% to 108.4% of those produced by strict recursive spectral bisection or multi-level partitioning.
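
The sketch below shows only the spectral bisection step the paper builds on: it computes the Fiedler vector of the graph Laplacian with numpy and splits vertices by its sign. The recursive k-way clustering with a loosened balance constraint and the subsequent adjustment step are not shown.

```python
# Spectral bisection sketch: split vertices by the sign of the Fiedler vector
# (the Laplacian eigenvector with the second-smallest eigenvalue).

import numpy as np

def spectral_bisect(adj):
    """adj: symmetric 0/1 adjacency matrix.  Returns two vertex index lists."""
    degree = np.diag(adj.sum(axis=1))
    laplacian = degree - adj
    eigvals, eigvecs = np.linalg.eigh(laplacian)   # ascending eigenvalues
    fiedler = eigvecs[:, 1]                        # second-smallest eigenvector
    left = [i for i, x in enumerate(fiedler) if x < 0]
    right = [i for i, x in enumerate(fiedler) if x >= 0]
    return left, right

# Two triangles joined by a single edge: the cut should separate them.
adj = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)
print(spectral_bisect(adj))    # ([0, 1, 2], [3, 4, 5]) or the reverse
```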

Collaboration


Dive into Chanik Park's collaborations.

Top Co-Authors (all affiliated with Pohang University of Science and Technology):

Young Jin Nam
Sejin Park
Dae Woong Kim
Dohun Kim
Tae-Young Choe
Woojoong Lee
Baegjae Sung
Junkil Ryu
Wooram Park
Guangyong Piao