Kyungbaek Kim
Chonnam National University
Publications
Featured research published by Kyungbaek Kim.
Global Communications Conference | 2010
Kyungbaek Kim; Nalini Venkatasubramanian
This paper addresses the reliability of data dissemination applications when there are severe disruptions to the underlying physical infrastructure. Such massive simultaneous physical failures can happen during geographical events such as natural disasters (earthquakes, floods, tornados) or sudden power outages; infrastructure failures in these cases are geographically correlated. In particular, we focus on overlay-based data dissemination mechanisms and explore their ability to tolerate such large geographically correlated failures. Due to the tight correlation between multiple overlay links and a single physical link, a few physical failures may affect many overlay links. To enable reliable dissemination under such conditions, we propose overlay network construction methods that incorporate proximity-aware neighbor selection to improve the performance of overlay data dissemination, to the extent possible, in terms of reliability and latency. In this approach, overlay nodes select neighbors that are most likely to remain distinct in the presence of a geographical failure; we show how an overlay structure constructed using our proximity-aware neighbor selection techniques can disseminate data to over 80% of reachable end clients without significant additional latency under various geographical failure conditions.
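The neighbor-selection idea can be illustrated with a small sketch. The Python below is a minimal, hypothetical version assuming a 2-D coordinate space and a known expected failure radius; the function `proximity_aware_neighbors` and its parameters are illustrative, not from the paper. It prefers nearby candidates for latency while rejecting any candidate that falls within the failure radius of an already-chosen neighbor, so a single regional failure is unlikely to disable every neighbor at once.

```python
import math
import random

def distance(a, b):
    """Euclidean distance between two (x, y) coordinates."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def proximity_aware_neighbors(self_loc, candidates, k, failure_radius):
    """Greedily pick k neighbors, preferring nearby nodes for latency
    but requiring chosen neighbors to be pairwise farther apart than
    the expected failure radius, so one regional failure is unlikely
    to take out every neighbor."""
    chosen = []
    for cand, loc in sorted(candidates.items(),
                            key=lambda kv: distance(self_loc, kv[1])):
        if all(distance(loc, candidates[c]) > failure_radius for c in chosen):
            chosen.append(cand)
        if len(chosen) == k:
            break
    return chosen

# Toy usage: 50 nodes on a 100x100 plane, failures ~15 units across.
random.seed(1)
nodes = {i: (random.uniform(0, 100), random.uniform(0, 100)) for i in range(50)}
me = nodes.pop(0)  # node 0 picks neighbors from the remaining nodes
print(proximity_aware_neighbors(me, nodes, k=4, failure_radius=15.0))
```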
Distributed Event-Based Systems | 2013
Ye Zhao; Kyungbaek Kim; Nalini Venkatasubramanian
Emerging societal-scale notification applications call for a system that can efficiently support simple, yet changing, subscriptions for a very large number of users. In this paper we propose DYNATOPS, a dynamic topic-based pub/sub architecture that provides efficient, scalable societal-scale event notifications for dynamic subscriptions via distributed broker networks. In DYNATOPS, users are moderately repositioned across brokers, and brokers are moderately repositioned on the overlay structure, to adapt event notification to publication and subscription dynamics. In contrast to existing self-organizing techniques, broker network reconfiguration in DYNATOPS is executed in a planned manner using a cost-driven reconfiguration process. Through extensive experiments, we observe that even under highly dynamic subscriptions DYNATOPS maintains an efficient dissemination structure, providing in general 30% less notification delay and overhead, and an 80% reduction in reconfiguration cost, compared to other state-of-the-art systems.
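The cost-driven reconfiguration decision can be sketched abstractly. The rule below is a hypothetical reading of the idea, not the paper's actual cost model: reposition users and brokers only when the expected notification savings over a planning horizon exceed the one-time migration cost.

```python
def should_reconfigure(current_cost, candidate_cost, migration_cost, horizon):
    """Cost-driven reconfiguration rule: adopt the candidate broker/user
    placement only if its notification savings, accumulated over the
    planning horizon, outweigh the one-time cost of repositioning."""
    savings = (current_cost - candidate_cost) * horizon
    return savings > migration_cost

# Toy usage: the current plan costs 12 units/epoch, a candidate plan 9,
# migration costs 40 units, and we plan over the next 20 epochs.
print(should_reconfigure(12.0, 9.0, 40.0, 20))  # True: savings 60 > 40
```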
International Conference on Parallel and Distributed Systems | 2001
Kyungbaek Kim; Daeyeon Park
With the recent explosion in usage of the World Wide Web, the problem of caching Web objects has gained considerable importance. The performance of these Web caches is highly affected by the replacement algorithm. Many replacement algorithms have been proposed for Web caching, and they rely on parameters measured in an on-line fashion. Recent studies suggest that the correlation between these on-line parameters and object popularity in the proxy cache is weakening due to efficient client caches. We suggest a new algorithm, called Least Popularity Per Byte Replacement (LPPB-R). We use a popularity value as a long-term measurement of request frequency to make up for this weak point of previous algorithms in the proxy cache, and the popularity value can easily be varied through an impact factor to adjust performance to the needs of the proxy cache. We examine the performance of this and other replacement algorithms via trace-driven simulation.
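A minimal sketch of the replacement policy described here, assuming each cached object carries a size and a long-term popularity (request) count; the class name and the exponent-style impact factor are illustrative choices, not taken from the paper. On overflow, the object with the smallest popularity-per-byte score is evicted.

```python
class LPPBCache:
    """Sketch of Least Popularity Per Byte Replacement: evict the
    cached object with the smallest popularity / size ratio."""

    def __init__(self, capacity, impact=1.0):
        self.capacity = capacity
        self.impact = impact     # tunable impact factor (illustrative form)
        self.objects = {}        # key -> (size, popularity)
        self.used = 0

    def _score(self, size, popularity):
        return (popularity ** self.impact) / size

    def access(self, key, size):
        if key in self.objects:
            s, p = self.objects[key]
            self.objects[key] = (s, p + 1)   # long-term request count
            return "hit"
        while self.used + size > self.capacity and self.objects:
            victim = min(self.objects,
                         key=lambda k: self._score(*self.objects[k]))
            self.used -= self.objects.pop(victim)[0]
        self.objects[key] = (size, 1)
        self.used += size
        return "miss"

# Toy usage: "b" is evicted because it has the least popularity per byte.
cache = LPPBCache(capacity=100)
for key, size in [("a", 40), ("b", 40), ("a", 40), ("c", 50)]:
    print(key, cache.access(key, size))
```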
IEICE Transactions on Information and Systems | 2007
Kyungbaek Kim; Daeyeon Park
DHT-based p2p systems appear to provide scalable storage services using idle resources from many unreliable clients. If a DHT is used in storage-intensive applications where data loss must be minimized, quick replication is especially important to replace lost redundancy on other nodes in reaction to failures. To achieve this easily, a simple replication method directly uses a consistent set, such as a leaf set or a successor list. However, this set is tightly coupled to the current state of nodes, and the traffic needed to support such replication can be high and bursty under churn. This paper explores efficient replication methods that only glimpse a consistent set to select a new replica. Replicas are loosely coupled to the consistent set, which eliminates compulsory replication under churn. Because the new replication methods are more involved, careful data management is needed under churn for correct and efficient data lookup. Results from a simulation study suggest that our methods can reduce network traffic enormously while maintaining high data durability.
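A toy sketch of the loosely coupled idea, under assumed names and a caller-supplied liveness check: existing replicas are kept even after they drift out of the leaf set, and the consistent set is only glimpsed when redundancy is actually lost.

```python
import random

def refresh_replicas(replicas, leaf_set, degree, is_alive):
    """Loosely coupled replication sketch: replicas survive as long as
    they are alive, even if no longer in the current leaf set; only
    when redundancy is actually lost do we glimpse the leaf set to
    pick a replacement.  This avoids the compulsory copy traffic of
    tightly coupled schemes that re-replicate on every leaf-set change."""
    survivors = [n for n in replicas if is_alive(n)]
    while len(survivors) < degree:
        candidates = [n for n in leaf_set
                      if n not in survivors and is_alive(n)]
        if not candidates:
            break
        survivors.append(random.choice(candidates))
    return survivors

# Toy usage: replica 7 failed, so one replacement is drawn from the leaf set.
alive = {1, 2, 3, 4, 5, 6}
print(refresh_replicas([1, 7, 3], leaf_set=[2, 4, 5, 6], degree=3,
                       is_alive=lambda n: n in alive))
```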
International Conference on Communications | 2003
Kyungbaek Kim; Woo Jin Kim; Daeyeon Park
Many cooperative web cache systems and protocols have been proposed. These systems, however, require expensive resources, such as external bandwidth and proxy CPU or storage, while incurring hefty administrative costs to achieve adequate client population growth. Moreover, a scalability problem in cache server management still exists.
Modeling, Analysis, and Simulation of Computer and Telecommunication Systems | 2002
Woo Hyun Ahn; Kyungbaek Kim; Yongjin Choi; Daeyeon Park
Small-file accesses are still limited by disk head movement on modern disk drives, despite their high disk bandwidth. Small-file performance can be improved by grouping and clustering, which place multiple files of a directory, and blocks of the same file, at contiguous disk locations respectively. These schemes make it possible for file systems to use large data transfers when accessing small files, reducing disk accesses. However, as file systems age, disks become too fragmented to support the grouping and clustering of small files. This fragmentation makes it difficult for file systems to take advantage of large data transfers, increasing disk I/Os. To address this problem, we describe a de-fragmented file system (DFS). Using data cached in memory, DFS relocates and clusters the data blocks of small fragmented files dynamically. In addition, DFS clusters related small files from the same directory at contiguous disk locations. Measurements of a DFS implementation show that these techniques alleviate file fragmentation significantly; in particular, small-file read performance exceeds that of a traditional file system by 78%.
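The relocation step might be sketched as follows, assuming block addresses are plain integers and free space is one contiguous extent; this is an illustrative simplification, not the DFS implementation. Files in the same directory are laid out next to each other, and each file's blocks become contiguous.

```python
def defragment(files, next_free):
    """Sketch of DFS-style dynamic relocation: blocks of small
    fragmented files (already cached in memory) are rewritten to
    contiguous disk locations, and files in the same directory are
    clustered next to each other.  `files` maps (directory, name) to
    the file's current block addresses; `next_free` is the start of a
    free extent.  Returns the new, contiguous layout."""
    layout = {}
    # Sort by directory so related small files land together on disk.
    for d, name in sorted(files, key=lambda f: f[0]):
        nblocks = len(files[(d, name)])
        layout[(d, name)] = list(range(next_free, next_free + nblocks))
        next_free += nblocks
    return layout

# Toy usage: two fragmented files in /a and one in /b.
files = {("/a", "x"): [10, 93, 41], ("/a", "y"): [7, 55], ("/b", "z"): [88]}
print(defragment(files, next_free=200))
```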
Communication Systems and Networks | 2012
Michael Sirivianos; Kyungbaek Kim; Jian Wei Gan; Xiaowei Yang
Anonymity is one of the main virtues of the Internet, as it protects privacy and enables users to express opinions more freely. However, anonymity hinders the assessment of the veracity of assertions that online users make about their identity attributes, such as age or profession. We propose FaceTrust, a system that uses online social networks to provide lightweight identity credentials while preserving a user's anonymity. FaceTrust employs a “game with a purpose” design to elicit the opinions of a user's friends about the user's self-claimed identity attributes, and uses attack-resistant trust inference to assign veracity scores to identity attribute assertions. FaceTrust provides credentials, which a user can use to corroborate his assertions. We evaluate our proposal using a live Facebook deployment and simulations on a crawled social graph. The results show that our veracity scores strongly correlate with the ground truth, even when a large fraction of the social network users are dishonest and employ the Sybil attack.
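The aggregation of friend opinions into a veracity score can be sketched roughly as below, assuming each friend already carries a trust weight produced by some attack-resistant trust inference (the part that actually limits Sybil influence); the names and weights are illustrative, not FaceTrust's actual scoring function.

```python
def veracity_score(votes, trust):
    """Trust-weighted vote aggregation: each friend's vote on an
    identity assertion (e.g. 'age >= 21') is weighted by a trust value
    derived from trust inference over the social graph.  Returns a
    score in [0, 1]."""
    weighted = sum(trust[f] * (1.0 if v else 0.0) for f, v in votes.items())
    total = sum(trust[f] for f in votes)
    return weighted / total if total else 0.0

# Toy usage: two trusted friends confirm, one low-trust (possibly Sybil)
# account denies; the veracity score stays high.
votes = {"alice": True, "bob": True, "sybil42": False}
trust = {"alice": 0.9, "bob": 0.8, "sybil42": 0.05}
print(round(veracity_score(votes, trust), 3))  # 0.971
```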
International Conference on Networking | 2008
Kyungbaek Kim
Many P2P-based wide-area storage systems have appeared to provide scalable storage services using idle resources from many unreliable clients. To minimize data loss, quick replication is important to replace lost redundancy on other nodes in reaction to failures. The popular approach is availability-based replication, which uses individual node availability. However, under high churn, some replicas leave or fail within a similar time window; as a result, replication requires bursty data traffic and sometimes data is lost. This paper explores time-related replication, which uses information about session times to prevent such bursty failures. It tries to select a primary replica that has enough remaining time to replace lost redundancy. Moreover, replicas are spread sparsely along the timeline so that the number of replicas with overlapping lifetimes is as small as possible. Results from a simulation study suggest that time-related replication keeps data availability high with less data copy traffic.
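A rough sketch of time-related replica selection, assuming each candidate node comes with an estimate of its remaining session time; the heuristic below (longest-lived primary, departures spread along the lifetime ordering) is an illustrative reading of the abstract, not the paper's exact algorithm.

```python
def pick_time_spread_replicas(candidates, k):
    """Time-related replication sketch: `candidates` maps a node to its
    expected remaining session time.  The longest-lived node becomes
    the primary replica (it has time to restore lost redundancy), and
    the rest are picked so expected departure times are spread out,
    making it unlikely that several replicas fail within a similar
    time window."""
    ordered = sorted(candidates, key=candidates.get, reverse=True)
    if k >= len(ordered):
        return ordered
    # Take every (len/k)-th node down the lifetime ordering to spread
    # expected departures along the timeline.
    step = len(ordered) / k
    return [ordered[int(i * step)] for i in range(k)]

# Toy usage: remaining session-time estimates in minutes.
est = {"n1": 480, "n2": 460, "n3": 120, "n4": 30, "n5": 240, "n6": 90}
print(pick_time_spread_replicas(est, k=3))  # ['n1', 'n5', 'n6']
```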
Symposium on Reliable Distributed Systems | 2012
Kyungbaek Kim; Ye Zhao; Nalini Venkatasubramanian
The eventual goal of any notification system is to deliver appropriate messages to all relevant recipients with very high reliability in a timely manner. In particular, we focus on notification in extreme situations (e.g. disasters) where geographically correlated failures hinder the ability to reach recipients inside the failed region. In this paper, we present GSFord, a reliable geo-social notification system that is aware of (a) the geographies in which a message needs to be disseminated and (b) the social network characteristics of the intended recipients, in order to maximize coverage and reliability. GSFord builds robust geo-aware P2P overlays to provide efficient location-based message delivery and reliable storage of recipients' geo-social information. When an event occurs, GSFord efficiently delivers the message to recipients who are either (a) located in the event area or (b) socially correlated to the event (e.g. relatives or friends of those impacted). Furthermore, GSFord leverages the geo-social information to trigger a social diffusion process, which operates through out-of-band channels such as phone calls and human contact, in order to reach recipients who are isolated in the failed region. Through extensive evaluations, we show that GSFord is reliable: the social diffusion process reaches up to 99.9% of desired recipients even under massive geographically correlated regional failures. We also show that GSFord is efficient even under skewed distributions of user populations.
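The social diffusion step can be modeled as a simple traversal, sketched below under the assumption that the social graph and the set of isolated recipients are known; this is a toy model, not GSFord's protocol.

```python
from collections import deque

def social_diffusion(social_graph, notified, isolated):
    """Sketch of the social-diffusion step: recipients already reached
    electronically relay the message over out-of-band channels (phone
    calls, human contact) to socially connected recipients isolated in
    the failed region.  A BFS over the social graph models the relay."""
    reached = set(notified)
    queue = deque(notified)
    while queue:
        person = queue.popleft()
        for friend in social_graph.get(person, []):
            if friend in isolated and friend not in reached:
                reached.add(friend)
                queue.append(friend)
    return reached

# Toy usage: 'carol' sits inside the failed region but is a friend of 'bob'.
graph = {"alice": ["bob"], "bob": ["carol"], "carol": ["dave"], "dave": []}
print(sorted(social_diffusion(graph, notified={"alice", "bob"},
                              isolated={"carol", "dave"})))
```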
International Conference on Information Networking | 2015
Hiep Tuan Nguyen Tri; Kyungbaek Kim
Software Defined Networking (SDN) empowers network operators with more flexibility to program their networks. In SDN, dummy switches on the data plane dynamically forward packets based on rules managed by a centralized controller. To apply a rule, a switch must install it in its flow table. However, because the size of the flow table is limited, scalability can become an issue. This scalability problem also becomes a security issue related to Distributed Denial of Service (DDoS) attacks, especially the resource attack, which consumes the entire flow tables of switches. In this paper, we explore the impact of the resource attack on an SDN network. The resource attack is emulated on an SDN with Mininet and OpenDaylight, and its effect is analyzed in depth in terms of delay and bandwidth. Through the evaluation, we highlight the importance of managing flow tables with awareness of their size limitation. We also discuss solutions that can address the resource attack, and their challenges.
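The flow-table resource attack is easy to reproduce in miniature. The toy model below is an illustrative sketch, not the paper's Mininet/OpenDaylight setup: a fixed-capacity table with FIFO eviction is flooded with unique spoofed flows, and every evicted legitimate rule costs an extra controller round-trip on the next matching packet, which is the source of the added delay.

```python
import random

class FlowTable:
    """Toy model of a switch flow table with a hard size limit and
    FIFO eviction; a real switch uses timeouts and controller-installed
    rules, but the exhaustion effect is the same."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.rules = []          # installed match rules, oldest first

    def lookup(self, flow):
        if flow in self.rules:
            return True          # fast path: forwarded by the switch
        if len(self.rules) >= self.capacity:
            self.rules.pop(0)    # table full: evict the oldest rule
        self.rules.append(flow)  # installed after a controller round-trip
        return False

random.seed(0)
table = FlowTable(capacity=500)
legit = [f"legit-{i}" for i in range(100)]
misses = 0
for _ in range(5000):
    table.lookup(f"attack-{random.randrange(10**9)}")  # unique spoofed flow
    if not table.lookup(random.choice(legit)):
        misses += 1              # legit packet punted to the controller
print(f"legitimate-flow table misses: {misses} / 5000")
```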