Kyoungho An
Vanderbilt University
Publications
Featured research published by Kyoungho An.
Journal of Systems Architecture | 2014
Kyoungho An; Shashank Shekhar; Faruk Caglar; Aniruddha S. Gokhale; Shivakumar Sastry
Applications are increasingly being deployed in the cloud due to benefits stemming from economies of scale, scalability, flexibility, and utility-based pricing models. Although most cloud-based applications have hitherto been enterprise-style, there is an emerging need for hosting real-time streaming applications in the cloud that demand both high availability and low latency. Contemporary cloud computing research has seldom focused on solutions that provide both high availability and real-time assurance to these applications in a way that also optimizes resource consumption in data centers, which is a key consideration for cloud providers. This paper makes three contributions to address this dual challenge. First, it describes an architecture for a fault-tolerant framework that can be used to automatically deploy replicas of virtual machines in data centers in a way that optimizes resources while assuring availability and responsiveness. Second, it describes the design of a pluggable framework within the fault-tolerant architecture that enables plugging in different placement algorithms for VM replica deployment. Third, it illustrates the design of a framework for real-time dissemination of resource utilization information using a real-time publish/subscribe framework, which is required by the replica selection and placement framework. Experimental results from a case study involving a specific replica placement algorithm are presented to evaluate the effectiveness of our architecture.
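To make the idea of a pluggable placement framework concrete, the following is a minimal sketch of how different placement algorithms could be swapped behind one interface. The names (`PlacementStrategy`, `first_fit`, the host/replica fields) are assumptions for illustration, not the paper's actual design.

```python
# Minimal sketch of a pluggable VM-replica placement framework (illustrative only;
# names such as PlacementStrategy and first_fit are assumptions, not the paper's API).
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Host:
    name: str
    free_cpu: float   # available vCPUs
    free_mem: float   # available memory (GB)

@dataclass
class Replica:
    vm_id: str
    cpu: float
    mem: float

# A placement strategy maps a replica and candidate hosts to a chosen host (or None).
PlacementStrategy = Callable[[Replica, list], Optional[Host]]

def first_fit(replica: Replica, hosts: list) -> Optional[Host]:
    """Pick the first host with enough spare capacity."""
    for host in hosts:
        if host.free_cpu >= replica.cpu and host.free_mem >= replica.mem:
            return host
    return None

def place_replicas(replicas, hosts, strategy: PlacementStrategy):
    """Place each replica using the plugged-in strategy, updating host capacity."""
    plan = {}
    for rep in replicas:
        host = strategy(rep, hosts)
        if host is None:
            raise RuntimeError(f"no capacity for {rep.vm_id}")
        host.free_cpu -= rep.cpu
        host.free_mem -= rep.mem
        plan[rep.vm_id] = host.name
    return plan

if __name__ == "__main__":
    hosts = [Host("h1", 8, 32), Host("h2", 16, 64)]
    replicas = [Replica("vm-a", 4, 16), Replica("vm-b", 8, 32)]
    print(place_replicas(replicas, hosts, first_fit))  # {'vm-a': 'h1', 'vm-b': 'h2'}
```

Any smarter placement algorithm (e.g., one that also accounts for availability zones) could be plugged in by passing a different function with the same signature.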
Proceedings of the Workshop on Secure and Dependable Middleware for Cloud Monitoring and Management | 2012
Kyoungho An; Subhav Pradhan; Faruk Caglar; Aniruddha S. Gokhale
Providing scalable and QoS-enabled (i.e., real-time and reliable) monitoring of resources (both virtual and physical) in the cloud is essential to supporting application QoS properties in the cloud as well as identifying security threats. Existing approaches to resource monitoring in the cloud are based on web interfaces, such as RESTful APIs and SOAP, which cannot provide real-time information efficiently and scalably because they lack support for fine-grained and differentiated monitoring capabilities. Moreover, their implementation overhead results in a distinct loss in performance, incurs latency jitter, and degrades reliable delivery of time-sensitive information. To address these challenges, this paper presents a novel, lightweight, and scalable resource monitoring and dissemination solution based on the publish/subscribe (pub/sub) paradigm. Our solution, called SQRT-C, leverages the OMG Data Distribution Service (DDS) real-time pub/sub middleware and uses effective software engineering principles to make it usable with multiple cloud platforms. Preliminary empirical results comparing SQRT-C with contemporary web-based resource usage monitoring services reveal that SQRT-C is significantly better than the conventional approaches in terms of latency, jitter, and scalability.
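The sketch below illustrates only the underlying pub/sub monitoring idea: nodes publish usage samples on topics, and consumers subscribe to just the topics they need. It is a toy in-process bus with simulated metrics, assumed names (`Bus`, `monitor_node`), and no DDS; SQRT-C itself is built on OMG DDS middleware.

```python
# Toy topic-based publish/subscribe dissemination of resource-usage samples
# (conceptual sketch only; not SQRT-C's DDS-based implementation).
import random
import time
from collections import defaultdict

class Bus:
    """Minimal in-process topic bus: subscribers register callbacks per topic."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, sample):
        for cb in self._subs[topic]:
            cb(sample)

def monitor_node(bus, node, rounds=3):
    """Publish (simulated) CPU/memory utilization samples for one node."""
    for _ in range(rounds):
        sample = {
            "node": node,
            "cpu": round(random.uniform(0, 100), 1),   # simulated CPU %
            "mem": round(random.uniform(0, 100), 1),   # simulated memory %
            "ts": time.time(),
        }
        bus.publish(f"usage/{node}", sample)

if __name__ == "__main__":
    bus = Bus()
    # A consumer (e.g., a placement service) subscribes only to nodes it cares about.
    bus.subscribe("usage/node-1", lambda s: print("node-1 sample:", s))
    monitor_node(bus, "node-1")
    monitor_node(bus, "node-2")   # no subscriber; these samples are simply dropped
```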
international conference on cyber-physical systems | 2011
Kyoungho An; Adam Trewyn; Aniruddha S. Gokhale; Shivakumar Sastry
Reconfigurable conveyors are increasingly being adopted in multiple industrial sectors for their immense flexibility in adapting to new products and product lines. Before modifying the layout of the conveyor system for a new product line, however, engineers and layout planners must be able to answer many questions about the system, such as the maximum sustainable rate of flow of goods, prioritization among goods, and tolerance to failures. Any analysis capability that provides answers to these questions must account for both the physical and cyber artifacts of the reconfigurable system at once. Moreover, the same system should enable stakeholders to seamlessly change layouts and analyze the pros and cons of each layout. This paper addresses these challenges by presenting a model-driven analysis tool that provides three important capabilities. First, a domain-specific modeling language provides stakeholders with intuitive artifacts to model conveyor layouts. Second, an analysis engine embedded within the model-driven tool provides an accurate simulation of the modeled conveyor system, accounting for both physical and cyber issues. Third, generative capabilities within the tool help to automate the analysis process. The merits of our model-driven analysis tool are evaluated in the context of an example conveyor topology.
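As a toy illustration of one of the layout questions above (maximum sustainable flow), the sketch below models a conveyor route as a chain of segments with maximum rates and reports the bottleneck. The layout data and function name are hypothetical; this is a stand-in for the paper's model-driven simulation engine, not the engine itself.

```python
# Illustrative bottleneck analysis of a conveyor route (toy example; not the
# paper's analysis engine). Layout: segment -> (downstream segment, max items/min).
LAYOUT = {
    "infeed":  ("merge", 60),
    "merge":   ("sorter", 45),
    "sorter":  ("outfeed", 80),
    "outfeed": (None, 70),
}

def sustainable_rate(layout, start):
    """Walk downstream from `start` and return the minimum segment rate."""
    rate = float("inf")
    segment = start
    while segment is not None:
        downstream, max_rate = layout[segment]
        rate = min(rate, max_rate)
        segment = downstream
    return rate

if __name__ == "__main__":
    # The merge segment (45 items/min) limits the whole line.
    print(sustainable_rate(LAYOUT, "infeed"))  # 45
```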
distributed event-based systems | 2015
Shweta Khare; Kyoungho An; Aniruddha S. Gokhale; Sumant Tambe; Ashish Meena
The Internet of Things (IoT) paradigm has given rise to a new class of applications wherein complex data analytics must be performed in real-time on large volumes of fast-moving and heterogeneous sensor-generated data. Such data streams are often unbounded and must be processed in a distributed and parallel manner to ensure timely processing and delivery to interested subscribers. Dataflow architectures based on event-based design have served well in such applications because events support asynchrony and loose coupling, and help build resilient, responsive, and scalable applications. However, a unified programming model for event processing and distribution that can naturally compose the processing stages in a dataflow while exploiting the inherent parallelism available in the environment and computation is still lacking. To that end, we investigate the benefits of blending Reactive Programming with data distribution frameworks for building distributed, reactive, and high-performance stream-processing applications. Specifically, we present insights from our study integrating and evaluating Microsoft .NET Reactive Extensions (Rx) with the OMG Data Distribution Service (DDS), a standards-based publish/subscribe middleware suitable for demanding industrial IoT applications. Several key insights from both qualitative and quantitative evaluations of our approach are presented.
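The flavor of composing processing stages over an event stream can be sketched as below. The actual work integrates Microsoft .NET Rx with DDS; this Python generator pipeline is only an analogy, and the source, stage, and threshold names are assumptions.

```python
# Conceptual sketch of composing stream-processing stages over an event stream,
# in the spirit of Rx operator chaining (analogy only; the paper uses .NET Rx + DDS).
from typing import Iterable, Iterator

def sensor_stream() -> Iterator[dict]:
    """Hypothetical source of sensor events (finite here for demonstration)."""
    readings = [12.0, 48.5, 51.2, 7.3, 63.8]
    for i, value in enumerate(readings):
        yield {"sensor": "temp-1", "seq": i, "value": value}

def filter_over(events: Iterable[dict], threshold: float) -> Iterator[dict]:
    """Stage 1: keep only readings above the threshold."""
    return (e for e in events if e["value"] > threshold)

def to_alerts(events: Iterable[dict]) -> Iterator[str]:
    """Stage 2: transform filtered readings into alert messages."""
    return (f"ALERT {e['sensor']} seq={e['seq']} value={e['value']}" for e in events)

if __name__ == "__main__":
    # Stages compose declaratively, like an Rx operator chain; in the real system
    # each stage could be distributed across DDS topics and processed in parallel.
    pipeline = to_alerts(filter_over(sensor_stream(), threshold=50.0))
    for alert in pipeline:
        print(alert)
```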
distributed event-based systems | 2014
Kyoungho An; Aniruddha S. Gokhale; Douglas C. Schmidt; Sumant Tambe; Paul Pazandak; Gerardo Pardo-Castellote
The OMG Data Distribution Service (DDS) has been deployed in many mission-critical systems and increasingly in Internet of Things (IoT) applications since it supports a loosely coupled, data-centric publish/subscribe paradigm with a rich set of quality-of-service (QoS) policies. Effective data communication between publishers and subscribers requires dynamic and reliable discovery of publisher/subscriber endpoints in the system, which DDS currently supports via a standardized approach called the Simple Discovery Protocol (SDP). For large-scale systems, however, SDP scales poorly since the discovery completion time grows as the number of applications and endpoints increases. To scale to much larger systems, a more efficient discovery protocol is required. This paper makes three contributions to overcoming the current limitations of DDS SDP. First, it describes the Content-based Filtering Discovery Protocol (CFDP), our new endpoint discovery mechanism that employs content-based filtering to conserve the computing, memory, and network resources used in the DDS discovery process. Second, it describes the design of a CFDP prototype implemented in a popular DDS implementation. Third, it analyzes the results of empirical studies conducted in a testbed we developed to evaluate the performance and resource usage of our CFDP approach compared with SDP.
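The contrast with flooding-style discovery can be illustrated with a toy content filter: a participant only retains endpoint announcements for topics it actually uses. The class and field names below are illustrative assumptions, not the CFDP implementation.

```python
# Toy illustration of content-based filtering during endpoint discovery
# (illustrative only; not the CFDP prototype described in the paper).
from dataclasses import dataclass, field

@dataclass
class EndpointAnnouncement:
    participant: str
    topic: str
    role: str  # "pub" or "sub"

@dataclass
class Participant:
    name: str
    interested_topics: set
    discovered: list = field(default_factory=list)

    def on_announcement(self, ann: EndpointAnnouncement):
        # Content filter: only keep endpoints whose topic this participant uses,
        # instead of storing every endpoint in the system as flooding would.
        if ann.topic in self.interested_topics:
            self.discovered.append(ann)

if __name__ == "__main__":
    p = Participant("controller", interested_topics={"telemetry"})
    announcements = [
        EndpointAnnouncement("sensor-1", "telemetry", "pub"),
        EndpointAnnouncement("logger-9", "audit-log", "pub"),   # filtered out
        EndpointAnnouncement("hmi-2", "telemetry", "sub"),
    ]
    for ann in announcements:
        p.on_announcement(ann)
    print([a.participant for a in p.discovered])  # ['sensor-1', 'hmi-2']
```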
acm conference on systems, programming, languages and applications: software for humanity | 2013
Faruk Caglar; Kyoungho An; Shashank Shekhar; Aniruddha S. Gokhale
There is a growing trend towards migrating applications and services to the cloud. This trend has led to the emergence of different cloud service providers (CSPs), which in turn has led to different cost models offered by these CSPs to lease their resources, variability in the granularity and specification of the resources provided, and heterogeneous APIs offered by the CSPs for users to program resource requests and deployment for their cloud-hosted services. These challenges make it hard for cloud customers to seamlessly transition their services to the cloud or migrate between different CSPs. To address these challenges, this paper presents a solution based on model-driven engineering (MDE). Specifically, we describe the design of the domain-specific modeling languages in our MDE framework and the associated generative mechanisms that address the challenges of estimating the performance and cost of hosting services in the cloud, automated deployment, and resource management.
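A minimal sketch of the cost-estimation step is shown below: a simple declarative service model is evaluated against per-provider price tables. The model fields, provider names, and prices are all made up for illustration; the paper's modeling languages and generators are not shown.

```python
# Hypothetical cost estimation from a declarative service model (illustration of
# the idea only; provider names and prices are invented, not real CSP pricing).
SERVICE_MODEL = {
    "instances": 4,
    "vcpus_per_instance": 2,
    "memory_gb_per_instance": 8,
    "hours_per_month": 730,
}

# Illustrative per-hour prices keyed by (vCPUs, memory GB).
PRICE_TABLES = {
    "provider-a": {(2, 8): 0.085, (4, 16): 0.17},
    "provider-b": {(2, 8): 0.092, (4, 16): 0.16},
}

def monthly_cost(model, price_table):
    """Look up the instance price and scale by instance count and hours."""
    key = (model["vcpus_per_instance"], model["memory_gb_per_instance"])
    hourly = price_table[key]
    return model["instances"] * hourly * model["hours_per_month"]

if __name__ == "__main__":
    for provider, table in PRICE_TABLES.items():
        print(provider, round(monthly_cost(SERVICE_MODEL, table), 2))
```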
international middleware conference | 2015
Kyoungho An; Aniruddha S. Gokhale; Sumant Tambe; Takayuki Kuroda
Distributed systems found in application domains such as smart transportation and smart grids inherently require dissemination of large amounts of data over wide area networks (WANs). A large portion of this data is analyzed and used to manage the overall health and safety of these distributed systems. The data-centric publish/subscribe (pub/sub) paradigm is an attractive choice to address these needs because it provides scalable and loosely coupled data communications. However, existing data-centric pub/sub mechanisms supporting quality of service (QoS) tend to operate effectively only within local area networks. Likewise, broker-based solutions that operate at WAN scale seldom provide mechanisms to coordinate among themselves for discovery and dissemination of information, and cannot handle either the heterogeneity of pub/sub endpoints or the significant churn in endpoints that is common in WAN-scale systems. To address these limitations, this paper presents PubSubCoord, a cloud-based coordination and discovery service for WAN-scale pub/sub systems. PubSubCoord, which builds upon the ZooKeeper coordination primitives, realizes a WAN-scale, adaptive, and low-latency endpoint discovery and data dissemination architecture by (a) balancing load using elastic cloud resources, (b) clustering brokers by topic for affinity, and (c) minimizing the number of data delivery hops in the pub/sub overlay.
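The topic-affinity idea in (b) can be sketched as a small coordinator that assigns each new topic to the least-loaded broker and keeps subsequent endpoints for that topic on the same broker. This is a toy in-memory illustration with assumed names; PubSubCoord itself coordinates brokers through ZooKeeper.

```python
# Toy sketch of clustering topics onto brokers for affinity, with least-loaded
# assignment (illustration only; PubSubCoord uses ZooKeeper, not this table).
class TopicCoordinator:
    def __init__(self, brokers):
        # broker name -> set of topics currently served by that broker
        self.assignments = {b: set() for b in brokers}

    def broker_for(self, topic):
        """Return the broker already serving `topic`, or assign the least-loaded one."""
        for broker, topics in self.assignments.items():
            if topic in topics:
                return broker
        broker = min(self.assignments, key=lambda b: len(self.assignments[b]))
        self.assignments[broker].add(topic)
        return broker

if __name__ == "__main__":
    coord = TopicCoordinator(["broker-1", "broker-2"])
    print(coord.broker_for("vehicle/position"))   # broker-1 (least loaded)
    print(coord.broker_for("grid/voltage"))       # broker-2
    print(coord.broker_for("vehicle/position"))   # broker-1 again (affinity)
```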
distributed event-based systems | 2017
Kyoungho An; Shweta Khare; Aniruddha S. Gokhale; Akram Hakiri
Industrial Internet of Things (IIoT) applications are mission-critical and require a scalable data sharing and dissemination platform that supports quality-of-service (QoS) properties such as timeliness, resilience, and security. Although the Object Management Group's (OMG) Data Distribution Service (DDS), a data-centric, peer-to-peer publish/subscribe standard supporting multiple QoS properties, is well-suited to meet the requirements of IIoT applications, its design and current technology limitations constrain its use to local area networks only. Moreover, although broker-based bridging services exist to inter-connect isolated DDS networks, these solutions lack the autonomous and dynamic coordination and discovery capabilities needed to bridge multiple, isolated networks on demand. To address these limitations, and to enable a practical and readily deployable solution for IIoT, this paper presents and empirically validates PubSubCoord, an autonomous coordination and discovery service for DDS endpoints operating over wide area networks.
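The on-demand bridging idea can be sketched as a coordinator that creates a per-topic bridge only when one network has a subscriber for a topic that another network publishes. The class and network names below are hypothetical; the actual system drives DDS Routing Service instances rather than this in-memory registry.

```python
# Conceptual sketch of on-demand bridging between isolated pub/sub networks
# (illustration only; not the DDS Routing Service configuration PubSubCoord drives).
from collections import defaultdict

class BridgeCoordinator:
    def __init__(self):
        self.publishers = defaultdict(set)   # topic -> networks with publishers
        self.subscribers = defaultdict(set)  # topic -> networks with subscribers
        self.bridges = set()                 # (topic, pub_network, sub_network)

    def register(self, topic, network, role):
        """Record a pub or sub for `topic` in `network`, then update bridges."""
        table = self.publishers if role == "pub" else self.subscribers
        table[topic].add(network)
        self._update_bridges(topic)

    def _update_bridges(self, topic):
        # Create a bridge whenever a topic's publishers and subscribers live
        # in different networks.
        for pub_net in self.publishers[topic]:
            for sub_net in self.subscribers[topic]:
                if pub_net != sub_net:
                    self.bridges.add((topic, pub_net, sub_net))

if __name__ == "__main__":
    coord = BridgeCoordinator()
    coord.register("turbine/vibration", "plant-A", "pub")
    coord.register("turbine/vibration", "control-center", "sub")
    print(coord.bridges)  # {('turbine/vibration', 'plant-A', 'control-center')}
```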
distributed event-based systems | 2014
Kyoungho An; Aniruddha S. Gokhale
The OMG Data Distribution Service (DDS), a standard specification for data-centric publish/subscribe communications, has shown promise for use in Internet of Things (IoT) applications because of its loosely coupled and scalable nature and its support for multiple QoS properties, such as reliable and real-time message delivery in dynamic environments. However, the current OMG DDS specification does not define coordination and discovery services for DDS message brokers, which are used in wide area network deployments of DDS. This paper describes preliminary research on a cloud-enabled coordination service for DDS message brokers, PubSubCoord, to overcome these limitations. Our approach provides a novel solution that brings together (a) ZooKeeper, which is used for the distributed coordination logic between message brokers, (b) the DDS Routing Service, which is used to bridge DDS endpoints connected to different networks, and (c) BlueDove, which is used to provide single-hop message delivery between brokers. Our design can support publishers and subscribers that dynamically join and leave their subnetworks.
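The single-hop delivery idea in (c) can be sketched as a shared topic-to-broker index that lets a message be forwarded directly to the one broker responsible for its topic, rather than traversing a multi-hop overlay. The index contents and function name are made-up illustrations, not the BlueDove-based mechanism itself.

```python
# Toy sketch of single-hop delivery between brokers via a shared topic index
# (illustration only; the paper relies on BlueDove-style techniques over DDS brokers).
TOPIC_INDEX = {
    "factory/alerts": "broker-east",
    "factory/metrics": "broker-west",
}

DELIVERED = {"broker-east": [], "broker-west": []}

def forward(topic, message):
    """Deliver a message in a single hop using the shared topic index."""
    responsible_broker = TOPIC_INDEX[topic]
    DELIVERED[responsible_broker].append((topic, message))
    return responsible_broker

if __name__ == "__main__":
    print(forward("factory/alerts", "overheat on line 3"))   # broker-east
    print(forward("factory/metrics", "throughput=120/min"))  # broker-west
```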
acm conference on systems, programming, languages and applications: software for humanity | 2013
Kyoungho An; Takayuki Kuroda; Aniruddha S. Gokhale; Sumant Tambe; Andrea Sorbini
The Object Management Group's (OMG) Data Distribution Service (DDS) provides many configurable policies that determine the end-to-end quality of service (QoS) delivered to applications. It is challenging, however, to predict an application's performance in terms of latency, throughput, and resource usage because diverse combinations of QoS configurations influence application QoS in different ways. To overcome this problem, design-time formal methods have been applied with mixed success, but a lack of sufficient prediction accuracy, tool support, and understanding of the formalisms has prevented wider adoption of these formal techniques. A promising approach to address this challenge is to emulate application behavior and gather data on the QoS parameters of interest through experimentation. To realize this approach, we have developed a middleware framework that uses model-driven generative mechanisms to automate performance testing of a large number of DDS QoS configuration combinations that can be deployed and tested on a cloud platform.
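The core sweep over QoS configuration combinations can be sketched as below: enumerate the Cartesian product of candidate settings and run a benchmark for each. The QoS names, value ranges, and the fake benchmark are placeholders, not the framework's generated artifacts or a real DDS API.

```python
# Minimal sketch of sweeping QoS configuration combinations and benchmarking each
# (placeholders only; not the paper's generated test artifacts or a DDS API).
import itertools
import random

QOS_SPACE = {
    "reliability": ["BEST_EFFORT", "RELIABLE"],
    "history_depth": [1, 10, 100],
    "deadline_ms": [10, 100],
}

def fake_benchmark(config):
    """Stand-in for deploying the emulated application and measuring latency."""
    random.seed(str(sorted(config.items())))       # deterministic per configuration
    return round(random.uniform(0.5, 5.0), 3)      # pretend latency in ms

if __name__ == "__main__":
    keys = list(QOS_SPACE)
    # 2 x 3 x 2 = 12 configurations; a real sweep would deploy each to the cloud.
    for values in itertools.product(*(QOS_SPACE[k] for k in keys)):
        config = dict(zip(keys, values))
        print(config, "->", fake_benchmark(config), "ms")
```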