
Publication


Featured research published by Jinho Hwang.


Proceedings of the Middleware Industry Track | 2014

Improving readiness for enterprise migration to the cloud

Jill Jermyn; Jinho Hwang; Kun Bai; Maja Vukovic; Nikos Anerousis; Salvatore J. Stolfo

Enterprises are increasingly moving their IT infrastructures to the Cloud, driven by the promise of low-cost access to ready-to-use, elastic resources. Given the heterogeneous and dynamic nature of enterprise IT environments, rapid and accurate discovery of complex infrastructure dependencies at the application, middleware, and network levels is key to a successful migration to the Cloud. Existing migration approaches typically replicate source resources and configurations on the target site, making it challenging to optimize resource usage (for reduced cost with the same or better performance) or achieve a cloud-fit configuration (no misconfiguration) after migration. The responsibility of reconfiguring the target environment after migration is often left to the users, who, as a result, fail to reap the benefits of reduced cost and improved performance in the Cloud. In this paper we propose a method that automatically computes optimized target resources and identifies configurations from discovered source properties and machine dependencies, while prioritizing performance in the target environment. Our analysis shows that service costs could be reduced by 60.1%, and we found four types of misconfigurations in real enterprise datasets, affecting up to 81.8% of a data center's servers.


Integrated Network Management | 2015

Enterprise-scale cloud migration orchestrator

Jinho Hwang; Yun-Wu Huang; Maja Vukovic; Nikos Anerousis

With the promise of low-cost access to flexible and elastic resources, enterprises are increasingly migrating their existing workloads into the Cloud. Yet, the heterogeneity of the workloads and the existing configuration of legacy IT infrastructure make it challenging to enable a one-click, seamless migration process. There are multiple tools available for migrating servers based on their existing configurations, and multiple ways of dealing with data synchronization (post migration). In this paper, we present a Cloud Migration Orchestrator (CMO), based on a business process management (BPM) approach, to provide a systematic framework that automates and coordinates migration activities. CMO coordinates the migration process from discovery, provisioning, and network configuration through execution of the migration, cutover, and validation, and integrates multiple migration technologies to support different migration scenarios. We present and discuss results from a preliminary deployment of CMO to migrate 25 VMware instances, and discuss how this approach improves the effectiveness of migration and seamlessly coordinates the activities that must be executed.


Conference on Network and Service Management | 2014

Software defined enterprise passive optical network

Ahmed Amokrane; Jinho Hwang; Jin Xiao; Nikos Anerousis

In the last few years, changing infrastructure and business requirements have been forcing enterprises to rethink their networks. Enterprises look to passive optical networks (PON) for increased network efficiency, flexibility, and cost reduction. At the same time, the emergence of Cloud and mobile in enterprise networks calls for dynamic network control and management following a centralized, software-defined paradigm. In this context, we propose a software-defined edge network (SDEN) design that operates on top of PON. SDEN leverages the benefits of PON while overcoming its lack of dynamic control. This paper is a work in progress focusing on enabling key flow control functions over PON: dynamic traffic steering, service dimensioning, and real-time re-dimensioning. We also discuss how the SDEN edge network can integrate with core SDN solutions to achieve end-to-end manageability. Through case studies conducted on a live PON testbed deployment, we show the practical benefits and potential that SDEN can offer to enterprise network redesign.


Integrated Network Management | 2015

Dynamic capacity management and traffic steering in enterprise passive optical networks

Ahmed Amokrane; Jin Xiao; Jinho Hwang; Nikos Anerousis

In the last few years, changing infrastructure and business requirements have been forcing enterprises to rethink their networks. Enterprises look for network infrastructures that increase efficiency and flexibility and reduce cost. At the same time, the emergence of Cloud and mobile in enterprise networks has introduced tremendous variability in enterprise traffic patterns at the edge. This highly mobile and dynamic traffic creates a need for dynamic capacity management and adaptive traffic steering, and calls for new infrastructures and management solutions. In this context, passive optical networks (PON) have gained attention in the last few years as a promising solution for enterprise networks, as they can offer efficiency, security, and cost reduction. However, network management in PON is not yet automated and needs human intervention. As such, capabilities for dynamic and adaptive PON are necessary. In this paper, we present a joint solution for PON capacity management both in deployment and in operation, so as to maximize peak load tolerance by dynamically allocating capacity to fit varying and migratory traffic loads. To this end, we developed the novel approaches of capacity-pool-based deployment and dynamic traffic steering in PON. Compared with traditional edge network design, our approach significantly reduces the need for capacity over-provisioning. Compared with generic PONs, our approach enables dynamic traffic steering through software-defined control. We implemented our design on a production-grade PON testbed, and the results demonstrate the feasibility and flexibility of our approach.


International Conference on Service Oriented Computing | 2016

FitScale: Scalability of Legacy Applications Through Migration to Cloud

Jinho Hwang; Maja Vukovic; Nikos Anerousis

One of the key benefits of Cloud computing is elasticity, the ability of the system infrastructure to adapt to workload changes by automatically adjusting resources on demand. Horizontal scaling refers to adding or removing resources to or from the resource pool. As such, it is appealing to enterprises seeking to migrate their legacy systems, because it requires no application rewrite or refactoring. Vertical scaling, in contrast, offers a mechanism to maintain continuous performance while reducing resource cost through reconfiguration of the resource. The challenge, however, is in automatically identifying the right size of the target resource, such as a VM or a container. Moreover, the choice of scalability policies is not intuitive due to application complexity, topology, and variability in the system performance parameters that need to be considered.
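The right-sizing problem the abstract raises can be illustrated with a minimal sketch. This is not the FitScale algorithm itself; the flavor catalog, its values, and the headroom factor are hypothetical. The idea: pick the cheapest target size whose capacity covers observed peak demand plus headroom.

```python
# Minimal right-sizing sketch (hypothetical VM catalog, not FitScale itself):
# choose the cheapest flavor whose capacity covers observed peak demand
# multiplied by a headroom factor.

CATALOG = [  # (name, vCPUs, memory_GiB, hourly_cost) -- illustrative values
    ("small",  2,  4, 0.05),
    ("medium", 4,  8, 0.10),
    ("large",  8, 16, 0.20),
    ("xlarge", 16, 32, 0.40),
]

def right_size(peak_vcpus, peak_mem_gib, headroom=1.2):
    """Return the cheapest flavor covering peak demand * headroom."""
    need_cpu = peak_vcpus * headroom
    need_mem = peak_mem_gib * headroom
    feasible = [f for f in CATALOG if f[1] >= need_cpu and f[2] >= need_mem]
    if not feasible:
        raise ValueError("no flavor large enough; consider horizontal scaling")
    return min(feasible, key=lambda f: f[3])

print(right_size(3, 6))  # peak of 3 vCPUs / 6 GiB -> needs 3.6 / 7.2 with headroom
```

In practice the hard part is what the abstract points out: obtaining trustworthy peak figures and deciding the headroom, given application complexity and variable performance parameters.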


Conference on Network and Service Management | 2015

Computing resource transformation, consolidation and decomposition in hybrid clouds

Jinho Hwang

With the promise of providing flexible and elastic computing resources on demand, cloud computing has been attracting enterprises and individuals to migrate workloads from legacy environments to public/private/hybrid clouds. Cloud customers also want to migrate between cloud providers with different requirements such as cost, performance, and manageability. However, workload migration is often interpreted as an image migration, or as re-installation and data copying to produce an exact snapshot of the source machine. Moreover, the various cloud platforms and service models are rarely taken into consideration during migration analytics. Therefore, although expectations have risen with various requirements on the target cloud platforms and environments, cloud migration techniques have not provided enough options to satisfy them. In this paper we propose a model to tackle these migration challenges by transforming one resource into the same or another resource in hybrid clouds. We formulate the problem as a constraint satisfaction problem, and iteratively decompose server components and consolidate servers. The ultimate goal is to recommend the optimal target cloud platform and environment with the minimum cost. Through an evaluation of the proposed model on a real enterprise dataset (up to 2012 machines), we show that the model satisfies this goal: when migrating into virtualized cloud environments, thorough resource planning can reduce current resources by 16%, about 5%-10% of servers can be consolidated, and more than 60% of servers are possible candidates for server decomposition.
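The consolidation step described above can be illustrated with a first-fit-decreasing bin-packing sketch. This is a common consolidation heuristic, not the paper's constraint-satisfaction formulation; the normalized demand values are invented for illustration.

```python
def consolidate(workloads, host_capacity):
    """First-fit-decreasing packing of workload demands onto hosts.
    workloads: list of resource demands (e.g., normalized CPU units).
    Returns a list of hosts, each a list of the demands placed on it."""
    hosts = []  # each entry: [remaining_capacity, [demands...]]
    for demand in sorted(workloads, reverse=True):
        for host in hosts:
            if host[0] >= demand:       # fits on an existing host
                host[0] -= demand
                host[1].append(demand)
                break
        else:                           # no host fits: open a new one
            hosts.append([host_capacity - demand, [demand]])
    return [h[1] for h in hosts]

# Six workloads with total demand 2.2 pack onto three unit-capacity hosts:
placements = consolidate([0.5, 0.3, 0.2, 0.7, 0.1, 0.4], host_capacity=1.0)
print(len(placements))  # number of hosts needed
```

A real consolidation model adds the constraints the abstract mentions (platform compatibility, cost, service model) rather than a single capacity dimension.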


Network Operations and Management Symposium | 2016

Cloud migration using automated planning

Maja Vukovic; Jinho Hwang

Cloud migration transforms a company's data, applications and services to (or between) one or more Cloud environments. Enterprises are increasingly migrating their IT infrastructures to the Cloud, given the appeal of (pay-per-use) elastic resources. Yet, existing IT infrastructures are complex, heterogeneous and dynamic ecosystems. As a result, there is no single standardized process to seamlessly manage migration at enterprise scale, and a significant level of manual intervention is often required, both in reasoning about migration and during its execution. This paper presents a system that automates the process of migration to the Cloud. It embeds the Metric-FF Artificial Intelligence (AI) planning algorithm to dynamically assemble migration plans based on the properties of the source and target environments, as well as the available migration tooling. The paper describes the challenges in migration planning and the AI domain design for migration. This work demonstrates that the system provides an effective and scalable solution for generating plans based on a source environment of 700 servers and varying sizes of migration service requests.
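The idea of assembling a migration plan from preconditions and effects can be sketched with a toy forward-search planner. The action names and state facts below are hypothetical; the actual system encodes a much richer numeric domain and solves it with Metric-FF, not this breadth-first search.

```python
from collections import deque

# Toy forward-search planner: each action has preconditions and effects
# expressed as sets of facts. Hypothetical migration actions for illustration.
ACTIONS = {
    "discover":          ({"source_exists"}, {"discovered"}),
    "provision":         ({"discovered"}, {"target_ready"}),
    "configure_network": ({"target_ready"}, {"network_ready"}),
    "migrate":           ({"network_ready"}, {"migrated"}),
    "validate":          ({"migrated"}, {"validated"}),
}

def plan(initial, goal):
    """Breadth-first search over action applications; returns an action list."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                      # all goal facts achieved
            return steps
        for name, (pre, eff) in ACTIONS.items():
            if pre <= state:                   # preconditions satisfied
                nxt = frozenset(state | eff)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                                # goal unreachable

print(plan({"source_exists"}, {"validated"}))
```

Because each source environment and tooling inventory yields a different set of applicable actions, the planner reassembles the step sequence per request rather than relying on one fixed workflow, which is the property the paper exploits at scale.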


International Conference on Service Oriented Computing | 2016

BlueSight: Automated Discovery Service for Cloud Migration of Enterprises

Dannver Wu; Jinho Hwang; Maja Vukovic; Nikos Anerousis

Migrating legacy enterprise infrastructures to the cloud is highly desirable due to greater versatility, lower management costs, and improved scalability. However, the large scale of these systems makes transforming the current architecture a long and difficult process that involves weeks or even months of manual collection and analysis of data. BlueSight serves to expedite and simplify this process by collecting data through an agentless process and analyzing it to determine which applications should migrate and how.


Immunotechnology | 2017

BlueShift: Automated application transformation to Cloud Native architectures

Maja Vukovic; Jinho Hwang; John J. Rofrano; Nikos Anerousis

▪ Presented BlueShift: a self-service for orchestrating automated tasks (via APIs) and human tasks for the end-to-end transformation process, including application discovery, analysis, artifact transformation, and enablement of cloud value-add services.
▪ Demonstrated the transformation process for the PlantsByWebSphere application, with the Liberty Profile runtime as a target in BlueMix (a Cloud Foundry based platform).
▪ Discussed challenges in transformation arising from application complexity and non-functional requirements.


Immunotechnology | 2017

Task assignment optimization in geographically distributed data centers

Bowu Zhang; Jinho Hwang

Recent advances in geo-distributed systems have made distributed data processing possible, where tasks are decomposed into subtasks, deployed into multiple data centers, and run in parallel. Compared to conventional approaches that process every task in a single data center, resulting in high latency and large data aggregation, geo-distributed cloud systems provide a highly available and more economical platform. However, distributed application (task) execution introduces extra cost and latency, as data need to be exchanged between data centers. In addition, task dependency and diverse task constraints make it even more challenging to choose an appropriate task assignment strategy. In this paper, we discuss a task assignment problem in geographically distributed cloud systems. In light of growing demand for big data processing and storage, we consider data-intensive tasks, where a task often requires significant computing resources and its input data is typically located in multiple data centers. By taking the distributed input, task dependency, heterogeneous pricing schemes, and resource constraints into account, we aim to optimize performance when deploying tasks in geographically distributed data centers. A heuristic algorithm is presented to provide an approximate solution to the proposed NP-hard problem. We perform an extensive simulation study to evaluate the performance of our solution under various settings. The simulation results demonstrate that our approach outperforms state-of-the-art strategies and achieves significant reductions in cost and latency.
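The trade-off the abstract describes, between compute pricing and the cost of moving input data between data centers, can be sketched with a simple greedy placement. This is an illustrative heuristic with invented region names and prices, not the paper's algorithm, and it ignores task dependencies and capacity constraints for brevity.

```python
# Greedy assignment sketch (illustrative; not the paper's heuristic):
# place each subtask in the data center minimizing its compute cost plus
# the cost of moving its input data from where it currently resides.

COMPUTE_COST = {"us": 1.0, "eu": 1.2, "ap": 0.8}  # per-task prices, hypothetical
TRANSFER_COST = 0.05                              # per GB moved between regions

def assign(tasks):
    """tasks: list of (task_id, {region: input_GB_stored_there}).
    Returns {task_id: chosen_region}."""
    placement = {}
    for task_id, inputs in tasks:
        def total_cost(dc):
            # Data already in dc moves for free; the rest pays transfer cost.
            moved = sum(gb for region, gb in inputs.items() if region != dc)
            return COMPUTE_COST[dc] + TRANSFER_COST * moved
        placement[task_id] = min(COMPUTE_COST, key=total_cost)
    return placement

# t1's input is mostly in "us", so data gravity wins over ap's cheaper compute;
# t2's input sits entirely in "ap", which is also the cheapest region.
print(assign([("t1", {"us": 100, "eu": 10}),
              ("t2", {"ap": 50})]))
```

Adding the dependencies and resource constraints from the paper turns this per-task greedy choice into the NP-hard joint problem the heuristic approximates.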
