Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Chui-Hui Chiu is active.

Publication


Featured research published by Chui-Hui Chiu.


Proceedings of the 2012 workshop on Cloud services, federation, and the 8th open cirrus summit | 2012

Network-aware scheduling of MapReduce framework on distributed clusters over high speed networks

Praveenkumar Kondikoppa; Chui-Hui Chiu; Cheng Cui; Lin Xue; Seung-Jong Park

Google's MapReduce has gained significant popularity as a platform for large-scale distributed data processing. Hadoop [1] is an open-source implementation of the MapReduce [11] framework; it was originally developed to operate in a single-cluster environment and could not be leveraged for distributed data processing across federated clusters. When multiple federated clusters are connected with high speed networks, computing resources can be provisioned from any cluster in the federation. Placing map tasks close to their data splits is critical for the performance of Hadoop. In this work, we add network awareness to Hadoop when scheduling map tasks over federated clusters. We observe a 12% to 15% reduction in execution time for the FIFO and FAIR schedulers of Hadoop under varying workloads.
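
The sketch below illustrates the kind of placement decision the abstract describes: preferring free nodes whose cluster is network-closest to a map task's data split. It is a minimal illustration, not the authors' Hadoop patch; the cluster names, cost table, and function names are assumptions.

```python
# Hypothetical sketch of network-aware map-task placement across federated clusters.
# Inter-cluster network cost (e.g., RTT in ms); intra-cluster cost is lowest.
NETWORK_COST = {
    ("clusterA", "clusterA"): 1,
    ("clusterA", "clusterB"): 40,
    ("clusterB", "clusterA"): 40,
    ("clusterB", "clusterB"): 1,
}

def pick_node(map_task, free_nodes):
    """Prefer the free node whose cluster is network-closest to the task's data split."""
    data_cluster = map_task["split_cluster"]
    return min(free_nodes, key=lambda node: NETWORK_COST[(node["cluster"], data_cluster)])

# Example: a split stored in clusterB is assigned to a free node in clusterB.
task = {"split_cluster": "clusterB"}
nodes = [{"name": "a1", "cluster": "clusterA"}, {"name": "b7", "cluster": "clusterB"}]
print(pick_node(task, nodes)["name"])  # -> b7
```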


International Conference on Computer Communications and Networks | 2013

AFCD: An Approximated-Fair and Controlled-Delay Queuing for High Speed Networks

Lin Xue; Suman Kumar; Cheng Cui; Praveenkumar Kondikoppa; Chui-Hui Chiu; Seung-Jong Park

High speed networks are characterized by high bandwidth, long queuing delay, and high burstiness, which make it difficult to address issues such as fairness, low queuing delay, and high link utilization. Current high speed networks carry heterogeneous TCP flows, which makes it even more challenging to address these issues. Since sender-centric approaches do not meet these challenges, there have been several proposals to address them at the router level via queue management (QM) schemes. These QM schemes have been fairly successful in addressing either fairness or large queuing delay, but not both at the same time. We propose a new QM scheme called Approximated-Fair and Controlled-Delay (AFCD) queuing for high speed networks that aims to meet the following design goals: approximated fairness, controlled low queuing delay, high link utilization, and simple implementation. The design of AFCD takes a novel synergistic approach, forming an alliance between approximated fair queuing and controlled delay queuing. It uses a very small amount of state information to estimate the sending rate of flows and makes drop decisions based on a per-flow target delay. Through experimental evaluation in a 10 Gbps high speed networking environment, we show that AFCD meets our design goals by maintaining an approximated fair share of bandwidth among flows and ensuring controlled, very low queuing delay with comparable link utilization.
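
To make the "small per-flow state plus per-flow target delay" idea concrete, here is an illustrative sketch of an AFCD-style drop decision. It is an assumption-laden toy, not the paper's router implementation; the EWMA weights, target delay, and backlog-based delay estimate are all made up.

```python
# Illustrative AFCD-style queuing logic: estimate each flow's sending rate with a
# small amount of state, and drop when the flow's estimated queuing delay exceeds
# its target delay. Constants and the delay estimate are assumptions.
import time
from collections import defaultdict

TARGET_DELAY = 0.005           # 5 ms per-flow target queuing delay (assumed)
rate_est = defaultdict(float)  # bytes/s, exponentially weighted per flow
last_seen = {}

def on_enqueue(flow_id, pkt_len, flow_backlog_bytes):
    """Return True if the arriving packet should be dropped."""
    now = time.time()
    if flow_id in last_seen:
        inst = pkt_len / max(now - last_seen[flow_id], 1e-6)
        rate_est[flow_id] = 0.9 * rate_est[flow_id] + 0.1 * inst  # EWMA rate estimate
    last_seen[flow_id] = now
    if rate_est[flow_id] == 0:
        return False               # no estimate yet: accept the packet
    # Time this flow's backlog would need to drain at its estimated sending rate.
    est_delay = flow_backlog_bytes / rate_est[flow_id]
    return est_delay > TARGET_DELAY  # drop if the flow would exceed its target delay
```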


Proceedings of the XSEDE16 Conference on Diversity, Big Data, and Science at Scale | 2016

BIC-LSU: Big Data Research Integration with Cyberinfrastructure for LSU

Chui-Hui Chiu; Nathan Lewis; Dipak Kumar Singh; Arghya Kusum Das; Mohammad M. Jalazai; Richard Platania; Sayan Goswami; Kisung Lee; Seung-Jong Park

In recent years, big data analysis has been widely applied to many research fields, including biology, physics, transportation, and material science. Even though the demands for big data migration and big data analysis are increasing dramatically in campus IT infrastructures, several technical challenges need to be addressed. First, frequent big data transmission between storage systems in different research groups imposes a heavy burden on a regular campus network. Second, the current campus IT infrastructure is not designed to fully utilize the hardware capacity for big data migration and analysis. Last but not least, running big data applications on top of large-scale high-performance computing facilities is not straightforward, especially for researchers and engineers in non-IT disciplines. We develop a campus IT cyberinfrastructure for big data migration and analysis, called BIC-LSU, which consists of a task-aware Clos OpenFlow network, high-performance cache storage servers, customized high-performance transfer applications, a light-weight control framework to manipulate existing big data storage systems and job scheduling systems, and a comprehensive social networking-enabled web portal. BIC-LSU achieves 40 Gb/s disk-to-disk big data transmission, maintains a short average transmission task completion time, enables converged control of commonly deployed storage and job scheduling systems, and makes big data analysis easier through a universal user-friendly interface. The BIC-LSU software requires minimal dependencies and is highly extensible. Other research institutes can easily customize and deploy BIC-LSU as an augmented service on their existing IT infrastructures.
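
The following is a purely illustrative sketch of how a transfer task might pass through the components named above (cache storage servers plus an OpenFlow path in the Clos fabric). Every class, method, and number is hypothetical; this is not BIC-LSU's actual control framework or API.

```python
# Hypothetical staging flow: pick the least-loaded cache server, then ask the
# SDN controller (represented by a callback) to set up a path for the transfer.
class CacheServer:
    def __init__(self, name, queued_bytes=0):
        self.name, self.queued_bytes = name, queued_bytes

def schedule_transfer(size_bytes, caches, set_path):
    """Stage the transfer on the least-loaded cache and reserve a network path."""
    cache = min(caches, key=lambda c: c.queued_bytes)
    cache.queued_bytes += size_bytes                 # account for the staged data
    set_path("source-storage", cache.name, "destination-storage")
    return cache.name

caches = [CacheServer("cache-1", 10**9), CacheServer("cache-2")]
chosen = schedule_transfer(50 * 10**9, caches,
                           set_path=lambda s, via, d: print(f"path {s} -> {via} -> {d}"))
print("staged on", chosen)   # -> cache-2 (smaller backlog)
```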


Journal of Network and Computer Applications | 2016

Towards fair and low latency next generation high speed networks

Lin Xue; Suman Kumar; Cheng Cui; Praveenkumar Kondikoppa; Chui-Hui Chiu; Seung-Jong Park

In packet-switched high-speed networks, the heterogeneous nature of TCP flows, a relatively new characteristic of IP networks, and high burstiness have made it difficult to achieve low queuing delay and fair allocation of bandwidth among flows. Existing queue management (QM) schemes were designed to achieve one or the other, or both simultaneously, and have been fairly successful at meeting either fairness or low queuing delay, but not both at the same time. In this paper, unlike previous research efforts, the two requirements, fairness and low queuing delay, are decoupled and addressed separately. We propose Approximated-Fair and Controlled-Delay (AFCD) queuing for next generation high speed networks, which aims to meet the following design goals: approximated fairness, controlled low queuing delay, high link utilization, and simple implementation. The design of AFCD takes a novel synergistic approach, forming an alliance between approximated fair queuing and controlled delay queuing. It uses a very small amount of state information to estimate the sending rate of flows and makes drop decisions based on a per-flow target delay. Through experimental evaluation in a 10 Gbps high speed networking environment, we show that AFCD meets our design goals by maintaining an approximated fair share of bandwidth among flows and ensuring controlled, very low queuing delay with comparable link utilization. AFCD is locally stable for small target delays in a high speed networking environment.


International Conference on Computer Communications and Networks | 2014

DMCTCP: Desynchronized Multi-Channel TCP for high speed access networks with tiny buffers

Cheng Cui; Lin Xue; Chui-Hui Chiu; Praveenkumar Kondikoppa; Seung-Jong Park

The past few years have witnessed debate on how to improve the link utilization of high-speed routers with tiny buffers. Widely argued proposals for TCP traffic to realize acceptable link capacities mandate (i) over-provisioned core link bandwidth, (ii) non-bursty flows, and (iii) tens of thousands of asynchronous flows. However, in high speed access networks where flows are bursty, sparse, and synchronous, TCP traffic suffers severely from routers with tiny buffers. We propose a new congestion control algorithm called Desynchronized Multi-Channel TCP (DMCTCP) that creates a flow with multiple channels. It avoids TCP loss synchronization by desynchronizing channels, and it avoids sending-rate penalties from burst losses. Over a 10 Gb/s large-delay network governed by routers with only a few dozen packets of buffering, our results show that bottleneck link utilization can reach 80% with only 100 flows. Our study is a new step towards the deployment of optical packet switching networks.
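
The toy sketch below shows why splitting a flow into desynchronized channels softens the impact of a burst loss: only one channel backs off, so the aggregate rate drops by roughly 1/n instead of one half. It is a caricature of the multi-channel idea, not the DMCTCP kernel implementation; the window arithmetic and channel count are assumptions.

```python
# Toy model: a DMCTCP-like flow made of several congestion-controlled channels
# that react to losses at different times (desynchronization).
import random

class Channel:
    def __init__(self):
        self.cwnd = 10.0                      # congestion window in packets

    def on_loss(self):
        self.cwnd = max(self.cwnd / 2, 1.0)   # multiplicative decrease for this channel only

class MultiChannelFlow:
    def __init__(self, n_channels=4):
        self.channels = [Channel() for _ in range(n_channels)]

    def on_burst_loss(self):
        # Only one (randomly hit) channel reacts, so the aggregate dips by ~1/n, not 1/2.
        random.choice(self.channels).on_loss()

    def rate(self):
        return sum(c.cwnd for c in self.channels)

flow = MultiChannelFlow()
before = flow.rate()
flow.on_burst_loss()
print(before, "->", flow.rate())   # e.g. 40.0 -> 35.0
```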


Concurrency and Computation: Practice and Experience | 2017

Hadoop-based replica exchange over heterogeneous distributed cyberinfrastructures (HaRE)

Richard Platania; Shayan Shams; Chui-Hui Chiu; Nayong Kim; Joohyun Kim; Seung-Jong Park

We present Hadoop-based replica exchange (HaRE), a Hadoop-based implementation of the replica exchange scheme developed primarily for replica exchange statistical temperature molecular dynamics, an example of a large-scale, advanced-sampling molecular dynamics simulation. By using Hadoop as a framework and the MapReduce model for driving replica exchange, efficient task-level parallelism is introduced to replica exchange statistical temperature molecular dynamics simulations. To demonstrate this, we investigate the performance of our application over various distributed cyberinfrastructures (DCI), including several high-performance computing systems, our Cyberinfrastructure for Reconfigurable Optical Networks testbed, the Global Environment for Network Innovations testbed, and the CloudLab testbed. Scalability performance analysis is shown in terms of scale-out and scale-up over a single high-performance computing cluster, EC2, and CloudLab, and scale-across with the Cyberinfrastructure for Reconfigurable Optical Networks and the Global Environment for Network Innovations. As a result, we demonstrate that HaRE is capable of efficient execution over both homogeneous and heterogeneous DCI of varying size and configuration. Contributing factors to performance are discussed in order to provide insight into the effects of the computing environment on the execution of HaRE. With these contributions, we propose that similar loosely coupled scientific applications can also take advantage of the scalable, task-level parallelism that Hadoop MapReduce provides over various DCI.
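
As a minimal sketch of the pattern HaRE maps onto MapReduce, the code below treats each map task as one replica's simulation segment and the reduce step as the exchange attempt. The energies, temperatures, and the generic Metropolis acceptance rule are illustrative assumptions; HaRE's statistical-temperature variant differs in detail.

```python
# Map phase: advance each replica and report (temperature, potential energy).
# Reduce phase: attempt exchanges between neighbouring temperatures.
import math, random

def simulate_segment(replica):
    # Placeholder for an MD segment; returns a fake potential energy.
    return replica["energy"] + random.gauss(0, 1)

def map_phase(replicas):
    return [(r["temp"], simulate_segment(r)) for r in replicas]

def reduce_phase(results, k_b=1.0):
    """Generic Metropolis exchange between adjacent temperatures (not HaRE's exact rule)."""
    results.sort()
    for i in range(len(results) - 1):
        (t1, e1), (t2, e2) = results[i], results[i + 1]
        delta = (1 / (k_b * t1) - 1 / (k_b * t2)) * (e2 - e1)
        if delta <= 0 or random.random() < math.exp(-delta):
            results[i], results[i + 1] = (t1, e2), (t2, e1)  # swap configurations
    return results

replicas = [{"temp": t, "energy": 100.0} for t in (300, 310, 320, 330)]
print(reduce_phase(map_phase(replicas)))
```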


2015 International Conference on Computing, Networking and Communications (ICNC) | 2015

FaLL: A fair and low latency queuing scheme for data center networks

Lin Xue; Chui-Hui Chiu; Suman Kumar; Praveenkumar Kondikoppa; Seung-Jong Park

Low latency, high throughput, and fairness are stringent performance requirements of data center networks. For web applications to continue to thrive, data centers must remain effective by addressing these challenges. While most of the challenges have been handled by data-center-specific transport protocols (e.g., DCTCP), these transport protocols are not the default options for data center users. Free choice of transport protocols demands careful consideration for an effective solution. We argue that the queuing scheme at layer-2 switches holds the key to addressing these issues. Therefore, in this paper, we present FaLL, a queuing scheme for data center networks that meets the following performance requirements: fair share of bandwidth, low queuing latency, high throughput, and ease of deployment. FaLL uses an efficiency module, a fairness module, and a target-delay-based dropping scheme to meet these goals. FaLL requires a very small amount of state information. Through rigorous experiments on a real testbed, we show that FaLL outperforms various peer solutions under a variety of network conditions.
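
One plausible reading of the two-module structure is sketched below: an efficiency check keyed to a queuing-delay target and a fairness check keyed to each flow's estimated share. This is an assumption about how such modules could combine, not FaLL's switch implementation; all constants are invented.

```python
# Hypothetical combination of an efficiency module (delay target) and a
# fairness module (per-flow share) deciding whether to drop a packet.
TARGET_DELAY = 0.001   # 1 ms queuing-delay target (assumed)
LINK_RATE = 10e9 / 8   # 10 Gb/s link, in bytes per second

def should_drop(flow_rate_est, active_flows, queue_bytes):
    fair_share = LINK_RATE / max(active_flows, 1)
    queue_delay = queue_bytes / LINK_RATE
    over_delay = queue_delay > TARGET_DELAY      # efficiency module: queue is building
    over_share = flow_rate_est > fair_share      # fairness module: flow exceeds its share
    return over_delay and over_share             # drop only aggressive flows when delay is high

print(should_drop(flow_rate_est=2e9 / 8, active_flows=10, queue_bytes=2_000_000))  # -> True
```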


International Conference on Cloud Computing | 2017

Minimal Coflow Routing and Scheduling in OpenFlow-Based Cloud Storage Area Networks

Chui-Hui Chiu; Dipak Kumar Singh; Qingyang Wang; Kisung Lee; Seung-Jong Park

Research affirms that coflow scheduling/routing substantially shortens the average application-internal communication time in data center networks (DCNs). The commonly desired critical features of existing coflow scheduling/routing frameworks include (1) coflow scheduling, (2) coflow routing, and (3) per-flow rate limiting. However, to provide these three features, existing frameworks require customized computing frameworks, customized operating systems, or specific external commercial monitoring frameworks on software-defined networking (SDN) switches. These requirements defer or even prohibit the deployment of coflow scheduling/routing in production DCNs. In this paper, we design a coflow scheduling and routing framework, MinCOF, which has minimal requirements on hosts and switches, for cloud storage area networks (SANs) based on OpenFlow SDN. MinCOF accommodates all critical features of coflow scheduling/routing from previous works, and deployability in production environments is especially taken into consideration. The OpenFlow architecture is capable of processing the traffic load in a cloud SAN. Host-side requirements of existing frameworks that are not strictly necessary are migrated to mature commodity OpenFlow 1.3 switches and our coflow scheduler. Transfer applications on hosts only need slight enhancements to their existing connection establishment and progress reporting functions. Evaluations reveal that MinCOF decreases the average coflow completion time (CCT) by 12.94% compared to the latest OpenFlow-based coflow scheduling and routing framework.
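
The sketch below illustrates features (1) and (2) in spirit only: ordering coflows by their largest remaining flow (smallest-effective-bottleneck-first, a common coflow heuristic) and routing each flow over the least-loaded path. It is not MinCOF's actual algorithm or its OpenFlow rule set; names and sizes are invented.

```python
# Generic coflow scheduling and routing sketch (not MinCOF's policy).
def schedule_coflows(coflows):
    """coflows: {name: [flow sizes in bytes]} -> priority order, smallest bottleneck first."""
    return sorted(coflows, key=lambda name: max(coflows[name]))

def pick_path(paths_load_bytes):
    """Coflow routing: send the next flow over the currently least-loaded path."""
    return min(paths_load_bytes, key=paths_load_bytes.get)

coflows = {"backup-A": [8e9, 2e9], "shuffle-B": [5e8, 4e8, 3e8]}
print(schedule_coflows(coflows))                  # -> ['shuffle-B', 'backup-A']
print(pick_path({"core-1": 3e9, "core-2": 1e9}))  # -> 'core-2'
```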


International Conference on Cloud Computing | 2017

Coflourish: An SDN-Assisted Coflow Scheduling Framework for Clouds

Chui-Hui Chiu; Dipak Kumar Singh; Qingyang Wang; Seung-Jong Park

Existing coflow scheduling frameworks effectively shorten the communication time and completion time of cluster applications. However, they only consider available bandwidth on hosts and overlook congestion in the network when making scheduling decisions. Through extensive simulations using a realistic workload probability distribution from Facebook, we observe performance degradation of the state-of-the-art coflow scheduling framework, Varys, in a cloud environment on a shared data center network (DCN) because of the lack of network congestion information. We propose Coflourish, the first coflow scheduling framework that exploits congestion feedback from the software-defined networking (SDN)-enabled switches in the network to estimate available bandwidth. Our simulation results demonstrate that Coflourish outperforms Varys by up to 75.5% in terms of average coflow completion time under various workload conditions. The proposed work also reveals the potential of integration with lower-level traffic engineering mechanisms for further performance optimization.
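
To show what switch feedback adds over a host-only view, here is a minimal sketch in which the available bandwidth is the tightest of the host NIC and every switch port on the path. The min-over-path formula and all numbers are illustrative assumptions, not Coflourish's estimator.

```python
# Fold SDN switch congestion feedback into the available-bandwidth estimate.
def available_bandwidth(host_free_bps, path_ports_free_bps):
    """Usable bandwidth = tightest of the host NIC and every switch port on the path."""
    return min(host_free_bps, *path_ports_free_bps)

# A host-only scheduler would assume 9 Gb/s, but a switch on the path reports 2 Gb/s free.
print(available_bandwidth(9e9, [8e9, 2e9, 6e9]))  # -> 2e9
```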


Computer Communications | 2015

Exploring parallelism and desynchronization of TCP over high speed networks with tiny buffers

Cheng Cui; Lin Xue; Chui-Hui Chiu; Praveenkumar Kondikoppa; Seung-Jong Park

We identify the problem that current TCP variants suffer from synchronization caused by tiny buffers in high speed networks. We propose and implement a new TCP that explores parallelism among multiple channels and TCP desynchronization for high speed networks. We evaluate the performance of the new DMCTCP over different networking environments. We analyze the optimal performance of the proposed DMCTCP through theoretical analysis.

The buffer sizing problem is a big challenge for high speed network routers: reducing buffer cost without throughput loss. The past few years have witnessed debate on how to improve the link utilization of high speed networks where the router buffer size is idealized to dozens of packets. Theoretically, the buffer size can be shrunk by more than 100 times. Under this scenario, widely argued proposals for TCP traffic to achieve acceptable link capacities mandate three necessary conditions: over-provisioned core link bandwidth, non-bursty flows, and tens of thousands of asynchronous flows. However, in high speed networks where these conditions are not met, TCP traffic suffers severely from routers with tiny buffers. To explore better performance, we propose a new congestion control algorithm called Desynchronized Multi-Channel TCP (DMCTCP) that creates a flow with parallel channels. These channels desynchronize each other to avoid TCP loss synchronization, and they avoid traffic penalties from burst losses. Over a 10 Gb/s large-delay network governed by tiny-buffer routers, our emulation results show that bottleneck link utilization can reach over 80% with a much smaller number of flows. Compared with other TCP congestion control variants, DMCTCP also achieves much better performance in high-loss-rate networks. Facing the buffer sizing challenge, our study is a new step towards the deployment of optical packet switching networks.
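
A worked example of the "more than 100 times" claim, using the widely cited small-buffer rule B = RTT x C / sqrt(n) from the router buffer-sizing literature (Appenzeller et al.); the specific RTT and flow count below are illustrative, not taken from this paper.

```python
# Buffer sizing arithmetic: classic bandwidth-delay-product buffer vs. small-buffer rule.
import math

C = 10e9          # link capacity: 10 Gb/s
RTT = 0.1         # round-trip time: 100 ms
n = 10_000        # tens of thousands of desynchronized flows

rule_of_thumb = RTT * C                   # classic buffer: one bandwidth-delay product
small_buffer = RTT * C / math.sqrt(n)     # ~1/100th of the classic size when n = 10,000
print(rule_of_thumb / 8 / 1e6, "MB vs", small_buffer / 8 / 1e6, "MB")  # 125.0 MB vs 1.25 MB
```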

Collaboration


Dive into Chui-Hui Chiu's collaboration.

Top Co-Authors

Seung-Jong Park, Louisiana State University
Lin Xue, Louisiana State University
Cheng Cui, Louisiana State University
Dipak Kumar Singh, Louisiana State University
Richard Platania, Louisiana State University
Joohyun Kim, Louisiana State University
Kisung Lee, Georgia Institute of Technology
Nayong Kim, Louisiana State University