
Publication


Featured research published by Keon Jang.


Symposium on Operating Systems Principles | 2015

E2: a framework for NFV applications

Shoumik Palkar; Chang Lan; Sangjin Han; Keon Jang; Aurojit Panda; Sylvia Ratnasamy; Luigi Rizzo; Scott Shenker

By moving network appliance functionality from proprietary hardware to software, Network Function Virtualization promises to bring the advantages of cloud computing to network packet processing. However, the evolution of cloud computing (particularly for data analytics) has greatly benefited from application-independent methods for scaling and placement that achieve high efficiency while relieving programmers of these burdens. NFV has no such general management solutions. In this paper, we present a scalable and application-agnostic scheduling framework for packet processing, and compare its performance to current approaches.


Proceedings of the 1st ACM Workshop on Mobile Internet Through Cellular Networks | 2009

3G and 3.5G wireless network performance measured from moving cars and high-speed trains

Keon Jang; Mongnam Han; Soohyun Cho; Hyung-Keun Ryu; Jaehwa Lee; Yeongseok Lee; Sue B. Moon

In recent years, the world has witnessed the deployment of several 3G and 3.5G wireless networks based on technologies such as CDMA 1x Evolution-Data Optimized (EV-DO), High-Speed Downlink Packet Access (HSDPA), and Mobile WiMAX (e.g., WiBro). Although 3G and 3.5G wireless networks support enough bandwidth for typical Internet applications, their performance varies greatly due to wireless link characteristics. We present a measurement analysis of the performance of UDP and TCP over 3G and 3.5G wireless networks. The novelty of our measurement experiments lies in the fact that we took our measurements in a fast-moving car on a highway and in a high-speed train running at 300 km/h. Predictably, our results show that mobile nodes experience far worse performance than stationary nodes on the same network.


ACM Special Interest Group on Data Communication | 2015

Silo: Predictable Message Latency in the Cloud

Keon Jang; Justine Sherry; Hitesh Ballani; Toby Moncaster

Many cloud applications can benefit from guaranteed latency for their network messages; however, providing such predictability is hard, especially in multi-tenant datacenters. We identify three key requirements for such predictability: guaranteed network bandwidth, guaranteed packet delay, and guaranteed burst allowance. We present Silo, a system that offers these guarantees in multi-tenant datacenters. Silo leverages the tight coupling between bandwidth and delay: controlling tenant bandwidth leads to deterministic bounds on network queuing delay. Silo builds upon network calculus to place tenant VMs with competing requirements such that they can coexist. A novel hypervisor-based policing mechanism achieves packet pacing at sub-microsecond granularity, ensuring tenants do not exceed their allowances. We have implemented a Silo prototype comprising a VM placement manager and a Windows filter driver. Silo does not require any changes to applications, guest OSes, or network switches. We show that Silo can ensure predictable message latency for cloud applications while imposing low overhead.
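The bandwidth-delay coupling above can be illustrated with a toy token-bucket pacer: bounding a tenant's rate and burst bounds how much queueing its traffic can induce. This is a minimal sketch only, not Silo's hypervisor mechanism; the class and parameter names are hypothetical.

```python
class Pacer:
    """Toy pacer limiting a sender to `rate_bps` with a burst allowance in bytes."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0      # token refill rate, bytes per second
        self.burst = burst_bytes        # bucket capacity (burst allowance)
        self.tokens = burst_bytes       # start with a full bucket
        self.last = 0.0                 # time of last refill, seconds

    def next_send_time(self, now, pkt_bytes):
        """Earliest time a packet of `pkt_bytes` may leave the host."""
        # Refill tokens for the elapsed time, capped at the burst allowance.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        self.tokens -= pkt_bytes        # may go negative: the packet is "owed"
        if self.tokens >= 0:
            return now                  # within allowance, send immediately
        # Wait until the deficit refills at the configured rate.
        return now - self.tokens / self.rate
```

With a 1 Gbps rate and a one-packet burst, a second back-to-back 1500-byte packet is delayed by one serialization time (12 microseconds), which is exactly the deterministic queueing bound the pacing argument relies on.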


European Conference on Computer Systems | 2015

NBA (network balancing act): a high-performance packet processing framework for heterogeneous processors

Joongi Kim; Keon Jang; Keunhong Lee; Sangwook Ma; Junhyun Shim; Sue B. Moon

We present the NBA framework, which extends the architecture of the Click modular router to exploit modern hardware, adapts to different hardware configurations, and reaches close to their maximum performance without manual optimization. NBA takes advantage of existing performance-excavating solutions such as batch processing, NUMA-aware memory management, and receive-side scaling with multi-queue network cards. Its abstraction resembles Click but also hides the details of architecture-specific optimization, batch processing that handles the path diversity of individual packets, CPU/GPU load balancing, and complex hardware resource mappings due to multi-core CPUs and multi-queue network cards. We have implemented four sample applications: an IPv4 and an IPv6 router, an IPsec encryption gateway, and an intrusion detection system (IDS) with Aho-Corasick and regular expression matching. The IPv4/IPv6 router performance reaches the line rate on a commodity 80 Gbps machine, and the performance of the IPsec gateway and the IDS exceeds 30 Gbps. We also show that our adaptive CPU/GPU load balancer reaches near-optimal throughput in various combinations of sample applications and traffic conditions.


Asia-Pacific Workshop on Systems | 2012

The power of batching in the Click modular router

Joongi Kim; Seonggu Huh; Keon Jang; KyoungSoo Park; Sue B. Moon

The Click modular router has been one of the most popular software router platforms for rapid prototyping and new protocol development. Unfortunately, its internal architecture has not caught up with recent hardware advancements, and the performance remains sub-optimal in high-speed networks despite its benefit of flexible module composition. In this work, we identify the performance bottlenecks of the existing Click router and extend it to scale with modern computer systems. Our improvements focus on both I/O and computation batching, and include various optimizations for multi-core systems and multi-queue network cards. We find that these techniques improve the performance by almost a factor of 10, and the maximum throughput reaches 28 Gbps of minimum-sized IPv4 packet forwarding speed on a single machine.
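The computation-batching idea above can be shown with a toy element that processes a whole batch of packets per call, so per-call overheads (dispatch, instruction-cache misses) are amortized over many packets instead of paid once per packet. This is an illustrative sketch, not Click's C++ internals; the packet representation is hypothetical.

```python
def decrement_ttl_batch(batch):
    """Process a whole batch of packets (dicts with a 'ttl' field) in one call.

    Returns the packets still alive; a packet whose TTL reaches zero would be
    dropped by the element. A per-packet design would pay the function-call
    overhead len(batch) times; the batched design pays it once.
    """
    alive = []
    for pkt in batch:
        pkt["ttl"] -= 1
        if pkt["ttl"] > 0:
            alive.append(pkt)
    return alive

# One call handles the whole batch; the expired packet (ttl 1 -> 0) is dropped.
packets = [{"ttl": t} for t in (1, 2, 64)]
survivors = decrement_ttl_batch(packets)
```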


Passive and Active Network Measurement | 2008

Evaluation of VoIP quality over WiBro

Mongnam Han; Youngseok Lee; Sue B. Moon; Keon Jang; D. K. Lee

In this work, we have conducted experiments to evaluate the QoS of VoIP applications over the WiBro network. In order to capture the baseline performance of the WiBro network, we measure and analyze the characteristics of delay and throughput under stationary and mobile scenarios. We then evaluate the QoS of VoIP applications using the E-model of ITU-T G.107. Our measurements show that the achievable maximum throughputs are 5.3 Mbps in the downlink and 2 Mbps in the uplink. VoIP quality is better than, or at least as good as, toll quality despite user mobility exceeding the protected limit of WiBro mobility support. Using RAS and sector identification information, we show that handoffs are correlated with throughput and quality degradation.
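The E-model mentioned above rates call quality with a scalar R factor, which ITU-T G.107 maps to an estimated mean opinion score (MOS); toll quality corresponds roughly to R of 70 and above (MOS around 3.6). The standard mapping can be computed directly (this is the published G.107 formula, not code from the paper):

```python
def r_to_mos(r):
    """Map an E-model R factor to an estimated MOS (ITU-T G.107 Annex B).

    MOS is clamped to [1.0, 4.5]; in between it follows the cubic
    MOS = 1 + 0.035*R + R*(R - 60)*(100 - R) * 7e-6.
    """
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6
```

For example, R = 80 maps to a MOS of about 4.02, while R = 70 (the usual toll-quality floor) maps to about 3.60.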


Workshop on Local and Metropolitan Area Networks | 2010

Building a single-box 100 Gbps software router

Sangjin Han; Keon Jang; KyoungSoo Park; Sue B. Moon

Commodity-hardware technology has advanced in great leaps in terms of CPU, memory, and I/O bus speeds. Benefiting from this hardware innovation, recent software routers on commodity PCs now report about 10 Gbps in packet routing. In this paper we map out expected hurdles and projected speed-ups to reach 100 Gbps in packet routing on a single commodity PC. With careful measurements, we identify two notable bottlenecks for our goal: CPU cycles and I/O bandwidth. For the former, we propose reducing per-packet processing overhead with software-level optimizations and buying extra computing power with GPUs. To improve the I/O bandwidth, we suggest scaling the performance of the I/O hubs, which otherwise limit packet routing speed to well below 50 Gbps.


ACM Special Interest Group on Data Communication | 2010

Accelerating SSL with GPUs

Keon Jang; Sangjin Han; Seungyeop Han; Sue B. Moon; KyoungSoo Park

SSL/TLS is a standard protocol for secure Internet communication. Despite its great success, today's SSL deployment is largely limited to security-critical domains. The low adoption rate of SSL is mainly due to high computation overhead on the server side. In this paper, we propose Graphics Processing Units (GPUs) as a new source of computing power to reduce the server-side overhead. We have designed and implemented an SSL proxy that opportunistically offloads cryptographic operations to GPUs. The evaluation results show that our GPU implementation of cryptographic operations, RSA, AES, and HMAC-SHA1, achieves high throughput while keeping the latency low. The SSL proxy significantly boosts the throughput of SSL transactions, handling 25.8K SSL transactions per second, and has comparable response time even when overloaded.
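The "opportunistic" part of the offloading can be sketched as a simple dispatch policy: GPUs pay a fixed batch-transfer cost, so offloading only pays off when enough crypto requests are queued; under light load the CPU serves them to keep latency low. This is an illustrative policy with a hypothetical threshold, not the proxy's actual scheduler.

```python
def choose_engine(queue_len, gpu_batch_threshold=32):
    """Pick an execution engine for the pending crypto requests.

    A large backlog amortizes the GPU's fixed per-batch transfer cost over
    many requests; a small one is cheaper to finish on the CPU immediately.
    The threshold value here is hypothetical.
    """
    return "gpu" if queue_len >= gpu_batch_threshold else "cpu"
```

Under light load (say 4 queued requests) this picks the CPU; once the backlog reaches the batch threshold, whole batches go to the GPU, which is what lets throughput scale while idle-time latency stays low.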


ACM Special Interest Group on Data Communication | 2012

Reviving delay-based TCP for data centers

Changhyun Lee; Keon Jang; Sue-Bok Moon

With the rapid growth of data centers, minimizing the queueing delay at network switches has been one of the key challenges. In this work, we analyze the shortcomings of the current TCP algorithm when used in data center networks, and we propose to use latency-based congestion detection and rate-based transfer to achieve ultra-low queueing delay in data centers.
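The proposal above combines two ingredients: detect congestion from the queueing delay (RTT sample minus base RTT) rather than from packet loss, and adjust a sending rate rather than a window. A minimal sketch of that control loop follows; the class name, thresholds, and increase/decrease constants are hypothetical, not the paper's algorithm.

```python
class DelayBasedSender:
    """Toy rate-based sender driven by latency-based congestion detection."""

    def __init__(self, rate_pps, base_rtt, threshold, beta=0.8, step=100.0):
        self.rate = rate_pps        # current sending rate, packets per second
        self.base_rtt = base_rtt    # propagation RTT with empty queues, seconds
        self.threshold = threshold  # tolerated queueing delay, seconds
        self.beta = beta            # multiplicative decrease factor
        self.step = step            # additive increase, packets per second

    def on_rtt_sample(self, rtt):
        """Queueing delay above threshold signals congestion: cut rate, else grow."""
        if rtt - self.base_rtt > self.threshold:
            self.rate *= self.beta  # back off before queues (and delay) build up
        else:
            self.rate += self.step  # probe for more bandwidth
        return self.rate
```

Because the signal is delay, not loss, the sender reacts while switch queues are still short, which is the property that makes ultra-low queueing delay plausible in a datacenter.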


ACM Special Interest Group on Data Communication | 2017

Credit-Scheduled Delay-Bounded Congestion Control for Datacenters

Inho Cho; Keon Jang; Dongsu Han

Small RTTs (~tens of microseconds), bursty flow arrivals, and a large number of concurrent flows (thousands) in datacenters bring fundamental challenges to congestion control, as they either force a flow to send at most one packet per RTT or induce a large queue build-up. The widespread use of shallow-buffered switches also makes the problem more challenging, with hosts generating many flows in bursts. In addition, as link speeds increase, algorithms that gradually probe for bandwidth take a long time to reach the fair share. An ideal datacenter congestion control must provide 1) zero data loss, 2) fast convergence, 3) low buffer occupancy, and 4) high utilization. However, these requirements present conflicting goals. This paper presents a radically new approach, ExpressPass, an end-to-end credit-scheduled, delay-bounded congestion control for datacenters. ExpressPass uses credit packets to control congestion even before sending data packets, which enables us to achieve bounded delay and fast convergence. It gracefully handles bursty flow arrivals. We implement ExpressPass using commodity switches and provide evaluations using testbed experiments and simulations. ExpressPass converges up to 80 times faster than DCTCP on 10 Gbps links, and the gap increases as link speeds become faster. It greatly improves performance under heavy incast workloads and significantly reduces flow completion times, especially for small and medium-sized flows, compared to RCP, DCTCP, HULL, and DX under realistic workloads.
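The credit-scheduling idea can be sketched as a toy model: the receiver paces out credit packets at the bottleneck rate, and the sender transmits exactly one data packet per credit it receives, so data can never arrive faster than the bottleneck drains. This is a sketch of the core idea only; ExpressPass itself additionally handles credit rate feedback control, credit loss, and switch-side credit shaping, and the function names below are hypothetical.

```python
def schedule_credits(bottleneck_pps, duration_s):
    """Receiver side: emit credits evenly spaced at the bottleneck packet rate."""
    interval = 1.0 / bottleneck_pps
    n = int(duration_s * bottleneck_pps)
    return [i * interval for i in range(n)]

def sender_transmit(credit_times, flow_start):
    """Sender side: send one data packet per credit arriving after the flow starts.

    Because each data packet is triggered by a paced credit, the data stream
    inherits the credits' spacing and cannot overrun the bottleneck queue.
    """
    return [t for t in credit_times if t >= flow_start]
```

A flow joining mid-stream simply starts consuming the very next credits at the full paced rate, which is why convergence is fast and queues stay near empty in this model.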

Collaboration


Dive into Keon Jang's collaborations.

Top Co-Authors


Sangjin Han

University of California


Youngseok Lee

Chungnam National University


Aurojit Panda

University of California
