Yuchung Cheng
Publications
Featured research published by Yuchung Cheng.
ACM Special Interest Group on Data Communication | 2010
Nandita Dukkipati; Tiziana Refice; Yuchung Cheng; Jerry Chu; Tom Herbert; Amit Agarwal; Arvind Jain; Natalia Sutin
TCP flows start with an initial congestion window of at most four segments, or approximately 4 KB of data. Because most Web transactions are short-lived, the initial congestion window is a critical TCP parameter in determining how quickly flows can finish. While average network access speeds have increased dramatically over the past decade, the standard value of TCP's initial congestion window has remained unchanged. In this paper, we propose increasing TCP's initial congestion window to at least ten segments (about 15 KB). Through large-scale Internet experiments, we quantify the latency benefits and costs of using a larger window as functions of network bandwidth, round-trip time (RTT), bandwidth-delay product (BDP), and the nature of applications. We show that the average latency of HTTP responses improved by approximately 10%, with the largest benefits in high-RTT and high-BDP networks. The latency of low-bandwidth networks also improved significantly in our experiments. The average retransmission rate increased by a modest 0.5%, with most of the increase coming from applications that effectively circumvent TCP's slow-start algorithm by using multiple concurrent connections. Based on these results, we believe the initial congestion window should be at least ten segments and that this change be investigated for standardization by the IETF.
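To see why the initial window matters so much for short transfers, here is a minimal, illustrative slow-start model (not from the paper): it counts the round trips needed to deliver a response when the congestion window starts at IW segments and doubles each RTT with no loss.

```python
import math

def slow_start_rtts(response_bytes, init_cwnd_segments, mss=1430):
    """Round trips needed to deliver a response during slow start,
    assuming the congestion window doubles every RTT and no losses.
    (Illustrative model only; real stacks grow cwnd per ACK.)"""
    segments = math.ceil(response_bytes / mss)
    sent, cwnd, rtts = 0, init_cwnd_segments, 0
    while sent < segments:
        sent += cwnd
        cwnd *= 2
        rtts += 1
    return rtts

# A ~40 KB HTTP response: 3 round trips with IW=4, but only 2 with IW=10.
print(slow_start_rtts(40_000, 4), slow_start_rtts(40_000, 10))
```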
ACM Special Interest Group on Data Communication | 2013
Tobias Flach; Nandita Dukkipati; Andreas Terzis; Barath Raghavan; Neal Cardwell; Yuchung Cheng; Ankur Jain; Shuai Hao; Ethan Katz-Bassett; Ramesh Govindan
To serve users quickly, Web service providers build infrastructure closer to clients and use multi-stage transport connections. Although these changes reduce client-perceived round-trip times, TCP's current mechanisms fundamentally limit latency improvements. We performed a measurement study of a large Web service provider and found that, while connections with no loss complete close to the ideal latency of one round-trip time, TCP's timeout-driven recovery causes transfers with loss to take five times longer on average. In this paper, we present the design of novel loss recovery mechanisms for TCP that judiciously use redundant transmissions to minimize timeout-driven recovery. Proactive, Reactive, and Corrective are three qualitatively different, easily deployable mechanisms that (1) proactively recover from losses, (2) recover from them as quickly as possible, and (3) reconstruct packets to mask loss. Crucially, these mechanisms are compatible both with middleboxes and with TCP's existing congestion control and loss recovery. Our large-scale experiments on Google's production network, which serves billions of flows, demonstrate a 23% decrease in mean latency and a 47% decrease in 99th-percentile latency over today's TCP.
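The abstract does not spell out how Corrective "reconstructs packets to mask loss"; as a rough illustration of the idea, the sketch below XORs a block of equal-length packets into one parity packet, so that any single lost packet in the block can be rebuilt from the survivors without waiting for a retransmission.

```python
def xor_parity(packets):
    """XOR parity over a block of equal-length packets; losing any one
    packet in the block lets the receiver reconstruct it from the rest."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

block = [b"pkt0....", b"pkt1....", b"pkt2...."]
parity = xor_parity(block)

# Suppose pkt1 is lost: rebuild it from the surviving packets plus parity.
rebuilt = xor_parity([block[0], block[2], parity])
assert rebuilt == block[1]
```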
Conference on Emerging Networking Experiments and Technologies | 2011
Sivasankar Radhakrishnan; Yuchung Cheng; Jerry Chu; Arvind Jain; Barath Raghavan
Today's web services are dominated by TCP flows so short that they terminate a few round trips after handshaking; this handshake is a significant source of latency for such flows. In this paper we describe the design, implementation, and deployment of the TCP Fast Open protocol, a new mechanism that enables data exchange during TCP's initial handshake. In doing so, TCP Fast Open decreases application network latency by one full round-trip time, reducing the delay experienced by such short TCP transfers. We address the security issues inherent in allowing data exchange during the three-way handshake, which we mitigate using a security token that verifies IP address ownership. We detail other fallback defense mechanisms and address issues we faced with middleboxes, backwards compatibility for existing network stacks, and incremental deployment. Based on traffic analysis and network emulation, we show that TCP Fast Open would decrease HTTP transaction network latency by 15% and whole-page load time by over 10% on average, and in some cases by up to 40%.
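TCP Fast Open is exposed through the Linux socket API; the sketch below shows the general shape of a TFO-enabled server and client in Python. The host name and port are placeholders, and the kernel must have Fast Open enabled (net.ipv4.tcp_fastopen).

```python
import socket

# Server: ask the kernel to accept TFO requests; the option value is the
# length of the queue of pending Fast Open connections. Linux only.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.setsockopt(socket.SOL_TCP, socket.TCP_FASTOPEN, 16)
srv.bind(("0.0.0.0", 8080))
srv.listen(32)

# Client: MSG_FASTOPEN makes sendto() perform the connect and, once a
# Fast Open cookie is cached from a prior connection, carry the request
# data in the SYN itself, saving one round trip.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.sendto(b"GET / HTTP/1.0\r\n\r\n", socket.MSG_FASTOPEN,
           ("server.example", 8080))   # placeholder address
```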
ACM Queue | 2016
Neal Cardwell; Yuchung Cheng; C. Stephen Gunn; Soheil Hassas Yeganeh; Van Jacobson
When bottleneck buffers are large, loss-based congestion control keeps them full, causing bufferbloat. When bottleneck buffers are small, loss-based congestion control misinterprets loss as a signal of congestion, leading to low throughput. Fixing these problems requires an alternative to loss-based congestion control. Finding this alternative requires an understanding of where and how network congestion originates.
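The article goes on to argue for congestion control built on an explicit model of the path rather than on loss signals; below is a toy sketch of that idea (not the actual BBR algorithm): track the recent maximum delivery rate and the minimum RTT, pace near the estimated bandwidth, and cap data in flight near the bandwidth-delay product.

```python
from collections import deque

class PathModelSketch:
    """Toy delivery-rate / min-RTT path model in the spirit of BBR-style
    congestion control (not the actual BBR state machine)."""

    def __init__(self, bw_window=10):
        self.bw_samples = deque(maxlen=bw_window)   # recent delivery-rate samples (bytes/s)
        self.min_rtt = float("inf")                 # lowest RTT observed (seconds)

    def on_ack(self, delivered_bytes, interval_s, rtt_s):
        # Each ACK yields one delivery-rate sample and one RTT sample.
        self.bw_samples.append(delivered_bytes / interval_s)
        self.min_rtt = min(self.min_rtt, rtt_s)

    def pacing_rate(self):
        # Pace near the windowed-maximum delivery-rate estimate.
        return max(self.bw_samples, default=0.0)

    def inflight_cap_bytes(self, gain=2.0):
        # Cap data in flight near gain * estimated bandwidth-delay product.
        if not self.bw_samples or self.min_rtt == float("inf"):
            return 10 * 1460    # fall back to an initial window before any samples
        return gain * self.pacing_rate() * self.min_rtt
```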
Internet Measurement Conference | 2011
Nandita Dukkipati; Matt Mathis; Yuchung Cheng; Monia Ghobadi
Packet losses increase latency for Web users. Fast recovery is a key mechanism for TCP to recover from packet losses. In this paper, we explore some of the weaknesses of the standard algorithm described in RFC 3517 and the non-standard algorithms implemented in Linux. We find that these algorithms deviate from their intended behavior in the real world due to the combined effect of short flows, application stalls, burst losses, acknowledgment (ACK) loss and reordering, and stretch ACKs. Linux suffers from excessive congestion window reductions, while RFC 3517 transmits large bursts under high losses; both behaviors harm the rest of the flow and increase Web latency. Our primary contribution is a new design to control transmission in fast recovery called proportional rate reduction (PRR). PRR recovers from losses quickly, smoothly, and accurately by pacing out retransmissions across received ACKs. In addition to PRR, we evaluate the TCP early retransmit (ER) algorithm, which lowers the duplicate acknowledgment threshold for short transfers, and show that delaying early retransmissions for a short interval is effective in avoiding spurious retransmissions in the presence of a small degree of reordering. PRR and ER reduce the TCP latency of connections experiencing losses by 3-10%, depending on the response size. Based on our instrumentation of Google Web and YouTube servers in the U.S. and India, we also present key statistics on the nature of TCP retransmissions.
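The abstract does not give PRR's update rule; the sketch below follows the proportional reduction later documented in RFC 6937, computing how many segments may be sent on each ACK during fast recovery. All quantities are in segments, and the variable names mirror the RFC's rather than anything quoted here.

```python
import math

def prr_sndcnt(prr_delivered, prr_out, delivered_now, pipe, ssthresh, recover_fs, mss=1):
    """Segments allowed to be sent on this ACK during fast recovery,
    following the proportional rate reduction rule (values in segments)."""
    if pipe > ssthresh:
        # Proportional part: pace the window reduction across received ACKs,
        # sending roughly ssthresh/recover_fs segments per segment delivered.
        sndcnt = math.ceil(prr_delivered * ssthresh / recover_fs) - prr_out
    else:
        # Slow-start reduction bound: grow back toward ssthresh, at most one
        # extra segment per ACK beyond what this ACK reported as delivered.
        limit = max(prr_delivered - prr_out, delivered_now) + mss
        sndcnt = min(ssthresh - pipe, limit)
    return max(sndcnt, 0)
```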
ACM Special Interest Group on Data Communication | 2016
Tobias Flach; Pavlos Papageorge; Andreas Terzis; Luis Pedrosa; Yuchung Cheng; Tayeb Karim; Ethan Katz-Bassett; Ramesh Govindan
Large flows like videos consume significant bandwidth. Some ISPs actively manage these high-volume flows with techniques like policing, which enforces a flow rate by dropping excess traffic. While the existence of policing is well known, our contribution is an Internet-wide study quantifying its prevalence and its impact on video quality metrics. We developed a heuristic to identify policing from server-side traces and built a pipeline to deploy it at scale on traces from a large online content provider, collected from hundreds of servers worldwide. Using a dataset of 270 billion packets served to 28,400 client ASes, we find that, depending on region, up to 7% of lossy transfers are policed. Loss rates are on average six times higher when a trace is policed, and this loss degrades video playback quality. We show that alternatives to policing, like pacing and shaping, can achieve traffic management goals while avoiding the deleterious effects of policing.
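The paper's detection heuristic is more involved than the abstract suggests; purely as an illustration of the underlying signal, the sketch below estimates a candidate policing rate from the goodput between a trace's first and last loss. The function and its input format are assumptions for illustration, not the paper's method.

```python
def estimate_policing_rate(loss_events):
    """Crude illustration only (not the paper's heuristic): if a flow is
    policed, goodput measured between the first and last loss should sit
    near a single enforced rate. loss_events is a hypothetical list of
    (timestamp_seconds, bytes_acknowledged_so_far) pairs at loss times."""
    if len(loss_events) < 2:
        return None
    (t_first, acked_first), (t_last, acked_last) = loss_events[0], loss_events[-1]
    if t_last <= t_first:
        return None
    return (acked_last - acked_first) / (t_last - t_first)   # bytes per second
```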
Computer Communication Review | 2018
Nandita Dukkipati; Yuchung Cheng; Amin Vahdat
Algorithms proposed in networking research papers are widely used in many areas, including Congestion Control, Routing, Traffic Engineering, and Load Balancing. In this paper, we present algorithmic advancements that have impacted the practice of Congestion Control (CC) in datacenters and the Internet. Where possible, we also describe negative examples: ideas that looked promising on paper or in simulations but performed poorly in practice. We conclude with observations on the characteristics these ideas share in moving from research to impacting practice.
USENIX Annual Technical Conference | 2012
Monia Ghobadi; Yuchung Cheng; Ankur Jain; Matt Mathis
RFC | 2013
Jerry Chu; Nandita Dukkipati; Yuchung Cheng; Matt Mathis
Archive | 2013
Yuchung Cheng; Neal Cardwell; Nandita Dukkipati; Matt Mathis