Network

Latest external collaborations at the country level.

Hotspot


Dive into the research topics where David X. Wei is active.

Publication


Featured research published by David X. Wei.


IEEE/ACM Transactions on Networking | 2006

FAST TCP: motivation, architecture, algorithms, performance

David X. Wei; Cheng Jin; Steven H. Low; Sanjay Hegde

We describe FAST TCP, a new TCP congestion control algorithm for high-speed long-latency networks, from design to implementation. We highlight the approach taken by FAST TCP to address the four difficulties that the current TCP implementation faces at large windows. We describe the architecture and summarize some of the algorithms implemented in our prototype. We characterize its equilibrium and stability properties. We evaluate it experimentally in terms of throughput, fairness, stability, and responsiveness.
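
The core of FAST TCP is an equation-based window update driven by measured queueing delay rather than packet loss. Below is a minimal Python sketch of a FAST-style update, assuming a smoothing parameter gamma and a per-flow target backlog alpha; the function name and default values are illustrative, not the Caltech implementation.

```python
def fast_window_update(w, base_rtt, avg_rtt, alpha=200.0, gamma=0.5):
    """One FAST-style congestion window update (illustrative sketch only).

    w        -- current congestion window, in packets
    base_rtt -- minimum observed RTT, used as the propagation delay estimate
    avg_rtt  -- current (smoothed) RTT, including queueing delay
    alpha    -- target number of the flow's own packets queued in the network
    gamma    -- smoothing factor in (0, 1]
    """
    target = (base_rtt / avg_rtt) * w + alpha     # equation-based target window
    w_next = (1.0 - gamma) * w + gamma * target   # move a fraction gamma toward it
    return min(2.0 * w, w_next)                   # never more than double per update
```

At the fixed point w = (base_rtt / avg_rtt) * w + alpha, the flow keeps roughly alpha of its own packets queued in the network, which is what ties FAST's equilibrium to delay rather than loss.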


IEEE Network | 2005

FAST TCP: from theory to experiments

Cheng Jin; David X. Wei; Steven H. Low; J. Bunn; Hyojeong Choe; John C. Doyle; Harvey B. Newman; Sylvain Ravot; S. Singh; Fernando Paganini; G. Buhrmaster; L. Cottrell; Olivier Martin; Wu-chun Feng

We describe a variant of TCP, called FAST, that can sustain high throughput and utilization at multigigabits per second over large distances. We present the motivation, review the background theory, summarize key features of FAST TCP, and report our first experimental results.


International Conference on Computer Communications | 2005

Modelling and stability of FAST TCP

Jiantao Wang; David X. Wei; Steven H. Low

We introduce a discrete-time model of FAST TCP that fully captures the effect of self-clocking and compare it with the traditional continuous-time model. While the continuous-time model predicts instability for homogeneous sources sharing a single link when feedback delay is large, experiments suggest otherwise. Using the discrete-time model, we prove that FAST TCP is locally asymptotically stable in general networks when all sources have a common round-trip feedback delay, no matter how large the delay is. We also prove global stability for a single bottleneck link in the absence of feedback delay. The techniques developed here are new and applicable to other protocols.
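
The self-clocked, per-RTT view of the update can be illustrated with a toy discrete-time simulation of homogeneous sources sharing one bottleneck. The link model below (backlog equals the excess of the total window over the bandwidth-delay product) is a common simplification and not the exact model of the paper; it merely shows the kind of convergence the stability result predicts. All parameter values are made up.

```python
def simulate(n_sources=10, capacity=1000.0, prop_delay=0.2,
             alpha=50.0, gamma=0.5, steps=200):
    """Toy discrete-time simulation of homogeneous FAST-style sources on one link.

    Each step is one round-trip update. The link is modelled by a standard
    simplification: backlog = total window - bandwidth-delay product, and
    queueing delay = backlog / capacity. Illustrative only.
    """
    windows = [10.0] * n_sources
    rtt = prop_delay
    for _ in range(steps):
        backlog = max(0.0, sum(windows) - capacity * prop_delay)
        rtt = prop_delay + backlog / capacity
        windows = [
            min(2.0 * w,
                (1.0 - gamma) * w + gamma * ((prop_delay / rtt) * w + alpha))
            for w in windows
        ]
    return windows, rtt

if __name__ == "__main__":
    w, rtt = simulate()
    # With these made-up numbers the windows settle near
    # capacity * prop_delay / n_sources + alpha = 70 packets,
    # and the queueing delay near n_sources * alpha / capacity = 0.5 s.
    print([round(x, 1) for x in w], round(rtt, 3))
```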


Proceedings of the 2006 Workshop on ns-2: the IP Network Simulator | 2006

NS-2 TCP-Linux: an NS-2 TCP implementation with congestion control algorithms from Linux

David X. Wei; Pei Cao

This paper introduces NS-2 TCP-Linux, a new NS-2 TCP implementation that embeds the source code of TCP congestion control modules from Linux kernels. Compared to existing NS-2 TCP implementations, NS-2 TCP-Linux has three improvements: 1) a standard interface for congestion control algorithms similar to that in Linux 2.6, ensuring better extensibility for emerging congestion control algorithms; 2) a redesigned loss detection module (i.e., Scoreboard) that is more accurate; and 3) a new event queue scheduler that increases the simulation speed. As a result, NS-2 TCP-Linux is more extensible, runs faster, and produces simulation results that are much closer to the actual TCP behavior of Linux. In addition to helping the network research community, NS-2 TCP-Linux will also help the Linux kernel community to debug and test their new congestion control algorithms.

In this paper, we explain the design of NS-2 TCP-Linux. We also present a preliminary evaluation of three aspects of NS-2 TCP-Linux: extensibility to new congestion control algorithms, accuracy of the simulation results, and simulation performance in terms of simulation speed and memory usage.

Based on these results, we strongly believe that NS-2 TCP-Linux is a promising alternative or even a replacement for existing TCP implementations in NS-2. We call for participation to test and improve this new TCP implementation.
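
The "standard interface for congestion control algorithms" described above can be pictured as a small set of hooks that each algorithm implements. The Python sketch below mimics that structure; the class and method names are hypothetical and are not the actual Linux tcp_congestion_ops fields or NS-2 TCP-Linux symbols.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class TcpState:
    cwnd: float = 1.0       # congestion window, in packets
    ssthresh: float = 64.0  # slow-start threshold

class CongestionControl(ABC):
    """Pluggable congestion-control interface (hook names are illustrative)."""

    @abstractmethod
    def on_ack(self, state: TcpState, acked_packets: int) -> None:
        """Called when new data is acknowledged; may grow state.cwnd."""

    @abstractmethod
    def on_loss(self, state: TcpState) -> None:
        """Called when loss is detected; shrinks the window."""

class Reno(CongestionControl):
    """Classic slow-start / AIMD behaviour expressed against the hooks above."""

    def on_ack(self, state: TcpState, acked_packets: int) -> None:
        if state.cwnd < state.ssthresh:
            state.cwnd += acked_packets               # slow start: exponential growth
        else:
            state.cwnd += acked_packets / state.cwnd  # congestion avoidance: ~1 per RTT

    def on_loss(self, state: TcpState) -> None:
        state.ssthresh = max(state.cwnd / 2.0, 2.0)   # multiplicative decrease
        state.cwnd = state.ssthresh
```

The point of such an interface is that the simulator (or kernel) only ever calls the hooks, so swapping one algorithm for another requires no change to the rest of the TCP machinery.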


Measurement and Modeling of Computer Systems | 2002

High-density model for server allocation and placement

Craig Cameron; Steven H. Low; David X. Wei

It is well known that optimal server placement is NP-hard. We present an approximate model for the case when both clients and servers are dense, and propose a simple server allocation and placement algorithm based on high-rate vector quantization theory. The key idea is to regard the location of a request as a random variable with probability density that is proportional to the demand at that location, and the problem of server placement as source coding, i.e., to optimally map a source value (request location) to a code-word (server location) to minimize distortion (network cost). This view has led to a joint server allocation and placement algorithm that has a time-complexity that is linear in the number of clients. Simulations are presented to illustrate its performance.
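
The source-coding view in the abstract can be made concrete with a demand-weighted Lloyd (k-means-style) iteration: treat each client location as a source value, each server as a codeword, and alternately assign clients to the nearest server and move servers to demand-weighted centroids. This is a generic sketch of that idea under assumed 2-D coordinates, not the authors' linear-time algorithm.

```python
import random

def place_servers(clients, demands, k, iterations=50, seed=0):
    """Demand-weighted Lloyd iteration for server placement (illustrative sketch).

    clients -- list of (x, y) client locations
    demands -- non-negative demand weight for each client
    k       -- number of servers (codewords) to place
    Returns k server locations chosen to reduce demand-weighted distortion.
    """
    rng = random.Random(seed)
    servers = list(rng.sample(clients, k))          # initial codewords
    for _ in range(iterations):
        # Assignment step: map each request (client) to its nearest server.
        groups = [[] for _ in range(k)]
        for (cx, cy), weight in zip(clients, demands):
            j = min(range(k), key=lambda i: (cx - servers[i][0]) ** 2
                                            + (cy - servers[i][1]) ** 2)
            groups[j].append(((cx, cy), weight))
        # Centroid step: move each server to its group's demand-weighted centroid.
        for j, group in enumerate(groups):
            total = sum(w for _, w in group)
            if total > 0:
                servers[j] = (sum(x * w for (x, _), w in group) / total,
                              sum(y * w for (_, y), w in group) / total)
    return servers
```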


IEEE Computer Communications Workshop | 2003

The case for delay-based congestion control

Cheng Jin; David X. Wei; Steven H. Low

We argue that, in the absence of explicit feedback, delay-based algorithms become the preferred approach for end-to-end congestion control as networks scale up in capacity. Their advantage is small at low speed but decisive at high speed. Distinguishing between the packet-level and flow-level problems of the current TCP exposes the difficulty that loss-based algorithms face at large congestion windows.


Conference on Decision and Control | 2006

Global Exponential Stability of FAST TCP

Joon-Young Choi; Kyungmo Koo; David X. Wei; Jin S. Lee; Steven H. Low

We consider a single-link multi-source network with FAST TCP sources. We propose a continuous-time dynamic model for the FAST TCP sources and a static model describing the queuing delay behavior at the link. The proposed model takes a form that makes the network feedback delay explicit, which allows us to analyze FAST TCP while accounting for that delay. Based on the proposed model, we show the boundedness of each source's congestion window and of the queuing delay at the link, and the global exponential stability under the trivial condition that each source's congestion control parameter α is positive. The simulation results illustrate the validity of the proposed model and the global exponential stability of FAST TCP.


International Parallel and Distributed Processing Symposium | 2007

Packet Loss Burstiness: Measurements and Implications for Distributed Applications

David X. Wei; Pei Cao; Steven H. Low

Many modern massively distributed systems deploy thousands of nodes to cooperate on a computation task. Network congestion occurs in these systems. Most applications rely on congestion control protocols such as TCP to protect the systems from congestion collapse. Most TCP congestion control algorithms use packet loss as a signal to detect congestion. In this paper, we study the packet loss process at sub-round-trip-time (sub-RTT) timescales and its impact on loss-based congestion control algorithms. Our study suggests that packet loss at sub-RTT timescales is very bursty. This burstiness leads to two effects. First, the sub-RTT burstiness in the packet loss process leads to complicated interactions between different loss-based algorithms. Second, the sub-RTT burstiness in the packet loss process makes the latency of data transfers under TCP hard to predict. Our results suggest that the design of a distributed system has to seriously consider the nature of the packet loss process and carefully select the congestion control algorithms best suited for the distributed computation environment.
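
A simple way to quantify sub-RTT burstiness, in the spirit of the study, is to compare the unconditional loss rate with the loss rate conditioned on a loss having occurred within the previous RTT. The sketch below assumes a hypothetical trace of (send time, lost flag) pairs; it is not the paper's measurement methodology.

```python
def sub_rtt_loss_burstiness(trace, rtt):
    """Estimate sub-RTT loss clustering from a packet trace (illustrative sketch).

    trace -- list of (send_time_seconds, was_lost) tuples in send order;
             this trace format is hypothetical.
    rtt   -- round-trip time in seconds, used as the correlation window.

    Returns (unconditional loss rate, loss rate given a loss within the last RTT).
    A conditional rate well above the unconditional one indicates bursty losses.
    """
    losses = total = cond_losses = cond_total = 0
    recent_loss_times = []
    for t, lost in trace:
        total += 1
        # Keep only losses that happened within one RTT before this packet.
        recent_loss_times = [lt for lt in recent_loss_times if t - lt < rtt]
        if recent_loss_times:
            cond_total += 1
            cond_losses += 1 if lost else 0
        if lost:
            losses += 1
            recent_loss_times.append(t)
    p = losses / total if total else 0.0
    p_cond = cond_losses / cond_total if cond_total else 0.0
    return p, p_cond
```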


Information Theory Workshop | 2002

A server allocation and placement algorithm for content distribution

Craig Cameron; Steven H. Low; David X. Wei

It is well known that optimal server placement is NP-hard. We present an approximate model for the case when both clients and servers are dense, and propose a simple server allocation and placement algorithm based on high-rate vector quantization theory. The key idea is to regard the location of a request as a random variable with probability density that is proportional to the demand at that location, and the problem of server placement as source coding, i.e., to optimally map a source value (request location) to a codeword (server location) to minimize distortion (network cost). This view has led to a joint server allocation and placement algorithm that has a time-complexity that is linear in the number of clients. Simulations are presented to illustrate its performance.


Conference on Decision and Control | 2002

High-density model of content distribution network

Craig Cameron; Steven H. Low; David X. Wei

It is well known that optimal server placement is NP-hard. We present an approximate model of a content distribution network for the case when both clients and servers are dense, and propose a simple server allocation and placement algorithm based on high-rate quantization theory. The key idea is to regard the location of a request as a random variable with probability density that is proportional to the demand at that location, and the problem of server placement as source coding, i.e., to optimally map a source value (request location) to a codeword (server location) to minimize distortion (network cost). This view leads to a joint server allocation and placement algorithm that has a time-complexity that is linear in the number of users.

Collaboration


Dive into David X. Wei's collaborations.

Top Co-Authors

Steven H. Low, California Institute of Technology
Cheng Jin, California Institute of Technology
Craig Cameron, California Institute of Technology
Fernando Paganini, California Institute of Technology
Harvey B. Newman, California Institute of Technology
Ao Tang, California Institute of Technology
Hyojeong Choe, California Institute of Technology
J. Bunn, California Institute of Technology
John C. Doyle, California Institute of Technology