Kerry W. Fendick
Bell Labs
Publication
Featured research published by Kerry W. Fendick.
IEEE Transactions on Communications | 1989
Kerry W. Fendick; Vikram R. Saksena; Ward Whitt
The burstiness of the total arrival process has been previously characterized in packet network performance models by the dependence among successive interarrival times. It is shown that associated dependence among successive service times and between service times and interarrival times can also be important for packet queues involving variable packet lengths. These dependence effects are demonstrated analytically by considering a multiclass single-server queue with batch-Poisson arrival processes. For this model and more realistic models of packet queues, insight is gained from heavy-traffic limit theorems. This study indicates that all three kinds of dependence should be considered in the analysis and measurement of packet queues involving variable packet lengths. Specific measurements are proposed for real systems and simulations. This study also indicates how to predict expected packet delays under heavy loads. Finally, this study is important for understanding the limitations of procedures such as the queueing network analyzer (QNA) for approximately describing the performance of queueing networks using the techniques of aggregation and decomposition.
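As a toy illustration of the three kinds of dependence, the sketch below builds a synthetic bursty packet trace in which packets of a burst arrive together and share a packet length, and longer bursts carry longer packets. The geometric burst sizes and exponential distributions are illustrative choices, not the paper's multiclass batch-Poisson model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy bursty trace: bursts arrive with exponential gaps; packets in a burst
# arrive at one instant and share a packet length, and longer bursts carry
# longer packets (coupling lengths to burstiness).  All distributions here
# are illustrative, not the paper's model.
n_batches = 20_000
batch_sizes = rng.geometric(0.5, n_batches)        # packets per burst
gaps = rng.exponential(1.0, n_batches)             # time between bursts
lengths = rng.exponential(0.3, n_batches) * batch_sizes  # shared per burst

inter, serv = [], []
for g, k, ln in zip(gaps, batch_sizes, lengths):
    inter.extend([g] + [0.0] * (k - 1))            # burst arrives together
    serv.extend([ln] * k)                          # shared service time
inter, serv = np.array(inter), np.array(serv)

# dependence among successive service times
serv_autocorr = np.corrcoef(serv[:-1], serv[1:])[0, 1]
# dependence between interarrival times and service times
cross_corr = np.corrcoef(inter, serv)[0, 1]
```

In this construction the service-time autocorrelation is strongly positive (packets in a burst share a length), and the interarrival/service cross-correlation is negative (long service times cluster where interarrival times are zero).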
Proceedings of the IEEE | 1989
Kerry W. Fendick; Ward Whitt
Measurements and approximations are proposed to describe the variability of offered traffic to a queue and predict the average workload in the queue. The principal traffic measurement considered is a normalized version of the variance of the total input of work as a function of time, which is called the index of dispersion for work (IDW). Given ample traffic data, the IDW can easily be estimated using sample averages. Given a mathematical model, such as a multiclass queue in which each class has GI/G/1 offered traffic, the IDW can often be calculated analytically, or approximated by using the limits as t approaches 0 and as t approaches infinity. The basic premise is that the average workload is primarily determined by the offered traffic, beyond the offered load (the deterministic rate at which work arrives), through the IDW. Support is provided for this premise, and it is shown how the average workload can be predicted from the IDW or basic model parameters.
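A minimal sketch of estimating such an index from traffic data via sample averages, assuming a synthetic Poisson stream of exponential work. The normalization below (mean work rate × mean work per arrival × t) is one common choice; the paper's exact definition may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic offered traffic: Poisson arrivals with i.i.d. exponential work
# requirements (hypothetical data standing in for measurements).
n = 200_000
interarrivals = rng.exponential(1.0, n)   # mean interarrival time 1.0
work = rng.exponential(0.5, n)            # mean work per arrival 0.5
arrival_times = np.cumsum(interarrivals)

def idw_estimate(arrival_times, work, t):
    """Sample-average estimate of an index of dispersion for work at lag t:
    the variance of the work arriving in windows of length t, normalized by
    (arrival rate x squared mean work x t)."""
    horizon = arrival_times[-1]
    edges = np.arange(0.0, horizon, t)
    idx = np.searchsorted(arrival_times, edges)
    cum_work = np.concatenate(([0.0], np.cumsum(work)))
    per_window = np.diff(cum_work[idx])   # work arriving in each window
    lam = len(work) / horizon             # arrival rate
    m = work.mean()                       # mean work per arrival
    return per_window.var() / (lam * m * m * t)

idw_small = idw_estimate(arrival_times, work, t=2.0)
idw_large = idw_estimate(arrival_times, work, t=50.0)
```

For this i.i.d. exponential-work input the estimate is roughly flat in t (near E[V²]/E[V]² = 2); dependent traffic would instead show the IDW growing with t.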
ACM Special Interest Group on Data Communication (SIGCOMM) | 1992
Kerry W. Fendick; Manoel A. Rodrigues; Alan Weiss
In this paper we analyze a class of delayed feedback schemes that achieve the dual goals of keeping buffers small and utilizations high, despite propagation delays and regardless of network rates. We analyze delayed feedback schemes as a system of delay-differential equations, in which we model the queue-length process and the rate at which a source transmits data as fluids. We assume that a stream of acknowledgements carries information about the state of a bottleneck queue back to the source, which adapts its transmission rate according to any monotone function of that state. We show stability for this class of schemes, in that their rate of transmission and queue length rapidly converge to a small neighborhood of the designed operating point. We identify the appropriate scaling of the model's parameters for the system to perform optimally.
Performance Evaluation | 1992
Kerry W. Fendick; Manoel A. Rodrigues; Alan Weiss
Digital communication has become fast enough that the speed of light has become a bottleneck. For example, the round-trip transcontinental (USA) delay through a fiber link is approximately 0.04 s; at 150 Megabit/s, a source needs to transmit approximately 8,000,000 bits during one round-trip time to utilize the bandwidth fully. As the service rates of queues get large, the time scales of congestion in those queues decrease relative to the round-trip time, making the dual goals of keeping buffers small and utilizations high even more difficult to achieve. In this paper we analyze a class of delayed feedback schemes that achieve these goals despite propagation delays and regardless of network rates. We analyze the delayed feedback schemes as a system of delay-differential equations, in which we model the queue-length process and the rate at which a source transmits data as fluids. We assume that a stream of acknowledgements carries information about the state of a bottleneck queue back to the source, which adapts its transmission rate according to any monotone function of that state. We show stability for this class of schemes, in that their rate of transmission and queue length rapidly converge to a small neighborhood of the designed operating point. We identify the appropriate scaling of the model's parameters, as a function of network speed, for the system to perform optimally: with a deterministic service rate of μ at the bottleneck queue, the steady-state utilization of the queue is 100 − O(μ^(−1/2))% and the steady-state delay is O(μ^(−1/2)). We also describe the transient behavior of the system as another source suddenly starts competing for the bandwidth resources at the bottleneck queue. This work applies directly to the adaptive control of Frame Relay and ATM networks, both of which provide feedback to users on congestion.
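The fluid model can be sketched by Euler-integrating a delay-differential equation in which the source's rate is a monotone decreasing function of the queue length it observed one round-trip time ago. The linear control and all parameter values below are illustrative instances, not the paper's general scheme.

```python
import numpy as np

# Fluid sketch of a delayed-feedback rate-control loop (hypothetical
# parameters).  The source's rate is a monotone decreasing (here linear)
# function of the delayed queue length; the scalar delay-differential
# equation dq/dt = a - b*q(t-d) - mu is stable since b*d < pi/2.
mu = 150.0              # bottleneck service rate
d = 0.04                # round-trip propagation delay
a, b = 1.5 * mu, 10.0   # rate = a - b * (delayed queue length)

dt = 1e-4
steps = int(2.0 / dt)
delay_steps = int(d / dt)

q = np.zeros(steps)     # fluid queue length
rate = np.zeros(steps)  # source transmission rate
for i in range(1, steps):
    j = max(0, i - 1 - delay_steps)            # delayed queue observation
    rate[i] = max(0.0, a - b * q[j])           # monotone feedback control
    q[i] = max(0.0, q[i-1] + (rate[i-1] - mu) * dt)

q_eq = (a - mu) / b     # designed operating point: rate equals mu there
final_gap = abs(q[-1] - q_eq)
```

Despite the round-trip delay, the rate and queue length settle near the designed operating point, mirroring the stability result stated in the abstract.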
IEEE Transactions on Communications | 1991
Kerry W. Fendick; Vikram R. Saksena; Ward Whitt
The authors continue an investigation of the way diverse traffic from different data applications affects the performance of packet queues. This traffic often exhibits significant dependence among successive interarrival times, among successive service times, and between interarrival times and service times, which can cause a significant degradation of performance under heavy loads (and often even under moderate loads). This dependence and its effects on performance (specifically, the mean steady-state workload) are partially characterized here by the cumulative correlations in the total input process of work, which is referred to as the index of dispersion for work (IDW). The authors evaluate approximations for the mean steady-state workload based on the IDW by making comparisons with computer simulations.
IEEE Transactions on Information Theory | 1994
Kerry W. Fendick; Manoel A. Rodrigues
This paper analyzes the effectiveness of a class of adaptive algorithms for rate control in a data network with the following two elements: many sources with diverse characteristics (e.g., nonadaptive and adaptive sources with different feedback delays, different constraints on transmission rates) and a switch, based on ATM or cell-relay technology, with finite buffers. Several adaptive sources compete among themselves as well as with other nonadaptive sources for bandwidth at a single queue. We first model random fluctuations in the queue-length process due to the nonadaptive sources as Brownian motion, and we show, for a large class of adaptive strategies, how the amount of bandwidth wasted because of idleness and the amount of offered traffic lost because of overflowing buffers scale with the speed of the network. We then model the arrival process of nonadaptive traffic more realistically as a general stochastic fluid with bounded, positive rates. For a class of adaptive strategies with linear adaptation functions, we prove that the results obtained from the Brownian model of randomness extend to cover the more realistic model. This occurs because the adaptive sources induce heavy-traffic conditions (corresponding to the power-maximizing regime of Mitra (1990)) by accurately estimating and using the residual bandwidth not occupied by the nonadaptive traffic. Our analysis gives new insight about how performance measures scale with the variability of the nonadaptive traffic. We illustrate through simulations that queue fluctuations behave as predicted.
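A sketch of the second setting: one adaptive source with a linear adaptation function shares a finite-buffer queue with nonadaptive cross traffic modeled as a bounded, positive fluid that switches levels at random times. All parameters are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical parameters: service rate mu, buffer size, feedback delay d,
# and a linear adaptation (rate = c - b * delayed queue length).
mu, buf, d = 100.0, 50.0, 0.01
c, b = mu, 10.0
dt = 1e-3
steps, delay_steps = 60_000, int(d / dt)

# Nonadaptive rate: piecewise-constant fluid, bounded in [20, 40].
level = np.empty(steps)
cur = 30.0
for i in range(steps):
    if rng.random() < 0.001:               # occasional level switch
        cur = rng.uniform(20.0, 40.0)
    level[i] = cur

q = np.zeros(steps)
lam = np.zeros(steps)
lost = idle = 0.0
for i in range(1, steps):
    j = max(0, i - 1 - delay_steps)
    lam[i] = max(0.0, c - b * q[j])        # linear adaptation, delayed state
    nq = q[i-1] + (lam[i-1] + level[i-1] - mu) * dt
    if nq < 0.0:
        idle += -nq                        # unused service: wasted bandwidth
        nq = 0.0
    elif nq > buf:
        lost += nq - buf                   # buffer overflow: lost work
        nq = buf
    q[i] = nq

utilization = 1.0 - idle / (mu * dt * steps)
loss_fraction = lost / ((lam.mean() + level.mean()) * dt * steps)
# how closely the adaptive source tracks the residual bandwidth mu - level
tracking = np.mean(np.abs(lam[-5000:] + level[-5000:] - mu))
```

The adaptive source tracks the residual bandwidth left by the nonadaptive fluid, keeping utilization near one with negligible loss, consistent with the heavy-traffic behavior the abstract describes.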
International Conference on Communications | 1992
Kerry W. Fendick; Manoel A. Rodrigues
As network speeds increase and the data traffic becomes more diverse, the need arises for service disciplines that offer fair treatment to diverse applications, while efficiently using resources at high speeds. Disciplines that approximate round-robin or processor-sharing service per channel are well suited for data networks because, over a wide range of time scales, they allocate bandwidth fairly among channels without needing to distinguish between different types of applications. This study is among the few to address head-of-line processor sharing. In most previous models of processor-sharing disciplines, the system immediately serves any arriving message at a rate depending only on the number of messages in the system regardless of how these messages are distributed among the channels. This model is commonly called pure processor sharing. In our model, the server completes the work from a given channel at a rate depending on the number of other channels with work in the system. That is, the service rate depends on how messages are distributed among the channels, and only indirectly on the total number of messages in the system. In this paper, we contrast the buffer requirements of shared and non-shared buffer schemes, when both types of schemes provide head-of-the-line processor-sharing service among channels. We formulate the problem as a system of functions representing the cumulative input and cumulative lost (potential) output to parts of the queueing system and model the vector of input functions as a multi-dimensional Brownian motion. The resulting heavy-traffic approximations predict much larger benefits from sharing buffers than those predicted by pure processor sharing.
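A toy slotted comparison of the two buffer schemes under head-of-line processor sharing: each slot the server removes one unit of work, split equally among the channels that currently have work. The bursty on/off arrivals and all parameters are illustrative, and the proportional-admission rule for the shared buffer is an assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

K, slots = 8, 50_000
total_buf = 60.0
p, batch_mean = 0.05, 2.0   # per-slot burst probability and mean burst size
                            # (offered load = K * p * batch_mean = 0.8)

def simulate(shared: bool) -> float:
    """Return the fraction of offered work lost, with the buffer either
    shared by all channels or partitioned equally among them."""
    q = np.zeros(K)
    lost = offered = 0.0
    cap = total_buf if shared else total_buf / K
    for _ in range(slots):
        bursts = rng.exponential(batch_mean, K) * (rng.random(K) < p)
        offered += bursts.sum()
        if shared:
            room = max(0.0, cap - q.sum())
            tot = bursts.sum()
            frac = 1.0 if tot <= room else room / tot
            take = bursts * frac           # proportional admission on overflow
        else:
            take = np.minimum(bursts, cap - q)
        lost += (bursts - take).sum()
        q += take
        busy = q > 1e-12
        if busy.any():                     # head-of-line processor sharing:
            q[busy] = np.maximum(0.0, q[busy] - 1.0 / busy.sum())
    return lost / offered

loss_shared = simulate(True)
loss_partitioned = simulate(False)
```

Sharing the buffer markedly reduces the loss fraction at the same total buffer size, in the direction of the benefits the heavy-traffic approximations predict.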
International Journal of Stochastic Analysis | 1998
Kerry W. Fendick; Ward Whitt
In high-speed communication networks it is common to have requirements of very small cell loss probabilities due to buffer overflow. Losses are measured to verify that the cell loss requirements are being met, but it is not clear how to interpret such measurements. We propose methods for determining whether or not cell loss requirements are being met. A key idea is to look at the stream of losses as successive clusters of losses. Often clusters of losses, rather than individual losses, should be regarded as the important “loss events”. Thus we propose modeling the cell loss process by a batch Poisson stochastic process. Successive clusters of losses are assumed to arrive according to a Poisson process. Within each cluster, cell losses do not occur at a single time, but the distance between losses within a cluster should be negligible compared to the distance between clusters. Thus, for the purpose of estimating the cell loss probability, we ignore the spaces between successive cell losses in a cluster of losses. Asymptotic theory suggests that the counting process of losses initiating clusters often should be approximately a Poisson process even though the cell arrival process is not nearly Poisson. The batch Poisson model is relatively easy to test statistically and fit; e.g., the batch-size distribution and the batch arrival rate can readily be estimated from cell loss data. Since batch (cluster) sizes may be highly variable, it may be useful to focus on the number of batches instead of the number of cells in a measurement interval. We also propose a method for approximately determining the parameters of a special batch Poisson cell loss process with geometric batch-size distribution from a queueing model of the buffer content. For this step, we use a reflected Brownian motion (RBM) approximation of a G/D/1/C queueing model. We also use the RBM model to estimate the input burstiness given the cell loss rate.
In addition, we use the RBM model to determine whether the presence of losses should significantly affect the estimation of server utilization when both losses and utilizations are estimated from data. Overall, our analysis can serve as a basis for determining required observation intervals in order to reach conclusions with a solid statistical basis. Thus our analysis can help plan simulations as well as computer system measurements.
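The clustering-and-fitting step can be sketched as follows: group losses separated by less than a gap threshold into one cluster, then estimate the cluster arrival rate and the geometric batch-size parameter by the method of moments. The synthetic loss record, the gap threshold, and all rates below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic loss record matching the proposed model: clusters of cell losses
# arrive as a Poisson process; each cluster holds a geometric number of
# losses spaced negligibly far apart.  All parameters are hypothetical.
horizon = 10_000.0
cluster_rate = 0.02            # clusters per unit time
geo_p = 0.25                   # geometric batch-size parameter (mean 4)
n_clusters = rng.poisson(cluster_rate * horizon)
starts = np.sort(rng.uniform(0.0, horizon, n_clusters))
sizes = rng.geometric(geo_p, n_clusters)
times = np.sort(np.concatenate(
    [s + 0.001 * np.arange(k) for s, k in zip(starts, sizes)]))

def fit_batch_poisson(times, gap):
    """Group losses separated by less than `gap` into one cluster, then
    estimate the cluster arrival rate and, by the method of moments, the
    geometric batch-size parameter (mean batch size = 1/p)."""
    breaks = np.flatnonzero(np.diff(times) > gap)
    counts = np.diff(np.concatenate(([0], breaks + 1, [len(times)])))
    rate_hat = len(counts) / (times[-1] - times[0])
    p_hat = 1.0 / counts.mean()
    return rate_hat, p_hat

rate_hat, p_hat = fit_batch_poisson(times, gap=1.0)
```

With intra-cluster spacings negligible relative to the gap threshold, both the cluster arrival rate and the geometric parameter are recovered closely from the loss timestamps alone.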
ACM Special Interest Group on Data Communication (SIGCOMM) | 1993
Kerry W. Fendick; Manoel A. Rodrigues
Emerging technologies for high-speed data networks provide users with explicit feedback about congestion. Users, however, have questioned whether feedback can help in avoiding congestion in wide-area networks that operate at speeds of multiple megabits or gigabits per second. In this paper we analyze a class of adaptive schemes with delayed feedback, where adaptive sources interact with each other as well as with non-adaptive sources through a single queue. We model the network as a stochastic, delay-differential equation and the rate at which an adaptive source transmits data as a fluid. Our model accounts for different levels of traffic burstiness introduced by the non-adaptive sources and for distinct propagation delays for different adaptive sources. We show how the performance of the system scales with increasing network speeds for quite general fluctuations in the bandwidth available to the adaptive sources. Simulation results show the accuracy of the model's predictions as a function of key parameters. This paper should raise the expectations of users about the potential effectiveness of responding to congestion notification.
International Conference on Communications | 1991
Kerry W. Fendick; P. Harshavardhana; S. Jidarian
This paper describes a set of models developed for traffic engineering, i.e., routing, performance analysis, and service problem relief, for data networks that use AT&T's Datakit virtual circuit switch. Software tools based on these methods have been developed and integrated into a comprehensive traffic engineering system called the Traffic Administration and Management System (TAMS). The paper also describes how the capabilities of this traffic engineering system fit into key functional areas of network management, such as performance management and network planning.