Ellen L. Hahne
Bell Labs
Publications
Featured research published by Ellen L. Hahne.
IEEE Journal on Selected Areas in Communications | 1991
Ellen L. Hahne
The author studies a simple strategy, proposed independently by E.L. Hahne and R.G. Gallager (1986) and M.G.H. Katevenis (1987), for fairly allocating link capacity in a point-to-point packet network with virtual circuit routing. Each link offers its packet transmission slots to its user sessions by polling them in round-robin order. In addition, window flow control is used to prevent excessive packet queues at the network nodes. As the window size increases, the session throughput rates are shown to approach limits that are perfectly fair in the max-min sense. If each session has periodic input (perhaps with jitter) or has such heavy demand that packets are always waiting to enter the network, then a finite window size suffices to produce perfectly fair throughput rates. The results suggest that the transmission capacity not used by the small-window sessions will be approximately fairly divided among the large-window sessions. The focus is on the worst-case performance of round-robin scheduling with windows.
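The round-robin polling at the heart of this strategy can be sketched as follows (a minimal Python simulation; the function name is illustrative, and window flow control is omitted for brevity):

```python
from collections import deque

def round_robin_schedule(sessions, num_slots):
    """Offer `num_slots` transmission slots to sessions by polling them
    in round-robin order; a session with no waiting packet is skipped.
    `sessions` maps a session id to its list of queued packets.
    Returns how many slots each session actually used."""
    order = deque(sessions)              # fixed cyclic polling order
    served = {s: 0 for s in sessions}
    for _ in range(num_slots):
        for _ in range(len(order)):      # poll at most one full cycle
            s = order[0]
            order.rotate(-1)             # next poll starts after s
            if sessions[s]:              # session has a packet waiting
                sessions[s].pop()
                served[s] += 1
                break
    return served
```

With heavily loaded sessions (packets always waiting), each backlogged session receives an equal share of the slots, matching the max-min fair limit described in the abstract.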
international conference on computer communications | 1996
Abhijit K. Choudhury; Ellen L. Hahne
Buffer management schemes are needed to fairly regulate the sharing of memory among different output port queues in a shared memory ATM switch. Of the conventional schemes, static threshold is simple but does not adapt to changing traffic conditions, while pushout is efficient and adaptive but difficult to implement. We propose a novel scheme called dynamic threshold which combines the simplicity of static threshold and the adaptability of pushout. The key idea is that the maximum permissible length, for any individual queue at any instant of time, is proportional to the unused buffering in the switch. A queue whose length equals or exceeds the current threshold value may accept no more new cells. The dynamic threshold procedure presented improves the fairness and switch efficiency by guaranteeing access to the buffer space for all output queues. Computer simulation is used to compare the loss performance of the dynamic threshold technique with that of static threshold and pushout. The dynamic threshold scheme is shown to be a good compromise: while nearly as simple as static threshold control, it offers most of the performance benefits of pushout. Like pushout, the dynamic threshold method is adaptive, so it is more robust to uncertainties and changes in traffic conditions than static threshold control.
international conference on computer communications | 1990
Ellen L. Hahne; Abhijit K. Choudhury; Nicholas F. Maxemchuk
The fairness problems suffered by distributed-queue-dual-bus (DQDB) networks that span metropolitan areas are examined in detail. The problems arise because the network control information is subject to propagation delays that are much longer than the transmission time of a data segment. A rate control procedure is proposed that requires only a minor modification of the current DQDB protocol. In order to guarantee that a node acquires only 90% of the available slots, every time it inserts nine data segments into its local queue it inserts one extra request slot into its transmission queue. This lets an extra idle slot go by that was not requested by any downstream node.
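The proposed rate control can be sketched as follows (a hypothetical Python rendering; the function name and the queue-entry labels are illustrative, not from the paper):

```python
def rate_controlled_queue(data_segments, ratio=9):
    """Build a node's transmission queue under the proposed rate control:
    after every `ratio` data segments are queued, one extra request slot
    is inserted, capping the node at ratio/(ratio+1) = 90% of the
    available slots when ratio is 9."""
    queue = []
    count = 0
    for seg in data_segments:
        queue.append(("data", seg))
        count += 1
        if count == ratio:
            # Let one unrequested idle slot pass to downstream nodes.
            queue.append(("extra_request", None))
            count = 0
    return queue
```

For every nine data segments, the node effectively yields one slot, so downstream nodes always see at least 10% of the capacity.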
IEEE ACM Transactions on Networking | 1998
Abhijit K. Choudhury; Ellen L. Hahne
In shared-memory packet switches, buffer management schemes can improve overall loss performance, as well as fairness, by regulating the sharing of memory among the different output port queues. Of the conventional schemes, static threshold (ST) is simple but does not adapt to changing traffic conditions, while pushout (PO) is highly adaptive but difficult to implement. We propose a novel scheme called dynamic threshold (DT) that combines the simplicity of ST and the adaptivity of PO. The key idea is that the maximum permissible length, for any individual queue at any instant of time, is proportional to the unused buffering in the switch. A queue whose length equals or exceeds the current threshold value may accept no more arrivals. An analysis of the DT algorithm shows that a small amount of buffer space is (intentionally) left unallocated, and that the remaining buffer space becomes equally distributed among the active output queues. We use computer simulation to compare the loss performance of DT, ST, and PO. DT control is shown to be more robust to uncertainties and changes in traffic conditions than ST control.
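The Dynamic Threshold admission rule can be sketched as follows (a minimal Python rendering; the function name is illustrative, and `alpha` stands for the DT proportionality constant, whose choice the paper's analysis addresses):

```python
def dt_accept(queue_lengths, port, buffer_size, alpha=1.0):
    """Dynamic Threshold admission test: accept a cell arriving for
    `port` only if that port's queue is shorter than `alpha` times the
    currently unused buffer space.  A queue at or above the threshold
    accepts no more arrivals."""
    unused = buffer_size - sum(queue_lengths.values())
    return queue_lengths[port] < alpha * unused
```

Because the threshold shrinks as the buffer fills, persistently overloaded queues settle at equal lengths while a small amount of buffer space remains unallocated, consistent with the analysis summarized above.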
international conference on computer communications | 1991
Ellen L. Hahne; Nicholas F. Maxemchuk
Bandwidth balancing is a procedure that gradually achieves a fair allocation of bandwidth among simultaneous file transfers on a distributed-queue dual-bus (DQDB) network. Bandwidth balancing was originally designed for traffic of a single priority level. Three ways are demonstrated to extend this procedure to multi-priority traffic.
IEEE Transactions on Communications | 1992
Ellen L. Hahne; Abhijit K. Choudhury; Nicholas F. Maxemchuk
It is explained why long distributed queue dual bus (DQDB) networks without bandwidth balancing can have fairness problems when several nodes are performing large file transfers. The problems arise because the network control information is subject to propagation delays that are much longer than the transmission time of a data segment. Bandwidth balancing is then presented as a simple solution. By constraining each node to take only a certain fraction of the transmission opportunities offered to it by the basic DQDB protocol, bandwidth balancing gradually achieves a fair allocation of bandwidth among simultaneous file transfers. Two ways to extend this procedure effectively to multipriority traffic are proposed.
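The core constraint of bandwidth balancing, that a node uses only a fraction of the transmission opportunities offered to it, can be sketched as follows (hypothetical Python; `beta` is an assumed name for the balancing fraction, not notation from the paper):

```python
def bandwidth_balanced_node(offered, beta=0.9):
    """A node is offered `offered` empty slots by the basic DQDB
    protocol but takes only a fraction `beta` of them, letting the
    remainder pass to other nodes.  Returns the number of slots taken."""
    taken = 0
    for seen in range(1, offered + 1):
        if taken < beta * seen:   # stay at or below the beta fraction
            taken += 1
    return taken
```

Each node individually forgoing a fraction of its opportunities is what lets the allocation converge gradually toward a fair division among the competing file transfers.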
Journal of Communications and Networks | 1999
Ping Pan; Ellen L. Hahne; Henning Schulzrinne
Resource reservation needs to accommodate the rapidly growing size and increasing service diversity of the Internet. Recently, hierarchical architectures have been proposed that provide domain-level reservation. However, it is not clear that these proposals can set up and maintain reservations in an efficient and scalable fashion. In this paper, we describe a distributed architecture for inter-domain aggregated resource reservation for unicast traffic. We also present an associated protocol, called the Border Gateway Reservation Protocol (BGRP), that scales well, in terms of message processing load, state storage and bandwidth. Each stub or transit domain may use its own intra-domain resource reservation protocol. BGRP builds a sink tree for each of the stub domains. Each sink tree aggregates bandwidth reservations from all data sources in the network. Since backbone routers only maintain the sink tree information, the total number of reservations at each router scales linearly with the number of domains in the Internet. (Even aggregated versions of the current protocol RSVP have an overhead that grows like N².) BGRP relies on Differentiated Services for data forwarding. As a result, the number of packet classifier entries is extremely small. To reduce the protocol message traffic, routers may reserve domain bandwidth beyond the current load, so that sources can join or leave the tree or change their reservation without having to send messages all the way to the tree root for every such change. We use "soft state" to maintain reservations. In contrast to RSVP, refresh messages are delivered reliably, allowing us to reduce the refresh frequency. Columbia University Computer Science Technical Report No. CUCS-029-99
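The sink-tree aggregation that gives BGRP its linear state scaling can be illustrated with a toy sketch (hypothetical Python; BGRP itself aggregates hop by hop along the tree toward the root, which this flat version omits):

```python
from collections import defaultdict

def aggregate_reservations(flows):
    """Collapse per-flow reservations (source domain, destination
    domain, bandwidth) into one aggregate per destination domain, as a
    backbone router on a BGRP sink tree would: state grows with the
    number of destination domains, not with the number of source-
    destination pairs."""
    per_sink = defaultdict(int)
    for src, dst, bw in flows:
        per_sink[dst] += bw       # merge onto the destination's sink tree
    return dict(per_sink)
```

Three flows toward two destination domains need only two state entries, whereas per-pair reservation state would need one entry per flow.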
IEEE ACM Transactions on Networking | 1997
Abhijit K. Choudhury; Ellen L. Hahne
We study a multistage hierarchical asynchronous transfer mode (ATM) switch in which each switching element has its own local cell buffer memory that is shared among all its output ports. We propose a novel buffer management technique called delayed pushout that combines a pushout mechanism (for sharing memory efficiently among queues within the same switching element) and a backpressure mechanism (for sharing memory across switch stages). The backpressure component has a threshold to restrict the amount of sharing between stages. A synergy emerges when pushout, backpressure, and this threshold are all employed together. Using a computer simulation of the switch under symmetric but bursty traffic, we study delayed pushout as well as several simpler pushout and backpressure schemes under a wide range of loads. At every load level, we find that the delayed pushout scheme has a lower cell loss rate than its competitors. Finally, we show how delayed pushout can be extended to share buffer space between traffic classes with different space priorities.
Journal of High Speed Networks | 1994
Abhijit K. Choudhury; Ellen L. Hahne
We study an ATM switch architecture in which the queues for all the switch output ports share space flexibly in a common buffer. Using a computer simulation of this switch under bursty traffic, we investigate various ways to manage space priorities in the shared memory. Our findings support one particular strategy which we call “Selective Pushout.” In this scheme, an arriving cell that finds the shared memory full overwrites a cell with priority less than or equal to itself from the longest output queue in the buffer (even if the arriving cell will be joining a different output queue). We simulated Selective Pushout as well as several simpler pushout and threshold schemes under a variety of load conditions. For each load pattern we studied, the Selective Pushout scheme performed at least as well and usually much better than its competitors. Selective Pushout offered a low overall cell loss rate, with very low losses for the high priority cells.
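The Selective Pushout decision can be sketched as follows (a minimal Python rendering; names are illustrative, and larger numbers are assumed to denote higher space priority):

```python
def selective_pushout(queues, arrival_port, arrival_prio):
    """Selective Pushout with the shared buffer full.  `queues` maps a
    port to a list of cell priorities (larger = higher priority).  The
    arrival overwrites the lowest-priority cell in the longest queue,
    provided that cell's priority is <= its own, even if that queue
    belongs to a different port.  Returns True if admitted."""
    victim_port = max(queues, key=lambda p: len(queues[p]))
    victim_q = queues[victim_port]
    # Locate the lowest-priority cell in the longest queue.
    idx = min(range(len(victim_q)), key=lambda i: victim_q[i])
    if victim_q[idx] <= arrival_prio:
        del victim_q[idx]                      # push out the victim
        queues[arrival_port].append(arrival_prio)
        return True
    return False                               # arrival is dropped
```

Pushing out from the longest queue keeps the memory shared evenly across ports, while the priority comparison protects high-priority cells from displacement.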
Teletraffic Science and Engineering | 1997
Abhijit K. Choudhury; Ellen L. Hahne
Buffer management schemes are needed in shared-memory ATM switches to regulate the sharing of memory among different output port queues and among traffic classes with different loss priorities. Earlier we proposed a single-priority scheme called Dynamic Threshold, in which the maximum permissible queue length is proportional to the unused buffering in the switch. In this paper, we propose and analyze four different ways of incorporating loss priorities into the Dynamic Threshold scheme. The analysis models sources as deterministic fluids. Output port loads may consist of any mixture of loss priorities, and these loads may vary from port to port. We determine how each scheme allocates buffers among the competing ports and loss priority classes, and we also note how this buffer allocation induces an allocation of bandwidth among the loss priority classes at each port. We find that minor variations in the Dynamic Threshold control law can produce dramatically different resource allocations.