Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Joseph A. Bannister is active.

Publication


Featured research published by Joseph A. Bannister.


International Conference on Computer Communications | 1990

Topological design of the wavelength-division optical network

Joseph A. Bannister; Luigi Fratta; Mario Gerla

The twofold purpose of this research is to develop mathematical programming tools for the optimal or near-optimal design of a new type of network architecture called the wavelength-division optical network (WON) and to discover underlying design principles for such networks via experimentation using these tools. The WON, a multichannel, multihop lightwave network with tunable transceivers, is suitable for use as a metropolitan area network. A unique feature of the WON is that a great number of virtual topologies can be mapped onto a given physical topology. The problems of virtual- and physical-topology design in the WON are described, techniques for their solution are presented, and the interplay between these problems is studied.
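To make the idea of mapping a virtual topology onto a physical one concrete, here is a small sketch (not from the paper; the node names, link set, and transceiver budgets are invented) that checks whether a candidate set of logical links respects per-node limits on tunable transmitters and receivers.

```python
# Illustrative sketch only: a WON virtual topology as a set of directed logical
# links, with a feasibility check against per-node transceiver budgets.
from collections import defaultdict

def virtual_topology_is_feasible(logical_links, tx_per_node, rx_per_node):
    """Each logical link (u, v) consumes one tunable transmitter at u and one
    tunable receiver at v; feasibility means no node exceeds its budget."""
    tx_used, rx_used = defaultdict(int), defaultdict(int)
    for u, v in logical_links:
        tx_used[u] += 1
        rx_used[v] += 1
    return (all(n <= tx_per_node for n in tx_used.values()) and
            all(n <= rx_per_node for n in rx_used.values()))

# Example: a 4-node virtual topology with 2 transceivers per node (hypothetical).
links = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("A", "C"), ("C", "A")]
print(virtual_topology_is_feasible(links, tx_per_node=2, rx_per_node=2))  # True
```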


Internet Measurement Conference | 2008

Census and survey of the visible internet

John S. Heidemann; Yuri Pradkin; Ramesh Govindan; Christos Papadopoulos; Genevieve Bartlett; Joseph A. Bannister

Prior measurement studies of the Internet have explored traffic and topology, but have largely ignored edge hosts. While the number of Internet hosts is very large, and many are hidden behind firewalls or in private address space, there is much to be learned from examining the population of visible hosts, those with public unicast addresses that respond to messages. In this paper we introduce two new approaches to explore the visible Internet. Applying statistical population sampling, we use censuses to walk the entire Internet address space, and surveys to probe frequently a fraction of that space. We then use these tools to evaluate address usage, where we find that only 3.6% of allocated addresses are actually occupied by visible hosts, and that occupancy is unevenly distributed, with a quarter of responsive /24 address blocks (subnets) less than 5% full, and only 9% of blocks more than half full. We show about 34 million addresses are very stable and visible to our probes (about 16% of responsive addresses), and we project from this up to 60 million stable Internet-accessible computers. The remainder of allocated addresses are used intermittently, with a median occupancy of 81 minutes. Finally, we show that many firewalls are visible, measuring significant diversity in the distribution of firewalled block size. To our knowledge, we are the first to take a census of edge hosts in the visible Internet since 1982, to evaluate the accuracy of active probing for address census and survey, and to quantify these aspects of the Internet.
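One of the paper's core metrics is how fully each responsive /24 block is occupied. The sketch below illustrates only that bookkeeping: the ICMP censuses and surveys that produce the responses are omitted, and `responsive_addrs` is a hypothetical list of addresses that answered a probe.

```python
# Hedged illustration: fraction of responsive addresses per /24 block.
import ipaddress
from collections import Counter

def block_occupancy(responsive_addrs):
    """Map each /24 block to the fraction of its 256 addresses that responded."""
    counts = Counter(
        ipaddress.ip_network(f"{addr}/24", strict=False) for addr in responsive_addrs
    )
    return {str(block): n / 256 for block, n in counts.items()}

# Example with a handful of hypothetical responsive hosts.
sample = ["192.0.2.1", "192.0.2.9", "198.51.100.200"]
print(block_occupancy(sample))
# {'192.0.2.0/24': 0.0078125, '198.51.100.0/24': 0.00390625}
```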


Communications of the ACM | 2003

Transport protocols for high performance

Aaron Falk; Theodore Faber; Joseph A. Bannister; Andrew A. Chien; Robert L. Grossman; Jason Leigh

The Explicit Control Protocol and other new congestion-control systems greatly improve application performance over a range of network infrastructure, including extremely high-speed and high-delay links. What then is next for TCP and other established but less-capable Internet protocols?
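For readers unfamiliar with the Explicit Control Protocol, the sketch below conveys the flavor of the per-interval computation an XCP router performs, simplified from the published design; the constants 0.4 and 0.226 are the gains usually quoted in its stability analysis and are used here purely for illustration.

```python
# Simplified sketch of XCP-style aggregate feedback at a router, computed once
# per control interval. Positive feedback asks senders to speed up, negative to
# slow down. Parameter names are illustrative, not from any particular codebase.
ALPHA, BETA = 0.4, 0.226

def xcp_aggregate_feedback(capacity_bps, input_rate_bps, persistent_queue_bytes, avg_rtt_s):
    spare = capacity_bps / 8 - input_rate_bps / 8   # unused capacity, bytes/s
    return ALPHA * avg_rtt_s * spare - BETA * persistent_queue_bytes

# Example: a 1 Gb/s link carrying 800 Mb/s with a 50 KB standing queue and 80 ms RTT.
print(xcp_aggregate_feedback(1e9, 8e8, 50_000, 0.08))  # positive -> window increase
```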


Journal of Lightwave Technology | 2003

All-optical decrementing of a packet's time-to-live (TTL) field and subsequent dropping of a zero-TTL packet

J.E. McGeehan; Saurabh Kumar; Deniz Gurkan; S.M.R.M. Nezam; Alan E. Willner; K. Parameswaran; M. M. Fejer; Joseph A. Bannister; Joseph D. Touch

We demonstrate an optical time-to-live (TTL) decrementing module for optical packet-switched networks. Our module acts on a standard NRZ-modulated binary TTL field within a 10 Gb/s packet and decrements it by one if the TTL is nonzero. If the TTL of the incoming packet is zero, the module signals an optical switch to drop the packet. Our technique is independent of the TTL length, does not require the use of ultrashort RZ optical pulses, requires no guard time between the end of the TTL field and the packet data, and has only a 2.4 dB power penalty at 10^-9 bit-error rate.
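In packet-forwarding terms, the module implements the familiar TTL rule sketched below; this software analogue is only to clarify the behavior, since the paper's contribution is performing it optically on an NRZ-encoded field rather than on bytes in memory.

```python
# Software analogue of the optical module's behavior: decrement a nonzero TTL,
# drop the packet when the incoming TTL is zero. Field layout is hypothetical.
def forward(packet: dict):
    """Return the packet with TTL decremented, or None if it must be dropped."""
    if packet["ttl"] == 0:
        return None                                  # zero TTL: drop the packet
    return {**packet, "ttl": packet["ttl"] - 1}

print(forward({"dst": "node-7", "ttl": 3}))   # {'dst': 'node-7', 'ttl': 2}
print(forward({"dst": "node-7", "ttl": 0}))   # None
```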


International Conference on Computer Communications | 2004

The need for media access control in optical CDMA networks

Purushotham Kamath; Joseph D. Touch; Joseph A. Bannister

Optical CDMA local area networks allow shared access to a broadcast medium. Every node on the network is assigned an optical orthogonal codeword (OOC) to transmit or receive on. OOCs are designed to be pseudo-orthogonal, i.e., the correlation (and therefore the interference) between pairs of codewords is constrained. This paper demonstrates that the use of optical CDMA does not preclude the need for a media access control (MAC) layer protocol to resolve contention for the shared media. OOCs have low spectral efficiency. As more codewords are transmitted simultaneously, the interference between codewords increases and the network throughput falls. This paper analyzes a network architecture where there is virtually no MAC layer, except for choice of the codeset, and shows that its throughput degrades and collapses under moderate to heavy load. We propose an alternate architecture called interference avoidance where nodes on the network use media access mechanisms to avoid causing interference on the line, thereby improving network throughput. Interference avoidance is analyzed and it is shown that it can provide up to 30% improvement in throughput with low delays and no throughput collapse. We validate our analysis through simulation with realistic network traffic traces.
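The pseudo-orthogonality constraint can be illustrated with a short check of the cyclic cross-correlation between two codewords; the codewords below are made up for the example and are not drawn from the codesets analyzed in the paper.

```python
# Illustrative check of pseudo-orthogonality: the cyclic cross-correlation
# between two optical orthogonal codewords stays below a small bound.
def max_cyclic_crosscorrelation(a, b):
    """Maximum, over all cyclic shifts, of the number of overlapping 1-chips."""
    n = len(a)
    return max(
        sum(a[i] & b[(i + shift) % n] for i in range(n))
        for shift in range(n)
    )

c1 = [1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0]   # weight-3 codeword, length 13
c2 = [1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
print(max_cyclic_crosscorrelation(c1, c2))       # 1: at most one chip overlaps at any shift
```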


International Conference on Communications | 1990

Design of the wavelength-division optical network

Joseph A. Bannister; Mario Gerla

The authors consider the wavelength-division optical network (WON), a generalization of the ShuffleNet concept. Motivated by the potential drawback of large delays in poorly designed WONs, they introduce and use a queuing-network model of the WON that provides a general framework for analyzing the performance of these networks. This performance model is used to formulate the virtual-topology design problem. Using the optimization technique of simulated annealing, a number of instances of the virtual-topology design problem were solved. The results have been very encouraging and have yielded significant, and sometimes dramatic, improvement in delay and throughput, compared to previously proposed, unoptimized design approaches such as ShuffleNet and the Manhattan street network. The authors discuss their experiences with the optimization algorithm and outline continuing efforts in this area.
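The optimization loop is ordinary simulated annealing; the toy version below conveys only its structure, under placeholder assumptions, with the cost function standing in for the delay predicted by the queuing-network model and the neighbour move standing in for rewiring one logical link.

```python
# Generic simulated-annealing loop (a sketch, not the paper's implementation).
import math
import random

def anneal(initial, cost, neighbour, t0=1.0, cooling=0.995, steps=10_000):
    """Minimize cost() over states reachable via neighbour()."""
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t0
    for _ in range(steps):
        candidate = neighbour(current)
        cand_cost = cost(candidate)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if cand_cost < current_cost or random.random() < math.exp((current_cost - cand_cost) / t):
            current, current_cost = candidate, cand_cost
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        t *= cooling                     # geometric cooling schedule
    return best
```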


international conference on computer communications | 1994

On the design of optical deflection-routing networks

Flaminio Borgonovo; Luigi Fratta; Joseph A. Bannister

Deflection routing plays a prominent role in many optical network architectures, because it can be implemented with modest packet-buffering requirements. From the practical perspective, however, the implementation of deflection routing, which is normally based on global time slotting, might pose challenges. The authors develop approaches to the implementation of both slotted and unslotted deflection-routing optical networks. They analyse important tradeoffs that are inherent to the design of optical deflection-routing networks and compare the performance of slotted and unslotted networks. Under a reasonable set of assumptions about optical technology, the results suggest that the unslotted link protocol should be the preferred approach to building the optical deflection-routing network.
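The per-node decision that deflection routing relies on can be sketched in a few lines; random tie-breaking here is just one possible deflection criterion, and the slotted-versus-unslotted timing question that the paper actually studies is ignored in this toy.

```python
# Minimal sketch of a 2x2 deflection-routing node: when two arriving packets
# prefer the same output link, one of them is deflected to the other link.
import random

def route_two_packets(pref_a, pref_b, links=("east", "north")):
    """pref_a / pref_b are the preferred output links of packets A and B."""
    if pref_a != pref_b:
        return {pref_a: "A", pref_b: "B"}            # no contention
    winner = random.choice(["A", "B"])                # the loser is deflected
    other = links[0] if pref_a == links[1] else links[1]
    return {pref_a: winner, other: "B" if winner == "A" else "A"}

print(route_two_packets("east", "east"))  # e.g. {'east': 'A', 'north': 'B'}
```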


Modeling, Analysis, and Simulation of Computer and Telecommunication Systems | 2002

Generation of high bandwidth network traffic traces

Purushotham Kamath; Kun Chan Lan; John S. Heidemann; Joseph A. Bannister; Joseph D. Touch

High bandwidth network traffic traces are needed to understand the behavior of high speed networks (such as the Internet backbone). However, the implementation of a mechanism to collect such traces is difficult in practice. In the absence of real traces, tools to generate high bandwidth traces would aid the study of high speed network behavior. We describe three methods of generating high bandwidth network traces: scaling low bandwidth network traffic traces; merging multiple low bandwidth traces; generating traces through simulation by scaling a structural model of real world traces. We evaluate the generated traces and discuss the advantages and disadvantages of each method. We also discuss some of the issues involved in generating traces by the structural model method.
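Of the three methods, merging is the easiest to sketch: interleave records from several low-bandwidth traces by timestamp. The records below are simplified (timestamp, size) tuples; real traces would come from pcap files and would need address and port disambiguation between sources, which this sketch skips.

```python
# Sketch of the trace-merging method: combine several sorted low-bandwidth
# traces into one higher-rate trace, ordered by timestamp.
import heapq

def merge_traces(*traces):
    """Each trace is an iterable of (timestamp, packet_size) sorted by timestamp."""
    return list(heapq.merge(*traces, key=lambda rec: rec[0]))

t1 = [(0.001, 1500), (0.020, 40), (0.031, 1500)]
t2 = [(0.005, 576), (0.019, 1500)]
print(merge_traces(t1, t2))
# [(0.001, 1500), (0.005, 576), (0.019, 1500), (0.020, 40), (0.031, 1500)]
```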


Performance Evaluation | 1992

A versatile model for predicting the performance of deflection-routing networks

Joseph A. Bannister; Flaminio Borgonovo; Luigi Fratta; Mario Gerla

Deflection routing can be used in networks whose stations have the same number of input and output links. Fixed-length packets arrive synchronously on a station's input links at the beginning of a time slot, and each packet is routed toward the output link that offers the shortest path to its destination. Since the number of packet buffers at each output link is finite, the simultaneous contention of two packets for the last buffer of the common output link must be resolved by “deflecting” one of the packets according to a specified criterion (e.g. at random, by destination proximity, or by packet age). Deflection routing can therefore be used with as few as one packet buffer per output link. The potentially unbounded number of routes that a given packet can take makes analyzing the performance of such networks difficult. Using independence assumptions, we develop an efficient, high-fidelity performance model of deflection routing that allows us to estimate the mean end-to-end packet delay in a network that has any given two-connected topology, a single packet buffer at each output port, and an arbitrary traffic matrix.


ACM Special Interest Group on Data Communication | 2005

Experiences with a continuous network tracing infrastructure

Alefiya Hussain; Genevieve Bartlett; Yuri Pryadkin; John S. Heidemann; Christos Papadopoulos; Joseph A. Bannister

One of the most pressing problems in network research is the lack of long-term trace data from ISPs. The Internet carries an enormous volume and variety of data; mining this data can provide valuable insight into the design and development of new protocols and applications. Although capture cards for high-speed links exist today, actually making the network traffic available for analysis involves more than just getting the packets off the wire: it also requires handling large and variable traffic loads, sanitizing and anonymizing the data, and coordinating access by multiple users. In this paper we discuss the requirements, challenges, and design of an effective traffic monitoring infrastructure for network research. We describe our experience in deploying and maintaining a multi-user system for continuous trace collection at a large regional ISP. We evaluate the performance of our system and show that it can support sustained collection and processing rates of over 160-300 Mbit/s.
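One step in such a pipeline is anonymizing addresses before traces are shared. The toy below uses a keyed hash per address purely as an illustration; it is not the authors' method, and unlike the prefix-preserving schemes often used for research traces it does not preserve subnet relationships between anonymized addresses.

```python
# Toy IP anonymization with a keyed hash (illustrative only; key is hypothetical).
import hashlib
import hmac
import ipaddress

SECRET_KEY = b"rotate-me-per-dataset"          # hypothetical per-dataset key

def anonymize_ip(addr: str) -> str:
    digest = hmac.new(SECRET_KEY, ipaddress.ip_address(addr).packed, hashlib.sha256).digest()
    return str(ipaddress.IPv4Address(int.from_bytes(digest[:4], "big")))

print(anonymize_ip("192.0.2.17"))   # deterministic mapping; not recoverable without the key
```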

Collaboration


Dive into Joseph A. Bannister's collaborations.

Top Co-Authors

Mario Gerla

University of California

Alan E. Willner

University of Southern California

Purushotham Kamath

University of Southern California

J.E. McGeehan

University of Southern California

Stephen Suryaputra

Information Sciences Institute

Larry A. Bergman

California Institute of Technology
