Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where William R. Wing is active.

Publication


Featured research published by William R. Wing.


IEEE Communications Magazine | 2005

Ultrascience net: network testbed for large-scale science applications

Nageswara S. V. Rao; William R. Wing; Steven M. Carter; Qishi Wu

UltraScienceNet is an experimental wide-area network testbed that enables the development of networking technologies required for next-generation large-scale scientific applications. It provides on-demand dedicated high-bandwidth channels for large data transfers, as well as high-resolution, high-precision channels for fine control operations. In the initial deployment, its data plane consists of several thousand miles of dual 10 Gb/s lambdas. The channels are provisioned on demand using layer 1 and layer 2 switches in the backbone and multiple service provisioning platforms at the edges, in a flexible configuration using a secure control plane. A centralized scheduler is employed to compute future channel allocations, and a signaling daemon generates the configuration signals to switches at the appropriate times. The control plane is implemented using an out-of-band virtual private network, which encrypts the switching signals and also provides authenticated user and application access. Transport experiments conducted on a smaller test connection provide useful information about the basic properties and issues of utilizing dedicated channels in applications.
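
The scheduling idea here, committing future channel allocations centrally before signaling the switches, can be illustrated with simple per-link bookkeeping. The following is a minimal Python sketch, not USN's scheduler; Link, peak_usage, and reserve are hypothetical names.

```python
# Minimal sketch of advance channel reservation on a single link,
# loosely inspired by the centralized-scheduler design described above.
# All names and units here are illustrative, not from the paper.

from dataclasses import dataclass, field

@dataclass
class Link:
    capacity_gbps: float
    bookings: list = field(default_factory=list)   # (start, end, gbps)

    def peak_usage(self, start: float, end: float) -> float:
        """Highest committed bandwidth at any instant in [start, end)."""
        events = []
        for s, e, bw in self.bookings:
            s, e = max(s, start), min(e, end)
            if s < e:                       # booking overlaps the window
                events += [(s, bw), (e, -bw)]
        events.sort()                       # releases sort before grabs
        load = peak = 0.0
        for _, delta in events:
            load += delta
            peak = max(peak, load)
        return peak

    def reserve(self, start: float, end: float, gbps: float) -> bool:
        """Book a dedicated channel for a future window if it fits."""
        if self.capacity_gbps - self.peak_usage(start, end) >= gbps:
            self.bookings.append((start, end, gbps))
            return True
        return False
```

On a 10 Gb/s Link, reserve(100.0, 200.0, 10.0) succeeds once, and any overlapping second request is rejected until the window frees up.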


IEEE International Conference on Computer Communications | 2006

Control Plane for Advance Bandwidth Scheduling in Ultra High-Speed Networks

Nageswara S. V. Rao; Qishi Wu; Song Ding; Steven M. Carter; William R. Wing; Amitabha Banerjee; Dipak Ghosal; Biswanath Mukherjee

A control-plane architecture for supporting advance reservation of dedicated bandwidth channels on a switched network infrastructure is described, including the front-end web interface, the user and token management scheme, the bandwidth scheduler, and the signaling daemon. A path computation algorithm for bandwidth scheduling is proposed, based on an extension of the Bellman-Ford algorithm to an algebraic structure on sequences of disjoint non-negative real intervals. An implementation of this architecture for UltraScience Net is briefly described.
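
To convey the algebraic idea, suppose each edge's availability is a sorted sequence of disjoint time intervals during which the requested bandwidth is free. A toy sketch of the extended relaxation follows; it is our illustration under those assumptions, not the paper's algorithm, and all names are ours.

```python
# Toy extension of Bellman-Ford from numbers to sequences of disjoint
# time intervals: "relaxation" intersects interval sequences along a
# path and unions them across alternative paths.

def intersect(a, b):
    """Intersect two sorted lists of disjoint [s, e) intervals."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        s, e = max(a[i][0], b[j][0]), min(a[i][1], b[j][1])
        if s < e:
            out.append((s, e))
        if a[i][1] < b[j][1]:
            i += 1
        else:
            j += 1
    return out

def union(a, b):
    """Union of two sorted lists of disjoint intervals, normalized."""
    out = []
    for s, e in sorted(a + b):
        if out and s <= out[-1][1]:
            out[-1] = (out[-1][0], max(out[-1][1], e))
        else:
            out.append((s, e))
    return out

def feasible_windows(n_nodes, edges, src):
    """For each node, the time windows during which some path from src
    can carry the request end to end. edges: (u, v, intervals) triples."""
    label = [[] for _ in range(n_nodes)]
    label[src] = [(0.0, float("inf"))]
    for _ in range(n_nodes - 1):            # classic |V|-1 rounds
        changed = False
        for u, v, ivals in edges:
            cand = union(label[v], intersect(label[u], ivals))
            if cand != label[v]:
                label[v], changed = cand, True
        if not changed:
            break
    return label
```

Since intersection only shrinks a label, cycles never help, so the classic |V|-1 relaxation rounds still suffice to reach the fixed point.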


IEEE International Conference on High Performance Computing, Data and Analytics | 2008

Wide-area performance profiling of 10GigE and InfiniBand technologies

Nageswara S. V. Rao; Weikuan Yu; William R. Wing; Stephen W. Poole; Jeffrey S. Vetter

For wide-area high-performance applications, light-paths provide 10 Gbps connectivity, and multi-core hosts with PCI-Express can drive such data rates. However, sustaining such end-to-end application throughputs across connections of thousands of miles remains challenging, and current performance studies of such solutions are very limited. We present an experimental study of two solutions for achieving such throughputs, based on: (a) 10 Gbps Ethernet with TCP/IP transport protocols, and (b) InfiniBand and its wide-area extensions. For both, we generate performance profiles over 10 Gbps connections of lengths up to 8600 miles, and discuss the components, complexity, and limitations of sustaining such throughputs using different connections and host configurations. Our results indicate that the IB solution is better suited for applications with a single large flow, while the 10GigE solution is better for those with multiple competing flows.
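
Profiles like these reduce to timed memory-to-memory transfers between tuned hosts. A minimal sketch of such a probe is given below; the ports and buffer sizes are illustrative assumptions, and the study itself used dedicated tools and carefully tuned hosts rather than a script like this.

```python
# Minimal memory-to-memory TCP throughput probe, in the spirit of the
# measurements behind these profiles (illustrative only).

import socket
import time

def sink(port: int = 5001, chunk: int = 1 << 20) -> None:
    """Accept one connection and discard everything it sends."""
    with socket.create_server(("", port)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(chunk):
                pass

def probe(host: str, port: int = 5001, seconds: int = 10,
          chunk: int = 1 << 20) -> None:
    """Blast zeros at the sink and report achieved throughput."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # On multi-thousand-mile paths the bandwidth-delay product, not the
    # NIC, is the usual limit, so ask for large socket buffers up front.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 64 << 20)
    s.connect((host, port))
    buf = b"\x00" * chunk
    sent, end = 0, time.time() + seconds
    while time.time() < end:
        sent += s.send(buf)
    s.close()
    print(f"{sent * 8 / seconds / 1e9:.2f} Gb/s over {seconds} s")
```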


Annales Des Télécommunications | 2006

High-speed dedicated channels and experimental results with Hurricane protocol

Nageswara S. V. Rao; Qishi Wu; Steven M. Carter; William R. Wing

Networks are currently being deployed to provide dedicated channels to support the large data transfers and stable control flows needed in large-scale scientific applications. We present experimental results on the application-level throughputs achievable on such channels using a range of hosts and dedicated connections. These results highlight the throughput limitations that arise in several cases from host issues, including disk and file system speeds, processor scheduling and loads, and the complexity of internal data paths. We characterize such effects using the notion of host-bandwidth, which must be considered together with the connection-bandwidth in designing and optimizing transport protocols for dedicated channels. We propose a new transport protocol implementation, named Hurricane, to achieve high utilization of dedicated channels. While the overall protocol is quite similar to existing UDP-based protocols, new parameters, such as the group size of NACKs, are identified and carefully optimized to achieve high channel utilization. Our end hosts consist of workstations, a cluster, and a Cray X1 supercomputer. Between two workstations, we consider: (A) a 1 Gbps layer 3 connection of several hundred miles, and (B) a 10 Gbps layer 2 connection of several thousand miles. Between the Cray X1 and the cluster, we consider: (C) a 450 Mbps layer 3 channel provisioned by policy, and (D) a 1 Gbps layer 2 connection provisioned over an MPLS tunnel.
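
The NACK-group parameter can be sketched as follows, assuming a simple UDP framing with an 8-byte sequence number per datagram; this is not Hurricane's wire format, and the header layout, group size, and function names are all hypothetical.

```python
# Illustrative receiver for a NACK-based UDP transport, isolating the
# "group size of NACKs" parameter mentioned above. Framing is assumed.

import socket
import struct

HDR = struct.Struct("!Q")      # assumed framing: 8-byte sequence number
NACK_GROUP = 64                # tunable: how many losses per NACK message

def receive(port: int, total_packets: int) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    got, expected = set(), 0
    while len(got) < total_packets:
        data, sender = sock.recvfrom(65536)
        (seq,) = HDR.unpack_from(data)
        got.add(seq)
        while expected in got:         # advance past the received prefix
            expected += 1
        missing = [s for s in range(expected, seq) if s not in got]
        # Batch the retransmit requests: one NACK per group of losses,
        # rather than one per lost packet, keeps the control path cheap
        # on a channel otherwise fully dedicated to the data flow.
        for i in range(0, len(missing), NACK_GROUP):
            group = missing[i:i + NACK_GROUP]
            sock.sendto(struct.pack(f"!{len(group)}Q", *group), sender)
```

A production receiver would pace NACKs on a timer rather than emit them on every arrival; the sketch only isolates the grouping parameter.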


Journal of Physics: Conference Series | 2005

Networking for large-scale science: infrastructure, provisioning, transport and application mapping

Nageswara S. V. Rao; Steven M. Carter; Qishi Wu; William R. Wing; Mengxia Zhu; Anthony Mezzacappa; Malathi Veeraraghavan; John M. Blondin

Large-scale science computations and experiments require unprecedented network capabilities in the form of large bandwidth and dynamically stable connections to support data transfers, interactive visualizations, and monitoring and steering operations. A number of component technologies dealing with the infrastructure, provisioning, transport, and application mappings must be developed and/or optimized to achieve these capabilities. We present a brief account of the following technologies that contribute toward achieving these network capabilities: (a) the DOE UltraScienceNet and NSF CHEETAH network testbeds, which provide on-demand and scheduled dedicated network connections; (b) experimental results on transport protocols that achieve close to 100% utilization on dedicated 1 Gbps wide-area channels; (c) a scheme for optimally mapping a visualization pipeline onto a network to minimize end-to-end delays; and (d) interconnect configurations and protocols that provide multiple Gbps flows from a Cray X1 to external hosts.
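
Item (c) is an optimization over stage placements; a toy dynamic program conveys the flavor. The paper's exact formulation may differ, and compute, transfer, and map_pipeline are illustrative names under assumed inputs.

```python
# Toy dynamic program: map a linear visualization pipeline of m stages
# onto a path of n hosts to minimize end-to-end delay. The raw data is
# assumed to start on host 0; stages are assigned to hosts in
# non-decreasing order along the path.

import math

def map_pipeline(compute, transfer):
    """compute[i][j]: time for stage i on host j.
    transfer[j][k]: time to ship a stage's output from host j to host k.
    Returns the minimum total delay over placements of the last stage."""
    m, n = len(compute), len(compute[0])
    best = [[math.inf] * n for _ in range(m)]
    for j in range(n):
        best[0][j] = (0 if j == 0 else transfer[0][j]) + compute[0][j]
    for i in range(1, m):
        for k in range(n):
            best[i][k] = compute[i][k] + min(
                best[i - 1][j] + (0 if j == k else transfer[j][k])
                for j in range(k + 1)       # only hosts at or before k
            )
    return min(best[m - 1])
```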


International Conference on Computer Communications | 2009

Experimental Analysis of Flow Optimization and Data Compression for TCP Enhancement

Nageswara S. V. Rao; Stephen W. Poole; William R. Wing

Flow optimization and data compression methods promise to improve TCP performance, and edge devices that implement them to transparently improve wide-area network performance are currently being developed. We present an experimental study of the TCP throughput performance of such Cisco devices using 1 Gbps connections of thousands of miles over UltraScience Net. Based on iperf measurements, we have the following observations: (i) multi-fold throughput improvements are achieved over buffer-tuned TCP, both for single streams and for most multiple-stream configurations; and (ii) high throughputs are maintained over connection lengths of thousands of miles. For file transfers using iperf, our experiments included files with repeated bytes, files with uniformly randomly generated bytes, and supernova simulation data in HDF format: (i) the highest and lowest throughputs were achieved for HDF and random data files, respectively; (ii) most throughputs were maximized by 5-10 parallel TCP streams; and (iii) pre-compression of files using gzip did not have a significant effect on transport performance.
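
Observation (ii) is easy to reproduce in outline: open several TCP connections and sum their goodput. A minimal sketch under assumed parameters, reusing a data sink such as the probe server sketched earlier in this list:

```python
# Illustrative: aggregate goodput over N parallel TCP streams. The
# stream count, port, and chunk size are arbitrary choices, not values
# from the paper; the remote end is assumed to accept each stream.

import socket
import threading
import time

def one_stream(host, port, seconds, totals, idx, chunk=1 << 20):
    buf = b"\x00" * chunk
    with socket.create_connection((host, port)) as s:
        sent, end = 0, time.time() + seconds
        while time.time() < end:
            sent += s.send(buf)
    totals[idx] = sent

def parallel_send(host, port=5001, streams=8, seconds=10):
    totals = [0] * streams
    threads = [threading.Thread(target=one_stream,
                                args=(host, port, seconds, totals, i))
               for i in range(streams)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"{sum(totals) * 8 / seconds / 1e9:.2f} Gb/s aggregate")
```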


International Journal of Distributed Sensor Networks | 2009

UltraScience Net: High-Performance Network Research Test-Bed

Nageswara S. V. Rao; William R. Wing; Susan E. Hicks; Stephen W. Poole; Frank A. DeNap; Steven M. Carter; Qishi Wu

The high-performance networking requirements of next-generation large-scale applications fall into two broad classes: a) high bandwidths, typically multiples of 10 Gbps, to support bulk data transfers, and b) stable bandwidths, typically at much lower rates, to support computational steering, remote visualization, and remote control of instrumentation. Current Internet technologies, however, are severely limited in meeting these demands because such bulk bandwidths are available only in the backbone, and stable control channels are hard to realize over shared connections. The UltraScience Net (USN) facilitates the development of such technologies by providing dynamic, cross-country dedicated 10 Gbps channels for large data transfers, and 150 Mbps channels for interactive and control operations. The contributions of the USN project are two-fold.

Infrastructure technologies for a network experimental facility: USN developed and/or demonstrated a number of infrastructure technologies needed for a national-scale network experimental facility. Compared to the Internet, USN's data plane is different in that it can be partitioned into isolated layer 1 or layer 2 connections, and its control plane is different in the ability of users and applications to set up and tear down channels as needed. Its design required several new components, including a virtual private network infrastructure, a bandwidth and channel scheduler, and a dynamic signaling daemon. The control plane employs a centralized scheduler to compute the channel allocations and a signaling daemon to generate configuration signals to switches. In a nutshell, USN demonstrated the ability to build and operate a stable national-scale switched network.

Structured network research experiments: a number of network research experiments have been conducted on USN that cannot be easily supported over existing network facilities, including test-beds and production networks. It settled an open question by demonstrating that switched connections and Multiprotocol Label Switching (MPLS) tunnels over routed networks offer comparable performance. Furthermore, such connections can be easily peered, and the performance of the resultant hybrid connections remains comparable to that of the constituent pure connections. USN experiments demonstrated that InfiniBand transport can be effectively extended to wide-area connections of thousands of miles, which opens up new opportunities for efficient bulk data transport. USN provided dedicated connections to a Cray X1 supercomputer and helped diagnose TCP performance problems that might otherwise have been incorrectly attributed to traffic on shared connections. USN contributed to the development of transport methods for dedicated connections, which are not subject to competing traffic. Recently, experiments were conducted to assess the performance of application acceleration devices that employ flow optimization and data compression methods to improve TCP performance.


Report, 20 Apr 2001 | 2001

Building and measuring a high performance network architecture

William Kramer; Timothy Toole; Chuck Fisher; Jon Dugan; David R. Wheeler; William R. Wing; William Nickless; Gregory Goddard; Steven Corbato; E. Paul Love; Paul Daspit; Hal Edwards; Linden Mercer; David Koester; Basil Decina; Eli Dart; Paul Reisinger; Riki Kurihara; Matthew J. Zekauskas; Eric Plesset; Julie Wulf; Douglas Luce; James Rogers; Rex Duncan; Jeffery Mauth

Once a year, the SC conferences present a unique opportunity to create and build one of the most complex and highest-performance networks in the world. At SC2000, large-scale and complex local- and wide-area networking connections were demonstrated, including large-scale distributed applications running on different architectures. This project used the unique opportunity presented at SC2000 to create a testbed network environment and then employed that network to demonstrate and evaluate high-performance computational and communication applications. The testbed incorporated many interoperable systems and services and was designed for measurement from the very beginning. The end results were key insights into how to use novel, high-performance networking technologies, together with a body of measurements that offers a view into the networks of the future.


2007 High-Speed Networks Workshop | 2007

Measurements On Hybrid Dedicated Bandwidth Connections

Nageswara S. V. Rao; William R. Wing; Qishi Wu; Nasir Ghani; Qing Liu; Tom Lehman; Chin Guok; Eli Dart

To meet the data transport demands of large-scale applications, several research and production networks now offer dedicated connections between client subnets or hosts. Such dedicated connections can be provisioned using two fundamentally different technologies: (i) SONET or Ethernet connections over switched networks, and (ii) MPLS tunnels over routed networks. Since these two options represent significantly different cost-benefit trade-offs, a performance comparison between connections provisioned using them is essential to making deployment decisions. We compare 1 Gbps dedicated connections with lengths up to several thousand miles over UltraScience Net and ESnet, wherein the dedicated connections are implemented as SONET connections and MPLS tunnels, respectively. In terms of bandwidth measurements, throughput profiles, file transfer rates, and message delays, both types of connections offer comparable performance. Furthermore, these performance parameters are preserved when hybrid connections are composed by concatenating SONET connections and MPLS tunnels using VLANs implemented on them.


Optical Fiber Communication Conference | 2007

GRID and Optical Networks: How to Bridge the Gap

Nageswara S. V. Rao; Qishi Wu; Steven M. Carter; William R. Wing

Utilizing high-performance networks and end systems, we present the challenges of, and approaches to, making the underlying networking capabilities fully available to applications through impedance matching along the entire application-middleware-network execution and data paths.

Collaboration


Dive into William R. Wing's collaborations.

Top Co-Authors

Nageswara S. V. Rao, Oak Ridge National Laboratory
Qishi Wu, University of Memphis
Steven M. Carter, Oak Ridge National Laboratory
Stephen W. Poole, Oak Ridge National Laboratory
Eli Dart, Lawrence Berkeley National Laboratory
Chin Guok, Lawrence Berkeley National Laboratory
D. E. Greenwood, Oak Ridge National Laboratory