Hans-Werner Braun
San Diego Supercomputer Center
Publications
Featured research published by Hans-Werner Braun.
ACM Special Interest Group on Data Communication | 1993
Kimberly C. Claffy; George C. Polyzos; Hans-Werner Braun
The relative performance of different data collection methods in the assessment of various traffic parameters is significant when the amount of data generated by a complete trace of a traffic interval is computationally overwhelming, and even capturing summary statistics for all traffic is impractical. This paper presents a study of the performance of various methods of sampling in answering questions related to wide area network traffic characterization. Using a packet trace from a network environment that aggregates traffic from a large number of sources, we simulate various sampling approaches, including time-driven and event-driven methods, with both random and deterministic selection patterns, at a variety of granularities. Using several metrics which indicate the similarity between two distributions, we then compare the sampled traces to the parent population. Our results revealed that the time-triggered techniques did not perform as well as the packet-triggered ones. Furthermore, the performance differences within each class (packet-based or time-based techniques) are small.
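As a concrete illustration of the sampling taxonomy the paper explores, here is a minimal Python sketch (not the authors' code; the trace representation, parameters, and helper names are illustrative):

```python
import random

# A trace is a list of (timestamp_seconds, packet_size_bytes) tuples,
# sorted by timestamp. The paper's taxonomy: event-driven (packet-count)
# versus time-driven triggers, each deterministic or random.

def packet_driven_deterministic(trace, n):
    """Keep every n-th packet (systematic, event-driven)."""
    return trace[::n]

def packet_driven_random(trace, n, rng=random):
    """Keep each packet independently with probability 1/n."""
    return [p for p in trace if rng.random() < 1.0 / n]

def time_driven_deterministic(trace, interval):
    """Keep the first packet observed in each window of `interval` seconds."""
    sampled, next_tick = [], float("-inf")
    for ts, size in trace:
        if ts >= next_tick:
            sampled.append((ts, size))
            next_tick = ts + interval
    return sampled

def mean_size(trace):
    """One simple statistic to compare a sample against the parent
    population; the paper uses full distribution-similarity metrics."""
    return sum(size for _, size in trace) / max(len(trace), 1)
```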
BioScience | 2005
John H. Porter; Peter W. Arzberger; Hans-Werner Braun; Pablo Bryant; Stuart H. Gage; Todd Hansen; Paul J. Hanson; Chau-Chin Lin; Fang-Pang Lin; Timothy K. Kratz; William K. Michener; Sedra Shapiro; Thomas Williams
Field biologists and ecologists are starting to open new avenues of inquiry at greater spatial and temporal resolution, allowing them to “observe the unobservable” through the use of wireless sensor networks. Sensor networks facilitate the collection of diverse types of data (from temperature to imagery and sound) at frequent intervals—even multiple times per second—over large areas, allowing ecologists and field biologists to engage in intensive and expansive sampling and to unobtrusively collect new types of data. Moreover, real-time data flows allow researchers to react rapidly to events, thus extending the laboratory to the field. We review some existing uses of wireless sensor networks, identify possible areas of application, and review the underlying technologies in the hope of stimulating additional use of this promising technology to address the grand challenges of environmental science.
IEEE Communications Magazine | 2000
Tony McGregor; Hans-Werner Braun; Jeff Brown
The National Laboratory for Applied Network Research is creating a network analysis infrastructure (NAI) to support network research and engineering of high performance research networks. The NAI includes a passive monitoring project, an active monitoring project, and the collection of network management and control data. Together these projects have deployed more than 120 monitors around the high-performance research networks in the United States. This article describes NAI and the projects using it. The article concludes with a discussion of the future plans for the infrastructure.
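A hedged sketch of the passive-monitoring side: NLANR's deployed monitors used purpose-built capture systems, but summarizing a trace reduces to walking a capture file and tallying headers. This example parses the classic libpcap file format, an assumption of convenience rather than the project's actual tooling:

```python
import struct

def pcap_summary(path):
    """Tally packet and byte counts from a classic libpcap capture file.
    Assumes the 24-byte global header and 16-byte per-record headers of
    the original little-endian pcap format."""
    packets = byte_total = 0
    with open(path, "rb") as f:
        header = f.read(24)
        magic = struct.unpack("<I", header[:4])[0]
        if magic != 0xA1B2C3D4:
            raise ValueError("not a little-endian classic pcap file")
        while True:
            rec = f.read(16)
            if len(rec) < 16:
                break
            ts_sec, ts_usec, incl_len, orig_len = struct.unpack("<IIII", rec)
            f.seek(incl_len, 1)      # skip the captured bytes themselves
            packets += 1
            byte_total += orig_len   # count the original on-the-wire length
    return packets, byte_total
```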
International World Wide Web Conferences | 1995
Hans-Werner Braun; Kimberly C. Claffy
We analyze two days of queries to the popular NCSA Mosaic server to assess the geographic distribution of transaction requests. The wide geographic diversity of query sources and popularity of a relatively small portion of the web server file set present a strong case for deployment of geographically distributed caching mechanisms to improve server and network efficiency. The NCSA web server consists of four servers in a cluster. We show time series of bandwidth and transaction demands for the server cluster and break these demands down into components according to the geographical source of the query. We analyze the impact of caching the results of queries within the geographic zone from which the request was sourced, in terms of the reduction in transactions with, and bandwidth volume from, the main server. We find that a cache document timeout even as low as 1024 seconds (about 17 minutes) during the two days that we analyzed would have saved between 40% and 70% of the bytes transferred from the central server. We investigate a range of timeouts for flushing documents from the cache, outlining the tradeoff between bandwidth savings and memory/cache management costs. We discuss the implications of this tradeoff in the face of possible future usage-based pricing of backbone services that may connect several cache sites. We also discuss other issues that caching inevitably poses, such as how to redirect queries initially destined for a central server to a preferred cache site. The preference of a cache site may be a function of not only geographic proximity, but also the current load on nearby servers or network links. Such refinements in the web architecture will be essential to the stability of the network as the web continues to grow, and operational geographic analysis of queries to archive and library servers will be fundamental to its effective evolution.
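The bandwidth-savings estimate can be reproduced in miniature by replaying a request log against an idealized per-zone cache with a fixed document timeout. A sketch with an assumed log format (time, zone, URL, size), not the authors' analysis code:

```python
def bytes_saved(requests, timeout):
    """Replay a request log against an idealized per-zone cache.
    `requests` is a list of (time, zone, url, size) tuples sorted by
    time; returns the fraction of bytes served from cache rather than
    fetched from the central server."""
    cache = {}          # (zone, url) -> time the cached copy was fetched
    saved = total = 0
    for t, zone, url, size in requests:
        total += size
        fetched = cache.get((zone, url))
        if fetched is not None and t - fetched < timeout:
            saved += size            # hit: served from the zone's cache
        else:
            cache[(zone, url)] = t   # miss: fetch from origin, refresh copy
    return saved / total if total else 0.0
```

Sweeping `timeout` across values such as the paper's 1024 seconds traces out the savings-versus-staleness tradeoff discussed above.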
International Conference on Computer Communications | 1993
Kimberly C. Claffy; George C. Polyzos; Hans-Werner Braun
The results of a measurement study of the T1 NSFNET backbone are presented. The measurement environment and the approach to data collection are discussed. Measurement results are then presented for: long-term growth in traffic volume, including attribution to domains and protocols; trends in average packet size on the network, over both long- and medium-term intervals; most popular sources, destinations, and site pairs; traffic locality; international distribution of traffic; mean utilization statistics of the overall backbone as well as of specific links of interest; and delay statistics.
Journal of High Speed Networks | 1994
Roger E. Bohn; Hans-Werner Braun; Kimberly C. Claffy; Stephen S. Wolff
The current architecture and implementation of the Internet assume a vast aggregation of traffic from many sources and a stochastic distribution of traffic both in space (traffic source) and time (burstiness of traffic volume). Given this general assumption, Internet components typically have little if any ability to control the volume and distribution of incoming traffic. As a result the network, particularly from the perspective of the router, is vulnerable to significant consumption of networking resources by high-volume applications, with possibly little stochastic behavior, from a few users. This often impacts the overall profile of network traffic as aggregated from many clients. An example is the continuous flows introduced by real-time applications such as packet audio, video, or rapidly changing graphics. This situation creates a time window where applications exist on a network not designed for them, but before an appropriately architected network can augment the current infrastructure and cope with the new type of workload. We propose a scheme for voluntarily setting Internet traffic priorities by end users and applications, using the existing 3-bit Precedence field in the Internet Protocol header. Our proposal has three elements. First, network routers would queue incoming packets by IP Precedence value instead of the customary single-threaded FIFO. Second, users and their applications would voluntarily use different and appropriate precedence values in their outgoing transmissions according to some defined criteria. Third, network service providers may monitor the precedence levels of traffic entering their network, and use some mechanism such as a quota system to discourage users from setting high precedence values on all their traffic. All three elements can be implemented gradually and selectively across the Internet infrastructure, providing a smooth transition path from the present system. The experience we gain from an implementation will furthermore provide a valuable knowledge base from which to develop sound accounting and billing mechanisms and policies in the future.
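The first element, queueing by Precedence rather than a single FIFO, can be sketched as a strict-priority scheduler. This is a minimal illustration, not the proposed router implementation; the packet representation is assumed:

```python
from collections import deque

class PrecedenceQueue:
    """Strict-priority output queue keyed on the 3-bit IP Precedence
    field (0..7, higher value = higher priority). A sketch of the
    proposal's first element; a real router also needs drop policies
    and the quota monitoring described as the third element."""

    def __init__(self):
        self.queues = [deque() for _ in range(8)]

    def enqueue(self, packet):
        # Precedence occupies the top three bits of the IPv4 TOS byte;
        # `packet` is assumed to be a dict with a "tos" field.
        prec = (packet["tos"] >> 5) & 0x7
        self.queues[prec].append(packet)

    def dequeue(self):
        # Serve the highest non-empty precedence level first.
        for prec in range(7, -1, -1):
            if self.queues[prec]:
                return self.queues[prec].popleft()
        return None
```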
Communications of the ACM | 1994
Kimberly C. Claffy; Hans-Werner Braun; George C. Polyzos
We present the architecture for data collection for the NSFNET backbone and difficulties with using the collected statistics for long-term network forecasting of certain traffic aspects. We describe relevant aspects of the NSFNET backbone architecture and the instrumentation for statistics collection. We then present long-term NSFNET data to elucidate long-term trends in both the reachability of Internet components via the NSFNET as well as the growing cross-section of traffic. We focus on the difficulties of forecasting and planning in an infrastructure whose protocol architecture and instrumentation for data collection were not designed to support such objectives.
ACM Special Interest Group on Data Communication | 2001
Matthew J. Luckie; Anthony James McGregor; Hans-Werner Braun
Packet probing is an important Internet measurement technique, supporting the investigation of packet delay, path, and loss. Current packet probing techniques use Internet Protocols such as the Internet Control Message Protocol (ICMP), the User Datagram Protocol (UDP), and the Transmission Control Protocol (TCP). These protocols were not originally designed for measurement purposes. Current packet probing techniques have several limitations that can be avoided. The IP Measurement Protocol (IPMP) is presented as a protocol that addresses several of the limitations discussed.
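For context, here is what probing with a repurposed protocol looks like: a minimal ICMP echo round-trip-time measurement in Python. A sketch only (raw sockets generally require root privileges, and it assumes a 20-byte IP header on the reply); IPMP itself defines purpose-built packet formats instead:

```python
import os
import socket
import struct
import time

def _checksum(data):
    """Standard Internet checksum (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

def icmp_rtt(host, timeout=2.0):
    """Measure one round-trip time with an ICMP echo request.
    Raises socket.timeout if no matching reply arrives."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                         socket.getprotobyname("icmp"))
    sock.settimeout(timeout)
    ident = os.getpid() & 0xFFFF
    payload = b"probe-sketch"
    # ICMP header: type 8 (echo request), code 0, checksum, id, sequence.
    header = struct.pack("!BBHHH", 8, 0, 0, ident, 1)
    packet = struct.pack("!BBHHH", 8, 0, _checksum(header + payload),
                         ident, 1) + payload
    sent = time.time()
    sock.sendto(packet, (host, 0))
    while True:
        data, _ = sock.recvfrom(1024)
        # Assumes a 20-byte IP header (no options) precedes the reply.
        icmp_type, _, _, reply_id, _ = struct.unpack("!BBHHH", data[20:28])
        if icmp_type == 0 and reply_id == ident:  # type 0 = echo reply, ours
            return time.time() - sent
```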
ACM Special Interest Group on Data Communication | 1987
David L. Mills; Hans-Werner Braun
The NSFNET Backbone Network interconnects six supercomputer sites, several regional networks and ARPANET. It supports the DARPA Internet protocol suite and DCN subnet protocols, which provide delay-based routing and very accurate time-synchronization services. This paper describes the design and implementation of this network, with special emphasis on robustness issues and congestion-control mechanisms.
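Delay-based routing, as in the DCN subnet protocols, selects paths by measured link delay rather than hop count. A minimal shortest-path sketch over a delay-weighted graph; the topology and delay figures below are invented for illustration:

```python
import heapq

def delay_based_routes(graph, source):
    """Dijkstra over measured link delays (milliseconds).
    `graph` maps node -> {neighbor: delay}. Returns the best-known
    total delay to every node; a delay-based routing system keeps
    these estimates updated as measurements change."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale heap entry
        for nbr, delay in graph[node].items():
            nd = d + delay
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

# Toy topology with invented delays in milliseconds.
backbone = {
    "SDSC":    {"NCAR": 28.0, "UIUC": 41.0},
    "NCAR":    {"SDSC": 28.0, "UIUC": 19.0, "Cornell": 52.0},
    "UIUC":    {"SDSC": 41.0, "NCAR": 19.0, "Cornell": 33.0},
    "Cornell": {"NCAR": 52.0, "UIUC": 33.0},
}
print(delay_based_routes(backbone, "SDSC"))
```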
International Conference on Networks | 1993
Hans-Werner Braun; kc claffy; George C. Polyzos
The authors describe steps toward an accounting mechanism to attribute Internet resource consumption based on service quality. Their objective is not to describe a complete accounting and billing system. Rather, they advocate taking advantage of existing Internet instrumentation to implement incremental improvements in the short to medium term. Experience from these improvements can enable more educated progress toward a comprehensive Internet accounting system.
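In the incremental spirit the authors advocate, such accounting can start from existing traffic counters aggregated per source and weighted by service quality. A hypothetical sketch; the record format and price weights are invented, not taken from the paper:

```python
from collections import defaultdict

# Hypothetical per-precedence price weights: higher service quality
# costs more. The values are purely illustrative.
WEIGHTS = {prec: 1.0 + 0.5 * prec for prec in range(8)}

def account(flow_records):
    """Attribute weighted resource consumption to each source.
    `flow_records` is an iterable of (source, precedence, byte_count)
    tuples, e.g. summarized from existing router instrumentation."""
    usage = defaultdict(float)
    for source, prec, nbytes in flow_records:
        usage[source] += WEIGHTS[prec] * nbytes
    return dict(usage)

records = [("net-a", 0, 1_500_000), ("net-a", 5, 40_000),
           ("net-b", 3, 600_000)]
print(account(records))
```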