
Publications


Featured research published by Robert L. Carter.


Performance Evaluation | 1996

Measuring bottleneck link speed in packet-switched networks

Robert L. Carter; Mark Crovella

The quality of available network connections can often have a large impact on the performance of distributed applications. For example, document transfer applications such as FTP, Gopher and the World Wide Web suffer increased response times as a result of network congestion. For these applications, the document transfer time is directly related to the available bandwidth of the connection. Available bandwidth depends on two things: 1) the underlying capacity of the path from client to server, which is limited by the bottleneck link; and 2) the amount of other traffic competing for links on the path. If measurements of these quantities were available to the application, the current utilization of connections could be calculated. Network utilization could then be used as a basis for selection from a set of alternative connections or servers, thus providing reduced response time. Such a dynamic server selection scheme would be especially important in a mobile computing environment in which the set of available servers is frequently changing. In order to provide these measurements at the application level, we introduce two tools: bprobe, which provides an estimate of the uncongested bandwidth of a path; and cprobe, which gives an estimate of the current congestion along a path. These two measures may be used in combination to provide the application with an estimate of available bandwidth between server and client, thereby enabling application-level congestion avoidance. In this paper we discuss the design and implementation of our probe tools, specifically illustrating the techniques used to achieve accuracy and robustness. We present validation studies for both tools which demonstrate their reliability in the face of actual Internet conditions; and we give results of a survey of available bandwidth to a random set of WWW servers as a sample application of our probe technique. We conclude with descriptions of other applications of our measurement tools, several of which are currently under development.
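The bottleneck-capacity idea behind a bprobe-style tool can be illustrated with the classic packet-pair observation: packets sent back to back leave the bottleneck link spaced by its transmission time, so the gap between arrivals reveals the link speed. The sketch below is an assumed simplification in Python (the function name and the gap values are invented); the actual tool applies more careful filtering to achieve the accuracy and robustness the abstract describes.

```python
# Packet-pair sketch (assumed simplification, not the bprobe implementation):
# back-to-back probes exit the bottleneck link spaced by its service time,
# so bottleneck bandwidth ≈ packet size / inter-arrival gap.

def bottleneck_estimate(packet_size_bytes, arrival_gaps_s):
    """Estimate uncongested bottleneck bandwidth (bits/s) from the
    inter-arrival gaps of back-to-back probe packet pairs."""
    estimates = [packet_size_bytes * 8 / gap for gap in arrival_gaps_s if gap > 0]
    estimates.sort()
    return estimates[len(estimates) // 2]  # median resists queueing noise

# Hypothetical gaps (seconds) observed for 1500-byte probes:
gaps = [0.00121, 0.00119, 0.00240, 0.00120, 0.00122]
print(bottleneck_estimate(1500, gaps))  # roughly 10 Mbit/s
```

The median keeps a single gap inflated by cross traffic (the 0.00240 s outlier here) from skewing the estimate, which is one reason packet-pair estimators aggregate many pairs.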


International Conference on Computer Communications | 1997

Server selection using dynamic path characterization in wide-area networks

Robert L. Carter; Mark Crovella

Replication is a commonly proposed solution to problems of scale associated with distributed services. However, when a service is replicated, each client must be assigned a server. Prior work has generally assumed the assignment to be static. In contrast, we propose dynamic server selection and show that it enables application-level congestion avoidance. Using tools to measure available bandwidth and round-trip latency (RTT), we demonstrate dynamic server selection and compare it to previous static approaches. We show that because of the variability of paths in the Internet, dynamic server selection consistently outperforms static policies, reducing response times by as much as 50%. However, we also must adopt a systems perspective and consider the impact of the measurement method on the network. Therefore, we look at alternative low-cost approximations and find that the careful measurements provided by our tools can be closely approximated by much lighter-weight measurements. We propose a protocol using this method which is limited to at most a 1% increase in network traffic but which often costs much less in practice.
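The selection step the abstract describes can be sketched as ranking replicas by predicted transfer time from fresh path measurements. Everything below is a hypothetical illustration (server names, numbers, and helper functions are invented), assuming per-server RTT and available-bandwidth estimates like those bprobe and cprobe provide.

```python
# Hypothetical sketch of dynamic server selection: rank replicas by the
# transfer time predicted from measured RTT and available bandwidth.

def predict_transfer_time(rtt_s, avail_bw_bps, doc_size_bytes):
    # One round trip to issue the request, plus time to stream the document.
    return rtt_s + doc_size_bytes * 8 / avail_bw_bps

def pick_server(measurements, doc_size_bytes):
    """measurements: {server_name: (rtt_s, available_bandwidth_bps)}"""
    return min(measurements,
               key=lambda s: predict_transfer_time(*measurements[s],
                                                   doc_size_bytes))

probes = {
    "replica-east": (0.020, 2_000_000),  # nearby, but congested path
    "replica-west": (0.090, 8_000_000),  # farther away, more headroom
}
print(pick_server(probes, 500_000))  # large transfer favors bandwidth
```

For a small document the RTT term dominates and the nearby replica wins; for a large one the bandwidth term dominates, which is why static, distance-based assignment can pick badly.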


Second International Workshop on Services in Distributed and Networked Environments | 1995

Application-level document caching in the Internet

Azer Bestavros; Robert L. Carter; Mark Crovella; Carlos Rompante Cunha; Abdelsalam Heddaya; Sulaiman A. Mirdad

With the increasing demand for document transfer services such as the World Wide Web comes a need for better resource management to reduce the latency of documents in these systems. To address this need, we analyze the potential for document caching at the application level in document transfer services. We have collected traces of actual executions of Mosaic, reflecting over half a million user requests for WWW documents. Using those traces, we study the tradeoffs between caching at three levels in the system, and the potential for use of application-level information in the caching system. Our traces show that while a high hit rate in terms of URLs is achievable, a much lower hit rate is possible in terms of bytes, because most profitably-cached documents are small. We consider the performance of caching when applied at the level of individual user sessions, at the level of individual hosts, and at the level of a collection of hosts on a single LAN. We show that the performance gain achievable by caching at the session level (which is straightforward to implement) is nearly all of that achievable at the LAN level (where caching is more difficult to implement). However, when resource requirements are considered, LAN level caching becomes much more desirable, since it can achieve a given level of caching performance using a much smaller amount of cache space. Finally, we consider the use of organizational boundary information as an example of the potential for use of application-level information in caching. Our results suggest that distinguishing between documents produced locally and those produced remotely can provide useful leverage in designing caching policies, because of differences in the potential for sharing these two document types among multiple users.
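The gap between URL hit rate and byte hit rate noted in the abstract can be reproduced with a toy trace. The example below is an invented illustration (all URLs and sizes are hypothetical, and it assumes an idealized unbounded cache rather than the paper's methodology): a few small documents are re-requested often while one large document is fetched once.

```python
# Toy illustration (invented trace) of why URL hit rate can far exceed
# byte hit rate when the most frequently re-requested documents are small.

def hit_rates(trace, sizes):
    """Return (url_hit_rate, byte_hit_rate) for an unbounded cache:
    any repeated request is a hit."""
    seen, url_hits, byte_hits, total_bytes = set(), 0, 0, 0
    for url in trace:
        total_bytes += sizes[url]
        if url in seen:
            url_hits += 1
            byte_hits += sizes[url]
        seen.add(url)
    return url_hits / len(trace), byte_hits / total_bytes

sizes = {"index.html": 2_000, "logo.gif": 1_000, "dataset.tar": 5_000_000}
trace = ["index.html", "logo.gif", "index.html", "logo.gif",
         "index.html", "dataset.tar"]
print(hit_rates(trace, sizes))  # high URL hit rate, tiny byte hit rate
```

Here half the requests are hits by URL, yet the hits account for well under 1% of the bytes transferred, since the single large fetch dominates the byte count.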


Computer Networks | 1999

On the network impact of dynamic server selection

Robert L. Carter; Mark Crovella

Widespread replication of information can ameliorate the problem of server overloading but raises the allied question of server selection. Clients may be assigned to a replica in a static manner or they may choose among replicas based on client-initiated measurements. The latter technique, called dynamic server selection (DSS), can provide significantly improved response time to users when compared with static server assignment policies (for example, based on network distance in hops). In the first part of this paper we demonstrate the idea of DSS using experiments performed in the Internet. We compare a range of policies for DSS and show that obtaining additional information about servers and paths in the Internet before choosing a server improves response time significantly. The best policy we examine adopts a strategy of never adding more than 1% additional traffic to the network, and is still able to provide nearly all the benefits of the most expensive policies. While these results suggest that DSS is beneficial from the network user's standpoint, the system-wide effects of DSS schemes should also be closely examined. In the second part of this paper we use large-scale simulation to study the system-wide network impact of dynamic server selection. We use a simulated network of over 100 hosts that allows local-area effects to be distinguished from wide-area effects within traffic patterns. In this environment we compare DSS with static server selection schemes and confirm that client benefits remain even when many clients use DSS simultaneously. Importantly, we also show that DSS confers system-wide benefits from the network standpoint, as compared to static server selection. First, overall data traffic volume in the network is reduced, since DSS tends to diminish network congestion. Second, traffic distribution improves: traffic is shifted from the backbone to regional and local networks.


Electronic Commerce | 2000

MultECommerce: a distributed architecture for collaborative shopping on the WWW

Stefano Puglia; Robert L. Carter; Ravi Jain

The WWW has made information and services more available than ever before. Many of the first Web applications have been emulations of real world activities, in particular, e-commerce. But so far, the use of information and services on the Web has been a solitary one. We propose a component-based architecture for collaboration that provides shared navigation of the WWW along with an EJB-based server implementation. As a particular application built on this architecture, we present MultECommerce, through which multiple users can participate in virtual shopping trips among multiple shopping sites. MultECommerce features a multi-site shopping cart and enables one-stop checkout from all visited shopping sites. We examine security and performance issues of our architecture.


Distributed and Parallel Databases | 2001

Client-Server Caching with Expiration Timestamps

Tamra Carpenter; Robert L. Carter; Munir Cochinwala; Martin I. Eiger

We study client-server caching of data with expiration timestamps. Although motivated by the potential for caching in telecommunication applications, our work extends to the general case of caching data that has known expiration times. Toward this end, we tailor caching algorithms to consider expiration timestamps. Next, we consider several different client-server paradigms that differ in whether and how the server updates client caches. Finally, we perform simulation studies to evaluate the empirical performance of a variety of strategies for managing a single cache independent of the server and for managing caches in a client-server setting.
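A cache that honors expiration timestamps differs from a plain LRU cache in two ways: lookups must never return data past its timestamp, and eviction can prefer already-expired entries. The class below is a minimal assumed illustration of those two rules (its name, API, and eviction policy are invented, not the algorithms evaluated in the paper), assuming each item's TTL is known when it is inserted.

```python
import collections
import time

# Minimal sketch of an expiration-aware client cache (assumed illustration):
# lookups never return stale data, and eviction prefers expired entries
# before falling back to LRU order.

class ExpiringCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = collections.OrderedDict()  # key -> (value, expires_at)

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        item = self.entries.get(key)
        if item is None or item[1] <= now:   # miss, or expired: treat as miss
            self.entries.pop(key, None)
            return None
        self.entries.move_to_end(key)        # refresh LRU position
        return item[0]

    def put(self, key, value, ttl, now=None):
        now = time.monotonic() if now is None else now
        self.entries[key] = (value, now + ttl)
        self.entries.move_to_end(key)
        if len(self.entries) > self.capacity:
            expired = [k for k, (_, exp) in self.entries.items() if exp <= now]
            victim = expired[0] if expired else next(iter(self.entries))
            del self.entries[victim]
```

Passing `now` explicitly makes the policy easy to exercise in the kind of simulation study the abstract describes, since simulated time can be advanced without waiting on a real clock.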


International Database Engineering and Applications Symposium | 2000

Data caching for telephony services

Tamra Carpenter; Robert L. Carter; Munir Cochinwala; Martin I. Eiger

We study client-server data caching, both with and without expiration timestamps, to assess its applicability to present and future telecommunications services such as local number portability, toll-free numbers, and mobile telephony. We perform simulation studies to evaluate the empirical performance of a variety of strategies for caching in a client-server setting. Data caching at client locations is found to be an economical and scalable approach to support future data-intensive telecommunications services.


Advances in Databases and Information Systems | 2000

Caching for Mobile Communication

Tamra Carpenter; Robert L. Carter; Munir Cochinwala; Martin I. Eiger

We study caching as a means to reduce the message traffic and database accesses required for locating called subscribers in a Personal Communication Services (PCS) network. The challenge of caching routing information for mobile clients lies in the uncertainty of the length of time that the information remains valid. We use expiration timestamps to safeguard against using stale cached data. We study a variety of caching algorithms and a variety of methods for setting timestamps based on client mobility. We report results from simulation studies.


International Conference on Computer Communications | 1996

Dynamic Server Selection using Bandwidth Probing in Wide-Area Networks

Robert L. Carter; Mark Crovella


Third IEEE Workshop on the Architecture and Implementation of High Performance Communication Subsystems | 1995

Dynamic Server Selection In The Internet

Mark Crovella; Robert L. Carter
