
Publications


Featured research published by Jeffrey C. Mogul.


ACM Special Interest Group on Data Communication | 1996

Using predictive prefetching to improve World Wide Web latency

Venkata N. Padmanabhan; Jeffrey C. Mogul

The long-term success of the World Wide Web depends on fast response time. People use the Web to access information from remote sites, but do not like to wait long for their results. The latency of retrieving a Web document depends on several factors such as the network bandwidth, propagation time and the speed of the server and client computers. Although several proposals have been made for reducing this latency, it is difficult to push it to the point where it becomes insignificant. This motivates our work, where we investigate a scheme for reducing the latency perceived by users by predicting and prefetching files that are likely to be requested soon, while the user is browsing through the currently displayed page. In our scheme the server, which gets to see requests from several clients, makes predictions while individual clients initiate prefetching. We evaluate our scheme based on trace-driven simulations of prefetching over both high-bandwidth and low-bandwidth links. Our results indicate that prefetching is quite beneficial in both cases, resulting in a significant reduction in the average access time at the cost of an increase in network traffic by a similar fraction. We expect prefetching to be particularly profitable over non-shared (dialup) links and high-bandwidth, high-latency (satellite) links.
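
The scheme above has the server make the predictions (since it sees requests from many clients) while each client decides whether to prefetch. As a rough illustration only, not the authors' implementation, here is a minimal Python sketch of the kind of first-order dependency table a server could keep: for each URL, count which URL the same client tends to request next, and offer the most frequent successor as a prefetch hint. The class, method names, and example URLs are all hypothetical.

from collections import defaultdict

class PrefetchPredictor:
    """Toy first-order predictor: for each URL, count which URL the same
    client tends to request next, and suggest the most frequent successor."""

    def __init__(self):
        self.transitions = defaultdict(lambda: defaultdict(int))  # url -> next_url -> count
        self.last_request = {}                                    # client -> last url requested

    def observe(self, client, url):
        prev = self.last_request.get(client)
        if prev is not None:
            self.transitions[prev][url] += 1
        self.last_request[client] = url

    def predict(self, url, min_count=2):
        successors = self.transitions.get(url)
        if not successors:
            return None
        best, count = max(successors.items(), key=lambda kv: kv[1])
        return best if count >= min_count else None

# Example: after a few clients follow /index.html with /logo.gif,
# the server would hint /logo.gif as a prefetch candidate.
p = PrefetchPredictor()
for client in ("a", "b", "c"):
    p.observe(client, "/index.html")
    p.observe(client, "/logo.gif")
print(p.predict("/index.html"))   # -> /logo.gif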


ACM Transactions on Computer Systems | 1997

Eliminating receive livelock in an interrupt-driven kernel

Jeffrey C. Mogul; K. K. Ramakrishnan

Most operating systems use interface interrupts to schedule network tasks. Interrupt-driven systems can provide low overhead and good latency at low offered load, but degrade significantly at higher arrival rates unless care is taken to prevent several pathologies. These are various forms of receive livelock, in which the system spends all of its time processing interrupts, to the exclusion of other necessary tasks. Under extreme conditions, no packets are delivered to the user application or to the output of the system. To avoid livelock and related problems, an operating system must schedule network interrupt handling as carefully as it schedules process execution. We modified an interrupt-driven networking implementation to do so; this modification eliminates receive livelock without degrading other aspects of system performance. Our modifications include the use of polling when the system is heavily loaded, while retaining the use of interrupts under lighter load. We present measurements demonstrating the success of our approach.
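
As a rough illustration of the scheduling idea, not the authors' kernel modification, the Python sketch below handles each packet from the "interrupt" while load is light, but masks interrupts and falls back to bounded polling once a backlog builds, re-arming interrupts only when the queue drains. The threshold, quota, and function names are hypothetical.

# Minimal sketch of interrupt-vs-polling scheduling: under light load each
# packet is handled immediately, but once a backlog builds up the driver
# masks interrupts and polls a bounded quota per scheduling round, leaving
# CPU time for application work.

from collections import deque

POLL_QUOTA = 8          # max packets processed per polling round
LIVELOCK_THRESHOLD = 4  # backlog at which we switch to polling

rx_queue = deque()
interrupts_enabled = True

def packet_arrived(pkt):
    """Called by the 'hardware' on every arrival."""
    global interrupts_enabled
    rx_queue.append(pkt)
    if interrupts_enabled and len(rx_queue) <= LIVELOCK_THRESHOLD:
        process_one()                      # cheap case: handle immediately
    else:
        interrupts_enabled = False         # heavy load: defer to polling

def polling_round():
    """Called from the scheduler loop, alongside application work."""
    global interrupts_enabled
    for _ in range(POLL_QUOTA):
        if not rx_queue:
            break
        process_one()
    if not rx_queue:
        interrupts_enabled = True          # backlog drained: re-arm interrupts

def process_one():
    rx_queue.popleft()                     # deliver to protocol/application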


ACM Special Interest Group on Data Communication | 1995

The case for persistent-connection HTTP

Jeffrey C. Mogul

The success of the World-Wide Web is largely due to the simplicity, hence ease of implementation, of the Hypertext Transfer Protocol (HTTP). HTTP, however, makes inefficient use of network and server resources, and adds unnecessary latencies, by creating a new TCP connection for each request. Modifications to HTTP have been proposed that would transport multiple requests over each TCP connection. These modifications have led to debate over their actual impact on users, on servers, and on the network. This paper reports the results of log-driven simulations of several variants of the proposed modifications, which demonstrate the value of persistent connections.
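
To make the mechanism concrete: a persistent connection simply carries several HTTP requests over one TCP connection instead of opening a new connection per request. The sketch below uses Python's standard http.client to issue two GETs over a single connection; the host name and paths are placeholders, and this only illustrates the idea rather than reproducing the paper's simulations.

# Sketch: two HTTP requests over one persistent TCP connection,
# instead of one connection per request. The host is a placeholder.
import http.client

conn = http.client.HTTPConnection("www.example.com")  # one TCP connection
for path in ("/", "/style.css"):
    conn.request("GET", path)          # reuses the same connection
    resp = conn.getresponse()
    body = resp.read()                 # must drain the body before reusing
    print(path, resp.status, len(body))
conn.close()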


ACM Special Interest Group on Data Communication | 1987

Fragmentation considered harmful

Christopher A. Kent; Jeffrey C. Mogul

Internetworks can be built from many different kinds of networks, with varying limits on maximum packet size. Throughput is usually maximized when the largest possible packet is sent; unfortunately, some routes can carry only very small packets. The IP protocol allows a gateway to fragment a packet if it is too large to be transmitted. Fragmentation is at best a necessary evil; it can lead to poor performance or complete communication failure. There are a variety of ways to reduce the likelihood of fragmentation; some can be incorporated into existing IP implementations without changes in protocol specifications. Others require new protocols, or modifications to existing protocols.
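
As a back-of-the-envelope illustration of the cost being argued against, the sketch below counts the fragments and extra header bytes produced when a datagram crosses a link with a smaller MTU, assuming a plain 20-byte IPv4 header and 8-byte-aligned fragment offsets. It is arithmetic only, not a protocol implementation, and the function name is made up for this example.

# Rough fragmentation arithmetic for IPv4 (20-byte header assumed, no options).
# Fragment payloads other than the last must be multiples of 8 bytes.

def fragment_count(datagram_len, mtu, ip_header=20):
    payload = datagram_len - ip_header            # bytes of data to carry
    per_frag = (mtu - ip_header) // 8 * 8         # usable data per fragment
    frags = -(-payload // per_frag)               # ceiling division
    overhead = (frags - 1) * ip_header            # extra header bytes added
    return frags, overhead

# A 1500-byte datagram crossing a 576-byte-MTU path:
print(fragment_count(1500, 576))   # -> (3, 40): 3 fragments, 40 extra header bytes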


International World Wide Web Conference | 1995

Improving HTTP latency

Venkata N. Padmanabhan; Jeffrey C. Mogul

The HTTP protocol, as currently used in the World Wide Web, uses a separate TCP connection for each file requested. This adds significant and unnecessary overhead, especially in the number of network round trips required. We analyze the costs of this approach and propose simple modifications to HTTP that, while interoperating with unmodified implementations, avoid the unnecessary network costs. We implemented our modifications, and our measurements show that they dramatically reduce latencies.
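
One way to see where the savings come from is to count round trips. The sketch below is a deliberately simplified model (it ignores TCP slow start, connection teardown, and server think time) comparing one-connection-per-request with a persistent connection, with and without pipelining; it is an illustration of the reasoning, not the paper's measurements.

# Back-of-the-envelope round-trip counts for N small requests:
#   per-request connections: 1 RTT for the TCP handshake + 1 RTT per request
#   persistent connection:   1 handshake, then 1 RTT per request
#   persistent + pipelining: 1 handshake, then roughly 1 RTT for the batch

def round_trips(n_requests, persistent=False, pipelined=False):
    if not persistent:
        return n_requests * (1 + 1)          # handshake + request, every time
    if pipelined:
        return 1 + 1                         # one handshake, one batched exchange
    return 1 + n_requests                    # one handshake, serial requests

for n in (1, 10):
    print(n, round_trips(n), round_trips(n, True), round_trips(n, True, True))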


Workshop on Hot Topics in Operating Systems | 1995

Operating systems support for busy Internet servers

Jeffrey C. Mogul

The Internet has experienced exponential growth in the use of the World-Wide Web, and rapid growth in the use of other Internet services such as USENET news and electronic mail. These applications qualitatively differ from other network applications in the stresses they impose on busy server systems. Unlike traditional distributed systems, Internet servers must cope with huge user communities, short interactions, and long network latencies. Such servers require different kinds of operating system features to manage their resources effectively.


ACM Transactions on Computer Systems | 1992

Network locality at the scale of processes

Jeffrey C. Mogul

Packets on a LAN can be viewed as a series of references to and from the objects they address. The amount of locality in this reference stream may be critical to the efficiency of network implementations, if the locality can be exploited through caching or scheduling mechanisms. Most previous studies have treated network locality with an addressing granularity of networks or individual hosts. This paper describes some experiments tracing locality at a finer grain, looking at references to individual processes, and with fine-grained time resolution. Observations of typical LANs show high per-process locality; that is, packets to a host usually arrive for the process that most recently sent a packet, and often with little intervening delay.
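
A minimal way to picture the caching opportunity: keep one "most recent sender" entry per host and check how often an incoming packet is destined for exactly that process. The Python sketch below measures that hit rate over a toy trace; the trace format and all names are hypothetical and are not the paper's methodology.

# Sketch: measure one-entry "last process" locality over a packet trace.
# Each trace record is (host, process_id, direction), where direction is
# "send" or "recv" from the host's point of view.

def last_process_hit_rate(trace):
    last_sender = {}      # host -> process that most recently sent a packet
    hits = total = 0
    for host, pid, direction in trace:
        if direction == "send":
            last_sender[host] = pid
        else:                                  # incoming packet for this host
            total += 1
            if last_sender.get(host) == pid:
                hits += 1                      # a cache of size one would suffice
    return hits / total if total else 0.0

trace = [("h1", 7, "send"), ("h1", 7, "recv"), ("h1", 7, "recv"),
         ("h1", 9, "send"), ("h1", 9, "recv"), ("h1", 7, "recv")]
print(last_process_hit_rate(trace))   # -> 0.75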


Software - Practice and Experience | 2004

Clarifying the fundamentals of HTTP

Jeffrey C. Mogul

The simplicity of HTTP was a major factor in the success of the Web. However, as both the protocol and its uses have evolved, HTTP has grown complex. This complexity results in numerous problems, including confused implementors, interoperability failures, difficulty in extending the protocol, and a long specification without much documented rationale.


ACM Conference on Hypertext | 1997

Hypertext Transfer Protocol -- HTTP/1.1

Roy Fielding; James Gettys; Jeffrey C. Mogul; H. Frystyk; Larry Masinter; Paul J. Leach; Tim Berners-Lee


RFC 1191 | 1990

Path MTU discovery

Jeffrey C. Mogul; Stephen Deering

Collaboration


Dive into Jeffrey C. Mogul's collaborations.

Top Co-Authors

Roy Fielding, University of California

Tim Berners-Lee, Massachusetts Institute of Technology

Henrik Nielsen, Technical University of Denmark

Thomas M. Kroeger, Sandia National Laboratories

Anja Feldmann, Technical University of Berlin