
Publication


Featured research published by Jim Challenger.


international conference on computer communications | 1999

A scalable system for consistently caching dynamic Web data

Jim Challenger; Arun Iyengar; Paul M. Dantzig

This paper presents a new approach for consistently caching dynamic Web data in order to improve performance. Our algorithm, which we call data update propagation (DUP), maintains, in a graph, data dependence information between cached objects and the underlying data that affect their values. When the system becomes aware of a change to underlying data, graph traversal algorithms are applied to determine which cached objects are affected by the change. Cached objects found to be obsolete are then either invalidated or updated. DUP was a critical component at the official Web site for the 1998 Olympic Winter Games. By using DUP, we were able to achieve cache hit rates close to 100%, compared with 80% for an earlier version of our system that did not employ DUP. As a result of the high cache hit rates, the Olympic Games Web site was able to serve data quickly even during peak request periods.
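The core of DUP as described in the abstract is a dependence graph from underlying data to the cached objects that depend on it, traversed on every update. The following is a minimal sketch of that idea; the class name, method names, and the invalidate-only policy are invented for illustration and are not the paper's actual implementation (which could also update objects in place).

```python
from collections import defaultdict, deque

class DUPCache:
    """Toy sketch of Data Update Propagation: edges run from an underlying
    data item to the cached objects whose values depend on it (directly or
    through other cached objects)."""

    def __init__(self):
        self.deps = defaultdict(set)   # data item -> directly dependent objects
        self.cache = {}                # object id -> cached value

    def add_dependency(self, data_item, obj):
        self.deps[data_item].add(obj)

    def put(self, obj, value):
        self.cache[obj] = value

    def on_update(self, data_item):
        """Traverse the dependence graph from the changed data item and
        invalidate every cached object reachable from it."""
        affected = []
        queue = deque([data_item])
        seen = {data_item}
        while queue:
            node = queue.popleft()
            for dep in self.deps[node]:
                if dep not in seen:
                    seen.add(dep)
                    queue.append(dep)
                    if dep in self.cache:
                        del self.cache[dep]   # invalidate; paper also allows update
                        affected.append(dep)
        return affected
```

A single change to the underlying data (say, a results database) then invalidates every page transitively built from it, which is what keeps cache contents consistent without flushing the whole cache.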


international conference on computer communications | 2000

A publishing system for efficiently creating dynamic Web content

Jim Challenger; Arun Iyengar; Karen Witting; Cameron Ferstat; Paul Reed

This paper presents a publishing system for efficiently creating dynamic Web content. Complex Web pages are constructed from simpler fragments. Fragments may recursively embed other fragments. Relationships between Web pages and fragments are represented by object dependence graphs. We present algorithms for efficiently detecting and updating Web pages affected after one or more fragments change. We also present algorithms for publishing sets of Web pages consistently; different algorithms are used depending upon the consistency requirements. Our publishing system provides an easy method for Web site designers to specify and modify inclusion relationships among Web pages and fragments. Users can update content on multiple Web pages by modifying a template. The system then automatically updates all Web pages affected by the change. Our system accommodates both content that must be proofread before publication, typically from humans, and content that has to be published immediately, typically from automated feeds. Our system is deployed at several popular Web sites including the 2000 Olympic Games Web site. We discuss some of our experiences with real deployments of our system as well as its performance.
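The abstract's central mechanism is recursive fragment embedding: a page is a template that includes fragments, which may themselves include fragments. A minimal sketch of that assembly step is shown below; the `{{include:name}}` marker syntax, the fragment names, and the sample content are all invented for this illustration and are not the paper's actual template language.

```python
import re

# Hypothetical fragment store: pages and fragments reference other
# fragments with an {{include:name}} marker (invented for this sketch).
fragments = {
    "header": "<h1>Olympics 2000</h1>",
    "medals": "<table>{{include:medal_rows}}</table>",
    "medal_rows": "<tr><td>AUS</td><td>16</td></tr>",
    "home": "{{include:header}}{{include:medals}}",
}

def assemble(name, store):
    """Recursively expand embedded fragments to build the full page."""
    def expand(match):
        return assemble(match.group(1), store)
    return re.sub(r"\{\{include:(\w+)\}\}", expand, store[name])
```

Updating the single `medal_rows` fragment would change every page that transitively embeds it, which is why the system tracks inclusion relationships in an object dependence graph to know which pages to republish.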


IEEE Internet Computing | 2000

High performance Web site design techniques

Arun Iyengar; Jim Challenger; Daniel M. Dias; Paul M. Dantzig

This article presents techniques for designing Web sites that need to handle large request volumes and provide high availability. The authors present new techniques they developed for keeping cached dynamic data current and synchronizing caches with underlying databases. Many of these techniques were deployed at the official Web site for the 1998 Olympic Winter Games.


conference on high performance computing (supercomputing) | 1998

A Scalable and Highly Available System for Serving Dynamic Data at Frequently Accessed Web Sites

Jim Challenger; Paul M. Dantzig; Arun Iyengar

This paper describes the system and key techniques used for achieving performance and high availability at the official Web site for the 1998 Olympic Winter Games, which was one of the most popular Web sites for the duration of the Olympic Games. The Web site utilized thirteen SP2 systems scattered around the globe containing a total of 143 processors. A key feature of the Web site was that the data being presented to clients was constantly changing. Whenever new results were entered into the system, updated Web pages reflecting the changes were made available to the rest of the world within seconds. One technique we used to serve dynamic data efficiently to clients was to cache dynamic pages so that they only had to be generated once. We developed and implemented a new algorithm we call Data Update Propagation (DUP) which identifies the cached pages that have become stale as a result of changes to underlying data on which the cached pages depend, such as databases. For the Olympic Games Web site, we were able to update stale pages directly in the cache which obviated the need to invalidate them. This allowed us to achieve cache hit rates of close to 100%. Our system was able to serve pages to clients quickly during the entire Olympic Games even during peak periods. In addition, the site was available 100% of the time. We describe the key features employed by our site for high availability. We also describe how the Web site was structured to provide useful information while requiring clients to examine only a small number of pages.


international conference on autonomic computing | 2007

Towards Autonomic Fault Recovery in System-S

Gabriela Jacques-Silva; Jim Challenger; Lou Degenaro; James R. Giles; Rohit Wagle

System-S is a stream processing infrastructure which enables program fragments to be distributed and connected to form complex applications. There may be potentially tens of thousands of interdependent and heterogeneous program fragments running across thousands of nodes. While the scale and interconnection imply the need for automation to manage the program fragments, the need is intensified because the applications operate on live streaming data and thus need to be highly available. System-S has been designed with components that autonomically manage the program fragments, but the system components themselves are also susceptible to failures which can jeopardize the system and its applications. The work we present addresses the self healing nature of these management components in System-S. In particular, we show how one key component of System-S, the job management orchestrator, can be abruptly terminated and then recover without interrupting any of the running program fragments by reconciling with other autonomous system components. We also describe techniques that we have developed to validate that the system is able to autonomically respond to a wide variety of error conditions including the abrupt termination and recovery of key system components. Finally, we show the performance of the job management orchestrator recovery for a variety of workloads.
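The recovery step the abstract describes, a restarted orchestrator reconciling its restored job table against what other autonomous components report as actually running, can be sketched as a set comparison. Everything below (function name, report shape, the three outcome categories) is a hypothetical illustration of that reconciliation idea, not System-S's actual interface.

```python
def reconcile(orchestrator_jobs, node_reports):
    """Hypothetical sketch of state reconciliation after an orchestrator
    restart: the job table restored from a checkpoint is compared against
    the program fragments the node controllers report as running.

    - fragments both expected and running are readopted (no interruption)
    - fragments expected but not reported must be restarted
    - fragments running but unknown are orphans to be cleaned up
    """
    running = set()
    for node, frags in node_reports.items():
        running.update(frags)
    expected = set(orchestrator_jobs)
    readopted = expected & running
    to_restart = expected - running
    orphans = running - expected
    return readopted, to_restart, orphans
```

The point of the readopted set is the property the paper emphasizes: fragments that survived the orchestrator's crash keep running untouched while only the discrepancies are acted on.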


ACM Transactions on Internet Technology | 2005

A fragment-based approach for efficiently creating dynamic web content

Jim Challenger; Paul M. Dantzig; Arun Iyengar; Karen Witting

This article presents a publishing system for efficiently creating dynamic Web content. Complex Web pages are constructed from simpler fragments. Fragments may recursively embed other fragments. Relationships between Web pages and fragments are represented by object dependence graphs. We present algorithms for efficiently detecting and updating Web pages affected after one or more fragments change. We also present algorithms for publishing sets of Web pages consistently; different algorithms are used depending upon the consistency requirements. Our publishing system provides an easy method for Web site designers to specify and modify inclusion relationships among Web pages and fragments. Users can update content on multiple Web pages by modifying a template. The system then automatically updates all Web pages affected by the change. Our system accommodates both content that must be proofread before publication and is typically from humans as well as content that has to be published immediately and is typically from automated feeds. We discuss some of our experiences with real deployments of our system as well as its performance. We also quantitatively present characteristics of fragments used at a major deployment of our publishing system including fragment sizes, update frequencies, and inclusion relationships.


Lecture Notes in Computer Science | 2001

Engineering Highly Accessed Web Sites for Performance

Jim Challenger; Arun Iyengar; Paul M. Dantzig; Daniel M. Dias; Nathaniel Mills

This paper describes techniques for improving performance at Web sites which receive significant traffic. Poor performance can be caused by dynamic data, insufficient network bandwidth, and poor Web page design. Dynamic data overheads can often be reduced by caching dynamic pages and using fast interfaces to invoke server programs. Web server acceleration can significantly improve performance and reduce the hardware needed at a Web site. We discuss techniques for balancing load among multiple servers at a Web site. We also show how Web pages can be designed to minimize traffic to the site.


ieee conference on mass storage systems and technologies | 2001

Efficient Algorithms for Persistent Storage Allocation

Arun Iyengar; Shudong Jin; Jim Challenger

Efficient disk storage is a crucial component for many applications. The commonly used method of storing data on disk using file systems or databases incurs significant overhead which can be a problem for applications which need to frequently access and update a large number of objects. This paper presents efficient algorithms for managing persistent storage which usually only require a single seek for allocations and deallocations and allow the state of the system to be fully recoverable in the event of a failure. Our system has been deployed for persistently storing data at the most accessed sport and event Web site hosted by IBM and results in considerable performance improvements over databases and file systems for Web-related workloads.
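The single-seek property described in the abstract follows from keeping allocation metadata in memory: choosing a free block costs no I/O, so the only disk operation is the seek-and-write of the object itself. The sketch below illustrates one way such an allocator could look, with in-memory free lists of fixed block sizes (a quick-fit style scheme); the class, sizes, and policy are assumptions for illustration, not the paper's actual algorithms, and durable on-disk block headers (needed for crash recovery) are not modeled.

```python
class PersistentAllocator:
    """Toy quick-fit allocator sketch: in-memory free lists keyed by block
    size let an allocation pick a disk offset with no metadata I/O, so
    storing the object needs only a single seek. On-disk block headers
    (not modeled here) would let the free lists be rebuilt after a crash."""

    def __init__(self, blocks_per_size=4, block_sizes=(64, 256, 1024)):
        self.block_sizes = sorted(block_sizes)
        self.free = {}
        offset = 0
        # carve the storage area into fixed-size blocks up front
        for size in self.block_sizes:
            lst = []
            for _ in range(blocks_per_size):
                lst.append(offset)
                offset += size
            self.free[size] = lst

    def allocate(self, nbytes):
        # pick the smallest block size that fits; the returned offset is
        # where a single seek-and-write stores the object
        for size in self.block_sizes:
            if size >= nbytes and self.free[size]:
                return self.free[size].pop(), size
        raise MemoryError("no free block large enough")

    def deallocate(self, offset, size):
        # freeing is a pure in-memory operation in this sketch
        self.free[size].append(offset)
```

Contrast this with a file system or database, where each allocation and free typically touches on-disk metadata as well as the data, which is the overhead the paper's approach avoids.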


Journal of Systems and Software | 2003

Techniques for efficiently allocating persistent storage

Arun Iyengar; Shudong Jin; Jim Challenger

Efficient disk storage is a crucial component for many applications. The commonly used method of storing data on disk using file systems or databases incurs significant overhead which can be a problem for applications which need to frequently access and update a large number of objects. This paper presents efficient algorithms for managing persistent storage which usually only require a single seek for allocations and deallocations and allow the state of the system to be fully recoverable in the event of a failure. We have developed a portable implementation of our algorithms in Java. Results in this paper demonstrate the superiority of our approach over file systems and databases for Web-related workloads. Our system has been a crucial component for persistently storing data at a number of highly accessed Web sites. We describe our experiences from a large real deployment of our system.


usenix symposium on internet technologies and systems | 1997

Improving web server performance by caching dynamic data

Arun Iyengar; Jim Challenger
