Publication


Featured research published by Renu Tewari.


COMPCON '96. Technologies for the Information Superhighway Digest of Papers | 1996

A scalable and highly available web server

Daniel M. Dias; William A. Kish; Rajat Mukherjee; Renu Tewari

We describe a prototype scalable and highly available web server, built on an IBM SP-2 system, and analyze its scalability. The system architecture consists of a set of logical front-end or network nodes and a set of back-end or data nodes connected by a switch, and a load balancing component. A combination of TCP routing and Domain Name Server (DNS) techniques is used to balance the load across the front-end nodes that run the Web (httpd) daemons. The scalability achieved is quantified and compared with that of the known DNS technique. The load on the back-end nodes is balanced by striping the data objects across the back-end nodes and disks. High availability is provided by detecting node or daemon failures and reconfiguring the system appropriately. The scalable and highly available web server is combined with parallel databases, and other back-end servers, to provide integrated scalable and highly available solutions.
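
The abstract only describes the architecture; as a rough, hypothetical sketch of the dispatch idea (spreading incoming connections round-robin across front-end nodes), the Python fragment below uses made-up node names and ignores failure handling and the DNS component entirely:

from itertools import cycle

class TcpRouter:
    """Hands each incoming connection to the next front-end node in turn."""

    def __init__(self, front_end_nodes):
        self._nodes = cycle(front_end_nodes)   # round-robin over the node list

    def route(self, connection_id):
        return connection_id, next(self._nodes)

router = TcpRouter(["fe-node-1", "fe-node-2", "fe-node-3"])   # hypothetical names
for conn in range(6):
    print(router.route(conn))   # connections spread evenly across the three nodes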


International Conference on Computer Communications | 2001

On the effectiveness of DNS-based server selection

Anees Shaikh; Renu Tewari; Mukesh Agrawal

The rapid growth of the Internet in users and content has fueled extensive efforts to improve users' overall Internet experience. A growing number of providers deliver content from multiple servers or proxies to reduce response time by moving content closer to end users. An increasingly popular mechanism to direct clients to the closest point of service is DNS-based redirection, due to its transparency and generality. This paper draws attention to two of the main issues in using DNS: (1) the negative effects of reducing or eliminating the cache lifetimes of DNS information, and (2) the implicit assumption that client nameservers are indicative of actual client location and performance. We quantify the impact of reducing DNS TTL values on Web access latency and show that it can increase name resolution latency by two orders of magnitude. Using HTTP and DNS server logs, as well as a large number of dial-up ISP clients, we measure client-nameserver proximity and show that a significant fraction are distant, more than 8 hops apart. Finally, we suggest protocol modifications to improve the accuracy of DNS-based redirection schemes.
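
To make the TTL trade-off concrete, here is a back-of-the-envelope calculation, with invented request rates and resolution costs rather than figures from the paper, of how the average extra latency per request grows as the DNS TTL shrinks toward zero:

def added_resolution_latency_ms(requests_per_hour, ttl_seconds, resolution_ms):
    """Average extra milliseconds per request due to cache-miss resolutions."""
    if ttl_seconds <= 0:
        lookups_per_hour = requests_per_hour          # every request re-resolves
    else:
        lookups_per_hour = min(3600.0 / ttl_seconds, requests_per_hour)
    return lookups_per_hour * resolution_ms / requests_per_hour

for ttl in (3600, 300, 20, 0):                        # seconds
    extra = added_resolution_latency_ms(120, ttl, resolution_ms=200)
    print(f"TTL {ttl:>4}s -> ~{extra:.1f} ms extra per request")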


International Conference on Distributed Computing Systems | 1999

Design considerations for distributed caching on the Internet

Renu Tewari; Michael Dahlin; Harrick M. Vin; Jonathan S. Kay

We describe the design and implementation of an integrated architecture for cache systems that scale to hundreds or thousands of caches with thousands to millions of users. Rather than simply try to maximize hit rates, we take an end-to-end approach to improving response time by also considering hit times and miss times. We begin by studying several Internet caches and workloads, and we derive three core design principles for large scale distributed caches: minimize the number of hops to locate and access data on both hits and misses; share data among many users and scale to many caches; and cache data close to clients. Our strategies for addressing these issues are built around a scalable, high-performance data-location service that tracks where objects are replicated. We describe how to construct such a service and how to use this service to provide direct access to remote data and push-based data replication. We evaluate our system through trace-driven simulation and find that these strategies together provide response time speedups of 1.27 to 2.43 compared to a traditional three-level cache hierarchy for a range of trace workloads and simulated environments.
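
As a toy illustration of the data-location idea (not the paper's implementation), the sketch below keeps a directory of which caches hold a replica of each object, so a miss can be sent directly to a nearby replica instead of climbing a cache hierarchy; URLs and cache names are invented:

from collections import defaultdict

class LocationService:
    """Directory mapping each object to the caches holding a replica of it."""

    def __init__(self):
        self._replicas = defaultdict(set)

    def register(self, url, cache_id):
        self._replicas[url].add(cache_id)

    def locate(self, url):
        return self._replicas.get(url, set())

directory = LocationService()
directory.register("http://example.com/a.html", "cache-eu-1")
directory.register("http://example.com/a.html", "cache-us-2")
print(directory.locate("http://example.com/a.html"))    # forward a miss to either replica
print(directory.locate("http://example.com/missing"))   # empty set: go to the origin server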


IEEE Computer Society International Conference | 1995

Buffering and caching in large-scale video servers

Asit Dan; Daniel M. Dias; Rajat Mukherjee; Dinkar Sitaram; Renu Tewari

Video-on-demand servers are characterized by stringent real-time constraints, as each stream requires isochronous data playout. The capacity of the system depends on the acceptable jitter per stream (the number of data blocks that do not meet their real-time constraints). Per-stream read-ahead buffering avoids the disruption in playback caused by variations in disk access time and queuing delays. With heavily skewed access patterns to the stored video data, the system is often disk arm-bound. In such cases, serving video streams from a memory cache can result in a substantial reduction in server cost. In this paper, we study the cost-performance trade-offs of various buffering and caching strategies that can be used in a large-scale video server. We first study the cost impact of varying the buffer size, disk utilization and the disk characteristics on the overall capacity of the system. Subsequently, we study the cost-effectiveness of a technique for memory caching across streams that exploits temporal locality and workload fluctuations.
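
A small, hypothetical example of the read-ahead sizing question raised above: given a stream's playout rate and the worst-case gap between successive disk services, how many blocks must be buffered per stream so that playout is never starved (the numbers are illustrative, not from the paper):

import math

def read_ahead_blocks(playout_rate_kbps, worst_case_gap_ms, block_size_kb):
    """Blocks to buffer so playout survives the worst-case disk-service gap."""
    kb_consumed = playout_rate_kbps / 8.0 * (worst_case_gap_ms / 1000.0)
    return math.ceil(kb_consumed / block_size_kb)

# e.g. a 1.5 Mbps stream, a 400 ms worst-case gap, 64 KB blocks
print(read_ahead_blocks(1500, 400, 64))   # -> 2 blocks of read-ahead per stream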


International Conference on Data Engineering | 2003

DBProxy: a dynamic data cache for web applications

Khalil Amiri; Sanghyun Park; Renu Tewari; Sriram Padmanabhan

The majority of web pages served today are generated dynamically, usually by an application server querying a back-end database. To enhance the scalability of dynamic content serving in large sites, application servers are offloaded to front-end nodes, called edge servers. The improvement from such application offloading is marginal, however, if data is still fetched from the origin database system. To further improve scalability and cut response times, data must be effectively cached on such edge servers. The scale of deployment of edge servers and the rising costs of their administration demand that such caches be self-managing and adaptive. In this paper, we describe DBProxy, an edge-of-network semantic data cache for web applications. DBProxy is designed to adapt to changes in the workload in a transparent and graceful fashion by caching a large number of overlapping and dynamically changing “materialized views”. New “views” are added automatically while others may be discarded to save space. In this paper, we discuss the challenges of designing and implementing such a dynamic edge data cache, and describe our proposed solutions.
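
The following toy sketch conveys the semantic-caching idea in miniature: result sets are cached keyed by a simple range predicate, and a new query is answered locally when its range is contained in a cached one. DBProxy's actual containment checking and storage are far more general; the column and row values here are invented:

class RangeCache:
    """Caches result sets keyed by a one-column range predicate."""

    def __init__(self):
        self._views = []   # (low, high, rows) for "price BETWEEN low AND high"

    def add(self, low, high, rows):
        self._views.append((low, high, rows))

    def lookup(self, low, high):
        for cached_low, cached_high, rows in self._views:
            if cached_low <= low and high <= cached_high:      # containment check
                return [r for r in rows if low <= r["price"] <= high]
        return None   # miss: forward the query to the origin database

cache = RangeCache()
cache.add(0, 100, [{"item": "pen", "price": 3}, {"item": "lamp", "price": 40}])
print(cache.lookup(10, 50))    # answered at the edge: [{'item': 'lamp', 'price': 40}]
print(cache.lookup(50, 500))   # not contained in any cached range -> None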


IEEE Transactions on Knowledge and Data Engineering | 2003

Adaptive leases: a strong consistency mechanism for the World Wide Web

Venkata Duvvuri; Prashant J. Shenoy; Renu Tewari

We argue that weak cache consistency mechanisms supported by existing Web proxy caches must be augmented by strong consistency mechanisms to support the growing diversity in application requirements. Existing strong consistency mechanisms are not appealing for Web environments due to their large state space or control message overhead. We focus on the lease approach that balances these trade-offs and present analytical models and policies for determining the optimal lease duration. We present extensions to the HTTP protocol to incorporate leases and then implement our techniques in the Squid proxy cache and the Apache Web server. Our experimental evaluation of the leases approach shows that: 1) our techniques impose modest overheads even for long leases (a lease duration of 1 hour requires state to be maintained for 1030 leases and imposes a per-object overhead of a control message every 33 minutes), 2) leases yield a 138-425 percent improvement over existing strong consistency mechanisms, and 3) the implementation overhead of leases is comparable to existing weak consistency mechanisms.
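
A minimal sketch of the lease mechanism, assuming a simple in-memory server that grants fixed-duration leases and, on an update, notifies only the proxies whose leases are still valid; none of this is the paper's Squid or Apache code, and the object and proxy names are placeholders:

import time

class LeaseServer:
    """Grants fixed-duration leases and tracks which ones are still valid."""

    def __init__(self, lease_duration_s):
        self.lease_duration_s = lease_duration_s
        self._leases = {}                      # (object, proxy) -> expiry time

    def grant(self, obj, proxy):
        expiry = time.time() + self.lease_duration_s
        self._leases[(obj, proxy)] = expiry
        return expiry

    def holders_to_notify(self, obj):
        """Proxies whose leases on obj have not yet expired when obj changes."""
        now = time.time()
        return [p for (o, p), exp in self._leases.items() if o == obj and exp > now]

server = LeaseServer(lease_duration_s=3600)
server.grant("/index.html", "proxy-A")
print(server.holders_to_notify("/index.html"))   # ['proxy-A'] until the lease lapses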


Conference on Multimedia Computing and Networking | 1997

Resource-based caching for Web servers

Renu Tewari; Harrick M. Vin; Asit Dan; Dinkar Sitaram

The WWW employs a hierarchical data dissemination architecture in which hyper-media objects stored at a remote server are served to clients across the Internet, and cached on disks at intermediate proxy servers. One of the objectives of web caching algorithms is to maximize the data transferred from the proxy servers or cache hierarchies. Current web caching algorithms are designed only for text and image data. Recent studies predict that within the next five years more than half the objects stored at web servers will contain continuous media data. To support these trends, the next generation proxy cache algorithms will need to handle multiple data types, each with different cache resource usage, for a cache limited by both bandwidth and space. In this paper, we present a resource-based caching (RBC) algorithm that manages the heterogeneous requirements of multiple data types. The RBC algorithm (1) characterizes each object by its resource requirement and a caching gain, (2) dynamically selects the granularity of the entity to be cached that minimally uses the limited cache resource (i.e., bandwidth or space), and (3) if required, replaces the cached entities based on their cache resource usage and caching gain. We have performed extensive simulations to evaluate our caching algorithm and present simulation results that show that RBC outperforms other known caching algorithms.
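
As an illustration of the replacement policy's flavor (not the full RBC algorithm), the snippet below evicts the cached entities with the lowest caching gain per unit of the constrained resource until enough of that resource is freed; the objects and numbers are made up:

def evict_until_freed(cached, needed, resource):
    """Evict the lowest gain-per-resource entities until `needed` units are freed."""
    victims, freed = [], 0.0
    for obj in sorted(cached, key=lambda o: o["gain"] / o[resource]):
        if freed >= needed:
            break
        victims.append(obj["name"])
        freed += obj[resource]
    return victims

cached = [
    {"name": "clip.mpg",  "space": 500.0, "gain": 120.0},   # large, modest gain
    {"name": "logo.gif",  "space": 0.2,   "gain": 30.0},
    {"name": "page.html", "space": 0.05,  "gain": 12.0},
]
print(evict_until_freed(cached, needed=400.0, resource="space"))   # ['clip.mpg']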


International Workshop on Quality of Service | 2002

An observation-based approach towards self-managing Web servers

Prashant Pradhan; Renu Tewari; Sambit Sahu; Abhishek Chandra; Prashant J. Shenoy

Web server architectures that provide performance isolation, service differentiation, and QoS guarantees rely on external administrators to set the right parameter values for the desired performance. Due to the complexity of handling varying workloads and bottleneck resources, configuring such parameters optimally becomes a challenge. In this paper we describe an observation-based approach for self-managing Web servers that can adapt to changing workloads while maintaining the QoS requirements of different classes. In this approach, the system state is monitored continuously and parameter values of various system resources, primarily the accept queue and the CPU, are adjusted to maintain the system-wide QoS goals. We implement our techniques using the Apache Web server and the Linux operating system. We first demonstrate the need to manage different resources in the system depending on the workload characteristics. We then experimentally demonstrate that our observation-based system can adapt to workload changes by dynamically adjusting the resource shares in order to maintain the QoS goals.
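
A hypothetical sketch of such an observation-based control loop: each class's measured response time is compared with its target, its CPU share is nudged accordingly, and the shares are renormalized. The class names, targets, and gain constant are illustrative, not taken from the paper:

def adjust_shares(shares, measured_ms, target_ms, gain=0.1):
    """Grow the share of classes missing their target, then renormalize to 1."""
    updated = {}
    for cls, share in shares.items():
        error = (measured_ms[cls] - target_ms[cls]) / target_ms[cls]
        updated[cls] = max(0.05, share * (1.0 + gain * error))
    total = sum(updated.values())
    return {cls: s / total for cls, s in updated.items()}

shares = {"gold": 0.5, "bronze": 0.5}
print(adjust_shares(shares,
                    measured_ms={"gold": 450, "bronze": 200},
                    target_ms={"gold": 300, "bronze": 400}))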


International Conference on Multimedia Computing and Systems | 1996

Design and performance tradeoffs in clustered video servers

Renu Tewari; Rajat Mukherjee; Daniel M. Dias; Harrick M. Vin

We investigate the suitability of clustered architectures for designing scalable multimedia servers. Specifically, we evaluate the effects of: (i) architectural design of the cluster; (ii) the size of the unit of data interleaving; and (iii) read-ahead buffering and scheduling on the real time performance guarantees provided by the server. To analyze the effects of these parameters, we develop an analytical model of clustered multimedia servers, and then validate it through extensive simulations. The results of our analysis have formed the basis of our prototype implementation based on an RS/6000 Scalable Parallel (SP) machine. We briefly describe the prototype and discuss some implementation details.
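
A trivial example of the data-interleaving parameter mentioned above, assuming successive striping units of a video are placed round-robin across nodes and then across each node's disks; the cluster sizes are illustrative, not the paper's configuration:

def block_location(block_index, num_nodes, disks_per_node):
    """Node and disk holding a given striping unit under round-robin interleaving."""
    node = block_index % num_nodes
    disk = (block_index // num_nodes) % disks_per_node
    return node, disk

for b in range(8):
    print(b, block_location(b, num_nodes=4, disks_per_node=2))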


International World Wide Web Conferences | 2002

Cooperative leases: scalable consistency maintenance in content distribution networks

Anoop George Ninan; Purushottam Kulkarni; Prashant J. Shenoy; Krithi Ramamritham; Renu Tewari

In this paper, we argue that cache consistency mechanisms designed for stand-alone proxies do not scale to the large number of proxies in a content distribution network and are not flexible enough to allow consistency guarantees to be tailored to object needs. To meet the twin challenges of scalability and flexibility, we introduce the notion of cooperative consistency along with a mechanism, called cooperative leases, to achieve it. By supporting Δ-consistency semantics and by using a single lease for multiple proxies, cooperative leases allows the notion of leases to be applied in a flexible, scalable manner to CDNs. Further, the approach employs application-level multicast to propagate server notifications to proxies in a scalable manner. We implement our approach in the Apache web server and the Squid proxy cache and demonstrate its efficacy using a detailed experimental evaluation. Our results show a factor of 2.5 reduction in server message overhead and a 20% reduction in server state space overhead when compared to original leases albeit at an increased inter-proxy communication overhead.
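
A hypothetical sketch of the single-lease-per-group idea: the server records one lease per proxy group and, on an update, notifies only each group's designated leader, which relays the notification to its members. In the paper this relay uses application-level multicast; the class below just returns the fan-out list, and the proxy names are invented:

class CooperativeLeaseServer:
    """Keeps one lease per proxy group and notifies only each group's leader."""

    def __init__(self):
        self._groups = {}   # object -> {leader proxy: [member proxies]}

    def grant(self, obj, leader, members):
        self._groups.setdefault(obj, {})[leader] = list(members)

    def notify_update(self, obj):
        return [(leader, members)
                for leader, members in self._groups.get(obj, {}).items()]

server = CooperativeLeaseServer()
server.grant("/news.html", leader="proxy-east-1",
             members=["proxy-east-2", "proxy-east-3"])
for leader, members in server.notify_update("/news.html"):
    print(f"notify {leader}; it relays the update to {members}")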
