
Publication


Featured research published by Anupam Bhide.


International Conference on Data Engineering | 1993

A simple analysis of the LRU buffer policy and its relationship to buffer warm-up transient

Anupam Bhide; Asit Dan; Daniel M. Dias

A simple analysis of the transient buffer hit probability for a system starting with an empty buffer is presented. The independent reference model (IRM) is used for buffer accesses. It is shown that the expected buffer hit probability when the buffer becomes full is virtually identical to the steady-state buffer hit probability when the replacement policy is least recently used (LRU). The method is generalized to estimate the transient behavior of the LRU policy starting with a non-empty buffer. It is shown that this method can be used to estimate the effect of a load surge on the buffer hit probability. It is also shown that after a short load surge, it can take much longer than the surge duration for the buffer hit probability to return to its steady-state value.
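
The claim is easy to probe numerically. Below is a minimal sketch (not from the paper; the page count, buffer size, and Zipf-like popularity skew are all illustrative assumptions) that tracks the instantaneous hit probability of an initially empty LRU buffer under the IRM, where that probability is simply the summed reference probability of the pages currently buffered:

```python
import random
from collections import OrderedDict

def lru_hit_probability(n_pages=5000, buf_size=500, n_refs=100_000, seed=1):
    """Under the IRM, the instantaneous hit probability of a buffer equals
    the summed reference probability of the pages it currently holds.
    Track that quantity for an initially empty LRU buffer, both at the
    moment the buffer first fills and after a long run."""
    rng = random.Random(seed)
    raw = [1.0 / (i + 1) ** 0.8 for i in range(n_pages)]       # Zipf-like popularity
    total = sum(raw)
    prob = [w / total for w in raw]
    refs = rng.choices(range(n_pages), weights=raw, k=n_refs)  # IRM reference string
    buf = OrderedDict()             # pages in LRU order, oldest first
    in_buf_prob = 0.0               # sum of prob[p] over buffered pages
    hit_prob_at_fill = None
    for page in refs:
        if page in buf:
            buf.move_to_end(page)                     # refresh recency on a hit
        else:
            if len(buf) >= buf_size:
                evicted, _ = buf.popitem(last=False)  # evict least recently used
                in_buf_prob -= prob[evicted]
            buf[page] = None
            in_buf_prob += prob[page]
            if hit_prob_at_fill is None and len(buf) == buf_size:
                hit_prob_at_fill = in_buf_prob        # transient value at fill time
    return hit_prob_at_fill, in_buf_prob

at_fill, long_run = lru_hit_probability()
print(f"hit probability at fill: {at_fill:.3f}; after long run: {long_run:.3f}")
```

The two printed values should come out close, echoing the paper's observation: at fill time the buffer holds the most recently referenced distinct pages, which is also what an LRU buffer holds in steady state.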


International Conference on Management of Data | 1992

An efficient scheme for providing high availability

Anupam Bhide; Ambuj Goyal; Hui-I Hsiao; Anant Jhingran

Replication at the partition level is a promising approach for increasing availability in a Shared Nothing architecture. We propose an algorithm for maintaining replicas with little overhead during normal failure-free processing. Our mechanism updates the secondary replica in an asynchronous manner: entire dirty pages are sent to the secondary at some time before they are discarded from the primary's buffer. A log server node (hardened against failures) maintains the log for each node. If a primary node fails, the secondary fetches the log from the log server, applies it to its replica, and brings itself to the primary's last transaction-consistent state. We study the performance of various policies for sending pages to the secondary and the corresponding trade-offs between recovery time and overhead during failure-free processing.
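
A rough sketch of the mechanism follows (hypothetical names and structures, not the paper's protocol): the primary ships whole dirty pages to the secondary before evicting them, a log server keeps the per-node log, and on takeover the secondary redoes the log tail past its newest shipped page images.

```python
from dataclasses import dataclass

@dataclass
class LogRecord:
    lsn: int        # log sequence number
    page_id: int
    payload: bytes  # redo information, here simplified to a full page image

class LogServer:
    """Failure-hardened node that keeps the primary's log (illustrative)."""
    def __init__(self):
        self.records: list[LogRecord] = []

    def append(self, rec: LogRecord) -> None:
        self.records.append(rec)

    def fetch_since(self, lsn: int) -> list[LogRecord]:
        return [r for r in self.records if r.lsn > lsn]

class Secondary:
    """Replica that is updated lazily with whole dirty pages."""
    def __init__(self):
        self.pages: dict[int, bytes] = {}
        self.page_lsn: dict[int, int] = {}  # LSN up to which each page is current

    def install_page(self, page_id: int, data: bytes, lsn: int) -> None:
        # Normal processing: the primary ships the whole dirty page
        # asynchronously, some time before evicting it from its buffer.
        self.pages[page_id] = data
        self.page_lsn[page_id] = lsn

    def take_over(self, log: LogServer) -> None:
        # Primary failed: fetch the log tail and redo every update newer
        # than the shipped page images, reaching the primary's last
        # transaction-consistent state.
        oldest = min(self.page_lsn.values(), default=0)
        for rec in log.fetch_since(oldest):
            if rec.lsn > self.page_lsn.get(rec.page_id, 0):
                self.pages[rec.page_id] = rec.payload
                self.page_lsn[rec.page_id] = rec.lsn
```

The policy question the paper studies lives in when install_page gets called: shipping pages eagerly shortens take_over (less log to redo) but raises failure-free overhead, and vice versa.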


International Conference on Distributed Computing Systems | 1991

A comparison of two approaches to build reliable distributed file servers

Anupam Bhide; Elmootazbellah Nabil Elnozahy; Stephen P. Morgan; A. Siegel

Several existing distributed file systems provide reliability by server replication. An alternative approach is to use dual-ported disks accessible to both a server and a backup. The two approaches are compared by examining an example of each. Deceit is a replicated file server that emphasizes flexibility. HA-NFS is an example of the second approach that emphasizes efficiency and simplicity. The two file servers run on the same hardware and implement Sun's NFS protocol. The comparison shows that replicated servers are more flexible and tolerate a wider variety of faults. On the other hand, the dual-ported disk approach is more efficient and simpler to implement. When tolerating a single failure, dual-ported disks also give somewhat better availability.


Workshop on Management of Replicated Data | 1990

Implicit replication in a network file server

Anupam Bhide; Elmootazbellah Nabil Elnozahy; Stephen P. Morgan

The design and implementation of a highly available network file server (HA-NFS) is reported. It is implemented on a network of workstations from the IBM RISC System/6000 family. HA-NFS servers preserve the semantics of the NFS protocol and can be used by existing NFS clients without modification. Therefore, existing application programs can benefit from high availability without alteration. HA-NFS achieves storage reliability by (optionally) replicating files on different disks. However, all copies of the same file are controlled by a single server, reducing the cost of ensuring consistency. To achieve server reliability, each server is implicitly replicated by a backup that can access the server's disks if the server fails. During normal operation, the backup monitors the liveness of the server but does not maintain information about the server's internal state. Each server maintains a disk log that records state information normally kept in memory.
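
The liveness-monitoring side of this design is small enough to sketch. The loop below is an illustration, not HA-NFS code; the four callables and the timing constants are assumptions:

```python
import time

HEARTBEAT_INTERVAL = 1.0       # seconds between liveness probes (illustrative)
MISSES_BEFORE_TAKEOVER = 3     # consecutive missed probes that declare failure

def backup_loop(primary_alive, acquire_disks, replay_disk_log, serve_requests):
    """Backup's liveness loop. It keeps none of the server's in-memory state;
    on failure it attaches the dual-ported disks and rebuilds that state from
    the server's on-disk log. All four callables are hypothetical hooks."""
    misses = 0
    while True:
        if primary_alive():         # e.g. a ping over the network
            misses = 0
        else:
            misses += 1
            if misses >= MISSES_BEFORE_TAKEOVER:
                acquire_disks()     # dual-ported disks: backup takes ownership
                replay_disk_log()   # recover state the server kept in memory
                serve_requests()    # answer NFS requests on the server's behalf
                return
        time.sleep(HEARTBEAT_INTERVAL)
```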


IEEE International Symposium on Fault-Tolerant Computing | 1993

A case for fault-tolerant memory for transaction processing

Anupam Bhide; Daniel M. Dias; Nagui Halim; T. Basil Smith; Francis Nicholas Parr

For database transaction processing, the authors compare the relative price-performance of storing data in volatile memory (V-mem), fault-tolerant non-volatile memory (FT-mem), and disk. First, they extend Gray's five-minute rule, which compares the relative cost of storing read-only data in volatile memory as against disk, to read-write data. Second, they show that because of additional write overhead, FT-mem has a higher advantage over V-mem than previously thought. Previous studies comparing volatile and non-volatile memories have focused on the response-time advantages of putting log data in non-volatile memory. The authors show that there is a direct reduction in disk I/O, which leads to a much larger savings in cost using an FT-mem buffer. Third, the five-minute rule rests on a simple model that assumes knowledge of inter-access times for data items. The authors present a more realistic model that assumes an LRU buffer management policy. They combine this with the recovery-time constraint and study the resulting price-performance. It is shown that the use of an FT-mem buffer can lead to a significant benefit in terms of overall price-performance.
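
The original five-minute rule reduces to one line of arithmetic: a page earns its place in memory when the rent on the memory holding it is cheaper than the rent on the disk-arm capacity its accesses would consume. A quick sketch of that calculation (the price and throughput figures are illustrative circa-1987 assumptions, not numbers from this paper):

```python
def break_even_interval_s(disk_price, disk_ios_per_sec, mem_price_per_mb, page_kb):
    """Break-even re-reference interval: cache a page in memory if it is
    accessed more often than once per this many seconds."""
    cost_per_access_per_sec = disk_price / disk_ios_per_sec  # $ per (access/s)
    pages_per_mb = 1024 / page_kb
    cost_per_cached_page = mem_price_per_mb / pages_per_mb   # $ to hold one page
    return cost_per_access_per_sec / cost_per_cached_page

# Illustrative figures: a $15,000 disk delivering 15 I/Os per second,
# memory at $5,000/MB, 1 KB pages.
print(break_even_interval_s(15_000, 15, 5_000, 1))  # ~205 s, i.e. a few minutes
```

The abstract's point is that for read-write data an FT-mem buffer also absorbs write I/O, so the break-even shifts further toward memory than this read-only calculation suggests.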


Workshop on Management of Replicated Data | 1992

Experiences with two high availability designs (replication techniques)

Anupam Bhide

The author compares two replication schemes designed to provide high availability in an efficient manner: HA-NFS and ARM. Both schemes use the primary copy method for replica control, and both were designed with the goal of minimizing overhead during failure-free operation. In a primary copy scheme this overhead consists primarily of updating the secondary replicas. The two schemes were designed for different applications: ARM provides high availability in a Shared Nothing database system, while HA-NFS provides high availability in an NFS file server environment. They also differ in that HA-NFS uses dual-ported disks to provide high availability, whereas ARM uses replication over a network. In spite of these seemingly major differences, the schemes share the same key conceptual idea, namely propagating updates asynchronously to remote replicas. In addition, HA-NFS uses an unusual hardware arrangement, dual-ported disks, to further lower the overhead of updating secondary replicas.


Archive | 1994

Asynchronous replica management in shared nothing architectures

Anupam Bhide; George P. Copeland; Ambuj Goyal; Hui-I Hsiao; Anant Jhingran; C. Mohan


Archive | 1996

Method and system for database load balancing

Anupam Bhide; Daniel M. Dias; Ambuj Goyal; Francis Nicholas Parr; Joel L. Wolf


USENIX Winter | 1991

A Highly Available Network File Server.

Anupam Bhide; Elmootazbellah Nabil Elnozahy; Stephen P. Morgan


Archive | 1993

Disk storage apparatus and method for converting random writes to sequential writes while retaining physical clustering on disk

Anupam Bhide; Daniel M. Dias
