Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Rakesh D. Barve is active.

Publication


Featured research published by Rakesh D. Barve.


measurement and modeling of computer systems | 1999

Modeling and optimizing I/O throughput of multiple disks on a bus

Rakesh D. Barve; Elizabeth A. M. Shriver; Phillip B. Gibbons; Bruce Hillyer; Yossi Matias; Jeffrey Scott Vitter

In modern I/O architectures, multiple disk drives are attached to each I/O controller. A study of the performance of such architectures under I/O-intensive workloads has revealed a performance impairment that results from a previously unknown form of convoy behavior in disk I/O. In this paper we describe measurements of the read performance of multiple disks that share a SCSI bus under a heavy workload, and we develop and validate formulas that accurately characterize the observed performance on several platforms for I/O sizes in the KB range. Two terms in the formula clearly characterize the lost performance seen in our experiments. We describe techniques to deal with the performance impairment via user-level workarounds that achieve greater overlap of bus transfers with disk seeks and that increase the percentage of transfers that occur at the full bus bandwidth rather than at the lower bandwidth of a disk head. Experiments show bandwidth improvements when using these user-level techniques, but only in the case of large I/Os.
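As a rough illustration of what such a characterization looks like, the sketch below is a hypothetical toy model, not the paper's validated formula: it assumes each request pays a seek plus half a rotation, that seeks on different disks overlap, and that the shared bus carries one transfer at a time; all parameter names and default values are invented for the example.

```python
def modeled_throughput_mb_s(num_disks, io_size_kb,
                            avg_seek_ms=8.0, avg_rotation_ms=4.0,
                            head_bw_mb_s=15.0, bus_bw_mb_s=40.0):
    """Toy read-throughput model for several disks on one shared bus.

    Illustrative assumptions (not taken from the paper): each request pays a
    seek plus half a rotation on its disk; disks can seek concurrently, but
    the bus carries only one transfer at a time; a per-disk transfer is
    limited by the head bandwidth, a bus transfer by the bus bandwidth.
    """
    io_mb = io_size_kb / 1024.0
    mech_ms = avg_seek_ms + avg_rotation_ms            # per-request mechanical delay
    head_transfer_ms = io_mb / head_bw_mb_s * 1000.0   # transfer limited by the head
    bus_transfer_ms = io_mb / bus_bw_mb_s * 1000.0     # transfer limited by the bus

    # With enough disks, mechanical delays overlap with other disks' bus
    # transfers and the bus becomes the bottleneck; with few disks the
    # per-disk mechanical + transfer path dominates.
    per_disk_ms = mech_ms + head_transfer_ms
    effective_ms = max(bus_transfer_ms, per_disk_ms / num_disks)
    return io_mb / (effective_ms / 1000.0)

if __name__ == "__main__":
    for d in (1, 2, 4, 8):
        for kb in (16, 64, 256):
            print(f"D={d:2d} io={kb:3d}KB -> {modeled_throughput_mb_s(d, kb):6.1f} MB/s")
```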


acm symposium on parallel algorithms and architectures | 1996

Simple randomized mergesort on parallel disks

Rakesh D. Barve; Edward F. Grove; Jeffrey Scott Vitter

We consider the problem of sorting a file of N records on the D-disk model of parallel I/O [VS94], in which there are two sources of parallelism. Records are transferred to and from disk concurrently in blocks of B contiguous records. In each I/O operation, up to one block can be transferred to or from each of the D disks in parallel. We propose a simple, efficient, randomized mergesort algorithm called SRM that uses a forecast-and-flush approach to overcome the inherent difficulties of simple merging on parallel disks. SRM exhibits a limited use of randomization and also has a useful deterministic version. Generalizing the forecasting technique of [Knu73], our algorithm is able to read in, at any time, the right block from any disk, and using the technique of flushing, our algorithm evicts, without any I/O overhead, just the right blocks from memory to make space for new ones to be read in. The disk layout of SRM is such that it enjoys perfect write parallelism, avoiding fundamental inefficiencies of previous mergesort algorithms. Our analysis technique involves a novel reduction to various maximum occupancy problems. We prove that the expected I/O performance of SRM is efficient under varying sizes of memory and that it compares favorably in practice to disk-striped mergesort (DSM). Our studies indicate that SRM outperforms DSM even when the number D of parallel disks is fairly small.
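A minimal sketch of the forecasting idea (a simplification for illustration, not the paper's full SRM algorithm): when several runs have their next unread block on the same disk, fetch the block with the smallest leading key, since the merge will consume it soonest. The layout in the example below is invented.

```python
def forecast_choices(next_blocks):
    """Simplified forecasting rule: for each disk, among the runs whose next
    unread block resides on that disk, pick the block with the smallest
    leading key -- the block the merge will need soonest.

    next_blocks: list of (leading_key, run_id, disk_id), one per run.
    Returns: dict disk_id -> run_id whose block to fetch on the next parallel I/O.
    """
    best = {}
    for leading_key, run_id, disk_id in next_blocks:
        if disk_id not in best or leading_key < best[disk_id][0]:
            best[disk_id] = (leading_key, run_id)
    return {disk: run for disk, (_, run) in best.items()}

# Illustrative example: four runs, two disks; runs 0 and 2 both have their next
# block on disk 0, so the forecast fetches run 2's block first (key 3 < 17).
example = [(17, 0, 0), (5, 1, 1), (3, 2, 0), (42, 3, 1)]
print(forecast_choices(example))   # {0: 2, 1: 1}
```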


workshop on i/o in parallel and distributed systems | 1997

Competitive parallel disk prefetching and buffer management

Rakesh D. Barve; Mahesh Kallahalla; Peter J. Varman; Jeffrey Scott Vitter

We provide a competitive analysis framework for online prefetching and buffer management algorithms in parallel I/O systems, using a read-once model of block references. This has widespread applicability to key I/O-bound applications such as external merging and concurrent playback of multiple video streams. Two realistic lookahead models, global lookahead and local lookahead, are defined. Algorithms NOM and GREED, based on these two forms of lookahead, are analyzed for shared buffer and distributed buffer configurations, both of which [...]
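The sketch below illustrates prefetching with lookahead in a read-once block reference stream under an idealized setting; it is an assumption-laden toy, not the paper's NOM or GREED algorithm. In each parallel I/O step it fetches from every disk the unfetched block that the consumer will reference earliest.

```python
def prefetch_schedule(reference_sequence, disk_of):
    """Toy prefetching sketch for a read-once block reference stream.

    In each parallel I/O step, fetch from every disk the not-yet-fetched block
    that will be referenced earliest (uses lookahead into the future reference
    sequence; a simplification, not the paper's NOM/GREED algorithms).

    reference_sequence: blocks in the order they will be consumed (each once).
    disk_of: dict mapping block -> disk id.
    Returns: list of I/O steps, each a dict disk -> block fetched in that step.
    """
    fetched = set()
    steps = []
    while len(fetched) < len(reference_sequence):
        step = {}
        for block in reference_sequence:
            disk = disk_of[block]
            if block not in fetched and disk not in step:
                step[disk] = block          # earliest-needed block on this disk
        fetched.update(step.values())
        steps.append(step)
    return steps

# Example: six blocks spread over two disks; blocks on the same disk must be
# fetched in separate steps, so lookahead orders them by earliest use.
refs = ["a", "b", "c", "d", "e", "f"]
disks = {"a": 0, "b": 0, "c": 1, "d": 1, "e": 0, "f": 1}
for i, step in enumerate(prefetch_schedule(refs, disks)):
    print(f"I/O step {i}: {step}")
```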


foundations of computer science | 1999

A theoretical framework for memory-adaptive algorithms

Rakesh D. Barve; Jeffrey Scott Vitter

External memory algorithms play a key role in database management systems and large-scale processing systems. External memory algorithms are typically tuned for efficient performance given a fixed, statically allocated amount of internal memory. However, with the advent of real-time database systems and database systems based upon administratively defined goals, algorithms must increasingly be able to adapt in an online manner when the amount of internal memory allocated to them changes dynamically and unpredictably. We present a theoretical and applicable framework for memory-adaptive algorithms (or simply MA algorithms). We define the competitive worst-case notion of what it means for an MA algorithm to be dynamically optimal and prove fundamental lower bounds on the performance of MA algorithms for problems such as sorting, standard matrix multiplication, and several related problems. Our main tool for proving dynamic optimality is the notion of resource consumption, which measures how efficiently an MA algorithm adapts itself to memory fluctuations. We present the first dynamically optimal algorithms for sorting (based upon mergesort), permuting, FFT, permutation networks, buffer trees, (standard) matrix multiplication, and LU decomposition. In each case, dynamic optimality is demonstrated via a potential function argument showing that the algorithm's resource consumption is within a constant factor of optimal.
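As a small illustration of memory adaptivity (a sketch under simplifying assumptions, not the paper's dynamically optimal mergesort), the toy merge below widens or narrows its fan-in according to whatever buffer allocation it is given at each step.

```python
import heapq

def memory_adaptive_merge(runs, allocation_schedule):
    """Sketch of a memory-adaptive multi-way merge (illustrative only).

    runs: list of sorted lists, each standing in for one sorted run on disk.
    allocation_schedule: callable step -> number of input buffers currently
        granted; the merge adapts its fan-in to the current allocation.
    Yields merged output records in sorted order.
    """
    pending = [list(r) for r in runs if r]
    step = 0
    while pending:
        fanin = max(2, allocation_schedule(step))    # adapt to current memory grant
        group, pending = pending[:fanin], pending[fanin:]
        merged = list(heapq.merge(*group))           # merge only what memory allows
        if pending:
            pending.append(merged)                   # intermediate run "written back"
        else:
            yield from merged
        step += 1

# Example: the allocation drops from 4 buffers to 2 mid-way, forcing a narrower merge.
runs = [[1, 9], [2, 8], [3, 7], [4, 6], [5, 10]]
alloc = lambda step: 4 if step == 0 else 2
print(list(memory_adaptive_merge(runs, alloc)))
```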


SIAM Journal on Computing | 2000

Application-Controlled Paging for a Shared Cache

Rakesh D. Barve; Edward F. Grove; Jeffrey Scott Vitter

We propose a provably efficient application-controlled global strategy for organizing a cache of size k shared among P application processes. Each application has access to information about its own future page requests, and by using that local information along with randomization in the context of a global caching algorithm, we are able to break through the conventional H_k \sim \ln k lower bound on the competitive ratio for the caching problem. If the P application processes always make good cache replacement decisions, our online application-controlled caching algorithm attains a competitive ratio of 2H_{P-1}+2 \sim 2 \ln P. Typically, P is much smaller than k, perhaps by several orders of magnitude. Our competitive ratio improves upon the 2P+2 competitive ratio achieved by the deterministic application-controlled strategy of Cao, Felten, and Li. We show that no online application-controlled algorithm can have a competitive ratio better than min{H_{P-1}, H_k}, even if each application process has perfect knowledge of its individual page request sequence. Our results are with respect to a worst-case interleaving of the individual page request sequences of the P application processes. We introduce a notion of fairness in the more realistic situation when application processes do not always make good cache replacement decisions. We show that our algorithm ensures that no application process needs to evict one of its cached pages to service some page fault caused by a mistake of some other application. Our algorithm not only is fair but remains efficient; the global paging performance can be bounded in terms of the number of mistakes that application processes make.
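A quick calculation makes the gap between the quoted ratios concrete (the sample values of P and k below are arbitrary):

```python
from math import log

def harmonic(n):
    """n-th harmonic number H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / i for i in range(1, n + 1))

P, k = 16, 65536   # arbitrary sample: P processes sharing a cache of k pages

randomized_ratio    = 2 * harmonic(P - 1) + 2   # this paper's bound, ~ 2 ln P
deterministic_ratio = 2 * P + 2                 # Cao, Felten, and Li's bound
classical_bound     = harmonic(k)               # conventional H_k ~ ln k lower bound

print(f"2*H_(P-1)+2 = {randomized_ratio:.2f}  (~ 2 ln P = {2 * log(P):.2f})")
print(f"2P+2        = {deterministic_ratio}")
print(f"H_k         = {classical_bound:.2f}  (~ ln k = {log(k):.2f})")
```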


conference on learning theory | 1996

On the complexity of learning from drifting distributions

Rakesh D. Barve; Philip M. Long

We consider two models of on-line learning of binary-valued functions from drifting distributions due to Bartlett. We show that if each example is drawn from a joint distribution which changes in total variation distance by at most O(\epsilon^3 / (d \log(1/\epsilon))) between trials, then an algorithm can achieve a probability of a mistake at most \epsilon worse than the best function in a class of VC-dimension d. We prove a corresponding necessary condition of O(\epsilon^3 / d). Finally, in the case that a fixed function is to be learned from noise-free examples, we show that if the distributions on the domain generating the examples change by at most O(\epsilon^2 / (d \log(1/\epsilon))), then any consistent algorithm learns to within accuracy \epsilon.
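As a rough numeric illustration of how the tolerated per-trial drift scales with the accuracy parameter \epsilon and the VC-dimension d (a sketch that ignores the constant factors hidden in the O(.) bounds; the sample values are arbitrary):

```python
from math import log

def drift_bound_agnostic(eps, d):
    """Shape of the tolerated drift, eps^3 / (d * log(1/eps)), constants ignored."""
    return eps ** 3 / (d * log(1.0 / eps))

def drift_bound_realizable(eps, d):
    """Shape of the noise-free bound, eps^2 / (d * log(1/eps)), constants ignored."""
    return eps ** 2 / (d * log(1.0 / eps))

for eps in (0.1, 0.05, 0.01):
    print(f"eps={eps}: agnostic ~{drift_bound_agnostic(eps, 10):.2e}, "
          f"realizable ~{drift_bound_realizable(eps, 10):.2e}")
```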


measurement and modeling of computer systems | 1998

Modeling and optimizing I/O throughput of multiple disks on a bus (summary)

Rakesh D. Barve; Elizabeth A. M. Shriver; Phillip B. Gibbons; Bruce Hillyer; Yossi Matias; Jeffrey Scott Vitter



foundations of computer science | 1995

Application-controlled paging for a shared cache

Rakesh D. Barve; Edward F. Grove; Jeffrey Scott Vitter



workshop on i/o in parallel and distributed systems | 1999

Round-like behavior in multiple disks on a bus

Rakesh D. Barve; Phillip B. Gibbons; Bruce Hillyer; Yossi Matias; Elizabeth A. M. Shriver; Jeffrey Scott Vitter



Archive | 1998

System and method for modeling and optimizing I/O throughput of multiple disks on a bus

Rakesh D. Barve; Phillip B. Gibbons; Bruce Hillyer; Yossi Matias; Elizabeth A. M. Shriver; Jeffrey S. Vitter


Collaboration


Dive into Rakesh D. Barve's collaborations.

Top Co-Authors

Philip M. Long

National University of Singapore
