Publication


Featured research published by B. Gopinath.


Measurement and Modeling of Computer Systems | 1995

An inter-reference gap model for temporal locality in program behavior

Vidyadhar Phalke; B. Gopinath

The property of locality in program behavior has been studied and modelled extensively because of its applications to memory design, code optimization, multiprogramming, etc. We propose a k-th order Markov chain based scheme to model the sequence of time intervals between successive references to the same address in memory during program execution. Each unique address in a program is modelled separately. To validate our model, which we call the Inter-Reference Gap (IRG) model, we show substantial improvements in three different areas where it is applied: (1) we improve the miss ratio of the Least Recently Used (LRU) memory replacement algorithm by up to 37%; (2) we achieve up to 22% space-time product improvement over the Working Set (WS) algorithm for dynamic memory management; and (3) we propose a new trace compression technique which compresses up to 2.5% with zero error in WS simulations and up to 3.7% error in LRU simulations. All these results are obtained experimentally, via trace-driven simulations over a wide range of cache traces, page reference traces, object traces and database traces.
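The gap-prediction idea behind the IRG model can be illustrated with a minimal sketch: per address, an order-k table maps the last k observed gaps to a histogram of the gap that followed, and the most frequent successor is used as the prediction. The class and method names below are illustrative, not taken from the paper.

```python
from collections import defaultdict, Counter

class IRGModel:
    """Illustrative k-th order Markov model of inter-reference gaps (IRGs).

    For each address we keep a table mapping the last k observed gaps to a
    histogram of the gap that followed them; the most frequent successor is
    the prediction of when the address will be referenced again.
    """

    def __init__(self, k=2):
        self.k = k
        # address -> {tuple of last k gaps -> Counter of next gap}
        self.tables = defaultdict(lambda: defaultdict(Counter))
        self.history = defaultdict(list)   # address -> recent gaps
        self.last_time = {}                # address -> time of last reference

    def reference(self, address, time):
        """Record a reference to `address` at virtual `time` and update the model."""
        if address in self.last_time:
            gap = time - self.last_time[address]
            hist = self.history[address]
            if len(hist) == self.k:
                self.tables[address][tuple(hist)][gap] += 1
            hist.append(gap)
            if len(hist) > self.k:
                hist.pop(0)
        self.last_time[address] = time

    def predict_next_gap(self, address):
        """Predict the next inter-reference gap, or None if no context is known."""
        hist = tuple(self.history[address])
        counts = self.tables[address].get(hist)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

# Example: feed a short reference trace and query a prediction.
model = IRGModel(k=1)
for t, addr in enumerate([0xA, 0xB, 0xA, 0xB, 0xA, 0xB, 0xA]):
    model.reference(addr, t)
print(model.predict_next_gap(0xA))   # -> 2, since 0xA recurs every 2 steps
```

A replacement policy built on such a model would evict the line whose predicted next gap is largest, rather than the least recently used one.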


IEEE Transactions on Computers | 1997

Compression-based program characterization for improving cache memory performance

Vidyadhar Phalke; B. Gopinath

It is well known that compression and prediction are interrelated, in that high compression implies good predictability, and vice versa. We use this correlation to find predictable properties of program behavior and apply them to appropriate cache management tasks. In particular, we look at two properties of program references: (1) inter-reference gaps, defined as the time interval between successive references to the same address by the processor, and (2) cache misses, references which access the next level of the memory hierarchy. Using compression, we show that these two properties are highly predictable, and exploit them to improve cache replacement and cache prefetching, respectively. Using trace-driven simulations on SPEC and Dinero benchmarks, we demonstrate the performance of our predictive schemes and compare them with other methods for the same tasks. We show that, using our predictive replacement scheme, the miss ratio in cache memories can be improved by up to 43 percent over the well-known Least Recently Used (LRU) algorithm, closing more than 84 percent of the gap between the LRU and off-line optimal (MIN) miss ratios. For cache prefetching, we show that our scheme eliminates up to 62 percent of the total misses in D-caches, whereas an equivalent sequential prefetch scheme only removes up to 42 percent of the misses. For I-caches, our scheme performs almost the same as the sequential scheme and removes up to 78 percent of the misses.
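The link between compressibility and predictability can be illustrated with a small order-k context model in the style of text compressors. This is only a sketch of the general idea, not the predictor used in the paper; the symbols fed to it could be inter-reference gaps (to drive replacement decisions) or miss addresses (to drive prefetching).

```python
from collections import defaultdict, Counter

class ContextPredictor:
    """Illustrative order-k context model: a stream that compresses well
    under such a model is, by the same token, predictable."""

    def __init__(self, k=3):
        self.k = k
        self.counts = defaultdict(Counter)  # context tuple -> next-symbol histogram
        self.window = []

    def observe(self, symbol):
        # Update every context of length 1..k ending at the current position.
        for order in range(1, min(self.k, len(self.window)) + 1):
            ctx = tuple(self.window[-order:])
            self.counts[ctx][symbol] += 1
        self.window.append(symbol)
        self.window = self.window[-self.k:]

    def predict(self):
        # Prefer the longest matching context, backing off to shorter ones.
        for order in range(min(self.k, len(self.window)), 0, -1):
            ctx = tuple(self.window[-order:])
            if ctx in self.counts:
                return self.counts[ctx].most_common(1)[0][0]
        return None

# Example: a repeating miss-address pattern quickly becomes predictable.
p = ContextPredictor(k=2)
for addr in [1, 2, 3, 1, 2, 3, 1, 2]:
    p.observe(addr)
print(p.predict())  # -> 3
```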


Modeling, Analysis, and Simulation of Computer and Telecommunication Systems | 1995

Program modelling via inter-reference gaps and applications

Vidyadhar Phalke; B. Gopinath

Locality of reference in program behavior has been studied and modelled extensively because of its applications to CPU, cache and virtual memory design, code optimization, multiprogramming, etc. In this paper we propose a scheme based on Markov chains for modelling the time interval between successive references to the same address in a program execution. Using this technique and trace-driven simulations, it is shown that memory references are predictable and repetitive. This is used to improve the miss ratios of memory replacement algorithms. Using trace-driven simulations over a wide range of traces, we obtain improvements of up to 35% over the least recently used (LRU) replacement algorithm.


International Symposium on Memory Management | 1995

A Miss History-based Architecture for Cache Prefetching

Vidyadhar Phalke; B. Gopinath

This paper describes a hardware-controlled cache prefetching technique which uses the past behavior of misses to prefetch. We present a low-cost prefetch-on-miss architecture for implementing the prefetcher. Its requirements are (1) less than a 6.25% increase in the main memory size, and (2) a bidirectional address bus. We evaluate the performance of our prefetcher using trace-driven simulations of the ATUM and SPEC benchmark suites. For a 4-way set associative 32KB cache, with at most one prefetch on a miss, we obtain miss ratio improvements over a non-prefetching scheme in the range of 23 to 37%. This improvement is obtained at the cost of increasing bus traffic by up to 39% over the non-prefetching scheme. In comparison to the sequential method, the miss ratio improves by up to 14% and the bus traffic is reduced by up to 17%. Similar improvements over the sequential technique are obtained for larger caches and direct-mapped caches.
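The core of a miss-history prefetcher can be sketched in a few lines. This illustrative version simply remembers, per miss address, which miss followed it last time, and issues at most one prefetch per miss; it does not model the paper's memory-side table layout or bus protocol.

```python
class MissHistoryPrefetcher:
    """Illustrative miss-history prefetcher: on each miss, learn which miss
    followed the previous one, and suggest the block that historically
    followed the current miss address."""

    def __init__(self):
        self.next_after = {}   # miss address -> address of the following miss
        self.prev_miss = None

    def on_miss(self, address):
        # Learn: the current miss followed the previous one.
        if self.prev_miss is not None:
            self.next_after[self.prev_miss] = address
        self.prev_miss = address
        # Predict: issue at most one prefetch based on past behaviour.
        return self.next_after.get(address)

# Example: after one pass over a repeating miss stream, the prefetcher
# starts suggesting the block that is about to miss next.
pf = MissHistoryPrefetcher()
for addr in [0x10, 0x40, 0x80, 0x10, 0x40, 0x80]:
    suggestion = pf.on_miss(addr)
    print(hex(addr), "->", hex(suggestion) if suggestion else None)
```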


Journal of Network and Systems Management | 1999

Supporting Temporal Views in a Management Information Base

Vassilis J. Tsotras; Vidyadhar Phalke; Anil Kumar; B. Gopinath

In many network management applications, like post-mortem fault analysis or performance trends profiling, it is advantageous to have the ability to view the state of the network as it was at some time in the past. To support such Temporal Views, an efficient data organization, or access method, is needed for storing and updating network related data (as the network evolves over time) and for retrieving requested past network states. For applications where the network manager is not interested in the full (and maybe too large) snapshot of a past network state, it is useful if partial state snapshots can be extracted quickly. It is thus of particular interest to construct an access method that can efficiently support Partial Temporal Views. Efficiency implies that a requested partial temporal view should be constructed directly, without first computing the elaborate full temporal view. In this paper we present a new access method (called the Neighbor History Index) for this problem. One of the advantages of this method is that the update processing is independent of the evolution size (the total number of changes in the evolution). In addition, our method uses a small disk space overhead. We then present a general framework for organizing time-evolving network data. Our framework distinguishes between flat and hierarchical evolutions and subsequently between flat and hierarchical temporal views. We also provide a way to efficiently construct temporal views on hierarchical evolutions. This paper shows that supporting temporal views on flat or hierarchical evolutions is not expensive: our solutions use small space overhead, have small update cost, and compute temporal views fast.
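The notion of a partial temporal view can be illustrated with a simple per-object history store. This sketch is not the Neighbor History Index itself, only the kind of query it is designed to answer: reconstruct a past state for just the objects the manager asks about, never the whole network. The class and method names are ours.

```python
import bisect

class ObjectHistory:
    """Append-only history of one managed object's attribute values."""
    def __init__(self):
        self.times = []    # change timestamps, strictly increasing
        self.values = []   # value in effect from the corresponding timestamp

    def record(self, time, value):
        self.times.append(time)
        self.values.append(value)

    def value_at(self, time):
        # Most recent change at or before `time`, or None if none exists.
        i = bisect.bisect_right(self.times, time) - 1
        return self.values[i] if i >= 0 else None

class TemporalMIB:
    """Illustrative store supporting partial temporal views."""
    def __init__(self):
        self.objects = {}   # object id -> ObjectHistory

    def update(self, obj_id, time, value):
        self.objects.setdefault(obj_id, ObjectHistory()).record(time, value)

    def partial_view(self, obj_ids, time):
        # Touch only the requested objects; no full snapshot is built.
        return {oid: self.objects[oid].value_at(time)
                for oid in obj_ids if oid in self.objects}

# Example: query the state of two interfaces as of t=15 only.
mib = TemporalMIB()
mib.update("if1", 0, "up"); mib.update("if1", 10, "down"); mib.update("if1", 20, "up")
mib.update("if2", 0, "up"); mib.update("if3", 5, "down")
print(mib.partial_view(["if1", "if2"], 15))   # {'if1': 'down', 'if2': 'up'}
```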


Proceedings of SPIE | 1996

Scalable approach to providing constant rate services

T. M. Nagaraj; Vidyadhar Phalke; B. Gopinath

High-speed networks must provide constant rate services to handle applications like telephony, audio and video, multimedia services, and real-time control. An important issue in providing constant rate services is scalability. In this paper we propose a scalable approach, called the reduced control complexity network, to providing constant rate service. Our approach uses an asynchronous network that guarantees lossless transport of constant rate data. First, we consider the problem of providing FIFO-order, lossless and fault-free transport of one constant rate connection using asynchronous network elements. We accurately characterize the behavior of the traffic as it goes through the network. We find that the minimum buffer size required to guarantee lossless transport grows nearly linearly with the number of network elements traversed by the connection. We propose an asynchronous switch element in which each connection is allocated a logically separate buffer space. We use a non-work-conserving scheduling policy to guarantee the service requirements of all connections. This simplifies the problem of reasoning about network behavior. We use a static, table-driven scheduler that can be easily implemented to work at high speeds. Finally, we address the problem of generating the schedule table to meet the service rates of connections.
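A static, table-driven schedule of the kind described can be sketched as follows. The slot-assignment heuristic below is purely illustrative (it is not the schedule-generation algorithm of the paper) and assumes the requested rates fit within one frame.

```python
def build_schedule(rates, frame_slots):
    """Illustrative static schedule table for constant-rate connections.

    `rates` maps connection id -> fraction of the link rate it needs.
    Each connection gets round(frame_slots * rate) slots, spread as evenly
    as possible over the frame; unassigned slots stay idle, which makes the
    scheduler non-work-conserving but keeps per-connection jitter bounded.
    Assumes the total number of requested slots does not exceed the frame.
    """
    table = [None] * frame_slots
    for conn, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
        need = round(frame_slots * rate)
        if need == 0:
            continue
        stride = frame_slots / need
        for i in range(need):
            slot = int(i * stride)
            # Linear probe to the next free slot if the ideal one is taken.
            while table[slot % frame_slots] is not None:
                slot += 1
            table[slot % frame_slots] = conn
    return table

# Example: three connections needing 50%, 25% and 12.5% of an 8-slot frame.
print(build_schedule({"A": 0.5, "B": 0.25, "C": 0.125}, 8))
# -> ['A', 'B', 'A', 'C', 'A', 'B', 'A', None]
```

At run time the switch simply cycles through the table, transmitting from the buffer of the connection named in the current slot and staying idle otherwise.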


Archive | 1994

Directly programmable distribution element

B. Gopinath; David Kurshan; Zoran Miljanic


Vehicular Technology Conference | 1993

Channel cost of mobility

Gerard J. Foschini; B. Gopinath; Zoran Miljanic


Archive | 1995

Composition of systems of objects by interlocking coordination, projection, and distribution

B. Gopinath; David Kurshan


Archive | 1994

Directly programmable networks

B. Gopinath; David Kurshan

Collaboration


Dive into B. Gopinath's collaborations.

Top Co-Authors

Anil Kumar

Indian Institute of Technology Kanpur
