
Publication


Featured research published by K. Gopinath.


Symposium on Principles of Programming Languages | 1989

Copy elimination in functional languages

K. Gopinath; John L. Hennessy

Copy elimination is an important optimization for compiling functional languages. Copies arise because these languages lack the concepts of state and variable; hence updating an object involves a copy in a naive implementation. Copies are also possible if proper targeting has not been carried out inside functions and across function calls. Targeting is the proper selection of a storage area for evaluating an expression. By abstracting a collection of functions by a target operator, we compute targets of function bodies that can then be used to define an optimized interpreter to eliminate copies due to updates and copies across function calls. The language we consider is typed lambda calculus with higher-order functions and special constructs for array operations. Our approach can eliminate copies in divide-and-conquer problems like quicksort and bitonic sort that previous approaches could not handle. We also present some results of implementing a compiler for a single assignment language called SAL on some small but tough programs. Our results indicate that it is possible to approach performance comparable to that of imperative languages like Pascal.
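
A minimal sketch of the underlying idea, in Python rather than the paper's typed lambda calculus: a naive evaluator copies an array on every update, while an update whose target analysis shows the old value is dead can reuse the storage in place. The functions below are purely illustrative and are not the paper's optimized interpreter or the SAL compiler.

```python
# Toy illustration of copy elimination via targeting; the names and the
# evaluator are hypothetical, not the paper's SAL compiler.

def update_copy(arr, i, v):
    """Pure-functional array update: with no notion of state, every
    update must copy the whole array."""
    out = list(arr)   # O(n) copy
    out[i] = v
    return out

def update_in_place(arr, i, v):
    """If target analysis proves the old array is dead after the update,
    the same storage area can be reused and the copy disappears."""
    arr[i] = v
    return arr

a = [3, 1, 2]
b = update_copy(a, 0, 9)        # a is still live, so a copy is required
c = update_in_place(b, 1, 7)    # b has no further uses: update in place
print(a, c)
```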


International Parallel and Distributed Processing Symposium | 2000

A multi-tier RAID storage system with RAID1 and RAID5

Nitin Muppalaneni; K. Gopinath

Redundant Arrays of Inexpensive Disks (RAID) is a popular technique used to improve the reliability and performance of secondary storage. Of the various RAID levels discussed, RAID1 and RAID5 have become the most popular. Mirroring (RAID1) maintains multiple copies of the data, generally provides the best performance, and is easier to configure. The rotating-parity scheme (RAID5) is the least expensive RAID scheme and has good large-update performance, but it suffers from poor small-update performance, and performance drops sharply when a disk fails and the array enters degraded mode. Configuring RAID5 is more involved. This paper presents the design and implementation of a host-based driver for a multi-tier RAID storage system, currently with 2 tiers: a small RAID1 tier and a larger RAID5 tier. Based on access patterns, the driver automatically migrates frequently accessed data to RAID1 while demoting less frequently accessed data to RAID5. The prototype provides reliable persistence semantics for data migration between the tiers using ordered updates. Mechanisms are separated from policies through an API so that any desired policy can be implemented in trusted user processes. Finally, we present a comparison of the performance of our system with comparable systems using striping and RAID5.
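
A rough sketch of the kind of access-pattern-driven promotion/demotion policy described above; the access-count threshold, capacity accounting, and class names are illustrative assumptions, not the driver's actual policy API.

```python
# Hypothetical sketch of a tier-migration policy for a two-tier
# (RAID1 + RAID5) store driven by per-block access counts.

from collections import Counter

class TwoTierPolicy:
    def __init__(self, raid1_capacity_blocks, promote_after=8):
        self.capacity = raid1_capacity_blocks   # small, fast RAID1 tier
        self.promote_after = promote_after      # accesses before promotion
        self.hits = Counter()                   # per-block access counts
        self.raid1 = set()                      # blocks currently on RAID1

    def on_access(self, block):
        self.hits[block] += 1
        if block in self.raid1:
            return []                           # already on the fast tier
        migrations = []
        if self.hits[block] >= self.promote_after:
            if len(self.raid1) >= self.capacity:
                # Demote the coldest RAID1 block to make room.
                victim = min(self.raid1, key=lambda b: self.hits[b])
                self.raid1.discard(victim)
                migrations.append(("demote", victim, "RAID5"))
            self.raid1.add(block)
            migrations.append(("promote", block, "RAID1"))
        return migrations

policy = TwoTierPolicy(raid1_capacity_blocks=2)
for blk in [1, 1, 1, 1, 1, 1, 1, 1, 2, 2]:
    for migration in policy.on_access(blk):
        print(migration)
```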


ACM Transactions on Storage | 2011

PRESIDIO: A Framework for Efficient Archival Data Storage

Lawrence L. You; Kristal T. Pollack; Darrell D. E. Long; K. Gopinath

The ever-increasing volume of archival data that needs to be reliably retained for long periods of time and the decreasing costs of disk storage, memory, and processing have motivated the design of low-cost, high-efficiency disk-based storage systems. However, managed disk storage is still expensive. To further lower the cost, redundancy can be eliminated with the use of interfile and intrafile data compression. However, it is not clear what the optimal strategy for compressing data is, given the diverse collections of data. To create a scalable archival storage system that efficiently stores diverse data, we present PRESIDIO, a framework that selects from different space-reducing efficient storage methods (ESMs) to detect similarity and reduce or eliminate redundancy when storing objects. In addition, the framework uses a virtualized content addressable store (VCAS) that hides from the user the complexity of knowing which space-efficient techniques are used, including chunk-based deduplication or delta compression. Storing and retrieving objects are polymorphic operations independent of their content-based address. A new technique, harmonic super-fingerprinting, is also used for obtaining successively more accurate (but also more costly) measures of similarity to identify the existing objects in a very large data set that are most similar to an incoming new object. When first reported, the PRESIDIO design comprehensively introduced the notion of deduplication, which is now offered as a service in storage systems by major vendors. As an aid to the design of such systems, we evaluate and present various parameters that affect the efficiency of a storage system using empirical data.
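
One of the space-reduction techniques the framework can select, chunk-based deduplication over a content-addressed store, can be sketched as follows; the fixed-size chunking and SHA-256 addressing here are assumptions for illustration and do not reflect PRESIDIO's actual VCAS interface.

```python
# Illustrative chunk-based deduplication against a content-addressed
# store (fixed-size chunks and SHA-256 addresses are assumptions).

import hashlib

class ContentAddressedStore:
    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}                 # address -> chunk bytes

    def put(self, data: bytes):
        """Store data, keeping only one physical copy of each chunk.
        Returns the recipe (list of chunk addresses) for the object."""
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            addr = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(addr, chunk)   # dedup: skip known chunks
            recipe.append(addr)
        return recipe

    def get(self, recipe):
        return b"".join(self.chunks[a] for a in recipe)

store = ContentAddressedStore()
r1 = store.put(b"A" * 8192 + b"B" * 4096)
r2 = store.put(b"A" * 8192 + b"C" * 4096)   # shares the two "A" chunks
assert store.get(r1).startswith(b"A")
print(len(store.chunks), "unique chunks stored")   # 3, not 6
```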


Computer Aided Verification | 2005

Improved probabilistic models for 802.11 protocol verification

Amitabha Roy; K. Gopinath

The IEEE 802.11 protocol is a popular standard for wireless local area networks. Its medium access control layer (MAC) is a carrier sense multiple access with collision avoidance (CSMA/CA) design and includes an exponential backoff mechanism that makes it a possible target for probabilistic model checking. In this work, we identify ways to increase the scope of application of probabilistic model checking to the 802.11 MAC. Current techniques model only specialized cases of minimum size. To work around this problem, we identify properties of the protocol that can be used to simplify the models and make verification feasible. Using these observations, we present generalized probabilistic timed automata models that are independent of the number of stations. We optimize these through a novel abstraction technique while preserving probabilistic reachability measures. We substantiate our claims of a significant reduction due to our optimization with results from using the probabilistic model checker PRISM.
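
The exponential backoff mechanism that motivates the probabilistic treatment can be illustrated with a small Monte Carlo sketch for two contending stations; this simplified slotted model is only an illustration of the mechanism, not the generalized probabilistic timed automata verified with PRISM in the paper.

```python
# Simplified Monte Carlo sketch of 802.11-style exponential backoff for
# two contending stations; the contention-window sizes and the slotted
# collision model are illustrative assumptions only.

import random

CW_MIN, CW_MAX = 16, 1024

def collisions_until_success(max_tries=7):
    """Count collisions a tagged station suffers before a successful
    transmission, assuming one competing station always contends and a
    collision occurs when both pick the same backoff slot."""
    cw = CW_MIN
    for collisions in range(max_tries):
        mine = random.randrange(cw)       # tagged station's backoff counter
        other = random.randrange(CW_MIN)  # competing station's counter
        if mine != other:                 # different slots: no collision
            return collisions
        cw = min(2 * cw, CW_MAX)          # collision: double the window
    return max_tries

trials = 100_000
avg = sum(collisions_until_success() for _ in range(trials)) / trials
print(f"average collisions before success ~ {avg:.3f}")
```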


IEEE Computer | 2011

Software Bloat and Wasted Joules: Is Modularity a Hurdle to Green Software?

Suparna Bhattacharya; K. Gopinath; Karthick Rajamani; Manish Gupta

The paper argues that adopting an integrated analysis of software bloat and hardware platforms is necessary to realize modular software that is also green.


International Conference on Cloud Computing | 2013

Elastic Resources Framework in IaaS, Preserving Performance SLAs

Mohit Dhingra; J. Lakshmi; S. K. Nandy; Chiranjib Bhattacharyya; K. Gopinath

Elasticity in cloud systems provides the flexibility to acquire and relinquish computing resources on demand. However, in current virtualized systems, resource allocation is mostly static. Resources are allocated at VM instantiation, and any workload change requiring a significant increase or decrease in resources is handled by VM migration. Hence, cloud users tend to characterize their workloads at a coarse-grained level, which potentially leads to under-utilized VM resources or an under-performing application. A more flexible and adaptive resource allocation mechanism would benefit variable workloads, such as those characterized by web servers. In this paper, we present an elastic resources framework for the IaaS cloud layer that addresses this need. The framework provides an application workload forecasting engine that predicts the expected demand at run time; this prediction is input to the resource manager, which modulates resource allocation accordingly. Depending on the prediction error, resources may be over-allocated or under-allocated compared to the actual demand of the application. Over-allocation leads to unused resources, and under-allocation can cause under-performance. To strike a good trade-off between over-allocation and under-performance, we derive an excess cost model. In this model, excess allocated resources are captured as an over-allocation cost, and under-allocation is captured as a penalty cost for violating the application's service level agreement (SLA). A confidence interval for the predicted workload is used to minimize this excess cost with minimal effect on SLA violations. An example case study of an academic institute's web server workload is presented. Using the confidence interval to minimize excess cost, we achieve a significant reduction in the resource allocation requirement while restricting application SLA violations to below 2-3%.
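
A sketch of the kind of excess-cost trade-off described above, assuming an upper-confidence-bound allocation rule and illustrative cost coefficients; the function names and weights are hypothetical and not taken from the paper.

```python
# Illustrative excess-cost model for elastic allocation: over-allocation
# wastes resources, under-allocation incurs an SLA penalty. Coefficients
# and the confidence-interval rule are assumptions for illustration.

def allocation_from_prediction(predicted, stddev, z=1.64):
    """Allocate at the upper end of a one-sided confidence interval so
    that under-allocation (and hence SLA risk) is bounded."""
    return predicted + z * stddev

def excess_cost(allocated, actual, c_over=1.0, c_penalty=10.0):
    """Cost of unused resources plus penalty for unmet demand."""
    over = max(allocated - actual, 0.0)
    under = max(actual - allocated, 0.0)
    return c_over * over + c_penalty * under

predicted, stddev = 100.0, 12.0          # e.g. a requests/sec forecast
alloc = allocation_from_prediction(predicted, stddev)
for actual in (85.0, 100.0, 125.0):
    print(actual, round(excess_cost(alloc, actual), 1))
```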


Workshop on Power Aware Computing and Systems | 2011

The interplay of software bloat, hardware energy proportionality and system bottlenecks

Suparna Bhattacharya; Karthick Rajamani; K. Gopinath; Manish Gupta

In large flexible software systems, bloat occurs in many forms, causing excess resource utilization and resource bottlenecks. This results in lost throughput and wasted joules. However, mitigating bloat is not easy; efforts are best applied where savings would be substantial. To aid this, we develop an analytical model establishing the relation between resource bottlenecks, bloat, performance, and power. Analysis with the model places into perspective results from the first experimental study of the power-performance implications of bloat. In the experiments we find that while bloat reduction can provide as much as 40% energy savings, the degree of impact depends on hardware and software characteristics. We confirm predictions from our model with selected results from our experimental study. Our findings show that a software-only view is inadequate when assessing the effects of bloat. The impact of bloat on physical resource usage and power should be understood from a full-systems perspective in order to properly deploy bloat reduction solutions and reap their power-performance benefits.
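
A toy version of the kind of relation such a model captures, assuming a simple linear power model with an idle (load-independent) component and a bloat factor that inflates work per transaction; the parameters and formula are illustrative, not the paper's analytical model.

```python
# Toy model of how bloat interacts with energy proportionality: bloat
# inflates the work per transaction (raising utilization for a fixed
# demand), while non-proportional hardware burns idle power regardless.
# All parameters are illustrative assumptions.

def energy_per_txn(demand_txn_s, bloat_factor, peak_txn_s=1000.0,
                   p_peak=200.0, p_idle=100.0):
    """Joules per transaction for a server serving a fixed demand.
    Bloat raises only the load-dependent part of power; the idle part
    is spent whether or not the software is bloated."""
    util = min(demand_txn_s * bloat_factor / peak_txn_s, 1.0)
    power = p_idle + (p_peak - p_idle) * util     # watts
    return power / demand_txn_s                   # joules per transaction

for bloat in (1.0, 1.5, 2.0):
    print(bloat, round(energy_per_txn(400.0, bloat), 3), "J/txn")
```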


International Workshop on Model Checking Software | 2001

A SPIN-based model checker for telecommunication protocols

Vivek K. Shanbhag; K. Gopinath

Telecommunication protocol standards have in the past used, and typically still use, both an English description of the protocol (sometimes accompanied by a behavioural SDL model) and an ASN.1 specification of the data model, which likely makes the specification incomplete. ASN.1 is an ITU/ISO data definition language developed to describe abstractly the values that protocol data units can assume; this is of considerable interest for model checking, as subtyping in ASN.1 can be used to constrain/construct the state space of the protocol accurately. However, with current practice, any change to the English description cannot easily be checked for consistency while protocols are being developed. In this work, we have developed a SPIN-based tool called EASN (Enhanced ASN.1) in which the behaviour can be formally specified through a language based upon Promela for control structures but with data models from ASN.1. An attempt is also made to use international standards (the X/Open standard on ASN.1/C++ translation) where available so that the tool can be realised with pluggable components. One major design criterion is to enable incremental computation wherever possible (for example: hash values, consistency between alternate representations of state). We have used EASN to validate a simplified model of RLC (Radio Link Control) in the W-CDMA stack that imports data types from its associated ASN.1 model. In this paper, we discuss the motivation and design of the EASN language, the architecture and implementation of the verification tool for EASN, and some preliminary performance indicators.
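
The role of ASN.1 subtyping in constraining the state space can be illustrated with a small sketch: a value-range-constrained field admits far fewer states than an unconstrained machine integer, so a checker that models data at the subtype level explores a much smaller space. The classes and field names below are hypothetical and not EASN's ASN.1/C++ binding.

```python
# Hypothetical sketch: ASN.1-style subtype constraints shrink the set of
# values a protocol field can take, and hence the state space a model
# checker must explore. Not EASN's actual ASN.1/C++ translation.

from itertools import product

class ConstrainedInt:
    """ASN.1 INTEGER with a value-range subtype constraint."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def values(self):
        return range(self.lo, self.hi + 1)

# A toy RLC-like PDU with two constrained fields.
sequence_number = ConstrainedInt(0, 127)     # e.g. a 7-bit sequence number
poll_bit        = ConstrainedInt(0, 1)

constrained_states = len(list(product(sequence_number.values(),
                                      poll_bit.values())))
unconstrained_states = 2 ** 32 * 2 ** 32     # two raw 32-bit integers

print("constrained  :", constrained_states)      # 256 states
print("unconstrained:", unconstrained_states)    # astronomically larger
```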


Conference on Object-Oriented Programming, Systems, Languages, and Applications | 2013

Combining concern input with program analysis for bloat detection

Suparna Bhattacharya; K. Gopinath; Mangala Gowri Nanda

Framework based software tends to get bloated by accumulating optional features (or concerns) just in case they are needed. The good news is that such feature bloat need not always cause runtime execution bloat. The bad news is that often enough, only a few statements from an optional concern may cause execution bloat that may result in as much as 50% runtime overhead. We present a novel technique to analyze the connection between optional concerns and the potential sources of execution bloat induced by them. Our analysis automatically answers questions such as (1) whether a given set of optional concerns could lead to execution bloat and (2) which particular statements are the likely sources of bloat when those concerns are not required. The technique combines coarse grain concern input from an external source with a fine-grained static analysis. Our experimental evaluation highlights the effectiveness of such concern augmented program analysis in execution bloat assessment of ten programs.
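
A rough sketch of the kind of question such an analysis answers, assuming concerns are supplied as labels on statements and a simple fixed-point over use relations stands in for the fine-grained static analysis; the data structures and statement names are hypothetical.

```python
# Hypothetical sketch of concern-augmented bloat detection: statements
# belonging to an unused optional concern, plus statements whose results
# are consumed only by such statements, are flagged as likely sources of
# execution bloat. The use relation stands in for the paper's analysis.

def bloat_sources(uses, concern_of, unused_concerns):
    """uses: stmt -> set of statements that consume its result.
    concern_of: stmt -> concern label (None for core code).
    A statement is a likely bloat source if it belongs to an unused
    concern, or if everything that uses its result is itself bloat."""
    flagged = {s for s, c in concern_of.items() if c in unused_concerns}
    changed = True
    while changed:                                  # fixed-point iteration
        changed = False
        for stmt, consumers in uses.items():
            if stmt not in flagged and consumers and consumers <= flagged:
                flagged.add(stmt)
                changed = True
    return flagged

uses = {
    "buildIndex":   {"logStats"},       # only the stats concern uses it
    "logStats":     set(),
    "parseInput":   {"serveRequest"},   # core path
    "serveRequest": set(),
}
concerns = {"logStats": "statistics", "buildIndex": None,
            "parseInput": None, "serveRequest": None}
print(sorted(bloat_sources(uses, concerns, {"statistics"})))
# ['buildIndex', 'logStats']
```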


IEEE International Conference on High Performance Computing, Data, and Analytics | 1996

Program analysis for page size selection

K. Gopinath; Aniruddha P. Bhutkar

To support high performance architectures with multiple page sizes, it is necessary to assign proper page sizes for array memory in order to improve TLB performance as well as reduce memory contention during program execution. Typically, while a smaller page size causes higher TLB contention, a larger page size causes higher memory contention and fragmentation but also has the effect of prefetching pages required in the future, thereby reducing the number of cold page faults. Each array in a program contributes to these costs/benefits depending upon how it is referenced in the program. The page size assignment analysis determines a proper page size for every array by analyzing memory reference patterns, a problem that is shown to be NP-hard. We discuss various policies that can be followed for page size assignment in order to maximize performance, along with cost models, and present algorithms for page size selection.
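
A small sketch of a per-array cost model of the sort such an analysis might minimize, trading TLB coverage against fragmentation; the cost terms, weights, and candidate page sizes are illustrative assumptions, not the paper's exact model.

```python
# Illustrative per-array page-size cost model: larger pages cut TLB
# pressure and cold faults but waste memory to fragmentation. The cost
# terms and weights are assumptions made for this sketch.

def page_size_cost(array_bytes, page_bytes, tlb_entries=64,
                   w_tlb=1.0, w_frag=0.002):
    pages = -(-array_bytes // page_bytes)          # ceiling division
    tlb_overflow = max(pages - tlb_entries, 0)     # pages the TLB cannot cover
    fragmentation = pages * page_bytes - array_bytes
    return w_tlb * tlb_overflow + w_frag * fragmentation

def best_page_size(array_bytes, candidates=(4096, 16384, 65536, 4 << 20)):
    """Pick the cheapest page size for one array from a candidate set."""
    return min(candidates, key=lambda p: page_size_cost(array_bytes, p))

for size in (256 << 10, 8 << 20, 512 << 20):
    print(size, "->", best_page_size(size), "byte pages")
```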

Collaboration

Dive into K. Gopinath's collaborations. Top co-authors:

A. K. Srivastva (Indian Council of Agricultural Research)
Aravinda Prasad (Indian Institute of Science)
Ganesh M. Narayan (Indian Institute of Science)
Narendra Kumar (Indian Council of Agricultural Research)
Vivek K. Shanbhag (Indian Institute of Science)
Amitabha Roy (Indian Institute of Science)