Publications

Featured research published by Ravi Krishnamurthy.


international conference on management of data | 1992

Query optimization for parallel execution

Sumit Ganguly; Waqar Hasan; Ravi Krishnamurthy

The decreasing cost of computing makes it economically viable to reduce the response time of decision support queries by using parallel execution to exploit inexpensive resources. This goal poses the following query optimization problem: Minimize response time subject to constraints on throughput, which we motivate as the dual of the traditional DBMS problem. We address this novel problem in the context of Select-Project-Join queries by extending the execution space, cost model and search algorithm that are widely used in commercial DBMSs. We incorporate the sources and deterrents of parallelism in the traditional execution space. We show that a cost model can predict response time while accounting for the new aspects due to parallelism. We observe that the response time optimization metric violates a fundamental assumption in the dynamic programming algorithm that is the linchpin in the optimizers of most commercial DBMSs. We extend dynamic programming and show how optimization metrics which correctly predict response time may be designed.
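
As an illustration of why response time breaks the dynamic programming assumption, the toy two-processor model below (hypothetical numbers, not the paper's cost model) shows a subplan that looks worse in isolation yet yields the better complete plan once the remaining operators run in parallel with it; classical dynamic programming would have pruned it.

```python
# Toy illustration (hypothetical numbers, not the paper's cost model) of why
# response time violates the principle of optimality behind System R-style
# dynamic programming. A plan's cost is the work it places on each of two
# processors; response time is the load on the most-loaded processor.

def response_time(work_per_processor):
    return max(work_per_processor)

def combine(subplan, rest):
    # The rest of the query runs in parallel with the chosen subplan.
    return tuple(a + b for a, b in zip(subplan, rest))

# Two equivalent subplans for the same join subexpression.
subplan_a = (4, 4)   # spreads work across both processors
subplan_b = (7, 0)   # keeps processor 2 free

print(response_time(subplan_a), response_time(subplan_b))   # 4 7 -> A looks better

# The remaining operators of the query need 5 units on processor 2.
rest_of_query = (0, 5)

print(response_time(combine(subplan_a, rest_of_query)))     # 9
print(response_time(combine(subplan_b, rest_of_query)))     # 7 -> B wins overall
# Pruning subplan B after the first comparison would have missed the best plan.
```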


international conference on management of data | 1991

Language features for interoperability of databases with schematic discrepancies

Ravi Krishnamurthy; Witold Litwin; William Kent

Present relational language capabilities are insufficient to provide interoperability of databases even if they are all relational. In particular, unified multidatabase view definitions cannot reconcile schematic discrepancies, where data in one database correspond to metadata of another. We claim that the following new features are necessary: 1. Higher order expressions in which variables can range over data and metadata, including database names. 2. Higher order (multidatabase) view definitions, where the number of relations or attributes defined depends on the state of the database(s). 3. Complete view updatability for the users of multidatabase views. We propose these features in the context of a Horn clause based language called the Interoperable Database Language (IDL).
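
A minimal sketch of the kind of schematic discrepancy at issue, using hypothetical relations: one database stores stock quotes as rows, the other as one attribute per stock, so the first database's data are the second's metadata. Reconciling them requires folding attribute names back into data, which is what variables ranging over metadata make expressible.

```python
# Hypothetical illustration of a schematic discrepancy: db1 stores prices as
# data (one row per stock), while db2 stores the same information as metadata
# (one attribute per stock). An ordinary relational view over db2 cannot range
# over its attribute names, which is why higher order expressions are needed.

db1_quotes = [                       # relation quotes(stock, price)
    {"stock": "ibm", "price": 115},
    {"stock": "hp",  "price": 42},
]

db2_quotes = {"ibm": 115, "hp": 42}  # relation quotes(ibm, hp): one column per stock

def unify_to_rows(db2_relation):
    """Fold db2's attribute names back into data, mimicking a variable that
    ranges over metadata (attribute names) as well as data."""
    return [{"stock": attr, "price": value} for attr, value in db2_relation.items()]

unified_view = db1_quotes + unify_to_rows(db2_quotes)
print(unified_view)
```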


international conference on data engineering | 1995

Optimizing queries with materialized views

Surajit Chaudhuri; Ravi Krishnamurthy; Spyros Potamianos; Kyuseok Shim

While much work has addressed the problem of maintaining materialized views, the important question of optimizing queries in the presence of materialized views has not been resolved. In this paper, we analyze the optimization question and provide a comprehensive and efficient solution. Our solution has the desirable property that it is a simple generalization of the traditional query optimization algorithm.
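
A rough sketch of the optimization choice involved, with illustrative names and costs (not the paper's algorithm): for a subexpression that some materialized view already stores, the optimizer also considers a plan that scans the view and keeps the cheaper alternative.

```python
# Hypothetical sketch: an optimizer that, for each query subexpression, also
# considers scanning a materialized view equivalent to that subexpression and
# keeps whichever alternative is cheaper. Costs are illustrative only.

base_plan_cost = {
    ("orders", "customers"): 500,   # cost of recomputing the join
    ("orders",): 120,
    ("customers",): 40,
}

materialized_views = {
    ("orders", "customers"): 60,    # a view stores this join; scanning it costs 60
}

def best_cost(subexpression):
    candidates = [base_plan_cost[subexpression]]
    if subexpression in materialized_views:
        candidates.append(materialized_views[subexpression])
    return min(candidates)

print(best_cost(("orders", "customers")))  # 60: the view-based plan wins
print(best_cost(("orders",)))              # 120: no applicable view
```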


ACM Transactions on Database Systems | 1990

Query optimization in a memory-resident domain relational calculus database system

Kyu-Young Whang; Ravi Krishnamurthy

We present techniques for optimizing queries in memory-resident database systems. Optimization techniques in memory-resident database systems differ significantly from those in conventional disk-resident database systems. In this paper we address the following aspects of query optimization in such systems and present specific solutions for them: (1) a new approach to developing a CPU-intensive cost model; (2) new optimization strategies for main-memory query processing; (3) new insight into join algorithms and access structures that take advantage of memory residency of data; and (4) the effect of the operating system's scheduling algorithm on the memory-residency assumption. We present the interesting result that a major cost of processing queries in memory-resident database systems is incurred by the evaluation of predicates. We discuss optimization techniques using Office-by-Example (OBE), which has been under development at IBM Research. We also present the results of performance measurements, which compare favorably with the current state of the art. Despite recent work on memory-resident database systems, query optimization aspects in these systems have not been well studied. We believe this paper opens the issues of query optimization in memory-resident database systems and presents practical solutions to them.
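
The observation that predicate evaluation dominates suggests a CPU-centric cost model that counts predicate evaluations rather than page I/Os. The sketch below is an illustrative simplification, not the cost model developed in the paper.

```python
# Illustrative CPU-centric cost model (not the paper's actual formulas): with
# all data memory resident, the dominant cost of a join is the number of
# predicate evaluations rather than page I/O.

def nested_loop_predicate_evals(outer_rows, inner_rows):
    return outer_rows * inner_rows          # join predicate tested per pair

def indexed_join_predicate_evals(outer_rows, matches_per_probe):
    return outer_rows * matches_per_probe   # index probe avoids non-matching rows

print(nested_loop_predicate_evals(10_000, 5_000))   # 50,000,000 evaluations
print(indexed_join_predicate_evals(10_000, 3))      # 30,000 evaluations
```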


ACM Transactions on Information Systems | 1987

Office-by-example: an integrated office system and database manager

Kyu-Young Whang; Arthur C. Ammann; Anthony Bolmarcich; Maria Hanrahan; Guy Hochgesang; Kuan-Tsae Huang; Al Khorasani; Ravi Krishnamurthy; Gary H. Sockut; Paula Sweeney; Vance E. Waddle; Moshé M. Zloof

Office-by-Example (OBE) is an integrated office information system that has been under development at IBM Research. OBE, an extension of Query-by-Example, supports various office features such as database tables, word processing, electronic mail, graphics, images, and so forth. These seemingly heterogeneous features are integrated through a language feature called example elements. Applications involving example elements are processed by the database manager, an integrated part of the OBE system. In this paper we describe the facilities and architecture of the OBE system and discuss the techniques for integrating heterogeneous objects.


Journal of Computer and System Sciences | 1996

A Framework for Testing Safety and Effective Computability

Ravi Krishnamurthy; Raghu Ramakrishnan; Oded Shmueli

This paper presents a methodology for testing a logic program containing function symbols and built-in predicates for safety and effective computability. Safety is the property that the set of answers for a given query is finite. A related issue is whether the evaluation strategy can effectively compute all answers and terminate. We consider these problems under the assumption that queries are evaluated using a fair bottom-up fixpoint computation. We also model the use of function symbols, to construct complex terms such as lists, and arithmetic operators, by considering Datalog programs with infinite base relations over which finiteness constraints and monotonicity constraints are imposed. One of the main results of this paper is a recursive algorithm, check_clique, to test the safety and effective computability of predicates in arbitrarily complex cliques in the predicate connection graph. This algorithm takes certain procedures as parameters, and its applicability can be strengthened by making these procedures more sophisticated. We specify the properties required of these procedures precisely, and present a formal proof of correctness for the algorithm check_clique. This work can be seen as providing a framework for testing safety and effective computability of recursive programs, in some ways analogous to the capture rules framework of Ullman. A second important contribution is a framework for analyzing programs that are produced by the Magic Sets transformation, utilizing check_clique to analyze recursive cliques. The transformed program unfortunately often has a clique structure that combines several cliques of the original program. Given the complexity of algorithm check_clique, it is important to keep cliques as small as possible. We deal with this problem by considering cliques in an intermediate program, called the adorned program, produced by the Magic Sets transformation. The clique structure of the adorned program is similar to that of the original program, and by showing how to analyze the transformed program in terms of the cliques in the adorned program, we avoid the potentially expensive analysis of the cliques in the transformed program.
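
As a hedged illustration of the safety property itself (not of the check_clique algorithm), the toy bottom-up evaluation below shows a recursive rule whose answers are infinite unless a bound, playing the role of a finiteness constraint, is imposed.

```python
# Illustrative only: a toy bottom-up evaluation showing what "safety" means.
# The rule p(X+1) :- p(X) has infinitely many answers, while adding the bound
# X < 5 acts like a finiteness constraint and makes the fixpoint finite.
# This is a didactic sketch, not the paper's check_clique algorithm.

def fixpoint(seed, step, max_iterations=1000):
    facts = set(seed)
    for _ in range(max_iterations):
        new = step(facts) - facts
        if not new:
            return facts, True      # reached a fixpoint: finitely many answers
        facts |= new
    return facts, False             # gave up: evaluation would not terminate

unsafe = lambda facts: {x + 1 for x in facts}            # p(X+1) :- p(X)
safe   = lambda facts: {x + 1 for x in facts if x < 5}   # p(X+1) :- p(X), X < 5

answers, finite = fixpoint({0}, safe)
print(sorted(answers), finite)      # [0, 1, 2, 3, 4, 5] True

answers, finite = fixpoint({0}, unsafe)
print(finite)                       # False: no finite set of answers
```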


advanced information management and service | 1991

Interoperability of heterogeneous databases with schematic discrepancies

Ravi Krishnamurthy; Witold Litwin; William Kent

It is widely accepted that relational language capabilities are insufficient to provide interoperability of databases even if they are all relational. In particular, view definitions cannot reconcile schematic discrepancy, in which one database's data (values) correspond to metadata of another database. Two new features are necessary: higher order expressions, in which variables can range over data and metadata and which can be used to define a unified view over the original databases; and higher order view definitions, in which the number of relations/attributes defined depends on the state of the database, in contrast to traditional view definitions, which specify a fixed set of relations for all states of the database.


international conference on data engineering | 1995

RBE: Rendering by example

Ravi Krishnamurthy; Moshé M. Zloof

Rendering is defined to be a customized presentation of data in a way that allows users to subsequently interact with the presented data. Traditionally such a user interface would be a custom application written in a conventional programming language; in contrast, we propose an application-independent, declarative (i.e., what-you-want) language that we call Rendering By Example (RBE), with the capability to specify a wide variety of renderings. RBE is a domain calculus language over user interface widgets. Most previous domain calculus database languages (e.g., QBE, LDL, Datalog) mainly addressed the data processing problem. The main contribution in developing RBE is to model the semantics of user interactions in a declarative way. This declarative specification not only allows quick, ad hoc specification of renderings (i.e., user interfaces) but also provides a framework for understanding renderings as an abstract concept, independent of the application. Further, such a linguistic abstraction provides a basis for user-interface research. RBE is part of the ICBE language that is being prototyped in the Picture Programming project at HP Labs.


international conference on management of data | 1996

Is GUI programming a database research problem?

Nita Goyal; Charles Hoch; Ravi Krishnamurthy; Brian Meckler; Michael Suckow

Programming nontrivial GUI applications is currently an arduous task. Just as the use of a declarative language simplified the programming of database applications, can we do the same for GUI programming? Can we then import a large body of knowledge from database research? We answer these questions by describing our experience in building nontrivial GUI applications, initially using C++ and subsequently using Logic++, a higher order Horn clause logic language on complex objects with object-oriented features. We abstract a GUI application as a set of event handlers. Each event handler can be conceptualized as a transition from the old screen/program state to a new screen/program state. We use a data centric view of the screen/program state (i.e., every entity on the screen corresponds to a proxy datum in the program) and express each event handler as a query dependent update, albeit a complicated one. To express such complicated updates we use Logic++. The proxy data are expressed as derived views that are materialized on the screen. Therefore, the system must be active in maintaining these materialized views. Consequently, each event handler is conceptually an update followed by a fixpoint computation of the proxy data. Based on our experience in building the GUI system, we observe that many database techniques such as view maintenance, active databases, concurrency control, recovery, and optimization, as well as language concepts such as higher order logic, are useful in the context of GUI programming.
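
A minimal sketch of the abstraction described above, in Python rather than the authors' Logic++: program state is held as data, the screen is a derived (materialized) view over it, and each event handler is an update followed by re-derivation of the proxy data.

```python
# Hypothetical sketch of the paper's abstraction: the screen is a materialized
# view over program state, and every event handler is an update to that state
# followed by re-derivation of the proxy data shown on screen.

state = {"items": ["draft.txt"], "filter": ""}

def derive_screen(state):
    """The 'view definition': what the list widget should display."""
    return [item for item in state["items"] if state["filter"] in item]

screen = derive_screen(state)          # initially materialized view

def on_add_item(name):
    global screen
    state["items"].append(name)        # 1. update program state
    screen = derive_screen(state)      # 2. maintain the materialized view

def on_filter_changed(text):
    global screen
    state["filter"] = text
    screen = derive_screen(state)

on_add_item("notes.md")
on_filter_changed(".md")
print(screen)                          # ['notes.md']
```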


POS | 1993

PIL: An Optimizable Functional Language for Data Intensive Applications

Waqar Hasan; Ravi Krishnamurthy

The Papyrus Interface Language (PIL) has the design goal of providing optimizable and parallelizable language features. Analogous to the design philosophy of RISC instruction sets, the design of PIL is motivated by the desire to exploit query optimization and parallelization techniques. In contrast, most database programming language proposals provide features that directly match user needs irrespective of the implementation problems, analogous to CISC instruction set proposals. We have combined a functional model of computation with data types suitable for data intensive applications. A functional model gives a declarative semantics to all expressions, including “procedural” constructs such as if-then-else, while, and function calls, provided the expression is without side effects. We also provide specialized constructs for iteration over bags and sequences in order to facilitate optimization. The semantics of data types and computational abstractions are carefully chosen to retain the capability to parallelize programs. We have chosen to only partially define the order of evaluation for programs, which opens up more opportunities for reordering and for parallel execution. Just as in RISC instruction sets, where what is not included is a factor in dictating the performance of the system, we argue the need to exclude features such as object identity, semantic types, and inheritance that are popularly included in most database programming languages.
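
A rough Python stand-in for the design point about evaluation order (not PIL syntax): when iteration over a bag is written without side effects, sequential and parallel evaluation produce the same result, so the system is free to reorder or parallelize.

```python
# Python stand-in for the PIL design point (not PIL syntax): iteration over a
# bag expressed without side effects leaves the order of evaluation unspecified,
# so the system may reorder or parallelize it freely.
from multiprocessing.dummy import Pool  # thread pool, sufficient for the demo

def total_price(order):                 # pure function: no side effects
    return sum(line["qty"] * line["unit_price"] for line in order["lines"])

orders = [
    {"id": 1, "lines": [{"qty": 2, "unit_price": 10}]},
    {"id": 2, "lines": [{"qty": 1, "unit_price": 99}, {"qty": 3, "unit_price": 5}]},
]

# Sequential and parallel evaluation give the same bag of results, because the
# per-element computation cannot observe or depend on evaluation order.
sequential = [total_price(o) for o in orders]
with Pool(2) as pool:
    parallel = pool.map(total_price, orders)

print(sequential == parallel)           # True
```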
