Publication


Featured research published by Henry F. Korth.


International Conference on Management of Data | 1987

Semantics and implementation of schema evolution in object-oriented databases

Jay Banerjee; Won Kim; Hyoung-Joo Kim; Henry F. Korth

Object-oriented programming is well-suited to such data-intensive application domains as CAD/CAM, AI, and OIS (office information systems) with multimedia documents. At MCC we have built a prototype object-oriented database system, called ORION. It adds persistence and sharability to objects created and manipulated in applications implemented in an object-oriented programming environment. One of the important requirements of these applications is schema evolution, that is, the ability to dynamically make a wide variety of changes to the database schema. In this paper, following a brief review of the object-oriented data model that we support in ORION, we establish a framework for supporting schema evolution, define the semantics of schema evolution, and discuss its implementation.
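The framework described above covers dynamic changes to class definitions. As a rough, hypothetical illustration of one such change (adding an attribute with a default value) applied to existing instances, consider the following Python sketch; it is not ORION's implementation, and the class and attribute names are invented.

```python
# Toy illustration of one schema-evolution operation: adding an attribute
# with a default value to a class and to its existing instances.
# This is NOT ORION's implementation; names and structures are hypothetical.

class ClassSchema:
    def __init__(self, name, attributes):
        self.name = name
        self.attributes = dict(attributes)   # attribute name -> default value

class Database:
    def __init__(self, schema):
        self.schema = schema
        self.instances = []                  # each instance is a plain dict

    def new_instance(self, **values):
        obj = dict(self.schema.attributes)   # start from current defaults
        obj.update(values)
        self.instances.append(obj)
        return obj

    def add_attribute(self, attr_name, default):
        """Schema change: add an attribute; existing instances get the default."""
        self.schema.attributes[attr_name] = default
        for obj in self.instances:
            obj.setdefault(attr_name, default)

# Usage: evolve a hypothetical "Document" class after data already exists.
db = Database(ClassSchema("Document", {"title": "", "body": ""}))
db.new_instance(title="Spec", body="...")
db.add_attribute("author", default="unknown")   # schema evolution step
print(db.instances[0]["author"])                 # -> "unknown"
```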


International Conference on Management of Data | 2002

Covering indexes for branching path queries

Raghav Kaushik; Philip Bohannon; Jeffrey F. Naughton; Henry F. Korth

In this paper, we ask if the traditional relational query acceleration techniques of summary tables and covering indexes have analogs for branching path expression queries over tree- or graph-structured XML data. Our answer is yes: the forward-and-backward index already proposed in the literature can be viewed as a structure analogous to a summary table or covering index. We also show that it is the smallest such index that covers all branching path expression queries. While this index is very general, our experiments show that it can be so large in practice as to offer little performance improvement over evaluating queries directly on the data. Likening the forward-and-backward index to a covering index on all the attributes of several tables, we devise an index definition scheme to restrict the class of branching path expressions being indexed. The resulting index structures are dramatically smaller and perform better than the full forward-and-backward index for these classes of branching path expressions. This is roughly analogous to the situation in multidimensional or OLAP workloads, in which more highly aggregated summary tables can service a smaller subset of queries but can do so with improved performance. We evaluate the performance of our indexes on both relational decompositions of XML and a native storage technique. As expected, the performance benefit of an index is maximized when the query matches the index definition.
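For readers unfamiliar with structure indexes, the following Python sketch shows a much simpler relative of the index discussed above: a forward-only path index that groups tree nodes by their root-to-node label path and answers simple, non-branching path queries from the index alone. It is only meant to convey the idea of an index that summarizes structure; the full forward-and-backward index covering branching path expressions is considerably more involved, and the document and labels below are invented.

```python
# Simplified sketch of a structure index over tree-shaped XML: nodes are
# grouped by their root-to-node label path, and simple (non-branching)
# path queries are answered from the index alone. This is far weaker than
# the forward-and-backward index discussed in the paper; it only conveys
# the "index = summary of structure" idea. The data is hypothetical.

from collections import defaultdict

class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

def build_path_index(root):
    """Map each label path, e.g. ('site', 'people', 'person'), to its nodes."""
    index = defaultdict(list)

    def walk(node, path):
        path = path + (node.label,)
        index[path].append(node)
        for child in node.children:
            walk(child, path)

    walk(root, ())
    return index

# Hypothetical document containing /site/people/person nodes.
doc = Node("site", [Node("people", [Node("person"), Node("person")])])
idx = build_path_index(doc)

# Evaluate the simple path query /site/people/person using only the index.
print(len(idx[("site", "people", "person")]))   # -> 2
```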


International Conference on Management of Data | 1993

Database system issues in nomadic computing

Rafael Alonso; Henry F. Korth

Mobile computers and wireless networks are emerging technologies that will soon be available to a wide variety of computer users. Unlike earlier generations of laptop computers, the new generation of mobile computers can be an integrated part of a distributed computing environment, one in which users change physical location frequently. The result is a new computing paradigm, nomadic computing. This paradigm will affect the design of much of our current systems software, including that of database systems. This paper discusses in some detail the impact of nomadic computing on a number of traditional database system concepts. In particular, we point out how the reliance on short-lived batteries changes the cost assumptions underlying query processing. In these systems, power consumption competes with resource utilization in the definition of cost metrics. We also discuss how the likelihood of temporary disconnection forces consideration of alternative transaction processing protocols. The limited screen space of mobile computers along with the advent of pen-based computing provides new opportunities and new constraints on database interfaces and languages. Lastly, we believe that the movement of computers and data among networks potentially belonging to distinct, autonomous organizations creates serious security problems.
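As a hedged illustration of the point about cost metrics, the sketch below combines elapsed time and battery energy into a single plan cost; the weights and plan statistics are invented, not taken from the paper.

```python
# Minimal sketch of a query-plan cost metric that trades off elapsed time
# against energy drawn from a battery, as suggested (at a high level) by
# the abstract. The weights and plan statistics below are hypothetical.

def plan_cost(cpu_seconds, io_operations, bytes_transmitted,
              time_weight=1.0, energy_weight=1.0):
    # Rough time estimate: CPU time plus I/O latency (seconds).
    time_cost = cpu_seconds + 0.01 * io_operations
    # Rough energy estimate (joules): CPU, disk, and especially radio use.
    energy_cost = (0.5 * cpu_seconds            # CPU draw
                   + 0.02 * io_operations       # disk/flash accesses
                   + 1e-6 * bytes_transmitted)  # wireless transmission
    return time_weight * time_cost + energy_weight * energy_cost

# A plan that ships less data over the radio may win once energy is counted.
local_plan  = plan_cost(cpu_seconds=1.0, io_operations=200, bytes_transmitted=1e4)
remote_plan = plan_cost(cpu_seconds=0.2, io_operations=20,  bytes_transmitted=2e7)
print(local_plan < remote_plan)   # True here: shipping 20 MB dominates the cost
```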


International Conference on Distributed Computing Systems | 1992

A transaction model for multidatabase systems

Sharad Mehrotra; Rajeev Rastogi; Avi Silberschatz; Henry F. Korth

A transaction model for multidatabase system (MDBS) applications in which global subtransactions may be either compensatable or retriable is presented. In this model, compensation and retrying are used for recovery purposes. However, since such executions may no longer consist of atomic transactions, a correctness criterion that ensures that transactions see consistent database states is necessary. A commit protocol and a concurrency control scheme that ensure that all generated schedules are correct are also presented. The commit protocol eliminates the problem of blocking, which is characteristic of the standard 2PC protocol. The concurrency control protocol can be used in any MDBS environment, irrespective of the concurrency control protocols followed by the local DBMSs, in order to ensure serializability.
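To make the vocabulary concrete, the following Python sketch shows a coordinator that retries retriable subtransactions and runs compensating steps for already-committed compensatable ones when the global transaction must abort. It illustrates only the classification of subtransactions, not the paper's commit protocol or concurrency control scheme; all functions and data are hypothetical.

```python
# Toy sketch of executing a global transaction whose subtransactions are
# either compensatable (can be undone by a compensating step) or retriable
# (expected to succeed if retried). This illustrates the model's vocabulary,
# not the commit or concurrency-control protocol from the paper; all
# functions below are hypothetical.

def run_global_transaction(subtransactions, max_retries=3):
    committed_compensations = []  # compensations for committed subtransactions
    for sub in subtransactions:
        if sub["kind"] == "retriable":
            for attempt in range(max_retries):
                if sub["action"]():
                    break
            else:
                _compensate(committed_compensations)
                return False                       # global abort
        else:  # compensatable
            if sub["action"]():
                committed_compensations.append(sub["compensate"])
            else:
                _compensate(committed_compensations)
                return False                       # global abort
    return True                                    # global commit

def _compensate(compensations):
    for undo in reversed(compensations):           # undo in reverse order
        undo()

# Usage with hypothetical local actions:
subs = [
    {"kind": "compensatable",
     "action": lambda: (print("debit account"), True)[1],
     "compensate": lambda: print("credit account back")},
    {"kind": "retriable",
     "action": lambda: (print("append audit record"), True)[1]},
]
print(run_global_transaction(subs))   # -> True
```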


Information Systems | 1987

SQL/NF: a query language for ¬1NF relational databases

Mark A. Roth; Henry F. Korth; Don S. Batory

There is growing interest in abandoning the first-normal-form assumption on which the relational database model is based. This interest has developed from a desire to extend the applicability of the relational model beyond traditional data-processing applications. In this paper, we extend one of the most widely used relational query languages, SQL, to operate on non-first-normal-form relations. In this framework, we allow attributes to be relation-valued as well as atomic-valued (e.g. integer or character). A relation which occurs as the value of an attribute in a tuple of another relation is said to be nested. Our extended language, called SQL/NF, includes all of the power of standard SQL as well as the ability to define nested relations in the data definition language and query these relations directly in the extended data manipulation language. A variety of improvements are made to SQL; the syntax is simplified and useless constructs and arbitrary restrictions are removed.
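The sketch below illustrates the underlying ¬1NF data model rather than SQL/NF syntax: a relation-valued attribute is represented as a nested list of dictionaries, and an UNNEST-style operation flattens it back to ordinary tuples. The schema and data are invented.

```python
# Illustration of the non-first-normal-form data model behind SQL/NF: an
# attribute may itself be relation-valued ("nested"). The sketch represents
# a nested relation as a list of dicts and shows UNNEST-style flattening of
# one relation-valued attribute. The schema and data are hypothetical, and
# this is not SQL/NF syntax.

# A department relation whose "employees" attribute is itself a relation.
departments = [
    {"dept": "DB", "employees": [{"name": "Alice"}, {"name": "Bob"}]},
    {"dept": "OS", "employees": [{"name": "Carol"}]},
]

def unnest(relation, nested_attr):
    """Flatten one relation-valued attribute into ordinary (1NF) tuples."""
    flat = []
    for tup in relation:
        for inner in tup[nested_attr]:
            row = {k: v for k, v in tup.items() if k != nested_attr}
            row.update(inner)
            flat.append(row)
    return flat

for row in unnest(departments, "employees"):
    print(row)
# {'dept': 'DB', 'name': 'Alice'}
# {'dept': 'DB', 'name': 'Bob'}
# {'dept': 'OS', 'name': 'Carol'}
```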


Symposium on Principles of Database Systems | 1997

Replication and consistency: being lazy helps sometimes

Yuri Breitbart; Henry F. Korth

The issue of data replication is considered in the context of a restricted system model motivated by certain distributed data-warehousing applications. A new replica management protocol is defined for this model in which global serializability is ensured, while message overhead and deadlock frequency are less than in previously published work. The advantages of the protocol arise from its use of a lazy approach to update of secondary copies of replicated data and the use of a new concept, virtual sites, to reduce the potential for conflict among global transactions.
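A minimal sketch of the lazy-update idea follows: the primary copy is modified as part of the transaction, while secondary copies are refreshed later from a propagation queue. It deliberately omits virtual sites and the machinery that guarantees global serializability; names and structure are hypothetical.

```python
# Minimal sketch of lazy replica update: the primary copy is updated as
# part of the transaction, while secondary copies are refreshed later from
# a propagation queue. This omits what makes the paper's protocol
# interesting (virtual sites, global serializability guarantees); it only
# contrasts eager vs. lazy propagation. All names are hypothetical.

from collections import deque

class ReplicatedItem:
    def __init__(self, value):
        self.primary = value
        self.secondaries = [value, value]   # two secondary copies
        self.pending = deque()              # updates not yet propagated

    def update(self, new_value):
        """The transaction commits after touching only the primary copy."""
        self.primary = new_value
        self.pending.append(new_value)      # propagate lazily, later

    def propagate_one(self):
        """Apply one queued update to all secondaries (runs asynchronously)."""
        if self.pending:
            value = self.pending.popleft()
            self.secondaries = [value for _ in self.secondaries]

item = ReplicatedItem(0)
item.update(42)
print(item.primary, item.secondaries)   # 42 [0, 0]  (secondaries are stale)
item.propagate_one()
print(item.secondaries)                 # [42, 42]
```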


International Conference on Management of Data | 1998

Replication, consistency, and practicality: are these mutually exclusive?

Todd A. Anderson; Yuri Breitbart; Henry F. Korth; Avishai Wool

Previous papers have postulated that traditional schemes for the management of replicated data are doomed to failure in practice due to a quartic (or worse) explosion in the probability of deadlocks. In this paper, we present results of a simulation study for three recently introduced protocols that guarantee global serializability and transaction atomicity without resorting to the two-phase commit protocol. The protocols analyzed in this paper include a global locking protocol [10], a “pessimistic” protocol based on a replication graph [5], and an “optimistic” protocol based on a replication graph [7]. The results of the study show a wide range of practical applicability for the lazy replica-update approach employed in these protocols. We show that under reasonable contention conditions and a sufficiently high transaction rate, both replication-graph-based protocols outperform the global locking protocol. The distinctions among the protocols in terms of performance are significant. For example, at an offered load where 70%-80% of transactions were aborted under the global locking protocol, only 10% of transactions were aborted under the protocols based on the replication graph. The results of the study suggest that protocols based on a replication graph offer practical techniques for replica management. However, the study also shows that performance deteriorates rapidly and dramatically when transaction throughput reaches a saturation point.


ACM Transactions on Database Systems | 1984

SYSTEM/U: a database system based on the universal relation assumption

Henry F. Korth; Gabriel M. Kuper; Joan Feigenbaum; Allen Van Gelder; Jeffrey D. Ullman

System/U is a universal relation database system under development at Stanford University, implemented in C on UNIX. The system is intended to test the use of the universal view, in which the entire database is seen as one relation. This paper describes the theory behind System/U, in particular the theory of maximal objects and the connection among a set of attributes. We also describe the implementation of the DDL (Data Description Language) and the DML (Data Manipulation Language), and discuss in detail how the DDL finds maximal objects and how the DML determines the connection between the attributes that appear in a query.
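To illustrate the "connection" problem the DML solves, the toy Python sketch below greedily picks relation schemes that cover the attributes mentioned in a query. This is a simplification, not System/U's maximal-object algorithm, and the schema is invented.

```python
# Toy sketch of the "connection" problem the System/U DML solves: given the
# attributes mentioned in a query, pick relation schemes that cover them so
# the query can be answered as if over a single universal relation. The
# greedy cover below is a simplification, not System/U's maximal-object
# algorithm; the schema is hypothetical.

SCHEMES = {
    "enrolls": {"student", "course"},
    "teaches": {"instructor", "course"},
    "advises": {"instructor", "student"},
}

def connect(query_attrs):
    """Greedily choose schemes until every queried attribute is covered."""
    needed, chosen = set(query_attrs), []
    while needed:
        name, attrs = max(SCHEMES.items(), key=lambda kv: len(kv[1] & needed))
        if not attrs & needed:
            raise ValueError("attributes cannot be connected: %s" % needed)
        chosen.append(name)
        needed -= attrs
    return chosen

# A query mentioning only {student, instructor} is routed through a scheme
# that connects those attributes.
print(connect({"student", "instructor"}))   # -> ['advises']
```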


International Conference on Management of Data | 1988

Formal model of correctness without serializability

Henry F. Korth; Gregory D. Speegle

In the classical approach to transaction processing, a concurrent execution is considered to be correct if it is equivalent to a non-concurrent schedule. This notion of correctness is called serializability. Serializability has proven to be a highly useful concept for transaction systems for data-processing style applications. Recent interest in applying database concepts to applications in computer-aided design, office information systems, etc. has resulted in transactions of relatively long duration. For such transactions, there are serious consequences to requiring serializability as the notion of correctness. Specifically, such systems either impose long-duration waits or require the abortion of long transactions. In this paper, we define a transaction model that allows for several alternative notions of correctness without the requirement of serializability. After introducing the model, we investigate classes of schedules for transactions. We show that these classes are richer than analogous classes under the classical model. Finally, we show the potential practicality of our model by describing protocols that permit a transaction manager to allow correct non-serializable executions.


International Conference on Management of Data | 1991

An optimistic commit protocol for distributed transaction management

Eliezer Levy; Henry F. Korth; Abraham Silberschatz

A major disadvantage of the two-phase commit (2PC) protocol is the potential unbounded delay that transactions may have to endure if certain failures occur. By combining a novel use of compensating transactions along with an optimistic assumption, we propose a revised 2PC protocol that overcomes these difficulties. In the revised protocol, locks are released when a site votes to commit a transaction, thereby solving the indefinite blocking problem of 2PC. Semantic, rather than standard, atomicity is guaranteed as the effects of a transaction that is finally aborted are undone semantically by a compensating transaction. Relaxing standard atomicity interacts in a subtle way with correctness and concurrency control issues. Accordingly, a correctness criterion is proposed that is most appropriate when atomicity is given up for semantic atomicity. The correctness criterion reduces to serializability when no global transactions are aborted, and excludes unacceptable executions when global transactions do fail. We devise a family of practical protocols that ensure this correctness notion. These protocols restrict only global transactions, and do not incur extra messages other than the standard 2PC messages. The results are of particular importance for multidatabase systems.
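The control flow described above can be sketched as follows: each participant releases its locks when it votes to commit, and if the global decision turns out to be abort, its effects are undone by a compensating transaction. The Python below is a hypothetical illustration of that flow only, with no concurrency control or messaging.

```python
# Minimal sketch of the control flow described in the abstract: a site
# optimistically releases its locks when it votes to commit; if the global
# decision is abort, the site's effects are undone by a compensating
# transaction rather than by holding locks until the decision arrives.
# This is a control-flow illustration only; all functions are hypothetical.

class Participant:
    def __init__(self, name):
        self.name = name
        self.locks_held = True

    def vote(self):
        # Optimistic step: release locks at vote time instead of waiting
        # for the coordinator's global decision (unlike standard 2PC).
        self.locks_held = False
        print(f"{self.name}: vote COMMIT, locks released")
        return True

    def commit(self):
        print(f"{self.name}: commit made permanent")

    def compensate(self):
        # Semantic undo of the locally committed effects.
        print(f"{self.name}: run compensating transaction")

def coordinator(participants, force_abort=False):
    votes = [p.vote() for p in participants]
    if all(votes) and not force_abort:
        for p in participants:
            p.commit()
    else:
        for p in participants:
            p.compensate()    # semantic atomicity instead of blocking

coordinator([Participant("site A"), Participant("site B")], force_abort=True)
```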

Collaboration


Dive into Henry F. Korth's collaborations.

Top Co-Authors

Eliezer Levy (University of Texas at Austin)
Mark A. Roth (Air Force Institute of Technology)