Publications


Featured research published by Margaret H. Eich.


ACM Computing Surveys | 1992

Join processing in relational databases

Priti Mishra; Margaret H. Eich

The join operation is one of the fundamental relational database query operations. It facilitates the retrieval of information from two different relations based on a Cartesian product of the two relations. The join is one of the most difficult operations to implement efficiently, as no predefined links between relations are required to exist (as they are with network and hierarchical systems). The join is the only relational algebra operation that allows the combining of related tuples from relations on different attribute schemes. Since it is executed frequently and is expensive, much research effort has been applied to the optimization of join processing. In this paper, the different kinds of joins and the various implementation techniques are surveyed. These different methods are classified based on how they partition tuples from the different relations. Some require that all tuples from one relation be compared to all tuples from the other; other algorithms compare only some tuples from each. In addition, some techniques perform an explicit partitioning, whereas others partition implicitly.
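
To make the survey's partitioning distinction concrete, here is a minimal Python sketch of one explicitly partitioning method, a simple hash join. The relations and attribute names are invented for illustration; the survey classifies many more techniques than this one.

    from collections import defaultdict

    def hash_join(r, s, r_key, s_key):
        """Join relations r and s (lists of dicts) on r[r_key] == s[s_key].

        Tuples of r are partitioned into hash buckets on the join
        attribute, so each tuple of s is compared only against the
        matching bucket rather than against all of r, as a naive
        nested-loops join would do.
        """
        buckets = defaultdict(list)
        for t in r:                        # build phase: partition r
            buckets[t[r_key]].append(t)
        result = []
        for u in s:                        # probe phase: matching bucket only
            for t in buckets.get(u[s_key], []):
                result.append({**t, **u})  # combine the related tuples
        return result

    # Hypothetical relations for illustration.
    emp = [{"name": "a", "dept": 1}, {"name": "b", "dept": 2}]
    dept = [{"dept": 1, "dname": "db"}, {"dept": 2, "dname": "os"}]
    print(hash_join(emp, dept, "dept", "dept"))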


IWDM | 1988

Mars: The Design of a Main Memory Database Machine

Margaret H. Eich

The initial design of a main memory database (MMDB) backend database machine (DBM) is described. This MAin memory Recoverable database with Stable log (MARS) is designed to provide quick recovery after transaction, system, or media failures, and to provide efficient transaction processing.


International Conference on Data Engineering | 1987

A classification and comparison of main memory database recovery techniques

Margaret H. Eich

The declining cost of main memory and the need for high-performance database systems have recently inspired research into systems with massive amounts of memory and the ability to store complete databases in main memory. It is recognized that in this framework the issues concerned with efficient database recovery are more complex than in traditional DBMSs. In this paper we define a classification method for main memory database (MMDB) recovery techniques and discuss the results of a performance analysis examining the different classes.


IWDM | 1989

Main Memory Database Research Directions

Margaret H. Eich

The state of MMDB research with respect to some of the many unsolved problems is investigated. For MMDB systems to realize their full performance potential, the issues raised here must be addressed. We hope that this discussion will increase interest in main memory systems and stimulate future research activities.


International Conference on Management of Data | 1991

MMDB reload algorithms

Le Gruenwald; Margaret H. Eich

In a main memory database (MMDB), the primary copy of the database may be stored in volatile memory. When a crash occurs, a reload of the database from archive memory to main memory must be performed. It is essential that an efficient reload scheme be used to ensure that the expectations of high-performance database systems are met. This implies that the performance of any potential reload algorithm should not be measured simply by reload time, but by its impact on overall system performance. This paper presents four different reload algorithms that aim at fast transaction response time and high overall system throughput. Simulation studies comparing the algorithms indicate that the best overall approach is one based on frequency of access.
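
The idea behind the winning approach can be sketched in a few lines of Python: order the reload of archive blocks by observed access frequency so that hot data is back in main memory first. The block identifiers, frequencies, and load_block helper below are hypothetical; this is not the paper's actual algorithm.

    def frequency_ordered_reload(blocks, access_freq, load_block):
        """Reload archive blocks into main memory, hottest blocks first,
        so transactions touching frequently accessed data resume sooner."""
        for block_id in sorted(blocks, key=lambda b: access_freq.get(b, 0),
                               reverse=True):
            load_block(block_id)

    # Hypothetical usage: block 7 is the hottest, so it is reloaded first.
    freq = {3: 10, 7: 250, 9: 42}
    frequency_ordered_reload([3, 7, 9], freq,
                             lambda b: print("reloading block", b))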


International Conference on Data Engineering | 1993

Post-crash log processing for fuzzy checkpointing main memory databases

Xi Li; Margaret H. Eich

The impact of updating policy and access pattern on the performance of post-crash log processing with a fuzzy checkpointing main memory database (MMDB) is discussed. The problem of restoring the database to a consistent state and several algorithms for post-crash log processing under the various updating alternatives are reviewed. Using an analytic model, the checkpoint behavior and post-crash log processing performance of these algorithms are examined. Analytic results show that deferred updating always takes less time to process the log after a crash.
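
One common intuition for that result, sketched below under assumed semantics: with deferred updating, changes reach the database only at commit, so the post-crash log holds after-images of committed transactions only and can be replayed in a single forward pass with no undo work. The log format here is invented for illustration.

    def replay_redo_log(db, log):
        """Apply committed after-images in log order (forward pass only;
        nothing to undo, because uncommitted updates were never applied)."""
        for txn_id, page, value in log:
            db[page] = value          # redo: reinstall the committed value
        return db

    # Hypothetical redo log entries: (transaction id, page, after-image).
    log = [(1, "p1", "x"), (2, "p3", "y")]
    print(replay_redo_log({}, log))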


ACM Transactions on Database Systems | 1988

Database concurrency control using data flow graphs

Margaret H. Eich; David L. Wells

A specialized data flow graph, the Database Flow Graph (DBFG), is introduced. DBFGs may be used for scheduling database operations, particularly in an MIMD database machine environment. A DBFG explicitly maintains intertransaction and intratransaction dependencies and is constructed from the Transaction Flow Graphs (TFGs) of active transactions. A TFG, in turn, is a generalization of the query tree used, for example, in DIRECT [15]. All DBFG schedules are serializable and deadlock-free. Operations needed to create and maintain the DBFG structure as transactions are added to or removed from the system are discussed. Simulation results show that DBFG scheduling performs as well as two-phase locking.
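
The core scheduling idea can be illustrated with a small topological sketch in Python: operations form a DAG of dependencies, and an operation becomes runnable only when all of its predecessors have finished, which is what keeps every schedule dependency-respecting. The graph and operation names are invented; this is not the paper's actual data structure.

    from collections import deque

    def dbfg_schedule(ops, deps):
        """ops: operation names; deps: op -> set of ops it depends on.
        Returns one dependency-respecting execution order."""
        remaining = {op: set(deps.get(op, ())) for op in ops}
        ready = deque(op for op, d in remaining.items() if not d)
        order = []
        while ready:
            op = ready.popleft()
            order.append(op)
            for other, d in remaining.items():
                if op in d:              # op was a predecessor of other
                    d.discard(op)
                    if not d:            # all predecessors done: runnable
                        ready.append(other)
        return order

    # Hypothetical graph: t1's write must precede t2's read of item a.
    print(dbfg_schedule(
        ["t1.write_a", "t2.read_a", "t2.write_b"],
        {"t2.read_a": {"t1.write_a"}, "t2.write_b": {"t2.read_a"}}))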


IEEE Transactions on Software Engineering | 1988

Graph directed locking

Margaret H. Eich

A non-two-phase database concurrency control technique is introduced. The technique is deadlock-free, places no restrictions on the structure of the data, never requires data to be reread, never forces a transaction to be rolled back in order to achieve serializability, applies a type of lock conversion, and allows items to be released to subsequent transactions as soon as possible. The method introduced, database flow graph locking (FGL), uses a directed acyclic graph to direct the migration of locks between transactions. Unlike many previous non-two-phase methods, the database need not be structured in any specific fashion. The effect of these changes is that, with the same serializable schedule, FGL obtains a higher degree of concurrency than two-phase locking (2PL). Overhead requirements for FGL are comparable to those for 2PL, with 2PL being better in low-conflict situations and FGL better in high-conflict situations.
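
A minimal sketch of graph-directed lock migration, under the simplifying assumption that each item keeps a queue of transactions in the (acyclic) order fixed by the flow graph: a released lock migrates directly to the next waiter, so there are no waiting cycles and hence no deadlock. This queue-per-item model is illustrative, not the paper's exact protocol.

    from collections import deque

    class MigratingLock:
        def __init__(self, txn_order):
            # Transactions that will touch this item, in flow graph order.
            self.queue = deque(txn_order)

        def holder(self):
            return self.queue[0] if self.queue else None

        def release(self):
            """The holder finishes with the item; the lock migrates to the
            next transaction in graph order as soon as possible."""
            self.queue.popleft()
            return self.holder()

    lock_a = MigratingLock(["T1", "T2", "T3"])  # hypothetical item a
    print(lock_a.holder())   # T1 holds the item first
    print(lock_a.release())  # lock migrates to T2 immediately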


IEEE Transactions on Software Engineering | 1990

The performance of flow graph locking

Margaret H. Eich; Sharon M. Garard

The performance of flow graph locking (FGL) is compared with that of two-phase locking (2PL). As the data sharing level increases, FGL has a better response time than 2PL. Regardless of the data sharing or multiprogramming levels, FGL usually achieves a better throughput rate than 2PL.


Symposium on Applied Computing | 1990

Reload in a main memory database system: MARS

Le Gruenwald; Margaret H. Eich

The authors introduce two different algorithms to perform a complete reload of data from secondary storage (AM) into main memory (MM) when a system crash occurs: ordered reload with prioritization and smart reload. The first algorithm uses a cylinder as its reload granularity and does not take the access frequency into consideration. The second algorithm uses a block as its reload granularity and makes use of access frequency. Both algorithms allow the system to be brought online before the entire database is reloaded and implement the same priority reload scheme: the highest priority is given to data needed by executing transactions, the second highest priority to data needed by waiting transactions, and the last priority to the remaining data. Reload of data of lower priority is preempted by reload of data of higher priority to achieve faster system response time.
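
The three-level priority scheme lends itself to a short sketch: a heap keyed on priority approximates preemption, since a newly arriving high-priority demand is served before queued lower-priority reload work. The priority levels follow the paper's description; the helper names and data units are hypothetical.

    import heapq

    EXECUTING, WAITING, REMAINING = 0, 1, 2  # lower value = higher priority

    def priority_reload(requests, load_unit):
        """requests: (priority, data_unit) pairs; units demanded by
        executing transactions are loaded before those for waiting
        transactions, which in turn precede the remaining data."""
        heap = list(requests)
        heapq.heapify(heap)
        while heap:
            priority, unit = heapq.heappop(heap)
            load_unit(priority, unit)

    # Hypothetical demands arriving at crash-recovery time.
    demands = [(REMAINING, "blk9"), (EXECUTING, "blk2"), (WAITING, "blk5")]
    priority_reload(demands,
                    lambda p, u: print(f"priority {p}: reloading {u}"))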

Collaboration


Dive into Margaret H. Eich's collaborations.

Top Co-Authors

Priti Mishra (Southern Methodist University)
Chin-Feng Fan (Southern Methodist University)
Chris H. Corti (Southern Methodist University)
David L. Wells (Southern Methodist University)
Francisco Mariategui (Southern Methodist University)
Sharon M. Garard (Southern Methodist University)
Sohail Rafiqi (Southern Methodist University)
Wei-Li Sun (Southern Methodist University)
Xi Li (Southern Methodist University)