Arthur M. Keller
Stanford University
Publications
Featured research published by Arthur M. Keller.
international conference on management of data | 1997
Michael R. Genesereth; Arthur M. Keller; Oliver M. Duschka
Infomaster is an information integration system that provides integrated access to multiple distributed heterogeneous information sources on the Internet, thus giving the illusion of a centralized, homogeneous information system. We say that Infomaster creates a virtual data warehouse. The core of Infomaster is a facilitator that dynamically determines an efficient way to answer the user's query using as few sources as necessary and harmonizes the heterogeneities among these sources. Infomaster handles both structural and content translation to resolve differences between multiple data sources and the multiple applications for the collected data. Infomaster connects to a variety of databases using wrappers, such as for Z39.50, SQL databases through ODBC, EDI transactions, and other World Wide Web (WWW) sources. There are several WWW user interfaces to Infomaster, including forms-based and textual interfaces. Infomaster also includes a programmatic interface, and it can download results in structured form onto a client computer. Infomaster has been in production use for integrating rental housing advertisements from several newspapers (since fall 1995) and for meeting room scheduling (since winter 1996). Infomaster is also being used to integrate heterogeneous electronic product catalogs.
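The facilitator's core task of answering a query "using as few sources as necessary" can be illustrated as a greedy set-cover selection. This is a minimal sketch under assumed semantics (each source advertises a set of attributes it can supply); the source names and coverage model are illustrative, not Infomaster's actual planner.

```python
def select_sources(needed, sources):
    """Greedily choose sources until every needed attribute is covered."""
    needed = set(needed)
    chosen = []
    while needed:
        # Pick the source covering the most still-needed attributes.
        best = max(sources, key=lambda s: len(needed & sources[s]))
        if not needed & sources[best]:
            raise ValueError("no source provides: %s" % needed)
        chosen.append(best)
        needed -= sources[best]
    return chosen

# Hypothetical sources for the rental-housing integration scenario.
sources = {
    "newspaper_a": {"rent", "address"},
    "newspaper_b": {"rent", "bedrooms"},
    "city_records": {"address", "zoning"},
}
chosen = select_sources({"rent", "bedrooms", "address"}, sources)
print(chosen)
```

Greedy set cover is a standard approximation; a real facilitator would also weigh query capabilities and cost of each wrapper, not just attribute coverage.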
international conference on parallel and distributed information systems | 1994
Arthur M. Keller; Julie Basu
We propose a new client-side data-caching scheme for relational databases with a central server and multiple clients. Data are loaded into each client cache based on queries executed on the central database at the server. These queries are used to form predicates that describe the cache contents. A subsequent query at the client may be satisfied in its local cache if we can determine that the query result is entirely contained in the cache. This issue is called cache completeness. A separate issue, cache currency, deals with the effect on client caches of updates committed at the central database. We examine the various performance tradeoffs and optimization issues involved in addressing the questions of cache currency and completeness using predicate descriptions and suggest solutions that promote good dynamic behavior. Lower query-response times, reduced message traffic, higher server throughput, and better scalability are some of the expected benefits of our approach over commonly used relational server-side and object ID-based or page-based client-side caching.
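The cache-completeness test described above can be sketched with a deliberately simple predicate language: closed numeric intervals on a single column. A query is answerable locally only when its predicate is provably contained in a cached predicate. The class names are illustrative, and real predicate containment (conjunctions, joins) is far harder than this.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class IntervalPredicate:
    """A predicate of the form lo <= column <= hi."""
    column: str
    lo: float
    hi: float

    def contains(self, other: "IntervalPredicate") -> bool:
        """True if every row satisfying `other` also satisfies this predicate."""
        return (self.column == other.column
                and self.lo <= other.lo and other.hi <= self.hi)

class ClientCache:
    """Tracks which predicates describe the locally cached rows."""
    def __init__(self) -> None:
        self.cached: List[IntervalPredicate] = []

    def load(self, pred: IntervalPredicate) -> None:
        # Rows satisfying `pred` have been fetched from the server.
        self.cached.append(pred)

    def is_complete_for(self, query: IntervalPredicate) -> bool:
        # Completeness: answer locally only if the query result is
        # provably contained in some cached predicate.
        return any(c.contains(query) for c in self.cached)

cache = ClientCache()
cache.load(IntervalPredicate("salary", 30000, 90000))
print(cache.is_complete_for(IntervalPredicate("salary", 40000, 60000)))  # contained
print(cache.is_complete_for(IntervalPredicate("salary", 10000, 60000)))  # partly outside
```

The second query must go to the server even though part of its result is cached, which is exactly the tradeoff the paper's predicate descriptions are designed to manage.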
symposium on principles of database systems | 1985
Arthur M. Keller
We consider the problem of updating databases through views composed of selections, projections, and joins of a series of Boyce-Codd Normal Form relations. This involves translating updates expressed against the view to updates expressed against the database. We present five criteria that these translations must satisfy. For each type of view update (insert, delete, replace), we provide a list of templates for translation into database updates that satisfy the five criteria. We show that there cannot be any other translations that satisfy the five criteria.
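One translation template for a delete through a join view can be sketched as: remove the view tuple by deleting only from one base relation. This is a toy illustration under assumed semantics (relations as lists of dicts, a natural join on one column); it shows a single template, not the paper's full enumeration or its five criteria.

```python
def delete_through_join_view(r, s, join_col, view_tuple):
    """Translate a view delete into a delete against base relation r only.

    The view is the natural join of r and s on join_col. Deleting the
    r-tuples matching the view tuple's join value removes the view tuple
    while leaving s untouched.
    """
    key = view_tuple[join_col]
    new_r = [t for t in r if t[join_col] != key]
    return new_r, s

# Hypothetical base relations: departments and their floors.
r = [{"dept": "toys", "mgr": "ann"}, {"dept": "books", "mgr": "bob"}]
s = [{"dept": "toys", "floor": 1}, {"dept": "books", "floor": 2}]
new_r, new_s = delete_through_join_view(
    r, s, "dept", {"dept": "toys", "mgr": "ann", "floor": 1})
```

Note the side effect this template risks: if several view tuples share the same join value, all of them disappear, which is why the paper constrains acceptable translations with explicit criteria.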
international conference on management of data | 1991
Thierry Barsalou; Niki Siambela; Arthur M. Keller; Gio Wiederhold
The view-object model provides a formal basis for representing and manipulating object-based views on relational databases. In this paper, we present a scheme for handling update operations on view objects. Because a typical view object encompasses multiple relations, a view-object update request must be translated into valid operations on the underlying relational database. Building on an existing approach to update relational views, we introduce algorithms to enumerate all valid translations of the various update operations on view objects. The process of choosing a translator for view-object update occurs at view-object generation time. Once chosen, the translator can handle any update request on the view object.
international conference on data engineering | 1995
Shailesh Agarwal; Arthur M. Keller; Gio Wiederhold; Krishna C. Saraswat
In this work we address the problem of dealing with data inconsistencies while integrating data sets derived from multiple autonomous relational databases. The fundamental assumption in the classical relational model is that data is consistent and hence no support is provided for dealing with inconsistent data. Due to this limitation of the classical relational model, the semantics for detecting, representing, and manipulating inconsistent data have to be explicitly encoded in the applications by the application developer. In this paper, we propose the flexible relational model, which extends the classical relational model by providing support for inconsistent data. We present a flexible relation algebra, which provides semantics for database operations in the presence of potentially inconsistent data. Finally, we discuss issues raised for query optimization when the data may be inconsistent.
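The representational idea can be sketched as: when autonomous sources disagree on an attribute, keep the set of conflicting values instead of forcing a choice. This is a minimal illustration; the paper's flexible relation algebra defines full operator semantics over such values, which this sketch does not attempt.

```python
def merge_tuples(tuples):
    """Merge same-entity tuples; disagreeing attributes become value sets."""
    merged = {}
    for t in tuples:
        for attr, val in t.items():
            merged.setdefault(attr, set()).add(val)
    # Collapse singleton sets back to plain (consistent) values.
    return {a: (vs.pop() if len(vs) == 1 else vs) for a, vs in merged.items()}

# Hypothetical part records from two autonomous databases.
db1 = {"part": "bolt-7", "price": 10}
db2 = {"part": "bolt-7", "price": 12}
merged = merge_tuples([db1, db2])
print(merged)
```

A query over the merged tuple can then report the price as inconsistent ({10, 12}) rather than silently picking one source's value.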
international conference on management of data | 1993
Arthur M. Keller; Richard Jensen; Shailesh Agarwal
Building object-oriented applications which access relational data introduces a number of technical issues for developers who are making the transition to C++. We describe these issues and discuss how we have addressed them in Persistence, an application development tool that uses an automatic code generator to merge C++ applications with relational data. We use client-side caching to provide the application program with efficient access to the data.
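The pattern described, generated classes over relational rows with client-side caching, can be sketched with an identity cache keyed by primary key, so repeated lookups reuse one in-memory object instead of re-querying. Class and column names are illustrative, and this is Python rather than the C++ Persistence generates.

```python
class Employee:
    _cache = {}  # client-side identity cache: primary key -> object

    def __init__(self, emp_id, name):
        self.emp_id = emp_id
        self.name = name

    @classmethod
    def lookup(cls, emp_id, fetch_row):
        """Return the cached object if present; else fetch the row and cache it."""
        if emp_id not in cls._cache:
            row = fetch_row(emp_id)  # stands in for SELECT ... WHERE id = ?
            cls._cache[emp_id] = cls(emp_id, row["name"])
        return cls._cache[emp_id]

# Count round trips with a fake fetch function.
calls = []
def fake_fetch(emp_id):
    calls.append(emp_id)
    return {"name": "ada"}

a = Employee.lookup(1, fake_fetch)
b = Employee.lookup(1, fake_fetch)
print(a is b, len(calls))
```

The identity guarantee (one object per row) is what lets the application mutate the object and have every reference observe the change, while the cache avoids a database round trip per access.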
IEEE Transactions on Software Engineering | 1985
Arthur M. Keller; Marianne Winslett Wilkins
In this paper we consider approaches to updating databases containing null values and incomplete information. Our approach distinguishes between modeling incompletely known worlds and modeling changes in these worlds. As an alternative to the open and closed world assumptions, we propose the expanded closed world assumption. Under this assumption, we discuss how to perform updates on databases containing set nulls, marked nulls, and simple conditional tuples, and address some issues of refining incompletely specified information.
acm conference on hypertext | 1991
Yoshinori Hara; Arthur M. Keller; Gio Wiederhold
In order to combine hypertext with database facilities, we show how to extract an effective storage structure from given instance relationships. The schema of the structure recognizes clusters and exceptions. Extracting high-level structures is useful for providing a high performance browsing environment as well as efficient physical database design, especially when handling large amounts of data. This paper focuses on a clustering method, ACE, which generates aggregations and exceptions from the original graph structure in order to capture high level relationships. The problem of minimizing the cost function is NP-complete. We use a heuristic approach based on an extended Kernighan-Lin algorithm. We demonstrate our method on a hypertext application and on a standard random graph, compared with its analytical model. The storage reductions of input database size in main memory were 77.2% and 12.3%, respectively. It was also useful for secondary storage organization for efficient retrieval.
Distributed and Parallel Databases | 1995
Stefano Ceri; Maurice A. W. Houtsma; Arthur M. Keller; Pierangela Samarati
Update propagation and transaction atomicity are major obstacles to the development of replicated databases. Many practical applications, such as automated teller machine networks, flight reservation, and part inventory control, do not require these properties. In this paper we present an approach for incrementally updating a distributed, replicated database without requiring multi-site atomic commit protocols. We prove that the mechanism is correct, as it asymptotically performs all the updates on all the copies. Our approach has two important characteristics: it is progressive and non-blocking. Progressive means that the transaction's coordinator always commits, possibly together with a group of other sites. The update is later propagated asynchronously to the remaining sites. Non-blocking means that each site can take unilateral decisions at each step of the algorithm. Sites which cannot commit updates are brought to the same final state by means of a reconciliation mechanism. This mechanism uses the history logs, which are stored locally at each site, to bring sites to agreement. It requires a small auxiliary data structure, called a reception vector, to keep track of the time up to which the other sites are guaranteed to be up-to-date. Several optimizations to the basic mechanism are also discussed.
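The reconciliation idea above can be sketched as: each site keeps a local history log plus a reception vector recording, per peer, the log position that peer is guaranteed to have received; reconciliation then ships only the missing suffix. This is a one-directional simplification (it ignores the peer's own commits and deduplication), and the structure names are illustrative.

```python
class Site:
    def __init__(self, name, peers):
        self.name = name
        self.log = []                            # committed updates, in order
        self.reception = {p: 0 for p in peers}   # peer -> log index known received

    def commit(self, update):
        # Local commit never blocks on other sites (the non-blocking property).
        self.log.append(update)

    def reconcile_with(self, peer):
        """Send the peer every logged update it has not yet seen, then advance."""
        start = self.reception[peer.name]
        for update in self.log[start:]:
            peer.log.append(update)
        self.reception[peer.name] = len(self.log)

a = Site("A", ["B"])
b = Site("B", ["A"])
a.commit("x=1")
a.commit("y=2")
a.reconcile_with(b)      # ships the suffix B is missing
print(b.log, a.reception["B"])
```

Because each site advances its reception vector only after shipping a suffix, a later reconciliation resends nothing already delivered, which is how repeated pairwise exchanges asymptotically bring all copies to the same state.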
Mobile Networks and Applications | 1997
Arthur M. Keller; Owen M. Densmore; Wei Huang; Behfar Razavi
This paper describes an approach for handling intermittent connectivity between mobile clients and network-resident applications, which we call zippering. When the client connects with the application, communication between the client and the application is synchronous. When the client intermittently connects with the application, communication becomes asynchronous. The DIANA (Device-Independent, Asynchronous Network Access) approach allows the client to perform a variety of operations while disconnected. Finally, when the client reconnects with the application, the operations performed independently on the client are replayed to the application in the order they were originally done. Zippering allows the user at the client to fix errors detected during reconciliation and continue the transaction gracefully, instead of aborting the whole transaction when errors are detected.
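The replay mechanism above can be sketched as: while disconnected the client queues operations; on reconnect they are replayed in original order, and each rejected operation is surfaced for the user to fix rather than aborting the whole sequence. The API names are illustrative, not DIANA's.

```python
class DisconnectedClient:
    def __init__(self):
        self.queue = []

    def perform(self, op):
        # Executed locally while disconnected, logged for later replay.
        self.queue.append(op)

    def reconnect(self, apply_to_server):
        """Replay queued ops in order; return the ops the server rejected."""
        errors = []
        for op in self.queue:
            try:
                apply_to_server(op)
            except ValueError as exc:
                errors.append((op, str(exc)))   # user can fix and resubmit
        self.queue.clear()
        return errors

client = DisconnectedClient()
client.perform("book room 101")
client.perform("book room 999")

def server(op):
    # Stand-in for the network-resident application's validation.
    if "999" in op:
        raise ValueError("no such room")

errors = client.reconnect(server)
print(errors)
```

Collecting failures instead of raising on the first one is the key design choice: the successful operations stay applied, and only the conflicting ones come back to the user for correction.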