
Publication


Featured research published by Patricia G. Selinger.


Communications of the ACM | 1981

A history and evaluation of System R

Donald D. Chamberlin; Morton M. Astrahan; Michael W. Blasgen; Jim Gray; W. Frank King; Bruce G. Lindsay; Raymond A. Lorie; James W. Mehl; Thomas G. Price; Franco Putzolu; Patricia G. Selinger; Mario Schkolnick; Donald R. Slutz; Irving L. Traiger; Bradford W. Wade; Robert A. Yost

System R, an experimental database system, was constructed to demonstrate that the usability advantages of the relational data model can be realized in a system with the complete function and high performance required for everyday production use. This paper describes the three principal phases of the System R project and discusses some of the lessons learned from System R about the design of relational systems and database systems in general.


Query Processing in Database Systems | 1985

Query Processing in R*

Guy M. Lohman; C. Mohan; Laura M. Haas; Dean Daniels; Bruce G. Lindsay; Patricia G. Selinger; Paul F. Wilms

This chapter describes how statements in the SQL language are processed by the R* distributed relational database management system. R* is an experimental adaptation of System R to the distributed environment. The R* prototype is currently operational on multiple machines running the MVS operating system, and is undergoing evaluation. The R* system is a confederation of autonomous, locally-administered databases that may be geographically dispersed, yet which appear to the user as a single database. Naming conventions permit R* to access tables at remote sites without resorting to a centralized or replicated catalog, and without the user having to specify either the current location of or the communication commands required to access that table. SQL data definition statements affecting remote sites are interpreted through a distributed recursive call mechanism. Tables may be moved physically to other databases without affecting existing SQL statements. SQL data manipulation statements are compiled at each site having a table referenced in the statement, coordinated by the site at which the statement originated. As part of compilation, the distributed optimization process chooses the best place and the best way to access tables and join them together. Optimization uses dynamic programming and careful pruning to minimize total estimated execution cost at all sites, which is a linear combination of CPU, I/O, and communications (both per-message and per-byte) costs.
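The optimization described above minimizes a total estimated cost that is a linear combination of CPU, I/O, and communication (per-message and per-byte) costs, using dynamic programming with pruning. A minimal sketch of that idea follows; the weight values, function names, and plan representation are illustrative assumptions, not the actual R* implementation.

```python
# Illustrative weights for the linear cost combination (assumed values).
W_CPU, W_IO, W_MSG, W_BYTE = 1.0, 10.0, 50.0, 0.01

def plan_cost(cpu_instructions, io_operations, messages, bytes_sent):
    """Estimated total execution cost of a distributed plan fragment:
    a linear combination of CPU, I/O, per-message, and per-byte costs."""
    return (W_CPU * cpu_instructions
            + W_IO * io_operations
            + W_MSG * messages
            + W_BYTE * bytes_sent)

def prune_plans(costs_by_subset):
    """Dynamic-programming-style pruning: for each subset of tables,
    keep only the cheapest of the alternative (cost, plan) pairs."""
    return {subset: min(alternatives)
            for subset, alternatives in costs_by_subset.items()}
```

For example, a fragment costing 100 CPU instructions, 10 I/Os, 2 messages, and 1000 bytes sent would receive an estimated cost of 310.0 under these assumed weights.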


Information Sciences | 1983

Site autonomy issues in R*: A distributed database management system

Patricia G. Selinger; Dean Daniels; Laura M. Haas; Bruce G. Lindsay; Pui Ng; Paul F. Wilms; Robert A. Yost

A distributed database management system (DDBMS) must simplify the user's task of defining applications which manipulate shared data stored at multiple computing sites. To this end, the DDBMS must support transparent access to remote data. That is, any operation allowed on local data should also be possible on remote data. At the same time, because different computing sites are controlled by different individuals or organizations, the DDBMS must preserve each site's autonomy over its own data. This paper discusses some of the issues raised in the implementation of a DDBMS by the requirements of site autonomy. The issues are discussed from the perspective of the R* research project at IBM's San Jose Research Lab.


international conference on parallel and distributed information systems | 1991

A distributed catalog for heterogeneous distributed database resources

David M. Choy; Patricia G. Selinger

To support a distributed, heterogeneous computing environment, an inter-system catalog protocol is needed so that remote resources can be located, used, and maintained with little human intervention. This paper describes a scalable catalog framework, which is an extension of previous work in a distributed relational DBMS research prototype called R*. This work builds on the R* concepts to accommodate heterogeneity, to handle partitioned and replicated data, to support non-DBMS resource managers, and to enhance catalog access performance and system extensibility.
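The R* naming conventions mentioned above let a site resolve a remote object without consulting a centralized or replicated catalog, because the name itself identifies where to start looking. A minimal sketch of such a site-qualified name follows; the exact name format and field names here are assumptions for illustration, not the catalog protocol of the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SystemWideName:
    """A site-qualified object name in the spirit of R*'s system-wide
    names: it carries enough routing information that no central
    catalog is needed to begin resolution."""
    user: str         # creator of the object
    user_site: str    # site where the creator is registered
    object_name: str  # local object (e.g., table) name
    birth_site: str   # site where the object was created; its catalog
                      # can track the object's current location

def parse_name(text: str) -> SystemWideName:
    """Parse an assumed 'user@user_site.object@birth_site' format."""
    creator, rest = text.split(".", 1)
    user, user_site = creator.split("@")
    object_name, birth_site = rest.split("@")
    return SystemWideName(user, user_site, object_name, birth_site)
```

Because the birth site is embedded in the name, a table can be physically moved elsewhere while resolution still starts at a known site, consistent with the claim above that tables may move without invalidating existing statements.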


social computing behavioral modeling and prediction | 2010

Social factors in creating an integrated capability for health system modeling and simulation

Paul P. Maglio; Melissa Cefkin; Peter J. Haas; Patricia G. Selinger

The health system is a complex system of systems – changes in agriculture, transportation, economics, family life, medical practices, and many other things can have a profound influence on health and health costs. Yet today, policy-level investment decisions are frequently made by modeling individual systems in isolation. We describe two sets of issues that we face in trying to develop a platform, method, and service for integrating expert models from different domains to support health policy and investment decisions. The first set of questions concerns how to develop accurate social and behavioral health models and integrate them with engineering models of transportation, clinic operations, and so forth. The second set of questions concerns the design of an environment that will encourage and facilitate collaboration between the health modelers themselves, who come from a wide variety of disciplines.


international conference on management of data | 1983

I wish I were over there: distributed execution protocols for data definition in R*

Paul F. Wilms; Bruce G. Lindsay; Patricia G. Selinger

The design and implementation of R*, an experimental prototype of a distributed system for the management of interrelated, voluntarily cooperating, but also autonomous databases is based on several major objectives: site autonomy, transparency, ease of use and performance. This paper discusses the way data definition and control statements are executed in a distributed environment and shows how the general objectives are fulfilled. Specialized distributed execution facilities have been developed to facilitate the implementation of complex multi-site functions. This paper describes the facilities and methodology used to implement the distributed processing needed to perform multi-site data definition operations in R*.


Proceedings of the International Symposium on Database Systems of the 90s | 1990

The Impact of Hardware on Database Systems

Patricia G. Selinger

Relational database management systems translate queries posed in a non-procedural language into an efficiently executable plan. The plan consists of primitive database operators that use underlying database management system facilities or exploit the capabilities of the underlying operating system and hardware platform. The high level of the user's queries provides a substantial opportunity for hardware to improve the response time or throughput capacity of the DBMS. Such speedup can be derived from special devices for specific functions, such as sorting, or from entirely new architectures that apply more MIPS to the same data, such as database machines.


international conference on data engineering | 2005

Top five data challenges for the next decade

Patricia G. Selinger

Summary form only given. For the past three decades, those of us in the database field have principally focused attention and significant effort on technology for storing, querying, accessing, and securing data with well-known structure in high-performance data management systems. Our focus has been riveted on performance, performance, and performance. The world has changed dramatically since the early days of database management systems, however. The dynamics of DBMS have changed, moving from supporting back-office systems, to front offices, to Web-based systems. Computer architecture has changed, as we have moved from systems where 256K was a large memory size, disk drives were very expensive, most processors were water-cooled, and screens glowed with green characters. Not only has more data been produced in the last several years than in all previous history, but also more and more of it is in digital form. Accelerated pace and higher expectations driven by competitive necessity have changed the nature of database solutions from well-understood batch-oriented processing to real-time, ad hoc query on continuously streaming data. All these changes are coming together to create a new set of challenges for the decade ahead. The author discusses the top five of these challenges in the database area.


allerton conference on communication, control, and computing | 2012

Splash: Simulation optimization in complex systems of systems

Peter J. Haas; Nicole C. Barberis; Piyaphol Phoungphol; Ignacio G. Terrizzano; Wang Chiew Tan; Patricia G. Selinger; Paul P. Maglio

Decision-makers increasingly need to bring together multiple models across a broad range of disciplines to guide investment and policy decisions around highly complex issues such as population health and safety. We discuss the use of the Smarter Planet Platform for Analysis Simulation of Health (Splash) for cross-disciplinary modeling, simulation, sensitivity analysis, and optimization in the setting of complex systems of systems. Splash is a prototype system that allows combination of existing heterogeneous simulation models and datasets to create composite simulation models of complex systems. Splash, built on a combination of data-integration, workflow management, and simulation technologies, facilitates loose coupling of models via data exchange. We describe the various components of Splash, with an emphasis on the experiment-management component. This latter component uses user-supplied metadata about models and datasets to provide, via an interactive GUI, a unified view over all of the parameters in all of the component models that make up a composite model, a mechanism for selecting the factors to vary, and a means for allowing users to easily specify experimental designs for the selected factors. The experiment manager also provides a mechanism for systematically varying the inputs to the composite models. We show how the experiment manager can be used to implement some simple stochastic-optimization functionality by implementing the Rinott procedure for selecting the best system. We also implement a sensitivity-analysis method based on a fractional-factorial experimental design. We demonstrate this technology via a composite model comprising a financial-rate model and a healthcare payer model.
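The sensitivity-analysis method above uses a fractional-factorial experimental design, which covers many factors with far fewer runs than a full factorial. A minimal two-level sketch follows; the generator choice and function names are illustrative assumptions, not Splash's actual experiment-manager interface.

```python
from itertools import product

def eval_generator(base_row, indices):
    """Level of a generated column: the product of the chosen base
    columns' +1/-1 levels."""
    level = 1
    for i in indices:
        level *= base_row[i]
    return level

def fractional_factorial(n_base, generators):
    """Two-level fractional-factorial design: a full +1/-1 factorial
    over n_base factors, plus extra factor columns defined by
    `generators`, a list of index tuples (e.g. [(0, 1)] sets the
    extra factor C = A * B)."""
    runs = []
    for base in product((-1, 1), repeat=n_base):
        extra = tuple(eval_generator(base, g) for g in generators)
        runs.append(tuple(base) + extra)
    return runs
```

For instance, a 2^(3-1) design over factors A, B with generator C = A*B needs only 4 runs instead of the 8 a full 2^3 factorial would require.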


symposium on principles of database systems | 1987

Chickens and eggs—the interrelationship of systems and theory

Patricia G. Selinger

This paper describes a personal perspective on the kinds of contributions that systems research and theoretical research make to one another, particularly in the database area. Examples of each kind of contribution are given, and then several case studies from the author's personal experience are presented. The case studies illustrate database systems research where theoretical work contributed to systems results and vice versa. Areas of database systems which need more contributions from the theoretical community are also presented.
