Sunil K. Sarin
Massachusetts Institute of Technology
Publication
Featured research published by Sunil K. Sarin.
international conference on management of data | 1988
Umeshwar Dayal; Barbara T. Blaustein; Alejandro P. Buchmann; Upen S. Chakravarthy; Meichun Hsu; R. Ledin; Dennis R. McCarthy; Arnon Rosenthal; Sunil K. Sarin; Michael J. Carey; Miron Livny; Rajiv Jauhari
The HiPAC (High Performance ACtive database system) project addresses two critical problems in time-constrained data management: the handling of timing constraints in databases, and the avoidance of wasteful polling through the use of situation-action rules that are an integral part of the database and are monitored by the DBMS's condition monitor. A rich knowledge model provides the necessary primitives for the definition of timing constraints, situation-action rules, and precipitating events. The execution model allows various coupling modes between transactions, situation evaluations, and actions, and provides the framework for correct concurrent execution of transactions and triggered actions. Different approaches to scheduling of time-constrained tasks and transactions are explored, and an architecture is being designed with special emphasis on the interaction of the time-constrained, active DBMS and the operating system. Performance models are developed to evaluate the various design alternatives.
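The sketch below is a minimal, illustrative reading of the situation-action (event-condition-action) rules and coupling modes described in this abstract; the class and method names are hypothetical and do not reflect HiPAC's actual interfaces.

```python
# Illustrative sketch of situation-action (event-condition-action) rules in the
# spirit of HiPAC; all names here are hypothetical, not HiPAC's API.
from dataclasses import dataclass
from enum import Enum
from typing import Callable

class CouplingMode(Enum):
    IMMEDIATE = "immediate"  # condition/action run inside the triggering transaction
    DEFERRED = "deferred"    # run at the end of the triggering transaction, before commit
    DECOUPLED = "decoupled"  # run as a separate transaction

@dataclass
class Rule:
    event: str                          # e.g. "update:stock_price"
    condition: Callable[[dict], bool]   # evaluated against the event's data
    action: Callable[[dict], None]
    coupling: CouplingMode

class ConditionMonitor:
    """Fires matching rules when events occur, instead of polling the database."""
    def __init__(self):
        self.rules: list[Rule] = []

    def register(self, rule: Rule) -> None:
        self.rules.append(rule)

    def on_event(self, event: str, data: dict) -> None:
        for rule in self.rules:
            if rule.event == event and rule.condition(data):
                # A real system would schedule the action according to its
                # coupling mode and timing constraints; here we just invoke it.
                rule.action(data)

# Example: react to a price update without polling.
monitor = ConditionMonitor()
monitor.register(Rule(
    event="update:stock_price",
    condition=lambda d: d["price"] < d["limit"],
    action=lambda d: print(f"sell {d['symbol']}"),
    coupling=CouplingMode.DECOUPLED,
))
monitor.on_event("update:stock_price", {"symbol": "XYZ", "price": 9.5, "limit": 10.0})
```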
international conference on management of data | 1978
Michael Hammer; Sunil K. Sarin
A principal impediment to the use of declarative assertions for monitoring the state of a dynamic database is the high cost of conventional implementation techniques for such a facility. This paper presents a means of efficiently detecting violations of assertions caused by updates to a database. Our technique is based on the premise that the structure of updates to a database can generally be anticipated, and that an analysis of the potential effect that an update may have on an assertion can enable the assertion to be efficiently tested when the update is performed.

This analysis is performed by a compile-time assertion processor; for each type of update operation defined on the database, the assertion processor synthesizes a procedure that will be used to evaluate a set of given assertions whenever an operation of the given type is performed on the database. For each assertion and operation, the assertion processor performs a detailed logical analysis, called perturbation analysis, of the effect that the operation may have on the assertion. Perturbation analysis identifies conditions that can be efficiently tested at run-time (when an operation of the given type is performed) and that minimize the extent to which the assertion must be fully reevaluated; the identified conditions also enable the assertion to be tested before the update is actually performed, thereby avoiding the need for expensive back-out procedures in the case that the assertion is found to be violated. Based on this analysis, the assertion processor generates a set of alternative efficient means of determining whether or not execution of the operation causes the assertion to be violated. A database transaction processor, which estimates the performance cost of each of the alternatives in the context of the physical representation and access methods of the database, can then be used to identify the least expensive means of testing the assertion.

This work has been done in the particular context of semantic integrity assertions, but it readily extends to related problems of database monitoring. The efficiency of testing that can be achieved through the use of our assertion processing technique is comparable to that attainable through the use of hand-coded procedures. The technique therefore supports all the advantages of the declarative approach to database assertion monitoring, while retaining the level of efficiency that is usually associated with procedural methods.
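A minimal sketch of the idea behind compile-time assertion analysis, using a toy schema of my own choosing rather than anything from the paper: for a known update type, a specialized pre-update check is derived so the full assertion never has to be re-evaluated over the whole database.

```python
# Assertion: every employee's salary is at most his or her manager's salary.
# The schema, data, and function names are illustrative only.
db = {
    "alice": {"salary": 120, "manager": None},
    "bob":   {"salary": 90,  "manager": "alice"},
}

def full_check(db) -> bool:
    """Naive evaluation: scan every employee (what we want to avoid on each update)."""
    return all(
        rec["manager"] is None or rec["salary"] <= db[rec["manager"]]["salary"]
        for rec in db.values()
    )

assert full_check(db)  # the initial state satisfies the assertion

def check_raise_salary(db, emp: str, new_salary: float) -> bool:
    """Specialized, perturbation-style test for the update 'change emp's salary':
    only emp's own constraint and emp's subordinates can be affected, so only
    those are tested, and before the update is applied."""
    rec = db[emp]
    if rec["manager"] is not None and new_salary > db[rec["manager"]]["salary"]:
        return False
    return all(sub["salary"] <= new_salary
               for sub in db.values() if sub["manager"] == emp)

def set_salary(db, emp: str, new_salary: float) -> None:
    if not check_raise_salary(db, emp, new_salary):
        raise ValueError("update would violate the salary assertion")
    db[emp]["salary"] = new_salary   # checked before applying, so no back-out needed

set_salary(db, "bob", 110)      # accepted
# set_salary(db, "bob", 130)    # would be rejected before the update happens
```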
ACM SIGOA Newsletter | 1984
Sunil K. Sarin; Irene Greif
A layered architecture for the implementation of real-time conferences is presented. In a real-time conference, a group of users, each at his or her own workstation, share identical views of on-line application information. The users cooperate in a problem solving task by interactively modifying or editing the shared view or the underlying information, and can use a voice communication channel for discussion and negotiation. The lower layer in this architecture, named Ensemble, supports the sharing of arbitrary application-defined objects among the participants of a conference, and the manipulation of these objects via one or more application-defined groups of commands called activities. Ensemble provides generic facilities for sharing objects and activities, and for dynamically adding and removing participants in a conference; these can be used in constructing real-time conferencing systems for many different applications. An example is presented of how the Ensemble functions can be used to implement a shared bitmap with independent participant cursors. The relation between this layered architecture and the ISO Open Systems Interconnection reference model is discussed. In particular, it is argued that Ensemble represents a plausible first step toward a Session-layer protocol for “multi-endpoint connections”, a neglected area of communication protocol development.
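The following is an illustrative sketch, not Ensemble's actual interface: a conference shares an application-defined object (here a bitmap, echoing the abstract's example) among participants, each of whom keeps an independent cursor, while edits to the shared object are visible to everyone. All names are hypothetical.

```python
class SharedBitmap:
    """The shared, application-defined object: a single copy of the pixel data."""
    def __init__(self, width: int, height: int):
        self.pixels = [[0] * width for _ in range(height)]

    def set_pixel(self, x: int, y: int, value: int) -> None:
        self.pixels[y][x] = value

class Conference:
    """Shares the object and a 'draw' activity; cursors remain per-participant."""
    def __init__(self, shared: SharedBitmap):
        self.shared = shared
        self.cursors: dict[str, tuple[int, int]] = {}   # participant -> (x, y)

    def join(self, participant: str) -> None:
        self.cursors[participant] = (0, 0)

    def leave(self, participant: str) -> None:
        self.cursors.pop(participant, None)

    def move_cursor(self, participant: str, x: int, y: int) -> None:
        self.cursors[participant] = (x, y)              # independent per participant

    def draw(self, participant: str, value: int = 1) -> None:
        x, y = self.cursors[participant]
        self.shared.set_pixel(x, y, value)              # visible to all participants

conf = Conference(SharedBitmap(8, 8))
conf.join("alice")
conf.join("bob")
conf.move_cursor("alice", 2, 3)
conf.move_cursor("bob", 5, 5)
conf.draw("alice")   # both users' views now show the pixel alice set
```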
IEEE Transactions on Computers | 1985
Sunil K. Sarin; Barbara T. Blaustein; Charles W. Kaufman
An overview is presented of an approach to distributed database design which emphasizes high availability in the face of network partitions and other communication failures. This approach is appropriate for applications which require continued operation and can tolerate some loss of integrity of the data. Each site presents its users and application programs with the best view of the data that it can, based on the updates it has received so far. Mutual consistency of replicated copies of data is ensured by using timestamps to establish a known total ordering on all updates issued, and by a mechanism which ensures the same final result regardless of the order in which a site actually receives these updates. A mechanism is proposed, based on alerters and triggers, by which applications can deal with exception conditions which may arise as a consequence of the high-availability architecture. A prototype system which demonstrates this approach is near completion.
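A minimal sketch of the convergence idea described above, under my own simplifying assumptions (a last-writer-wins merge rather than the paper's full mechanism): each update carries a timestamp defining a known total order, and every replica keeps the latest update per item, so all sites reach the same final value regardless of the order in which updates arrive.

```python
# Structure and names are illustrative, not the paper's actual design.
from dataclasses import dataclass

@dataclass(frozen=True)
class Update:
    key: str
    value: str
    timestamp: tuple[int, int]   # (clock, site_id); the site id breaks ties deterministically

class Replica:
    def __init__(self):
        self.data: dict[str, tuple[tuple[int, int], str]] = {}

    def apply(self, u: Update) -> None:
        current = self.data.get(u.key)
        # Keep whichever update is latest in the agreed total order.
        if current is None or u.timestamp > current[0]:
            self.data[u.key] = (u.timestamp, u.value)

    def read(self, key: str) -> str | None:
        entry = self.data.get(key)
        return entry[1] if entry else None

u1 = Update("status", "open",   (1, 1))
u2 = Update("status", "closed", (2, 2))

site_a, site_b = Replica(), Replica()
for u in (u1, u2):
    site_a.apply(u)          # site A receives u1 then u2
for u in (u2, u1):
    site_b.apply(u)          # site B receives them in the opposite order
assert site_a.read("status") == site_b.read("status") == "closed"
```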
IEEE Data(base) Engineering Bulletin | 1993
Dennis R. McCarthy; Sunil K. Sarin
conference on computer supported cooperative work | 1986
Irene Greif; Sunil K. Sarin
Computer Supported Cooperative Work | 1988
Sunil K. Sarin; Irene Greif
very large data bases | 1986
Sunil K. Sarin; Charles W. Kaufman; Janet E. Somers
conference on computer supported cooperative work | 1988
Sunil K. Sarin; Irene Greif