Publication


Featured research published by Barbara T. Blaustein.


International Conference on Management of Data | 1988

The HiPAC project: combining active databases and timing constraints

Umeshwar Dayal; Barbara T. Blaustein; Alejandro P. Buchmann; Upen S. Chakravarthy; Meichun Hsu; R. Ledin; Dennis R. McCarthy; Arnon Rosenthal; Sunil K. Sarin; Michael J. Carey; Miron Livny; Rajiv Jauhari

The HiPAC (High Performance ACtive database system) project addresses two critical problems in time-constrained data management: the handling of timing constraints in databases, and the avoidance of wasteful polling through the use of situation-action rules that are an integral part of the database and are monitored by the DBMS's condition monitor. A rich knowledge model provides the necessary primitives for the definition of timing constraints, situation-action rules, and precipitating events. The execution model allows various coupling modes between transactions, situation evaluations, and actions, and provides the framework for correct concurrent execution of transactions and triggered actions. Different approaches to scheduling of time-constrained tasks and transactions are explored, and an architecture is being designed with special emphasis on the interaction of the time-constrained, active DBMS and the operating system. Performance models are developed to evaluate the various design alternatives.
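As a rough illustration of the situation-action idea (a minimal sketch with invented names, not HiPAC's implementation), a rule can be modeled as a precipitating event, a condition evaluated against database state, an action, and a coupling mode that says how the condition and action execute relative to the triggering transaction:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable

# Coupling modes roughly in the spirit of HiPAC: the rule's condition/action can run
# inside the triggering transaction, at its end, or as a separate transaction.
class CouplingMode(Enum):
    IMMEDIATE = "run inside the triggering transaction"
    DEFERRED = "run at the end of the triggering transaction, before commit"
    DETACHED = "run in a separate transaction"

@dataclass
class SituationActionRule:
    event: str                          # precipitating event, e.g. "UPDATE stock_level"
    condition: Callable[[dict], bool]   # situation evaluated against database state
    action: Callable[[dict], None]      # action triggered when the condition holds
    coupling: CouplingMode

def fire_matching_rules(event: str, db_state: dict, rules: list[SituationActionRule]) -> None:
    """Evaluate every rule registered for `event` and trigger actions whose condition holds.
    This replaces application-level polling: the condition monitor is invoked by the DBMS
    when the event occurs."""
    for rule in rules:
        if rule.event == event and rule.condition(db_state):
            # A real system would schedule the action according to rule.coupling and any
            # timing constraints; here we simply invoke it.
            rule.action(db_state)

# Hypothetical usage: reorder when stock drops below a threshold.
rules = [SituationActionRule(
    event="UPDATE stock_level",
    condition=lambda db: db["stock_level"] < 10,
    action=lambda db: print("timing-constrained action: issue reorder"),
    coupling=CouplingMode.DETACHED,
)]
fire_matching_rules("UPDATE stock_level", {"stock_level": 7}, rules)
```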


IEEE Symposium on Security and Privacy | 1997

Surviving information warfare attacks on databases

Paul Ammann; Sushil Jajodia; Catherine D. McCollum; Barbara T. Blaustein

We consider the problem of surviving information warfare attacks on databases. We adopt a fault tolerance approach to the different phases of an attack. To maintain precise information about the attack, we mark data to reflect the severity of detected damage as well as the degree to which the damaged data has been repaired. In the case of partially repaired data, integrity constraints might be violated, but data is nonetheless available to support mission objectives. We define a notion of consistency suitable for databases in which some information is known to be damaged, and other information is known to be only partially repaired. We present a protocol for normal transactions with respect to the damage markings and show that consistency preserving normal transactions maintain database consistency in the presence of damage. We present an algorithm for taking consistent snapshots of databases under attack. The snapshot algorithm has the virtue of not interfering with countermeasure transactions.
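A minimal sketch of the marking idea, with hypothetical item names and a simplified read rule rather than the paper's protocol: each item carries a marking recording detected damage and repair status, and normal transactions consult the markings so they never build on data known to be damaged and unrepaired, while partially repaired data stays available to the mission:

```python
from enum import Enum

class Marking(Enum):
    UNDAMAGED = 0           # no detected damage
    PARTIALLY_REPAIRED = 1  # availability restored; integrity constraints may still be violated
    DAMAGED = 2             # detected damage, not yet repaired

# Hypothetical database: item name -> (value, marking)
db = {
    "fuel_level":    (73, Marking.UNDAMAGED),
    "route_plan":    ("ALT-2", Marking.PARTIALLY_REPAIRED),
    "target_coords": ((0, 0), Marking.DAMAGED),
}

def normal_read(item: str, allow_partial: bool = True):
    """Read discipline for normal transactions: refuse data marked damaged; optionally
    accept partially repaired data to keep mission objectives supported."""
    value, mark = db[item]
    if mark is Marking.DAMAGED:
        raise PermissionError(f"{item} is marked damaged; blocked for normal transactions")
    if mark is Marking.PARTIALLY_REPAIRED and not allow_partial:
        raise PermissionError(f"{item} is only partially repaired")
    return value

print(normal_read("fuel_level"))   # ok
print(normal_read("route_plan"))   # available despite possible constraint violations
# normal_read("target_coords")     # would raise: known damage not yet repaired
```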


IEEE Symposium on Security and Privacy | 1993

A model of atomicity for multilevel transactions

Barbara T. Blaustein; Sushil Jajodia; Catherine D. McCollum; LouAnna Notargiacomo

Data management applications that use multilevel database management system (DBMS) capabilities have the requirement to read and write objects at multiple levels within the bounds of a multilevel transaction. The authors define a new notion of atomicity that is meaningful within the constraints of the multilevel environment. They offer a model of multilevel atomicity that defines varying degrees of atomicity and recognizes that lower security level operations within a transaction must be able to commit or abort independently of higher security level operations. Execution graphs are provided as a tool for analyzing atomicity requirements in conjunction with internal semantic interdependencies among the operations of a transaction, and rules are proved for determining the greatest degree of atomicity that can be attained for a given multilevel transaction. Several alternative transaction management algorithms that can be used to preserve multilevel atomicity are presented.
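The following is an illustrative sketch, not the authors' model: it reduces the level lattice to a total order and treats each level's operations as its own atomic unit, so lower-level operations commit or abort independently of higher-level ones:

```python
from collections import defaultdict

# Hypothetical lattice reduced to a total order, low to high.
LEVELS = ["UNCLASSIFIED", "SECRET", "TOP_SECRET"]

def run_multilevel_transaction(operations):
    """operations: list of (level, zero-argument callable). Each level's operations form
    their own atomic unit: lower-level units commit regardless of what happens at higher
    levels, so a high-level abort cannot undo or signal into lower-level work."""
    by_level = defaultdict(list)
    for level, op in operations:
        by_level[level].append(op)

    outcome = {}
    for level in LEVELS:                      # commit the low units first, independently
        try:
            results = [op() for op in by_level.get(level, [])]
            outcome[level] = ("committed", results)
        except Exception as exc:              # an abort here must not undo lower-level units
            outcome[level] = ("aborted", str(exc))
    return outcome

def failing_high_op():
    raise RuntimeError("integrity constraint violated at TOP_SECRET")

ops = [
    ("UNCLASSIFIED", lambda: "log shipment record"),
    ("SECRET",       lambda: "update mission table"),
    ("TOP_SECRET",   failing_high_op),
]
print(run_multilevel_transaction(ops))
# The UNCLASSIFIED and SECRET units stay committed even though the TOP_SECRET unit aborts.
```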


IEEE Transactions on Knowledge and Data Engineering | 1996

Correctness criteria for multilevel secure transactions

Kenneth P. Smith; Barbara T. Blaustein; Sushil Jajodia; LouAnna Notargiacomo

The benefits of distributed systems and shared database resources are widely recognized, but they often cannot be exploited by users who must protect their data by using label-based access controls. In particular, users of label-based data need to read and write data at different security levels within a single database transaction, which is not currently possible without violating multilevel security constraints. The paper presents a formal model of multilevel transactions which provide this capability. We define four ACIS (atomicity, consistency, isolation, and security) correctness properties of multilevel transactions. While atomicity, consistency and isolation are mutually achievable in standard single-site and distributed transactions, we show that the security requirements of multilevel transactions conflict with some of these goals. This forces trade-offs to be made among the ACIS correctness properties, and we define appropriate partial correctness properties. Due to such trade-offs, an important problem is to design multilevel transaction execution protocols which achieve the greatest possible degree of correctness. These protocols must provide a variety of approaches to making trade-offs according to the differing priorities of various users. We present three transaction execution protocols which achieve a high degree of correctness. These protocols exemplify the correctness trade-offs proven in the paper, and offer realistic implementation options.
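As one hedged illustration of the kind of conflict involved (the levels, operation format, and check below are assumptions, not the paper's ACIS definitions), a simple scan can flag a write at a lower level that follows a read at a higher level within the same transaction, since honoring such a dependency would flow information downward:

```python
# Hypothetical ordering of levels; higher index = more sensitive.
RANK = {"UNCLASSIFIED": 0, "SECRET": 1, "TOP_SECRET": 2}

def downward_flow_violations(operations):
    """operations: ordered list of dicts like {"kind": "read"/"write", "level": ...}.
    Returns the indices of writes whose level is strictly below a level already read
    in this transaction, i.e. potential downward flows of information."""
    highest_read = -1
    violations = []
    for i, op in enumerate(operations):
        if op["kind"] == "read":
            highest_read = max(highest_read, RANK[op["level"]])
        elif op["kind"] == "write" and RANK[op["level"]] < highest_read:
            violations.append(i)
    return violations

tx = [
    {"kind": "read",  "level": "SECRET"},
    {"kind": "write", "level": "SECRET"},        # fine: same level
    {"kind": "write", "level": "UNCLASSIFIED"},  # flagged: may depend on the SECRET read
]
print(downward_flow_violations(tx))   # -> [2]
```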


Information Reuse and Integration | 2011

PLUS: A provenance manager for integrated information

Adriane Chapman; Barbara T. Blaustein; Len Seligman; M. David Allen

It can be difficult to fully understand the result of integrating information from diverse sources. When all the information comes from a single organization, there is a collective knowledge about where it came from and whether it can be trusted. Unfortunately, once information from multiple organizations is integrated, there is no longer a shared knowledge of the data and its quality. It is often impossible to view and judge the information from a different organization; when errors occur, notification does not always reach all users of the data. We describe how a multi-organizational provenance store that collects provenance from heterogeneous systems addresses these problems. Unlike most provenance systems, we cope with an open world, where the data usage is not determined in advance and can take place across many systems and organizations.
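A toy sketch of a multi-organizational provenance store (the schema and method names are assumptions, not PLUS's data model): each node records the organization that reported it, and a lineage query walks derivation edges across organizational boundaries:

```python
from collections import defaultdict

class ProvenanceStore:
    """Toy multi-organization provenance graph: nodes are data items or process
    invocations tagged with the reporting organization; edges point from an output
    to the inputs and processes it was derived from."""

    def __init__(self):
        self.nodes = {}                         # node_id -> {"org": ..., "kind": ...}
        self.derived_from = defaultdict(list)   # node_id -> list of parent node_ids

    def report(self, node_id, org, kind):
        self.nodes[node_id] = {"org": org, "kind": kind}

    def link(self, output_id, input_id):
        self.derived_from[output_id].append(input_id)

    def lineage(self, node_id):
        """All upstream nodes, possibly spanning several organizations."""
        seen, stack = [], [node_id]
        while stack:
            current = stack.pop()
            for parent in self.derived_from[current]:
                if parent not in seen:
                    seen.append(parent)
                    stack.append(parent)
        return [(p, self.nodes[p]["org"]) for p in seen]

store = ProvenanceStore()
store.report("sensor_feed", org="Agency A", kind="data")
store.report("fusion_job", org="Agency B", kind="process")
store.report("fused_report", org="Agency B", kind="data")
store.link("fusion_job", "sensor_feed")
store.link("fused_report", "fusion_job")
print(store.lineage("fused_report"))
# [('fusion_job', 'Agency B'), ('sensor_feed', 'Agency A')]
```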


International Provenance and Annotation Workshop | 2010

Capturing Provenance in the Wild

M. David Allen; Adriane Chapman; Barbara T. Blaustein; Len Seligman

All current provenance systems are “closed world” systems; provenance is collected within the confines of a well understood, pre-planned system. However, when users compose services from heterogeneous systems and organizations to form a new application, it is impossible to track the provenance in the new system using currently available work. In this work, we describe the ability to compose multiple provenance-unaware services in an “open world” system and still collect provenance information about their execution. Our approach is implemented using the PLUS provenance system and the open source MULE Enterprise Service Bus. Our evaluations show that this approach is scalable and has minimal overhead.
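A hedged sketch of the general pattern rather than the PLUS/MULE integration itself: if invocations of provenance-unaware services pass through a common choke point, a thin wrapper there can record a process node with its inputs and outputs without modifying the services:

```python
import time
import uuid

PROVENANCE_LOG = []   # stand-in for a call to a provenance store such as PLUS

def with_provenance(service_name, service_fn):
    """Wrap a provenance-unaware service so each invocation is recorded as a process
    node linking its inputs to its outputs."""
    def wrapped(payload):
        invocation_id = str(uuid.uuid4())
        started = time.time()
        result = service_fn(payload)            # the service itself is unchanged
        PROVENANCE_LOG.append({
            "invocation": invocation_id,
            "service": service_name,
            "inputs": payload,
            "outputs": result,
            "duration_s": round(time.time() - started, 6),
        })
        return result
    return wrapped

# Hypothetical composed pipeline of two services that know nothing about provenance.
geocode = with_provenance("geocoder", lambda msg: {**msg, "coords": (38.9, -77.0)})
enrich = with_provenance("enricher", lambda msg: {**msg, "risk": "low"})

out = enrich(geocode({"address": "1600 Example Ave"}))
print(len(PROVENANCE_LOG), "invocations captured")   # 2
```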


Future Generation Computer Systems | 2015

What do we do now? Workflows for an unpredictable world

M. David Allen; Adriane Chapman; Barbara T. Blaustein; Lisa Mak

Workflow systems permit organization of many individual subtasks into a cohesive whole in order to accomplish a specific mission. For many government and business missions, these systems are used to manage repetitive processes, such as large data-processing and exploitation pipelines. Government missions with strong interactions with the real world are extremely dynamic, as are all missions dealing with error-prone or changing data streams. As data-processing workflows are shared, the sharing entities may find that certain parts of the workflow must be adapted to the new environment or mission. Extremely dynamic environments call for capabilities that support agile operations and pipeline sharing by making it possible to choose relevant actions when a situation invalidates the assumptions of the current execution. We contribute a vision for discovery of new steps in adaptive workflow systems, suitability functions that can discover candidate alternatives, and a way forward for sourcing options for decision-makers, without the strong assumptions required by previous work. We adapt some work in schema matching towards this problem, citing key differences between the two sets of challenges.
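An illustrative sketch under assumed names and scoring (not the paper's suitability functions): candidate replacement steps are ranked by how well their declared inputs and outputs overlap with the slot left by an invalidated step, in the spirit of schema matching:

```python
def suitability(required, candidate):
    """required/candidate: dicts with 'inputs' and 'outputs' as sets of field names.
    Jaccard-style overlap on inputs and outputs; 1.0 means a drop-in replacement."""
    def overlap(a, b):
        return len(a & b) / len(a | b) if (a | b) else 1.0
    return 0.5 * overlap(required["inputs"], candidate["inputs"]) + \
           0.5 * overlap(required["outputs"], candidate["outputs"])

# The step that just became unusable expected these fields.
slot = {"inputs": {"image", "timestamp"}, "outputs": {"object_tracks"}}

candidates = {
    "legacy_tracker": {"inputs": {"image", "timestamp"}, "outputs": {"object_tracks"}},
    "radar_tracker":  {"inputs": {"radar_sweep"},        "outputs": {"object_tracks"}},
    "image_archiver": {"inputs": {"image"},              "outputs": {"archive_id"}},
}

ranked = sorted(candidates, key=lambda name: suitability(slot, candidates[name]), reverse=True)
print(ranked)   # options presented to the decision-maker, best match first
```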


International Provenance and Annotation Workshop | 2014

Engineering Choices for Open World Provenance

M. David Allen; Adriane Chapman; Barbara T. Blaustein

This work outlines engineering decisions required to support a provenance system in an open world where systems are not under any common control and use many different technologies. Real U.S. government applications have shown us the need for specialized identity techniques, flexible storage, scalability testing, protection of sensitive information, and customizable provenance queries. We analyze tradeoffs for approaches to each area, focusing more on maintaining graph connectivity and breadth of capture, rather than on fine-grained/detailed capture as in other works. We implement each technique in the PLUS system, test its real-time efficiency, and describe the results.
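One hedged illustration of the identity question (an assumption, not PLUS's actual scheme): deriving a node identifier deterministically from an artifact's descriptive content lets independently captured provenance fragments from different organizations attach to the same node and keep the graph connected:

```python
import hashlib
import json

def node_id(record: dict) -> str:
    """Derive a stable identifier from the artifact's descriptive content so that two
    organizations reporting the same artifact produce the same node in the graph."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

# Two systems describe the same file independently; the IDs coincide, so their
# provenance edges attach to a single node (record contents are hypothetical).
a = {"name": "daily_feed.csv", "sha256": "ab12cd34ef", "size": 10482}
b = {"size": 10482, "sha256": "ab12cd34ef", "name": "daily_feed.csv"}
print(node_id(a) == node_id(b))   # True
```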


IEEE International Conference on Technologies for Homeland Security | 2009

Information interoperability and provenance for emergency preparedness and response

Len Seligman; Barbara T. Blaustein; Peter Mork; Kenneth P. Smith; Neal Rothleder

Improved situation awareness is a key enabler of better emergency preparedness and response (EP&R). This paper describes two important challenges: information interoperability and provenance. The former enables meaningful information exchange across separately developed systems, while the latter gives users context that helps them interpret shared information and make trust decisions. We present applied research in information interoperability and provenance, describe our collaborations with leading industrial and academic partners, and illustrate how the resulting tools improve information sharing during preparation, training/exercises, ongoing operations, and response.


Journal of Computer Security | 1995

Merging Models: Integrity, Dynamic Separation of Duty and Trusted Data Management

LouAnna Notargiacomo; Barbara T. Blaustein; Catherine D. McCollum

One of the most important responsibilities of a database management system (DBMS) is maintaining the integrity of data. Traditional database integrity mechanisms have evolved in DBMSs to fulfill this need, including transaction management to maintain consistent results when requests execute concurrently and explicitly asserted integrity constraints to limit the values deemed legal. DBMSs also provide access controls that limit who is permitted to modify data. Despite these controls, however, DBMSs are still vulnerable to integrity violations due to users modifying data in unexpected ways or abusing their access authorizations for fraudulent or malicious purposes. Recent work in generalized integrity models, such as the Clark-Wilson model [Clark 1987, Clark 1988] and separation of duty models [Sandhu 1988, Badger 1989], provides new approaches for addressing these additional integrity needs. This paper interprets the Clark-Wilson model in the context of a DBMS in general, and of a trusted relational DBMS in particular. It presents a layered policy for Clark-Wilson integrity and dynamic separation of duty that can augment the conventional database integrity capabilities of a commercial trusted DBMS and can coexist with its existing policies. Building on existing models, our dynamic separation of duty model defines a general control structure and dynamic authorization capabilities. Clark-Wilson integrity and separation of duty are realized in the policy as interpreted in terms of DBMS objects and their interrelationships.
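A minimal sketch of the dynamic separation-of-duty idea described above (the step names and control structure are illustrative): the check happens at execution time, so a user may be authorized for both steps in general yet may not perform two conflicting steps on the same transaction instance:

```python
# Pairs of steps the policy treats as conflicting within one transaction instance.
CONFLICTING_STEPS = {("prepare_payment", "approve_payment")}

# history: transaction_id -> {step_name: user who performed it}
history = {}

def perform(tx_id: str, step: str, user: str) -> None:
    """Dynamic separation of duty: reject the step if this user already performed a
    conflicting step on the same transaction, regardless of the roles they hold."""
    done = history.setdefault(tx_id, {})
    for a, b in CONFLICTING_STEPS:
        other = b if step == a else a if step == b else None
        if other is not None and done.get(other) == user:
            raise PermissionError(f"{user} already performed '{other}' on {tx_id}")
    done[step] = user
    print(f"{user} performed '{step}' on {tx_id}")

perform("tx-42", "prepare_payment", "alice")
perform("tx-42", "approve_payment", "bob")    # allowed: a different user approves
# perform("tx-43", "prepare_payment", "carol"); perform("tx-43", "approve_payment", "carol")
# -> the second call would raise: same user on both conflicting steps of tx-43
```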
