
Publication


Featured research published by Catherine D. McCollum.


IEEE Symposium on Security and Privacy | 1997

Surviving information warfare attacks on databases

Paul Ammann; Sushil Jajodia; Catherine D. McCollum; Barbara T. Blaustein

We consider the problem of surviving information warfare attacks on databases. We adopt a fault tolerance approach to the different phases of an attack. To maintain precise information about the attack, we mark data to reflect the severity of detected damage as well as the degree to which the damaged data has been repaired. In the case of partially repaired data, integrity constraints might be violated, but data is nonetheless available to support mission objectives. We define a notion of consistency suitable for databases in which some information is known to be damaged, and other information is known to be only partially repaired. We present a protocol for normal transactions with respect to the damage markings and show that consistency preserving normal transactions maintain database consistency in the presence of damage. We present an algorithm for taking consistent snapshots of databases under attack. The snapshot algorithm has the virtue of not interfering with countermeasure transactions.
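The damage-marking idea in this abstract can be sketched in a few lines. The sketch below is illustrative only, assuming a toy key-value store; the names (`Mark`, `MarkedStore`, `accept_partial`) are invented for illustration and do not come from the paper.

```python
from enum import Enum

class Mark(Enum):
    """Per-item damage marking: severity of damage and repair state."""
    UNDAMAGED = "undamaged"
    DAMAGED = "damaged"                        # detected, not yet repaired
    PARTIALLY_REPAIRED = "partially_repaired"  # usable, but constraints may be violated

class MarkedStore:
    """Toy key-value store whose reads respect damage markings."""

    def __init__(self):
        self.data = {}    # key -> value
        self.marks = {}   # key -> Mark

    def write(self, key, value, mark=Mark.UNDAMAGED):
        self.data[key] = value
        self.marks[key] = mark

    def read(self, key, accept_partial=False):
        """Normal transactions refuse damaged data; mission-oriented
        readers may opt in to partially repaired data."""
        mark = self.marks[key]
        if mark is Mark.DAMAGED:
            raise RuntimeError(f"{key} is damaged and blocked pending repair")
        if mark is Mark.PARTIALLY_REPAIRED and not accept_partial:
            raise RuntimeError(f"{key} is only partially repaired")
        return self.data[key]
```

Under this scheme a normal transaction that only ever sees `UNDAMAGED` (or explicitly accepted) data cannot propagate damage, which is the consistency-preservation property the abstract describes.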


Communications of the ACM | 1999

Trusted recovery

Sushil Jajodia; Catherine D. McCollum; Paul Ammann

Recent exploits by hackers have drawn attention to the importance of defending against potential information warfare. Defense and civil institutions rely so heavily on their information systems and networks that attacks that disable them could be devastating. Yet, as hacker attacks have demonstrated, protective mechanisms are fallible. Features and services that must be in place to carry out needed, legitimate functions can be abused by being used in unexpected ways to provide an avenue of attack. Further, an attacker who penetrates one system can use its relationships with other systems on the network to compromise them as well. Experiences of actual attacks have led to the recognition of the need to detect and react to attacks that succeed in breaching a system's protective mechanisms. Prevention and detection receive most of the attention, but recovery is an equally important phase of information warfare defense. It is of course necessary to take steps to prevent attacks from succeeding. At the same time, however, it is important to recognize that not all attacks can be averted at the outset. Attacks that succeed to some degree are unavoidable, and comprehensive support for identifying and responding to attacks is required [1]. Information warfare defense must consider the whole process of attack, response, and recovery. This requires a recognition of the multiple phases of the information warfare process. Prevention is just one phase; we explain the others and then focus on recovery.


IEEE Computer | 1999

Surviving information warfare attacks

Sushil Jajodia; Paul Ammann; Catherine D. McCollum

The past few years have seen governmental, military, and commercial organizations widely adopt Web-based commercial technologies because of their convenience, ease of use, and ability to take advantage of rapid advances in the commercial market. With this increasing reliance on internetworked computer resources comes an increasing vulnerability to information warfare. In today's heavily networked environment, safety demands protection from both obvious and subtle intrusions that can delete or corrupt vital data. Traditionally, information systems security has focused primarily on prevention: putting controls and mechanisms in place that protect confidentiality, integrity, and availability by stopping users from doing bad things. Moreover, most mechanisms are powerless against misbehavior by legitimate users who perform functions for which they are authorized. The paper discusses traditional approaches and their limitations.


Proceedings of the Tenth Annual IFIP TC11/WG11.3 International Conference on Database Security, Volume X: Status and Prospects | 1997

Multilevel secure transaction processing: status and prospects

Vijayalakshmi Atluri; Sushil Jajodia; Thomas F. Keefe; Catherine D. McCollum; Ravi Mukkamala

Since 1990, transaction processing in multilevel secure database management systems (DBMSs) has been receiving a great deal of attention from the database research community. Transaction processing in these systems requires modification of conventional scheduling algorithms and commit protocols. These modifications are necessary because preserving the usual transaction properties when transactions are executing at different security levels often conflicts with the enforcement of the security policy. Considerable effort has been devoted to the development of efficient, secure algorithms for the major types of secure DBMS architectures: kernelized, replicated, and distributed. An additional problem that arises uniquely in multilevel secure DBMSs is that of secure, correct execution when data at multiple security levels must be written within one transaction. Significant progress has been made in a number of these areas, and a few of the techniques have been incorporated into commercial trusted DBMS products. However, many open problems remain to be explored. This paper reviews the achievements to date in transaction processing for multilevel secure DBMSs. The paper provides an overview of transaction processing needs and solutions in conventional DBMSs as background, explains the constraints introduced by multilevel security, and then describes the results of research in multilevel secure transaction processing. Research results and limitations in concurrency control, multilevel transaction management, and secure commit protocols are summarized. Finally, important new areas are identified for secure transaction processing research.


Annual Computer Security Applications Conference | 1998

Application-level isolation to cope with malicious database users

Sushil Jajodia; Peng Liu; Catherine D. McCollum

System protection mechanisms such as access controls can be fooled by authorized but malicious users, masqueraders, and misfeasors. Intrusion detection techniques are therefore used to supplement them. The capacity of these techniques, however, is limited: innocent users may be mistaken for malicious ones while malicious users stay at large. Isolation is a method that has been applied to protect systems from damage while investigating further. This paper proposes the use of isolation at an application level to gain its benefits while minimizing loss of resources and productive work in the case of incidents later deemed innocent. We describe our scheme in the database context. It isolates the database transparently from further damage by users suspected to be malicious, while still maintaining continued availability for their transactions. Isolation is complicated by the inconsistencies that may develop between isolated database versions. We present both static and dynamic approaches to identify and resolve conflicts. Finally, we give several examples of applications in which the isolation scheme should be worthwhile and be able to achieve good performance.
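The isolate-then-merge idea described here can be sketched minimally, assuming a plain dictionary stands in for the database. The function names and the simple write-write conflict rule below are illustrative simplifications, not the paper's actual static and dynamic algorithms.

```python
def isolate(main_db):
    """Snapshot the database for a suspected-malicious user to work against."""
    return dict(main_db)

def conflicts(main_db, isolated_db, baseline):
    """Items updated in both the main and isolated versions since isolation began."""
    changed_main = {k for k, v in main_db.items() if v != baseline.get(k)}
    changed_iso = {k for k, v in isolated_db.items() if v != baseline.get(k)}
    return changed_main & changed_iso

def merge(main_db, isolated_db, baseline):
    """Fold an exonerated user's isolated writes back into the main version."""
    if conflicts(main_db, isolated_db, baseline):
        raise ValueError("conflicting updates must be resolved before merging")
    for k, v in isolated_db.items():
        if v != baseline.get(k):
            main_db[k] = v
    return main_db
```

The suspicious user keeps working (availability is preserved) while the main version stays protected; the cost, as the abstract notes, is that the two versions can diverge and conflicts must be detected and resolved before merging.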


IEEE Symposium on Security and Privacy | 1993

A model of atomicity for multilevel transactions

Barbara T. Blaustein; Sushil Jajodia; Catherine D. McCollum; LouAnna Notargiacomo

Data management applications that use multilevel database management system (DBMS) capabilities have the requirement to read and write objects at multiple levels within the bounds of a multilevel transaction. The authors define a new notion of atomicity that is meaningful within the constraints of the multilevel environment. They offer a model of multilevel atomicity that defines varying degrees of atomicity and recognizes that lower security level operations within a transaction must be able to commit or abort independently of higher security level operations. Execution graphs are provided as a tool for analyzing atomicity requirements in conjunction with internal semantic interdependencies among the operations of a transaction, and rules are proved for determining the greatest degree of atomicity that can be attained for a given multilevel transaction. Several alternative transaction management algorithms that can be used to preserve multilevel atomicity are presented.
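The core constraint (lower-level operations must commit or abort independently of higher-level ones) can be phrased as a simple check over an execution graph. This is an illustrative simplification, not the paper's model; the edge representation, with `(a, b)` meaning "b depends on a", is an assumption.

```python
def level_respecting(dependencies, levels):
    """True if no operation depends on a strictly higher-level operation.

    dependencies: iterable of (a, b) pairs meaning "operation b depends on a".
    levels:       mapping from operation id to its security level (int).

    If a low-level operation depended on a high-level one, it could not
    commit or abort independently of that operation, violating the
    multilevel atomicity constraint sketched in the abstract.
    """
    return all(levels[a] <= levels[b] for a, b in dependencies)
```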


Archive | 1993

Using Two-Phase Commit for Crash Recovery in Federated Multilevel Secure Database Management Systems

Sushil Jajodia; Catherine D. McCollum

In a federated database management system, a collection of autonomous database management systems (DBMSs) agree to cooperate to make data available for sharing and to process distributed retrieval and update queries. Distributed transactions can access data across multiple DBMSs. Securing such an environment requires a method that coordinates processing of these distributed requests to provide distributed transaction atomicity without security compromise. An open question is how much of its scheduling process an individual DBMS must expose to the federation in order to allow sufficient coordination of distributed transactions. In this paper, we address the application of the two-phase commit protocol, which is emerging as the dominant method of providing transaction atomicity for crash recovery in the conventional (single-level) distributed DBMS area, to the federated multilevel secure (MLS) DBMS environment. We discuss the limits of its applicability and identify the conditions that must be satisfied by the individual DBMSs in order to participate in the federation.
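For readers unfamiliar with the protocol the paper builds on, here is a minimal sketch of conventional (single-level) two-phase commit: a coordinator asks every participant DBMS to prepare, and commits only if all vote yes. The `Participant` class and one-shot voting are simplifications; none of the MLS-specific conditions the paper derives appear here.

```python
class Participant:
    """A participant DBMS in a distributed transaction (simplified)."""

    def __init__(self, name, will_prepare=True):
        self.name = name
        self.will_prepare = will_prepare
        self.state = "active"

    def prepare(self):
        # Phase 1: vote yes (promising to be able to commit) or vote no.
        self.state = "prepared" if self.will_prepare else "aborted"
        return self.will_prepare

    def commit(self):
        self.state = "committed"

    def abort(self):
        self.state = "aborted"

def two_phase_commit(participants):
    """Coordinator: commit only if every participant votes yes in phase 1."""
    votes = [p.prepare() for p in participants]   # phase 1: collect all votes
    if all(votes):
        for p in participants:                    # phase 2: commit everywhere
            p.commit()
        return "committed"
    for p in participants:                        # phase 2: abort everywhere
        p.abort()
    return "aborted"
```

The paper's concern is what happens when the participants sit at different security levels: the coordination messages themselves can become a channel, so each DBMS must expose just enough of its scheduling to the federation without compromising security.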


Annual Computer Security Applications Conference | 1999

Application-level isolation using data inconsistency detection

Amgad Fayad; Sushil Jajodia; Catherine D. McCollum

Recently, application-level isolation was introduced as an effective means of containing the damage that a suspicious user could inflict on data. In most cases, only a subset of the data items needs to be protected from damage, due to the criticality level or integrity requirements of those items. In such a case, complete isolation of a suspicious user can consume more resources than necessary. The paper proposes partitioning the data items into categories based on their criticality levels and integrity requirements; these categories determine the allowable data flows between trustworthy and suspicious users. An algorithm is also provided to detect inconsistencies between suspicious versions of the data and the main version; it achieves good performance when the number of data items is small.
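The category-based flow restriction can be illustrated with a tiny policy table. The category names and the single visibility flag below are invented for illustration and are much coarser than the paper's scheme.

```python
# Hypothetical policy: each category states whether writes made by a
# suspicious (isolated) user may become visible to trustworthy users.
CATEGORIES = {
    "critical":    {"suspicious_writes_visible": False},
    "noncritical": {"suspicious_writes_visible": True},
}

def flow_allowed(item_category, writer_is_suspicious):
    """May this write flow into trustworthy users' view of the data?"""
    if not writer_is_suspicious:
        return True  # trustworthy writes always flow
    return CATEGORIES[item_category]["suspicious_writes_visible"]
```

Because only the "critical" category blocks suspicious writes, the suspicious user needs isolated versions of critical items only, which is the resource saving the abstract describes.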


Annual Computer Security Applications Conference | 1994

Benchmarking multilevel secure database systems using the MITRE benchmark

Vinti Doshi; William R. Herndon; Sushil Jajodia; Catherine D. McCollum

Multilevel secure (MLS) DBMSs are subject to a number of security-related architectural and functional factors that affect performance. These factors include, among others, the distribution of data among security levels, the session levels at which queries are run, and how the database is physically partitioned into files. In this paper, we present a benchmark methodology, a test database design, and a query suite designed to quantify this impact upon query processing. We introduce three metrics (uniformity, scale-up, and speed-up) that characterize DBMS performance with varying data distributions. Finally, we provide comparisons and analysis of the results of a number of actual benchmarking experiments using DBMSs representative of the two major MLS DBMS architectures (trusted-subject and TCB-subset).


Proceedings of the IFIP TC11 WG11.3 Eleventh International Conference on Database Security XI: Status and Prospects | 1997

Distributed Object Technologies, Databases and Security

Catherine D. McCollum; Donald B. Faatz; William R. Herndon; E. John Sebes; Roshan K. Thomas

Distributed object technologies offer promise for improvements in system flexibility and evolvability but pose new challenges for both data management and security. Data management can be implemented either through distributed object interfaces added to a conventional database system architecture or through decoupled, distributed components that coordinate to provide database capabilities. Key security issues include security context and responsibilities in n-tiered architectures and in decoupled data management components, security-awareness of applications, and assurance.

Collaboration


Frequent co-authors of Catherine D. McCollum and their affiliations:

Paul Ammann (George Mason University)

Peng Liu (Pennsylvania State University)

Thomas F. Keefe (Pennsylvania State University)