Publication


Featured research published by Elizabeth I. Leonard.


Computer and Communications Security | 2006

Formal specification and verification of data separation in a separation kernel for an embedded system

Constance L. Heitmeyer; Myla Archer; Elizabeth I. Leonard; John McLean

Although many algorithms, hardware designs, and security protocols have been formally verified, formal verification of the security of software is still rare. This is due in large part to the large size of software, which results in huge costs for verification. This paper describes a novel and practical approach to formally establishing the security of code. The approach begins with a well-defined set of security properties and, based on the properties, constructs a compact security model containing only information needed to reason about the properties. Our approach was formulated to provide evidence for a Common Criteria evaluation of an embedded software system which uses a separation kernel to enforce data separation. The paper describes 1) our approach to verifying the kernel code and 2) the artifacts used in the evaluation: a Top Level Specification (TLS) of the kernel behavior, a formal definition of data separation, a mechanized proof that the TLS enforces data separation, code annotated with pre- and postconditions and partitioned into three categories, and a formal demonstration that each category of code enforces data separation. Also presented is the formal argument that the code satisfies the TLS.


IEEE Transactions on Software Engineering | 2008

Applying Formal Methods to a Certifiably Secure Software System

Constance L. Heitmeyer; Myla Archer; Elizabeth I. Leonard; John McLean

A major problem in verifying the security of code is that the code's large size makes it much too costly to verify in its entirety. This paper describes a novel and practical approach to verifying the security of code which substantially reduces the cost of verification. In this approach, a compact security model containing only information needed to reason about the security properties of interest is constructed and the security properties are represented formally in terms of the model. To reduce the cost of verification, the code to be verified is partitioned into three categories and only the first category, which is less than 10 percent of the code in our application, requires formal verification. The proof of the other two categories is relatively trivial. Our approach was developed to support a Common Criteria evaluation of the separation kernel of an embedded software system. This paper describes 1) our techniques and theory for verifying the kernel code and 2) the artifacts produced, that is, a top-level specification (TLS), a formal statement of the security property, a mechanized proof that the TLS satisfies the property, the partitioning of the code, and a demonstration that the code conforms to the TLS. This paper also presents the formal basis for the argument that the kernel code conforms to the TLS and consequently satisfies the security property.
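The annotation style the two abstracts above describe, code carrying pre- and postconditions that the verification argument leans on, can be illustrated in miniature. The sketch below is not an artifact from the papers (which concerned annotated C kernel code); the decorator, the `copy_within_partition` helper, and its contracts are all invented for illustration.

```python
# Illustrative sketch only: code annotated with pre- and postconditions,
# the style of annotation on which the verification argument relies.
def requires_ensures(pre, post):
    """Attach a precondition and a postcondition check to a function."""
    def wrap(f):
        def checked(*args):
            assert pre(*args), "precondition violated"
            result = f(*args)
            assert post(result, *args), "postcondition violated"
            return result
        return checked
    return wrap

# Hypothetical kernel-style helper: copy data only within bounds it may touch.
@requires_ensures(
    pre=lambda buf, src, dst, n: 0 <= src and 0 <= dst
        and src + n <= len(buf) and dst + n <= len(buf),
    post=lambda out, buf, src, dst, n: out[dst:dst + n] == buf[src:src + n],
)
def copy_within_partition(buf, src, dst, n):
    out = list(buf)                    # work on a copy of the buffer
    out[dst:dst + n] = buf[src:src + n]
    return out
```

A call that violates the precondition (for example, a copy that would run past the end of the buffer) fails before the body runs, which is the behavior the annotations are meant to enforce.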


Higher-Order and Symbolic Computation / Lisp and Symbolic Computation | 2003

Program Synthesis from Formal Requirements Specifications Using APTS

Elizabeth I. Leonard; Constance L. Heitmeyer

Formal specifications of software systems are extremely useful because they can be rigorously analyzed, verified, and validated, giving high confidence that the specification captures the desired behavior. To transfer this confidence to the actual source code implementation, a formal link is needed between the specification and the implementation. Generating the implementation directly from the specification provides one such link. A program transformation system such as Paige's APTS can be useful in developing a source code generator. This paper describes a case study in which APTS was used to produce code generators that construct C source code from a requirements specification in the SCR (Software Cost Reduction) tabular notation. In the study, two different code generation strategies were explored. The first strategy uses rewrite rules to transform the parse tree of an SCR specification into a parse tree for the corresponding C code. The second strategy associates a relation with each node of the specification parse tree. Each member of this relation acts as an attribute, holding the C code corresponding to the tree at the associated node; the root of the tree has the entire C program as its member of the relation. This paper describes the two code generators supported by APTS, how each was used to synthesize code for two example SCR requirements specifications, and what was learned about APTS from these implementations.
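The first strategy above, rewriting an SCR specification into C, can be sketched very roughly. The fragment below is written in Python rather than APTS, operates on flat (condition, value) rows rather than a real parse tree, and the table entry shown is hypothetical, not one from the paper.

```python
# A minimal sketch (assumptions: flat rows, not a parse tree; invented table)
# of rewriting one SCR-style condition table entry into the corresponding C.
def scr_table_to_c(var, rows):
    """rows: list of (C condition, C value) pairs for one SCR table entry."""
    branches = []
    for i, (cond, value) in enumerate(rows):
        kw = "if" if i == 0 else "else if"
        branches.append(f"{kw} ({cond}) {{ {var} = {value}; }}")
    return "\n".join(branches)

print(scr_table_to_c("alarm",
                     [("pressure > HIGH", "ON"), ("pressure <= HIGH", "OFF")]))
```

Each table row becomes one guarded assignment, mirroring how an SCR condition table defines a variable's value case by case; a real generator would also have to check the table's completeness and disjointness conditions.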


Languages, Compilers, and Tools for Embedded Systems | 2006

Generating optimized code from SCR specifications

Tom Rothamel; Yanhong A. Liu; Constance L. Heitmeyer; Elizabeth I. Leonard

A promising trend in software development is the increasing adoption of model-driven design. In this approach, a developer first constructs an abstract model of the required program behavior in a language, such as Statecharts or Stateflow, and then uses a code generator to automatically transform the model into an executable program. This approach has many advantages: typically, a model is not only more concise than code and hence more understandable, it is also more amenable to mechanized analysis. Moreover, automatic generation of code from a model usually produces code with fewer errors than hand-crafted code. One serious problem, however, is that a code generator may produce inefficient code. To address this problem, this paper describes a method for generating efficient code from SCR (Software Cost Reduction) specifications. While the SCR tabular notation and tools have been used successfully to specify, simulate, and verify numerous embedded systems, until now SCR has lacked an automated method for generating optimized code. This paper describes an efficient method for automatic code generation from SCR specifications, together with an implementation and an experimental evaluation. The method first synthesizes an execution-flow graph from the specification, then applies three optimizations to the graph, namely, input slicing, simplification, and output slicing, and then automatically generates code from the optimized graph. Experiments on seven benchmarks demonstrate that the method produces significant performance improvements in code generated from large specifications. Moreover, code generation is relatively fast, and the code produced is relatively compact.
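The input-slicing optimization named above can be illustrated with a toy dependency map: on each input change, only the outputs that actually read that input need re-evaluation, rather than the whole specification. The output and input names below are invented, and this sketch ignores the execution-flow graph the paper's method actually slices.

```python
# Rough illustration of input slicing (invented names, no real flow graph):
# map each output variable to the set of inputs it reads.
DEPENDS = {
    "alarm":   {"pressure"},
    "display": {"pressure", "mode"},
    "log":     {"mode"},
}

def outputs_to_update(changed_input):
    """Return only the outputs whose dependencies include the changed input."""
    return sorted(o for o, ins in DEPENDS.items() if changed_input in ins)

# Changing "mode" touches only the outputs that read it, not "alarm".
print(outputs_to_update("mode"))
```

In generated code this shows up as one specialized update routine per input, each containing only the computations that input can affect.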


IEEE International Workshop on Policies for Distributed Systems and Networks | 2003

Analyzing security-enhanced Linux policy specifications

Myla Archer; Elizabeth I. Leonard; Matteo Pradella

NSA's Security-Enhanced (SE) Linux enhances Linux by providing a specification language for security policies and a Flask-like architecture with a security server for enforcing policies defined in the language. It is natural for users to expect to be able to analyze the properties of a policy from its specification in the policy language. But this language is very low level, making the high-level properties of a policy difficult to deduce by inspection. For this reason, tools to help users with the analysis are necessary. The NRL project on analyzing SE Linux policies aims first to use mechanized support to analyze an example policy specification and then to customize this support for use by practitioners in the open source software community. We describe how we model policies in the analysis tool TAME, the kinds of analysis we can support, and prototype mechanical support to enable others to model their policies in TAME. We conclude with some general observations on desirable properties for a policy language.


Formal Methods | 2009

A Formal Method for Developing Provably Correct Fault-Tolerant Systems Using Partial Refinement and Composition

Ralph D. Jeffords; Constance L. Heitmeyer; Myla Archer; Elizabeth I. Leonard

It is widely agreed that building correct fault-tolerant systems is very difficult. To address this problem, this paper introduces a new model-based approach for developing masking fault-tolerant systems. As in component-based software development, two (or more) component specifications are developed, one implementing the required normal behavior and the other(s) the required fault-handling behavior. The specification of the required normal behavior is verified to satisfy system properties, whereas each specification of the required fault-handling behavior is shown to satisfy both system properties, typically weakened, and fault-tolerance properties, both of which can then be inferred of the composed fault-tolerant system. The paper presents the formal foundations of our approach, including a new notion of partial refinement and two compositional proof rules. To demonstrate and validate the approach, the paper applies it to a real-world avionics example.


Automated Software Engineering | 2015

Building high assurance human-centric decision systems

Constance L. Heitmeyer; Marc Pickett; Elizabeth I. Leonard; Myla Archer; Indrakshi Ray; David W. Aha; J. Gregory Trafton

Many future decision support systems will be human-centric, i.e., require substantial human oversight and control. Because these systems often provide critical services, high assurance is needed that they satisfy their requirements. This paper, the product of an interdisciplinary research team of experts in formal methods, adaptive agents, and cognitive science, addresses this problem by proposing a new process for developing high assurance human-centric decision systems. This process uses AI (artificial intelligence) methods—i.e., a cognitive model to predict human behavior and an adaptive agent to assist the human—to improve system performance, and software engineering methods—i.e., formal modeling and analysis—to obtain high assurance that the system behaves as intended. The paper describes a new method for synthesizing a formal system model from Event Sequence Charts, a variant of Message Sequence Charts, and a Mode Diagram, a specification of system modes and mode transitions. It also presents results of a new pilot study investigating the optimal level of agent assistance for different users in which the agent design was evaluated using synthesized user models. Finally, it reviews a cognitive model for predicting human overload in complex human-centric systems. To illustrate the development process and our new techniques, we describe a human-centric decision system for controlling unmanned vehicles.


2nd International Workshop on Realizing Artificial Intelligence Synergies in Software Engineering (RAISE) | 2013

High assurance human-centric decision systems

Constance L. Heitmeyer; Marc Pickett; Len Breslow; David W. Aha; J. Greg Trafton; Elizabeth I. Leonard

Many future decision support systems will be human-centric, i.e., require substantial human oversight and control. Because these systems often provide critical services, high assurance will be needed that they satisfy their requirements. How to develop "high assurance human-centric decision systems" is unknown: while significant research has been conducted in areas such as agents, cognitive science, and formal methods, how to apply and integrate the design principles and disparate models in each area is unclear. This paper proposes a novel process for developing human-centric decision systems where AI (artificial intelligence) methods, namely cognitive models to predict human behavior and agents to assist the human, are used to achieve adequate system performance, and software engineering methods, namely formal modeling and analysis, are used to obtain high assurance. To support this process, the paper introduces a software engineering technique, formal model synthesis from scenarios, and two AI techniques: a model for predicting human overload and user model synthesis from participant-study data. To illustrate the process and techniques, the paper describes a decision system controlling unmanned air vehicles.


Formal Methods | 2010

Model-based construction and verification of critical systems using composition and partial refinement

Ralph D. Jeffords; Constance L. Heitmeyer; Myla Archer; Elizabeth I. Leonard

This article introduces a new model-based method for incrementally constructing critical systems and illustrates its application to the development of fault-tolerant systems. The method relies on a special form of composition to combine software components and a set of proof rules to obtain high confidence of the correctness of the composed system. As in conventional component-based software development, two (or more) components are combined, but in contrast to many component-based approaches used in practice, which combine components consisting of code, our method combines components represented as state machine models. In the first phase of the method, a model is developed of the normal system behavior, and system properties are shown to hold in the model. In the second phase, a model of the required fault-handling behavior is developed and “or-composed” with the original system model to create a fault-tolerant extension which is, by construction, “fully faithful” (every execution possible in the normal system is possible in the fault-tolerant system). To model the fault-handling behavior, the set of states of the normal system model is extended through new state variables and new ranges for some existing state variables, and new fault-handling transitions are defined. Once constructed, the fault-tolerant extension is shown, using a set of property inheritance and compositional proof rules, to satisfy both the overall system properties, typically weakened, and selected fault-tolerance properties. These rules can often be used to verify the properties automatically. To provide a formal foundation for the method, formal notions of or-composition, partial refinement, fault-tolerant extension, and full faithfulness are introduced. To demonstrate and validate the method, we describe its application to a real-world, fault-tolerant avionics system.
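The "or-composition" described above can be reduced to a toy: the fault-tolerant extension takes a step either via the normal model's transitions or via the fault-handling transitions, and full faithfulness means every step of the normal system remains possible. The states, events, and transitions below are invented for illustration and bear no relation to the avionics system in the article.

```python
# Toy sketch of or-composition (invented states and events): the extended
# machine behaves as the normal model OR as the fault-handling model.
NORMAL = {("ok", "tick"): "ok"}
FAULT_HANDLING = {("ok", "fault"): "degraded",
                  ("degraded", "reset"): "ok"}

def step(state, event):
    """Or-composed transition: normal behavior if enabled, else fault handling."""
    if (state, event) in NORMAL:
        return NORMAL[(state, event)]
    if (state, event) in FAULT_HANDLING:
        return FAULT_HANDLING[(state, event)]
    return state  # no enabled transition: remain in the current state

# Full faithfulness in miniature: every normal-system step is still possible
# in the or-composed machine.
assert all(step(s, e) == t for (s, e), t in NORMAL.items())
```

The extension adds behavior (the fault-handling transitions) without removing any, which is why properties of the normal model can be inherited by the composed system via the article's proof rules.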


Archive | 2013

On Model-Based Software Development

Constance L. Heitmeyer; Sandeep K. Shukla; Myla Archer; Elizabeth I. Leonard

Due to its many advantages, the growing use in software practice of Model-Based Development (MBD) is a promising trend. However, major problems in MBD of software remain, for example, the failure to integrate formal system requirements models with current code synthesis methods. This chapter introduces FMBD, a formal MBD process for building software systems which addresses this problem. The goal of FMBD is to produce high assurance software systems which are correct by construction. The chapter describes three types of models built during the FMBD process, provides examples from an avionics system to illustrate the models, and proposes three major challenges in MBD as topics for future research.

Collaboration


Dive into Elizabeth I. Leonard's collaborations.

Top Co-Authors

Myla Archer, United States Naval Research Laboratory
Constance L. Heitmeyer, Government of the United States of America
Ralph D. Jeffords, United States Naval Research Laboratory
David W. Aha, United States Naval Research Laboratory
John McLean, United States Naval Research Laboratory
Marc Pickett, United States Naval Research Laboratory
Indrakshi Ray, Colorado State University
J. Greg Trafton, United States Naval Research Laboratory
J. Gregory Trafton, United States Naval Research Laboratory
Len Breslow, United States Naval Research Laboratory