Publication


Featured research published by Charles B. Haley.


IEEE Transactions on Software Engineering | 2008

Security Requirements Engineering: A Framework for Representation and Analysis

Charles B. Haley; Robin C. Laney; Jonathan D. Moffett; Bashar Nuseibeh

This paper presents a framework for security requirements elicitation and analysis. The framework is based on constructing a context for the system, representing security requirements as constraints, and developing satisfaction arguments for the security requirements. The system context is described using a problem-oriented notation, then is validated against the security requirements through construction of a satisfaction argument. The satisfaction argument consists of two parts: a formal argument that the system can meet its security requirements and a structured informal argument supporting the assumptions expressed in the formal argument. The construction of the satisfaction argument may fail, revealing either that the security requirement cannot be satisfied in the context or that the context does not contain sufficient information to develop the argument. In this case, designers and architects are asked to provide additional design information to resolve the problems. We evaluate the framework by applying it to a security requirements analysis within an air traffic control technology evaluation project.
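As a rough illustration (not necessarily the paper's own notation), the formal part of a satisfaction argument can be read as an entailment in the problem-oriented style of Jackson and Zave, with the informal part defending the context assumptions that the entailment relies on:

    $$ W, S \vdash R $$

Here $S$ is the behaviour specified for the system, $W$ the properties assumed of its context (including trust assumptions about people and other components), and $R$ the security requirement expressed as a constraint. The structured informal argument then supports each assumption in $W$ on which the formal derivation depends, and a failure to complete either part signals a missing or unsatisfiable piece of the analysis.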


International Conference on Software Engineering | 2006

A framework for security requirements engineering

Charles B. Haley; Jonathan D. Moffett; Robin C. Laney; Bashar Nuseibeh

This paper presents a framework for security requirements elicitation and analysis, based upon the construction of a context for the system and satisfaction arguments for the security of the system. One starts with enumeration of security goals based on assets in the system. These goals are used to derive security requirements in the form of constraints. The system context is described using a problem-centered notation, then this context is validated against the security requirements through construction of a satisfaction argument. The satisfaction argument is in two parts: a formal argument that the system can meet its security requirements, and a structured informal argument supporting the assumptions expressed in the formal argument. The construction of the satisfaction argument may fail, revealing either that the security requirement cannot be satisfied in the context, or that the context does not contain sufficient information to develop the argument. In this case, designers and architects are asked to provide additional design information to resolve the problems.


Aspect-Oriented Software Development | 2004

Deriving security requirements from crosscutting threat descriptions

Charles B. Haley; Robin C. Laney; Bashar Nuseibeh

It is generally accepted that early determination of the stakeholder requirements assists in the development of systems that better meet the needs of those stakeholders. General security requirements frustrate this goal because it is difficult to determine how they affect the functional requirements of the system. This paper illustrates how representing threats as crosscutting concerns aids in determining the effect of security requirements on the functional requirements. Assets (objects that have value in a system) are first enumerated, and then threats on these assets are listed. The points where assets and functional requirements join are examined to expose vulnerabilities to the threats. Security requirements, represented as constraints, are added to the functional requirements to reduce the scope of the vulnerabilities. These requirements are used during the analysis and specification process, thereby incorporating security concerns into the functional requirements of the system.
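A minimal sketch of the asset–threat–constraint flow the abstract describes, using hypothetical Java types (the names and structure are illustrative assumptions, not the authors' notation or tooling):

    // Illustrative sketch only: hypothetical types showing the asset -> threat ->
    // constraint flow described above; not the authors' notation or tooling.
    import java.util.List;

    record Asset(String name) {}                        // something of value, e.g. customer records
    record Threat(Asset target, String harm) {}         // e.g. disclosure of customer records
    record FunctionalRequirement(String id, String text, List<Asset> assetsTouched) {}
    record SecurityConstraint(String text, Threat mitigates) {}

    final class ThreatAnalysis {
        // A functional requirement is potentially vulnerable where it touches a threatened asset.
        static boolean vulnerable(FunctionalRequirement fr, Threat t) {
            return fr.assetsTouched().contains(t.target());
        }

        // Attach a constraint to every functional requirement exposed to the threat,
        // narrowing the requirement rather than replacing it.
        static List<SecurityConstraint> constrain(List<FunctionalRequirement> frs,
                                                  Threat t, String constraintText) {
            return frs.stream()
                      .filter(fr -> vulnerable(fr, t))
                      .map(fr -> new SecurityConstraint(constraintText + " [on " + fr.id() + "]", t))
                      .toList();
        }
    }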


IEEE International Conference on Requirements Engineering | 2004

The effect of trust assumptions on the elaboration of security requirements

Charles B. Haley; Robin C. Laney; Jonathan D. Moffett; Bashar Nuseibeh

Assumptions are frequently made during requirements analysis of a system-to-be about the trustworthiness of its various components (including human components). These trust assumptions can affect the scope of the analysis, derivation of security requirements, and in some cases, how functionality is realized. This work presents trust assumptions in the context of analysis of security requirements. A running example shows how trust assumptions can be used by a requirements engineer to help define and limit the scope of analysis and to document the decisions made during the process. The paper concludes with a case study examining the impact of trust assumptions on software that uses the secure electronic transaction (SET) specification.


IEEE Computer | 2009

Securing the Skies: In Requirements We Trust

Bashar Nuseibeh; Charles B. Haley; Craig Foster

The authors describe their experiences applying a security requirements analysis to an air traffic control project using a framework that offers different forms of structured argumentation. In deploying the framework, they also learned several lessons about security requirements.


Automated Software Engineering | 2010

Tool support for code generation from a UMLsec property

Lionel Montrieux; Jan Jürjens; Charles B. Haley; Yijun Yu; Pierre-Yves Schobbens; Hubert Toussaint

This demo presents a tool to generate code from verified Role-Based Access Control (RBAC) properties defined using UMLsec. It can generate either plain Java code, or Java code for the UML model together with AspectJ code that enforces the RBAC properties. Both approaches use the Java Authentication and Authorization Service (JAAS) to enforce access control.
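A minimal sketch, assuming a JAAS Subject whose Principals carry role names. This is not the code produced by the authors' tool; it only illustrates the kind of JAAS-based RBAC check that generated Java or AspectJ code could perform:

    import java.security.Principal;
    import javax.security.auth.Subject;

    final class RbacGuard {
        // True if the authenticated subject holds the required role.
        static boolean hasRole(Subject subject, String requiredRole) {
            for (Principal p : subject.getPrincipals()) {
                if (requiredRole.equals(p.getName())) {
                    return true;
                }
            }
            return false;
        }

        // Enforce the role before running a protected operation.
        static void runAs(Subject subject, String requiredRole, Runnable operation) {
            if (!hasRole(subject, requiredRole)) {
                throw new SecurityException("Subject lacks required role: " + requiredRole);
            }
            operation.run();
        }
    }

In the AspectJ variant mentioned in the abstract, a check of this kind would typically live in advice woven around the protected methods rather than being invoked explicitly from the business code.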


Secure Software Integration and Reliability Improvement | 2010

Model-Based Argument Analysis for Evolving Security Requirements

Thein Than Tun; Yijun Yu; Charles B. Haley; Bashar Nuseibeh

Software systems are made to evolve in response to changes in their contexts and requirements. As the systems evolve, security concerns need to be analysed in order to evaluate the impact of changes on the systems. We propose to investigate such changes by applying a meta-model of evolving security requirements, which draws on requirements engineering approaches, security analysis, argumentation and software evolution. In this paper, we show how the meta-model can be instantiated using a formalism of temporal logic, called the Event Calculus. The main contribution is a model-based approach to argument analysis, supported by a tool which generates templates for formal descriptions of the evolving system. We apply our approach to several examples from an Air Traffic Management case study.
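For readers unfamiliar with the formalism, a standard Event Calculus axiom (generic, not this paper's specific templates) captures how an event makes a fluent hold from the moment it is initiated until some later event clips it:

    $$ \mathit{HoldsAt}(f, t) \leftarrow \mathit{Happens}(e, t_1) \wedge \mathit{Initiates}(e, f, t_1) \wedge t_1 < t \wedge \neg\,\mathit{Clipped}(t_1, f, t) $$

Here a fluent $f$ (for example, "access to the asset is restricted") holds at time $t$ if some earlier event initiated it and no intervening event terminated it; the templates the paper's tool generates target formal descriptions of the evolving system written in this language.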


International Conference on Trust Management | 2004

Picking Battles: The Impact of Trust Assumptions on the Elaboration of Security Requirements

Charles B. Haley; Robin C. Laney; Jonathan D. Moffett; Bashar Nuseibeh

This position paper describes work on trust assumptions in the context of security requirements. We show how trust assumptions can affect the scope of the analysis, derivation of security requirements, and in some cases how functionality is realized. An example shows how trust assumptions are used by a requirements engineer to help define and limit the scope of analysis and to document the decisions made during the process.


IEEE International Conference on Requirements Engineering | 2012

Privacy arguments: Analysing selective disclosure requirements for mobile applications

Thein Than Tun; Arosha K. Bandara; Blaine A. Price; Yijun Yu; Charles B. Haley; Inah Omoronyia; Bashar Nuseibeh

Privacy requirements for mobile applications offer a distinct set of challenges for requirements engineering. First, they are highly dynamic, changing over time and locations, and across the different roles of agents involved and the kinds of information that may be disclosed. Second, although some general privacy requirements can be elicited a priori, users often refine them at runtime as they interact with the system and its environment. Selectively disclosing information to appropriate agents is therefore a key privacy management challenge, requiring carefully formulated privacy requirements amenable to systematic reasoning. In this paper, we introduce privacy arguments as a means of analysing privacy requirements in general and selective disclosure requirements (that are both content- and context-sensitive) in particular. Privacy arguments allow individual users to express personal preferences, which are then used to reason about privacy for each user under different contexts. At runtime, these arguments provide a way to reason about requirements satisfaction and diagnosis. Our proposed approach is demonstrated and evaluated using the privacy requirements of BuddyTracker, a mobile application we developed as part of our overall research programme.
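An illustrative sketch with hypothetical Java types: one way to express a user's content- and context-sensitive disclosure preference in the spirit of the selective disclosure requirements discussed above (this is not the paper's argumentation formalism):

    import java.util.Set;

    record Context(String location, String timeOfDay) {}
    record DisclosureRequest(String infoKind, String recipientRole, Context context) {}

    interface PrivacyPreference {
        boolean permits(DisclosureRequest request);
    }

    final class SelectiveDisclosure {
        // Example preference: share coarse location with friends and colleagues,
        // but precise location only with colleagues during working hours.
        static final PrivacyPreference EXAMPLE = request ->
            switch (request.infoKind()) {
                case "coarse-location"  -> Set.of("friend", "colleague").contains(request.recipientRole());
                case "precise-location" -> request.recipientRole().equals("colleague")
                                            && request.context().timeOfDay().equals("working-hours");
                default                 -> false;
            };
    }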


International Symposium on Information Technology | 2008

Bridging requirements and architecture for systems of systems

Charles B. Haley; Bashar Nuseibeh

A system of systems (SoS) is formed from existing independent component systems. Reasons these independent systems might be combined include a merger or acquisition, a temporary partnership, the formation of an integrated supply chain, or the use of a service-oriented architecture. SoSs are difficult to analyze because of the scale of the integration, the components’ independent existence, and the (potentially) conflicting nature of their requirements. We propose bridging between requirements analysis and the architecture of an SoS using an interdisciplinary approach: from Software Engineering we take iteration between requirements and architecture, and from Philosophy we take structured argumentation. Iterating between requirements and architecture is well suited to exposing the issues that arise when constructing a system of systems from existing artifacts. Structured argumentation is used to explore these issues, to inform the analysis, to reveal underlying assumptions in the analysis, and to help either establish system correctness to an acceptable level or provide rebuttals that invalidate the analysis.

Collaboration


Dive into Charles B. Haley's collaborations.

Top Co-Authors

Maarten Sierhuis (Carnegie Mellon University)

Jan Jürjens (University of Koblenz and Landau)