Linda H. Rosenberg
Unisys
Publication
Featured research published by Linda H. Rosenberg.
international conference on software engineering | 1997
William Wilson; Linda H. Rosenberg; Lawrence E. Hyatt
It is generally accepted that poorly written requirements can result in software with the wrong functionality. The Goddard Space Flight Center's (GSFC) Software Assurance Technology Center (SATC) has developed an early life cycle tool for assessing requirements that are specified in natural language. The tool searches the document for terms the SATC has identified as quality indicators, e.g. weak phrases. The reports produced by the tool are used to identify specification statements and structural areas of the requirements specification document that need to be improved. The metrics are used by project managers to recognize and preclude potential areas of risk.

INTRODUCTION. The SATC is part of the Office of Mission Assurance of the GSFC. The SATC's mission is to assist National Aeronautics and Space Administration (NASA) projects to improve the quality of software that they acquire or develop. The SATC's efforts are currently focused on the development and use of metric methodologies and tools that identify and assess risks associated with software performance and scheduled delivery. It is generally accepted that the earlier in the life cycle potential risks are identified, the easier it is to eliminate or manage the risk-inducing conditions [1]. Despite the significant advantages attributed to the use of formal specification languages, their use has not become common practice. Because requirements that the acquirer expects the developer to contractually satisfy must be understood by both parties, specifications are most often written in natural language. The use of natural language to prescribe complex, dynamic systems has at least three severe problems: ambiguity, inaccuracy and inconsistency [11]. Many words and phrases have dual meanings which can be altered by the context in which they are used.
For example, Webster's New World Dictionary identifies three variations in meaning for the word "align", seventeen for "measure", and four for the word "model". Weak sentence structure can also produce ambiguous statements. "Twenty seconds prior to engine shutdown anomalies shall be ignored." could result in at least three different implementations. Using words such as "large", "rapid", and "many" produces inaccurate requirement specifications. Even though the words "error", "fault", and "failure" have been precisely defined by the Institute of Electrical and Electronics Engineers (IEEE) [5], they are frequently used incorrectly. Defining a large, multidimensional capability within the limitations imposed by the two-dimensional structure of a document can obscure the relationships between individual groups of requirements. The importance of correctly documenting requirements has caused the software industry to produce a significant number of aids [3] to the creation and management of requirements specification documents and individual specification statements. Very few of these aids assist in evaluating the quality of the requirements document or the individual specification statements. This situation has motivated the SATC to develop a tool to provide metrics that NASA project managers can use to assess the quality of their requirements specification documents and to identify risks that poorly specified requirements will introduce into their project. It must be emphasized that the tool does not attempt to assess the correctness of the requirements specified. It assesses the structure of the requirements document and of the individual specification statements.
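The lexical scan the abstract describes can be illustrated with a minimal sketch. The phrase lists below are assumptions for illustration only; the SATC tool's actual quality-indicator term lists are more extensive and categorized.

```python
import re

# Hypothetical quality-indicator terms (the paper's examples plus a few
# common "weak phrases"); not the SATC tool's actual lists.
WEAK_PHRASES = ["as appropriate", "if practical", "as a minimum",
                "be able to", "not limited to"]
VAGUE_WORDS = ["large", "rapid", "many", "adequate", "timely"]

def scan_requirements(text):
    """Return {term: occurrence count} for quality-indicator terms
    found in a requirements document (case-insensitive, whole words)."""
    hits = {}
    lowered = text.lower()
    for term in WEAK_PHRASES + VAGUE_WORDS:
        count = len(re.findall(r"\b" + re.escape(term) + r"\b", lowered))
        if count:
            hits[term] = count
    return hits

doc = """The system shall be able to process many requests.
Logs shall be archived as appropriate."""
print(scan_requirements(doc))
```

A report built from such counts points the author at statements to tighten; it says nothing about whether the requirements are correct, only about how they are phrased.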
Acta Astronautica | 1997
Lawrence E. Hyatt; Linda H. Rosenberg
Abstract. The Software Assurance Technology Center (SATC) has developed a software metrics program consisting of goals, attributes and metrics to support the assessment of project status, risk, and product quality throughout the life cycle. The objective of the software metrics program is to assess risk areas at each phase of the development life cycle and project them into the future. The software development goals in the metrics program are evaluated by a set of attributes that help to define and classify risks. The attributes must be "measurable" by a set of metrics. These metrics must be based on data that is collectable within the confines of the software development process and must also be relevant to the quality attributes and risk assessment. This paper discusses the SATC's software risk assessment metrics program, which meets these needs and is currently being applied to software developed for NASA. At each phase of the software development life cycle, attributes are identified and metrics defined. Project data is used to demonstrate how the metric analysis was applied at that phase for risk assessment, and how that information could be used by management to manage project risks.
international conference on software engineering | 1997
Theodore Hammer; Linda H. Rosenberg; Lenore Huffman; Lawrence E. Hyatt
Requirements are written to specify the functionality of a completed software system. Software systems are often released in segments called builds, each one adding new functionality and satisfying an additional set of requirements. New software requirements tools are allowing quality assurance engineers to develop and use new metrics to assist them in evaluating the relationships of requirements to tests, thus ensuring the required functionality in the new system. NASA's Goddard Space Flight Center is applying new tools and technology to measure the effectiveness of requirements testing. This paper discusses an effort that uses project data to demonstrate metrics effectiveness.
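The requirements-to-tests evaluation described above amounts to a traceability coverage metric. A minimal sketch, assuming a simple mapping from requirement IDs to the tests that exercise them (the IDs and data structure here are hypothetical, not from the paper):

```python
def traceability_metrics(req_to_tests):
    """Given a mapping of requirement IDs to the test IDs that exercise
    them, report build-level coverage figures: how many requirements
    are covered by at least one test, and which are untested."""
    total = len(req_to_tests)
    untested = [r for r, tests in req_to_tests.items() if not tests]
    covered = total - len(untested)
    return {
        "requirements": total,
        "covered": covered,
        "coverage_pct": round(100.0 * covered / total, 1) if total else 0.0,
        "untested": untested,
    }

# Hypothetical trace data for one build
trace = {
    "REQ-001": ["T-10", "T-11"],
    "REQ-002": ["T-12"],
    "REQ-003": [],            # not yet exercised by any test
}
print(traceability_metrics(trace))
```

Tracking such figures per build shows whether the requirements added in each build are actually being verified before release.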
winter simulation conference | 1997
James D. Arthur; Richard E. Nance; Robert G. Sargent; Dolores R. Wallace; Linda H. Rosenberg; Paul R. Muessig
Relative to Verification, Validation, and Accreditation (VV&A), the Modeling and Simulation and Software Engineering communities share similar goals and impediments to achieving those goals. The focus of this panel is to explore how each community addresses the critical issues underlying VV&A. In this paper we provide nine (9) questions and four (4) sets of responses to those questions. The questions are intended to help reveal differences in VV&A emphasis and motivation between the two communities, and to establish a basis for the exchange of mutually beneficial ideas.
international symposium on software reliability engineering | 1998
Linda H. Rosenberg; Theodore Hammer; John G. Shaw
Archive | 1997
Linda H. Rosenberg; Lawrence E. Hyatt
Archive | 1996
Lawrence E. Hyatt; Linda H. Rosenberg
Archive | 1995
Linda H. Rosenberg; Lawrence E. Hyatt
Archive | 2000
Linda H. Rosenberg; Ruth Stapko; Albert M. Gallo
Archive | 1998
Theodore Hammer; Leonore L. Huffman; Linda H. Rosenberg