Publication


Featured research published by Ioana Rus.


IEEE Computer | 2000

How perspective-based reading can improve requirements inspections

Forrest Shull; Ioana Rus; Victor R. Basili

Because defects constitute an unavoidable aspect of software development, discovering and removing them early is crucial. Overlooked defects (like faults in the software system requirements, design, or code) propagate to subsequent development phases where detecting and correcting them becomes more difficult. At best, developers will eventually catch the defects, but at the expense of schedule delays and additional product-development costs. At worst, the defects will remain, and customers will receive a faulty product. The authors explain their perspective-based reading (PBR) technique, which provides a set of procedures to help developers solve software requirements inspection problems. PBR reviewers stand in for specific stakeholders in the document to verify the quality of requirements specifications. The authors show how PBR leads to improved defect detection rates for both individual reviewers and review teams working with unfamiliar application domains.


Journal of Knowledge Management | 2003

Software systems support for knowledge management

Mikael Lindvall; Ioana Rus; Sachin Suman Sinha

Human capital is the main asset of many companies; their employees' knowledge has to be preserved and leveraged from the individual to the company level, allowing continual learning and improvement. Knowledge management has various components and aspects such as socio-cultural, organizational, and technological. In this paper we address the technological aspect; more precisely, we survey available software systems that support different knowledge management activities. We categorize these tools into classes based on their capabilities and functionality, and show what tasks and knowledge processing operations they support.


Journal of Systems and Software | 1999

Software process simulation for reliability management

Ioana Rus; James S. Collofello; Peter Lakey

This paper describes the use of a process simulator to support software project planning and management. The modeling approach here focuses on software reliability, but is just as applicable to other software quality factors, as well as to cost and schedule factors. The process simulator was developed as part of a decision support system for assisting project managers in planning or tailoring the software development process in a quality-driven manner. The original simulator was developed using the system dynamics approach. As the model evolved by applying it to a real software development project, a need arose to incorporate the concepts of discrete event modeling. The system dynamics model and discrete event models each have unique characteristics that make them more applicable in specific situations. The continuous model can be used for project planning and for predicting the effect of management and reliability engineering decisions. It can also be used as a training tool for project managers. The discrete event implementation is more detailed and therefore more applicable to project tracking and control. In this paper the structure of the system dynamics model is presented, and the use of the discrete event model to construct a software reliability prediction model is described.
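The continuous (system dynamics) view described above treats defects as stocks changed by flow rates and integrated over time. The following is a minimal illustrative sketch of that idea, not the authors' simulator: the stock names, rates, and parameter values are invented for demonstration.

```python
# Toy system-dynamics sketch of defect flow in a development process.
# Two stocks (latent and detected defects) are driven by constant
# injection and proportional detection flows, integrated with Euler steps.
# All names and rates are illustrative, not taken from the paper's model.

def simulate(weeks=30.0, dt=0.25,
             productivity=40.0,      # tasks completed per week
             defect_injection=0.5,   # defects injected per task
             detection_rate=0.3):    # fraction of latent defects found per week
    latent = 0.0        # stock: undetected defects in the product
    detected = 0.0      # stock: defects found by inspection/testing
    t = 0.0
    while t < weeks:
        inflow = productivity * defect_injection   # defects/week entering
        outflow = detection_rate * latent          # defects/week detected
        latent += (inflow - outflow) * dt          # Euler integration step
        detected += outflow * dt
        t += dt
    return latent, detected

latent, detected = simulate()
print(f"latent defects: {latent:.1f}, detected defects: {detected:.1f}")
```

With these rates the latent stock approaches its equilibrium (injection rate divided by detection rate, about 67 defects), which is the kind of steady-state behavior a continuous model exposes for planning, whereas a discrete event version would instead track individual defect and test events for project tracking and control.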


Lecture Notes in Computer Science | 2002

Technology Support for Knowledge Management

Mikael Lindvall; Ioana Rus; Sachin Suman Sinha

Human capital is the main asset of software organizations. Knowledge has to be preserved and leveraged from individuals to the organization. Thus, the learning software organization will be able to continually learn and improve. Knowledge management has various components and multiple aspects such as socio-cultural, organizational, and technological. In this paper we address the technological aspect; specifically, we survey the tools available to support different knowledge management activities. We categorize these tools into classes, based on their capabilities and functionality, and show what tasks and knowledge processing operations they support.


Archive | 2003

Knowledge Management for Software Organizations

Mikael Lindvall; Ioana Rus

This chapter presents an introduction to the topic of knowledge management (KM) in software engineering. It identifies the need for knowledge, knowledge items and sources, and discusses the importance of knowledge capture, organization, retrieval, access, and evolution in software development organizations. KM activities and supporting tools for software development and inter- and intra-organization learning are presented. The state of the implementation of KM in software organizations is examined, together with benefits, challenges, and lessons learned.


international phoenix conference on computers and communications | 1996

Modeling software testing processes

James S. Collofello; Zhen Yang; John D. Tvedt; Derek Merrill; Ioana Rus

The production of a high-quality software product requires application of both defect prevention and defect detection techniques. A common defect detection strategy is to subject the product to several phases of testing such as unit, integration, and system. These testing phases consume significant project resources and cycle time. As software companies continue to search for ways of reducing cycle time and development costs while increasing quality, software testing processes emerge as a prime target for investigation. The paper proposes the utilization of system dynamics models for better understanding testing processes. Motivation for modeling testing processes is presented along with an executable model of the unit test phase. Some sample model runs are described to illustrate the usefulness of the model.


Innovations in Systems and Software Engineering | 2005

An evolutionary testbed for software technology evaluation

Mikael Lindvall; Ioana Rus; Forrest Shull; Marvin V. Zelkowitz; Paolo Donzelli; Atif M. Memon; Victor R. Basili; Patricia Dockhorn Costa; Roseanne Tesoriero Tvedt; Lorin Hochstein; Sima Asgari; Christopher Ackermann; Daniel Pech

Empirical evidence and technology evaluation are needed to close the gap between the state of the art and the state of the practice in software engineering. However, there are difficulties associated with evaluating technologies based on empirical evidence: insufficient specification of context variables, cost of experimentation, and risks associated with trying out new technologies. In this paper, we propose the idea of an evolutionary testbed for addressing these problems. We demonstrate the utility of the testbed in empirical studies involving two different research technologies applied to the testbed, as well as the results of these studies. The work is part of NASA’s High Dependability Computing Project (HDCP), in which we are evaluating a wide range of new technologies for improving the dependability of NASA mission-critical systems.


international conference on software engineering | 2001

Understanding IV&V in a safety-critical and complex evolutionary environment: the NASA Space Shuttle program

Marvin V. Zelkowitz; Ioana Rus

The National Aeronautics and Space Administration is an internationally recognized leader in space science and exploration. NASA recognizes the inherent risk associated with space exploration; however, NASA makes every reasonable effort to minimize that risk. To that end, NASA instituted a software independent verification and validation (IV&V) process for the Space Shuttle program in 1988 to ensure that the Shuttle and its crew are not exposed to any unnecessary risks. Using data provided by both the Shuttle software developer and the IV&V contractor, in this paper we describe the overall IV&V process as used on the Space Shuttle program and provide an analysis of the use of metrics to document and control this process. Our findings reaffirm the value of IV&V and show the impact IV&V has on multiple releases of a large, complex software system.


Software Quality Journal | 2005

Virtual Software Engineering Laboratories in Support of Trade-off Analyses

Jürgen Münch; Dietmar Pfahl; Ioana Rus

Due to demanding customer needs and evolving technology, software organizations are forced to trade individual functional and non-functional product quality profiles against other factors such as cost, time, or productivity. The ability to influence or even control these factors requires a deep understanding of the complex relations between process and product attributes in relevant contexts. Based on such understanding, decision support is needed to adjust processes so that they match the product quality goals without violating given project constraints. We propose to use a Virtual Software Engineering Laboratory (VSEL) to establish such decision support cost-effectively. VSELs can be considered complementary to existing (empirical) Software Engineering Laboratories. This paper gives an introduction to the cornerstones of VSELs, discusses how they complement traditional empirically based Software Engineering Laboratories (SELs), and illustrates, with the help of case examples from industrial and research environments, how to use them in support of product-focused trade-off analyses.


Empirical Software Engineering | 2007

Experimenting with software testbeds for evaluating new technologies

Mikael Lindvall; Ioana Rus; Paolo Donzelli; Atif M. Memon; Marvin V. Zelkowitz; Aysu Betin-Can; Tevfik Bultan; Christopher Ackermann; Bettina Anders; Sima Asgari; Victor R. Basili; Lorin Hochstein; Jörg Fellmann; Forrest Shull; Roseanne Tesoriero Tvedt; Daniel Pech; Daniel Hirschbach

The evolution of a new technology depends upon a good theoretical basis for developing the technology, as well as upon its experimental validation. In order to provide for this experimentation, we have investigated the creation of a software testbed and the feasibility of using the same testbed for experimenting with a broad set of technologies. The testbed is a set of programs, data, and supporting documentation that allows researchers to test their new technology on a standard software platform. An important component of this testbed is the Unified Model of Dependability (UMD), which was used to elicit dependability requirements for the testbed software. With a collection of seeded faults and known issues of the target system, we are able to determine if a new technology is adept at uncovering defects or providing other aids proposed by its developers. In this paper, we present the Tactical Separation Assisted Flight Environment (TSAFE) testbed environment, for which we modeled and evaluated dependability requirements and defined faults to be seeded for experimentation. We describe two completed experiments that we conducted on the testbed. The first experiment studies a technology that identifies architectural violations and evaluates its ability to detect the violations. The second experiment studies model checking as part of design for verification. We conclude by describing ongoing experimental work studying testing, using the same testbed. Our conclusion is that even though these three experiments are very different in terms of the studied technology, using and re-using the same testbed is beneficial and cost-effective.

Collaboration


Dive into Ioana Rus's collaborations.

Top Co-Authors

David Raffo

Portland State University

Paul Wernick

University of Hertfordshire

Derek Merrill

Arizona State University

John D. Tvedt

Arizona State University

Zhen Yang

Arizona State University