Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ajitha Rajan is active.

Publication


Featured research published by Ajitha Rajan.


formal methods | 2008

Requirements Coverage as an Adequacy Measure for Conformance Testing

Ajitha Rajan; Michael W. Whalen; Matt Staats; Mats Per Erik Heimdahl

Conformance testing in model-based development refers to the testing activity that verifies whether the code generated (manually or automatically) from the model is behaviorally equivalent to the model. Presently the adequacy of conformance testing is inferred by measuring structural coverage achieved over the model. We hypothesize that adequacy metrics for conformance testing should consider structural coverage over the requirements, either in place of or in addition to structural coverage over the model. Measuring structural coverage over the requirements gives a notion of how well the conformance tests exercise the required behavior of the system. We conducted an experiment to investigate the hypothesis that structural coverage over formal requirements is more effective than structural coverage over the model as an adequacy measure for conformance testing. We found that the hypothesis was rejected at 5% statistical significance on three of the four case examples in our experiment. Nevertheless, we found that the tests providing requirements coverage found several faults that remained undetected by tests providing model coverage. We thus formed a second hypothesis that complementing model coverage with requirements coverage will prove more effective as an adequacy measure than solely using model coverage for conformance testing. In our experiment, we found test suites providing both requirements coverage and model coverage to be more effective at finding faults than test suites providing model coverage alone, at 5% statistical significance. Based on our results, we believe existing adequacy measures for conformance testing that only consider model coverage can be strengthened by combining them with rigorous requirements coverage metrics.


international symposium on software testing and analysis | 2006

Coverage metrics for requirements-based testing

Michael W. Whalen; Ajitha Rajan; Mats Per Erik Heimdahl; Steven P. Miller

In black-box testing, one is interested in creating a suite of tests from requirements that adequately exercise the behavior of a software system without regard to the internal structure of the implementation. In current practice, the adequacy of black-box test suites is inferred by examining coverage on an executable artifact, either source code or a software model. In this paper, we define structural coverage metrics directly on high-level formal software requirements. These metrics provide objective, implementation-independent measures of how well a black-box test suite exercises a set of requirements. We focus on structural coverage criteria on requirements formalized as LTL properties and discuss how they can be adapted to measure finite test cases. These criteria can also be used to automatically generate a requirements-based test suite. Unlike model or code-derived test cases, these tests are immediately traceable to high-level requirements. To assess the practicality of our approach, we apply it on a realistic example from the avionics domain.
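The kind of formalized requirement the paper works with can be sketched as a finite-trace check of an LTL-style property. The property G(request -> X grant), the signal names, and the trace encoding below are illustrative, not drawn from the paper:

```python
# Toy illustration: checking an LTL-style requirement,
# G(request -> X grant), over a finite test-case trace.
# Signal names and trace encoding are illustrative only.

def holds_globally_request_then_grant(trace):
    """True if every 'request' step is immediately followed by a
    'grant' step. `trace` is a list of dicts mapping signal names
    to booleans. On a finite trace, the X (next) obligation at the
    last step is treated as unsatisfiable, a common finite-LTL choice.
    """
    for i, step in enumerate(trace):
        if step["request"]:
            if i + 1 >= len(trace) or not trace[i + 1]["grant"]:
                return False
    return True

passing = [{"request": True, "grant": False},
           {"request": False, "grant": True}]
failing = [{"request": True, "grant": False},
           {"request": False, "grant": False}]

print(holds_globally_request_then_grant(passing))  # True
print(holds_globally_request_then_grant(failing))  # False
```

A coverage metric defined on such a property would then ask, for example, whether the test suite includes traces that exercise both the satisfying and the vacuous cases of the implication.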


international conference on software engineering | 2008

The effect of program and model structure on MC/DC test adequacy coverage

Ajitha Rajan; Michael W. Whalen; Mats Per Erik Heimdahl

In avionics and other critical systems domains, adequacy of test suites is currently measured using the MC/DC metric on source code (or on a model in model-based development). We believe that the rigor of the MC/DC metric is highly sensitive to the structure of the implementation and can therefore be misleading as a test adequacy criterion. We investigate this hypothesis by empirically studying the effect of program structure on MC/DC coverage. To perform this investigation, we use six realistic systems from the civil avionics domain and two toy examples. For each of these systems, we use two versions of their implementation, with and without expression folding (i.e., inlining). To assess the sensitivity of MC/DC to program structure, we first generate test suites that satisfy MC/DC over a non-inlined implementation. We then run the generated test suites over the inlined implementation and measure the MC/DC achieved. For our realistic examples, the test suites yield an average reduction of 29.5% in MC/DC achieved over the inlined implementations, at a 5% statistical significance level.
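MC/DC requires each condition in a decision to be shown to independently affect the decision's outcome. A minimal sketch of finding such "independence pairs" is below; the example decision and all names are illustrative, not from the paper:

```python
# Toy sketch of the MC/DC independence-pair idea: for each condition,
# find two test vectors that differ only in that one condition and
# flip the decision outcome. Example decision is illustrative only.
from itertools import product

def independence_pairs(decision, n_conditions):
    """Map each condition index to a pair of vectors demonstrating that
    it independently affects the decision, or None if no pair exists."""
    pairs = {}
    for c in range(n_conditions):
        pairs[c] = None
        for vec in product([False, True], repeat=n_conditions):
            flipped = tuple(not v if k == c else v
                            for k, v in enumerate(vec))
            if decision(vec) != decision(flipped):
                pairs[c] = (vec, flipped)
                break
    return pairs

# The folded (inlined) form of a small decision: (a and b) or c.
inlined = lambda v: (v[0] and v[1]) or v[2]
pairs = independence_pairs(inlined, 3)
print(pairs)
```

Expression folding merges several small decisions into one larger one like `(a and b) or c`; a test suite that demonstrates independence within each small decision separately need not demonstrate it within the folded expression, which is the structural sensitivity the paper measures.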


asia-pacific software engineering conference | 2006

Interaction Testing in Model-Based Development: Effect on Model-Coverage

Renée C. Bryce; Ajitha Rajan; Mats Per Erik Heimdahl

Model-based software development is gaining interest in domains such as avionics, space, and automotive systems. The model serves as the central artifact for the development effort (for example, code generation); therefore, it is crucial that the model be extensively validated. Automatic generation of interaction test suites is a candidate for partial automation of this model validation task. Interaction testing is a combinatorial approach that systematically tests all t-way combinations of inputs for a system. In this paper, we report how well interaction test suites (2-way through 5-way) structurally cover a model of the mode logic of a flight guidance system. We conducted experiments to (1) compare the coverage achieved with interaction test suites to that of randomly generated tests and (2) determine if interaction test suites improve the coverage of black-box test suites derived from system requirements. The experiments show that the interaction test suites provide little benefit over the randomly generated tests and do not improve coverage of the requirements-based tests. These findings raise questions on the application of interaction testing in this domain.
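The t-way coverage idea can be sketched for t = 2 (pairwise): a suite is 2-way adequate if every pair of values is exercised for every pair of input parameters. The parameter domains and suite below are illustrative:

```python
# Sketch of t-way interaction coverage for t = 2 (pairwise): check
# whether a test suite exercises every value pair for every pair of
# input parameters. Domains and suite are illustrative only.
from itertools import combinations, product

def covers_all_pairs(tests, domains):
    """True if `tests` (tuples of parameter values) cover every value
    pair for every pair of parameter positions in `domains`."""
    for i, j in combinations(range(len(domains)), 2):
        needed = set(product(domains[i], domains[j]))
        seen = {(t[i], t[j]) for t in tests}
        if not needed <= seen:
            return False
    return True

domains = [(0, 1), (0, 1), (0, 1)]  # three boolean parameters
suite = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(covers_all_pairs(suite, domains))  # True
```

Here four tests give full pairwise coverage where exhaustive testing would need 2^3 = 8, and the gap widens rapidly with more parameters, which is what makes interaction testing attractive in the first place.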


ieee/aiaa digital avionics systems conference | 2008

On MC/DC and implementation structure: An empirical study

Mats Per Erik Heimdahl; Michael W. Whalen; Ajitha Rajan; Matt Staats

In civil avionics, obtaining DO-178B certification for highly critical airborne software requires that the adequacy of the code testing effort be measured using a structural coverage criterion known as Modified Condition and Decision Coverage (MC/DC). We hypothesized that the effectiveness of the MC/DC metric is highly sensitive to the structure of the implementation and can therefore be problematic as a test adequacy criterion. We tested this hypothesis by evaluating the fault-finding ability of MC/DC-adequate test suites on five industrial systems (flight guidance and display management). For each system, we created two versions of the implementation, with and without expression folding (i.e., inlining).


high-assurance systems engineering | 2007

Model Validation using Automatically Generated Requirements-Based Tests

Ajitha Rajan; Michael W. Whalen; Mats Per Erik Heimdahl

In current model-based development practice, validation that we are building a correct model is achieved by manually deriving requirements-based test cases for model testing. Model validation performed this way is time consuming and expensive, particularly in the safety-critical systems domain, where high confidence in model correctness is required. In an effort to reduce the validation effort, we propose an approach that automates the generation of requirements-based tests for model validation purposes. Our approach uses requirements formalized as LTL properties as a basis for test generation. Test cases are generated to provide rigorous coverage over these formal properties. We use an abstract model, called the Requirements Model, generated from requirements and environmental constraints for automated test case generation. We illustrate and evaluate our approach using three realistic or production examples from the avionics domain. The proposed approach was effective on two of the three examples used, owing to their extensive and well-defined sets of requirements.


international conference on software engineering | 2015

Optimising energy consumption of design patterns

Adel Noureddine; Ajitha Rajan

Software design patterns are widely used in software engineering to enhance productivity and maintainability. However, recent empirical studies revealed the high energy overhead in these patterns. Our vision is to automatically detect and transform design patterns during compilation for better energy efficiency without impacting existing coding practices. In this paper, we propose compiler transformations for two design patterns, Observer and Decorator, and perform an initial evaluation of their energy efficiency.
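The Observer pattern the paper targets can be shown in a minimal form; the compiler transformation the authors propose would replace this kind of dynamic notification indirection with more direct calls. The sketch below is illustrative Python, not the paper's implementation:

```python
# Minimal Observer pattern: the subject notifies registered observers
# through a dynamic-dispatch loop, the indirection a compiler
# transformation could fold into direct calls. Illustrative only.
class Subject:
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def notify(self, event):
        # Dynamic dispatch to every registered observer.
        for observer in self._observers:
            observer.update(event)

class Logger:
    def __init__(self):
        self.events = []

    def update(self, event):
        self.events.append(event)

subject, logger = Subject(), Logger()
subject.attach(logger)
subject.notify("temperature=21")
print(logger.events)  # ['temperature=21']
```

When the set of observers is known at compile time, `notify` can in principle be specialized to call `logger.update(...)` directly, trading the pattern's runtime flexibility for cheaper, more energy-efficient dispatch.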


automated software engineering | 2006

Coverage Metrics to Measure Adequacy of Black-Box Test Suites

Ajitha Rajan

In black-box testing, one is interested in creating a suite of tests from requirements that adequately exercise the behavior of a software system without regard to the internal structure of the implementation. In current practice, the adequacy of black-box test suites is inferred by examining coverage on an executable artifact, either source code or a software model. We propose the notion of defining structural coverage metrics directly on high-level formal software requirements. These metrics provide objective, implementation-independent measures of how well a black-box test suite exercises a set of requirements. We focus on structural coverage criteria on requirements formalized as linear temporal logic (LTL) properties and explore how they can be adapted to measure finite test cases. These criteria can also be used to automatically generate a requirements-based test suite. Unlike model or code-derived test cases, these tests are immediately traceable to high-level requirements.


automated software engineering | 2014

Accelerated test execution using GPUs

Ajitha Rajan; Subodh Sharma; Peter Schrammel; Daniel Kroening

As product life-cycles become shorter and the scale and complexity of systems increase, accelerating the execution of large test suites gains importance. Existing research has primarily focussed on techniques that reduce the size of the test suite. By contrast, we propose a technique that accelerates test execution, allowing test suites to run in a fraction of the original time, by parallel execution with a Graphics Processing Unit (GPU). Program testing, which is in essence execution of the same program with multiple sets of test data, naturally exhibits the kind of data parallelism that can be exploited with GPUs. Our approach simultaneously executes the program with one test case per GPU thread. GPUs have severe limitations, and we discuss these in the context of our approach and define the scope of our applications. We observe speed-ups up to a factor of 27 compared to single-core execution on conventional CPUs with embedded systems benchmark programs.
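The one-test-per-thread idea maps each test input to its own execution of the program under test. As a rough CPU analogy of that data parallelism (not the paper's GPU implementation), using only Python's standard library:

```python
# CPU analogy of the paper's data-parallel idea: execute the same
# program under test on many inputs concurrently, one test per worker.
# (The paper instead maps one test case to one GPU thread.)
from concurrent.futures import ThreadPoolExecutor

def program_under_test(x):
    # Stand-in for a real embedded benchmark program.
    return x * x - 1

def run_suite_parallel(test_inputs, workers=4):
    """Run every test input through the program, in parallel."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(program_under_test, test_inputs))

print(run_suite_parallel([1, 2, 3, 4]))  # [0, 3, 8, 15]
```

On a GPU the same shape scales to thousands of concurrent test executions, which is where the reported speed-ups come from, subject to the hardware limitations the paper discusses.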


ACM Sigsoft Software Engineering Notes | 2006

Automated requirements-based test case generation

Ajitha Rajan

Black-box testing is a technique in which test cases are derived from requirements without regard to the internal structure of the implementation. In current practice, black-box test cases are derived manually from requirements. Manually deriving test cases from requirements is a costly and time-consuming process. In this paper, we present the notion of autogenerating black-box test cases from requirements, which can result in dramatic time and cost savings. To accomplish this, we use requirements formalized as temporal logic properties. We define coverage metrics directly on the structure of the formalized requirements, and use an automated test case generation tool, such as a model checker, to generate test cases from formal requirements that satisfy the desired criteria. To evaluate the effectiveness of black-box test suites generated in this manner, we measure the implementation coverage achieved by the test suites, and their fault-finding effectiveness. In [11], we conducted a preliminary investigation using a close-to-production model of a Flight Guidance System developed at Rockwell Collins Inc. We autogenerated requirements-based test suites for three different requirements coverage criteria and evaluated them by measuring the implementation coverage achieved.

Collaboration


Dive into Ajitha Rajan's collaborations.

Top Co-Authors

Matt Staats

University of Minnesota

Vanya Yaneva

University of Edinburgh

German Vega

Centre national de la recherche scientifique
