Eric J. Rapos
Queen's University
Publications
Featured research published by Eric J. Rapos.
International Conference on Software Maintenance | 2014
Manar H. Alalfi; Eric J. Rapos; Andrew Stevenson; Matthew Stephan; Thomas R. Dean; James R. Cordy
This paper presents a semi-automated framework for identifying and representing different kinds of variability in Simulink models. Based on the observed variants found in similar subsystem patterns inferred using Simone, a text-based model clone detection tool, we propose a set of variability operators for Simulink models. By applying these operators to six example systems, we are able to represent the variability in their similar subsystem patterns as a single subsystem template directly in the Simulink environment. The product of our framework is a single consolidated subsystem model capable of expressing the observed variability across all instances of each inferred pattern. The process of pattern inference and variability analysis is largely automated and can easily be applied to other collections of Simulink models. The framework aims to assist engineers in identifying, understanding, and visualizing patterns of subsystems in a large model set. This understanding may help reduce maintenance effort and support bug identification at an early stage of software development.
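To make the consolidation idea concrete, here is a minimal sketch, assuming a clone class is given as a list of per-instance block-parameter maps (a hypothetical data model, not Simone's actual output): parameters that agree across all instances stay fixed in the template, while those that differ become explicit variation points.

```python
# Illustrative sketch only: consolidating a clone class of similar
# subsystems into one template with explicit variation points.
from collections import defaultdict

def consolidate(instances):
    """instances: one dict of block-parameter values per subsystem
    in the inferred clone class (hypothetical representation)."""
    values = defaultdict(set)
    for inst in instances:
        for param, value in inst.items():
            values[param].add(value)
    template = {}
    for param, seen in values.items():
        if len(seen) == 1:
            template[param] = seen.pop()          # common across all instances
        else:
            template[param] = {"variants": sorted(seen)}  # a variation point
    return template

pattern = [
    {"Gain": "2", "SampleTime": "0.01"},
    {"Gain": "5", "SampleTime": "0.01"},
]
print(consolidate(pattern))
# {'Gain': {'variants': ['2', '5']}, 'SampleTime': '0.01'}
```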
International Conference on Software Testing, Verification and Validation | 2012
Eric J. Rapos; Jürgen Dingel
Model-driven development (MDD) is on the rise in software engineering, nowhere more so than in the realm of real-time and embedded systems. Being able to leverage the code generation and validation techniques made available through MDD is worth exploring, and is a large area of focus in academic and industrial research. However, given the iterative nature of MDD, the evolution of models causes test case generation to occur multiple times throughout a software modeling project. The existing process of regenerating test cases for a modified model of a system can be costly, inefficient, and even redundant. Thus, our goal is to achieve an improved understanding of the impact of typical state machine evolution steps on test cases, and of how this impact can be mitigated by reusing previously generated test cases. We also aim to implement this in a software prototype to automate and evaluate our work.
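As an illustration of the kind of reuse analysis described above, the sketch below maps a state machine edit to the tests it invalidates, so unaffected tests can be reused rather than regenerated. The edit kinds and path-based test representation are assumptions for illustration, not the paper's actual formalization.

```python
# Hypothetical sketch: decide which tests a state-machine edit invalidates.
def affected_tests(edit, tests):
    """edit: (kind, element); tests: list of transition-name paths."""
    kind, element = edit
    if kind == "add_state":
        return []  # existing paths remain valid; new tests are needed separately
    if kind in ("remove_transition", "modify_guard"):
        # any path crossing the edited transition must be regenerated
        return [t for t in tests if element in t]
    return tests   # conservative default: regenerate everything

tests = [["t1", "t2"], ["t1", "t3"]]
print(affected_tests(("modify_guard", "t3"), tests))  # [['t1', 't3']]
```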
2016 IEEE/ACM 8th International Workshop on Modeling in Software Engineering (MiSE) | 2016
Eric J. Rapos; James R. Cordy
This paper presents an industrial case study that explores the co-evolution relationship between Matlab Simulink models and their associated test suites. Through an analysis of differences between releases of both the models and their tests, we are able to determine what the relationship between model evolution and test evolution is, or whether one exists at all. Using this comparison methodology, we present empirical results from a production system of 64 Matlab Simulink models evolving over 9 releases. In our work we show that in this system there is a strong co-evolution relationship (a correlation value of r = 0.9).
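For context on the reported statistic, here is a short worked sketch of Pearson's r computed over per-release change counts; the release figures below are invented placeholders, and only the computation itself is meaningful.

```python
# Sketch of the kind of correlation reported above: Pearson's r between
# per-release model-change counts and test-change counts.
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

model_changes = [12, 30, 7, 22, 15, 40, 9, 18, 25]  # hypothetical, 9 releases
test_changes  = [10, 28, 9, 20, 14, 38, 8, 16, 27]  # hypothetical
print(round(pearson_r(model_changes, test_changes), 3))
```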
International Conference on Software Testing, Verification and Validation | 2015
Eric J. Rapos
Model-based software is evolving at an increasing rate, and this has an impact on model-based test suites, often causing unnecessary regeneration of tests. Our work proposes that by examining evolution patterns of Simulink automotive models and their associated test models we can identify the direct impacts of evolution on the tests. Using these evolution patterns, we propose the design of a process to ensure that as a Simulink model evolves, its associated test models are automatically adapted, requiring minimal computation. This will lead to the development of a prototype tool capable of performing this co-evolution of model-based tests alongside their source models and presenting results to test engineers.
International Conference on Software Maintenance | 2014
Eric J. Rapos
Model-based software is evolving at an increasing rate, and this has an impact on model-based test suites, often causing unnecessary regeneration of tests. Our work proposes that by examining evolution patterns of Simulink automotive models and their associated test models we can identify the direct impacts of evolution on the tests. Using these evolution patterns, we propose the design of a process to ensure that as a Simulink model evolves, its associated test models are automatically adapted, requiring minimal computation. This will lead to the development of a prototype tool capable of performing this co-evolution of model-based tests alongside their source models and presenting results to test engineers.
Source Code Analysis and Manipulation | 2015
Eric J. Rapos; Andrew Stevenson; Manar H. Alalfi; James R. Cordy
SimNav is a graphical user interface designed for displaying and navigating clone classes of Simulink models detected by the model clone detector Simone. As an embedded Simulink interface tool, SimNav allows model developers to explore detected clones directly in their own model development environment rather than a separate research tool interface. SimNav allows users to open selected models for side-by-side comparison, in order to visually explore clone classes and view the differences in the clone instances, as well as to explore the context in which the clones exist. This tool paper describes the motivation, implementation, and use cases for SimNav.
International Conference on Software Testing, Verification and Validation | 2015
Eric J. Rapos; Jürgen Dingel
The relative ease of test case generation associated with model-based testing can lead to an increased number of test cases being identified for any given system; this is problematic, as it is becoming nearly impossible to run (or even generate) all of the possible tests in available time frames. Test case prioritization is a method of ranking tests in order of importance, or priority, based on criteria specific to a domain or implementation, and selecting some subset of tests to generate and run. Some approaches require the generation of all tests and simply prioritize the ones to be run; however, we propose an approach that avoids unnecessary generation of tests by using symbolic execution trees to determine which tests provide the most benefit to coverage of execution. Our approach makes use of fuzzy logic, specifically fuzzy control systems, to prioritize test cases generated from these execution trees; the prioritization is based on natural language rules about testing priority. In this paper we present our motivation, background research, methodology and implementation, results, and conclusions.
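A minimal sketch of the fuzzy-control idea, hand-rolled rather than taken from the paper's actual system: fuzzify a single input (a test's estimated coverage gain), fire rules such as "if coverage gain is high then priority is high", and defuzzify to a crisp score. The membership functions, rules, and output centroids are all illustrative assumptions.

```python
# Toy fuzzy controller for test prioritization (illustrative only).
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def priority(coverage_gain):
    """coverage_gain in [0, 1]; returns a crisp priority in [0, 1]."""
    low  = tri(coverage_gain, -0.5, 0.0, 0.5)
    mid  = tri(coverage_gain,  0.0, 0.5, 1.0)
    high = tri(coverage_gain,  0.5, 1.0, 1.5)
    # Rule firing strengths weight assumed output centroids, then defuzzify.
    weights = [(low, 0.2), (mid, 0.5), (high, 0.9)]
    total = sum(w for w, _ in weights)
    return sum(w * c for w, c in weights) / total if total else 0.0

for gain in (0.1, 0.5, 0.9):
    print(gain, round(priority(gain), 2))
```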
International Conference on Software Maintenance | 2017
Eric J. Rapos; James R. Cordy
With the increasing use of Simulink modeling in embedded system development, there comes a need for effective techniques and tools to support managing these models and their related artifacts. Because maintenance of models makes up such a large portion of the cost and effort of the system as a whole, it is increasingly important to ensure that the process of managing models is as simple, intuitive, and efficient as possible. Part of model management comes in the form of impact analysis: the ability to determine the impact of a change to a model on related artifacts such as test cases and other models. This paper presents an approach to impact analysis for Simulink models, and a tool that implements it (SimPact). We validate our tool as an impact predictor against the maintenance history of a large set of industrial models and their tests. The results show a high level of both precision and recall in predicting the actual impact of model changes on tests.
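The validation described above can be pictured as a set comparison; the sketch below (with made-up test names) computes precision and recall of a predicted impact set against the tests actually changed in the maintenance history.

```python
# Sketch of evaluating an impact predictor with precision and recall.
def precision_recall(predicted, actual):
    tp = len(predicted & actual)                    # correctly predicted impacts
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(actual) if actual else 0.0
    return precision, recall

predicted_impacted = {"test_brake", "test_gain", "test_mode"}    # hypothetical
actually_changed   = {"test_brake", "test_gain", "test_filter"}  # hypothetical
print(precision_recall(predicted_impacted, actually_changed))
# (0.666..., 0.666...): 2 of 3 predictions correct, 2 of 3 real impacts found
```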
International Conference on Software Testing, Verification and Validation | 2018
Eric J. Rapos; James R. Cordy
Archive | 2017
Eric J. Rapos