Matthew Stephan
Queen's University
Publications
Featured research published by Matthew Stephan.
IEEE Transactions on Software Engineering | 2009
Michal Antkiewicz; Krzysztof Czarnecki; Matthew Stephan
Framework-specific modeling languages (FSMLs) help developers build applications based on object-oriented frameworks. FSMLs model abstractions and rules of application programming interfaces (APIs) exposed by frameworks and can express models of how applications use APIs. Such models aid developers in understanding, creating, and evolving application code. We present four exemplar FSMLs and a method for engineering new FSMLs. The method was created postmortem by generalizing the experience of building the exemplars and by specializing existing approaches to domain analysis, software development, and quality evaluation of models and languages. The method is driven by the use cases that the FSML under development should support, and the evaluation of the constructed FSML is guided by two existing quality frameworks. The method description provides concrete examples for the engineering steps, outcomes, and challenges. It also provides strategies for making engineering decisions. Our work offers a concrete example of software language engineering and its benefits. FSMLs capture existing domain knowledge in language form and support application code understanding through reverse engineering, application code creation through forward engineering, and application code evolution through round-trip engineering.
international conference on software maintenance | 2012
Manar H. Alalfi; James R. Cordy; Thomas R. Dean; Matthew Stephan; Andrew Stevenson
While graph-based techniques show good results in finding exactly similar subgraphs in graphical models, they have great difficulty in finding near-miss matches. Text-based clone detectors, on the other hand, do very well with near-miss matching in source code. In this paper we introduce SIMONE, an adaptation of the mature text-based code clone detector NICAD to the efficient identification of structurally meaningful near-miss subsystem clones in graphical models. By transforming graph-based models to normalized text form, SIMONE extends NICAD to identify near-miss subsystem clones in Simulink models, uncovering important model similarities that are difficult to find in any other way.
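The core idea behind SIMONE — serializing a graph-based model into normalized text so a text-based detector can find near-miss matches — can be sketched roughly as follows. This is a minimal Python illustration, not the actual SIMONE/NICAD implementation; the dictionary model representation and the 0.7 similarity threshold are assumptions for the example.

```python
import difflib

def normalize_subsystem(subsystem):
    """Serialize a subsystem's blocks and connections into sorted,
    normalized text lines (instance-specific names abstracted away)."""
    lines = []
    for block in subsystem["blocks"]:
        # Keep the block type, drop the instance-specific name.
        lines.append(f"block {block['type']}")
    for src, dst in subsystem["lines"]:
        lines.append(f"line {src} -> {dst}")
    return sorted(lines)  # canonical order removes layout differences

def near_miss_clone(sub_a, sub_b, threshold=0.7):
    """Report a clone pair when normalized-text similarity exceeds
    the threshold, so slightly differing (near-miss) pairs qualify."""
    a, b = normalize_subsystem(sub_a), normalize_subsystem(sub_b)
    similarity = difflib.SequenceMatcher(None, a, b).ratio()
    return similarity >= threshold, similarity

# Two subsystems that differ in a single block: a near-miss pair.
s1 = {"blocks": [{"type": "Gain"}, {"type": "Sum"}, {"type": "Integrator"}],
      "lines": [(0, 1), (1, 2)]}
s2 = {"blocks": [{"type": "Gain"}, {"type": "Sum"}, {"type": "Delay"}],
      "lines": [(0, 1), (1, 2)]}

is_clone, score = near_miss_clone(s1, s2)  # similar, but not identical
```

An exact-match graph comparison would reject this pair outright; line-level text similarity instead scores it high enough to surface as a near-miss clone.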
international workshop on software clones | 2012
Matthew Stephan; Manar H. Alalfi; Andrew Stevenson; James R. Cordy
In this position paper we briefly review the Simulink model clone detection approaches in the literature, including a new one currently being developed, and outline our plan for an experimental comparison. We are using public and private Simulink models to compare approaches based on clone relevance, performance, types of clones detected, user interaction, adaptability, and the ability to identify recurring patterns using a combination of manual inspection and model visualization.
international conference on software engineering | 2013
Matthew Stephan; Manar H. Alalfi; Andrew Stevenson; James R. Cordy
Model-clone detection is a relatively new area and there are a number of different approaches in the literature. As the area continues to mature, it becomes necessary to evaluate and compare these approaches and validate new ones that are introduced. We present a mutation-analysis based model-clone detection framework that attempts to automate and standardize the process of comparing multiple Simulink model-clone detection tools or variations of the same tool. By having such a framework, new research directions in the area of model-clone detection can be facilitated as the framework can be used to validate new techniques as they arise. We begin by presenting challenges unique to model-clone tool comparison including recall calculation, the nature of the clones, and the clone report representation. We propose our framework, which we believe addresses these challenges. This is followed by a presentation of the mutation operators that we plan to inject into our Simulink models that will introduce variations of all the different model clone types that can then be searched for by each respective model-clone detector.
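The mutation-based evaluation loop the abstract describes — plant a known clone by mutating a model, run a detector, and check that the planted pair is reported — can be sketched as follows. This is a hypothetical Python illustration of the workflow, not the framework itself; the model structure and helper names are assumptions.

```python
import copy
import random

def mutate(model, rng):
    """Plant a known clone: copy a random subsystem and apply a small
    edit (here, renaming one block) so the pair is a near-miss clone."""
    mutant = copy.deepcopy(model)
    original = rng.choice(list(mutant["subsystems"]))
    clone = copy.deepcopy(mutant["subsystems"][original])
    if clone["blocks"]:
        clone["blocks"][0] = clone["blocks"][0] + "_renamed"
    clone_name = original + "_clone"
    mutant["subsystems"][clone_name] = clone
    return mutant, (original, clone_name)  # expected clone pair

def recall(expected_pairs, reported_pairs):
    """Fraction of planted clone pairs the detector actually reported
    (pair order in the report is ignored)."""
    found = sum(1 for p in expected_pairs
                if p in reported_pairs or (p[1], p[0]) in reported_pairs)
    return found / len(expected_pairs)

rng = random.Random(0)
model = {"subsystems": {"Controller": {"blocks": ["Gain", "Sum"]},
                        "Plant": {"blocks": ["Integrator"]}}}
mutant, pair = mutate(model, rng)
# A detector that reports exactly the planted pair scores recall 1.0.
perfect_recall = recall([pair], {pair})
```

Because the injected mutants are known in advance, recall can be computed objectively — which is exactly the recall-calculation challenge the abstract identifies for comparing detectors on real models.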
international workshop on software clones | 2012
Manar H. Alalfi; James R. Cordy; Thomas R. Dean; Matthew Stephan; Andrew Stevenson
This paper describes our plan to adapt mature code-based clone detection techniques to the efficient identification of near-miss clones in models. Our goal is to leverage successful source text-based clone detection techniques by transforming graph-based models to normalized text form in order to capture semantically meaningful near-miss results that can help in further model analysis tasks. In this position paper we present a first example, adapting the NiCad code clone detector to identifying near-miss Simulink model clones at the “system” granularity. In current work we are extending this technique to the Simulink (entire) “model” and (more refined) “block” granularities as well.
international conference on software maintenance | 2014
Manar H. Alalfi; Eric J. Rapos; Andrew Stevenson; Matthew Stephan; Thomas R. Dean; James R. Cordy
This paper presents a semi-automated framework for identifying and representing different kinds of variability in Simulink models. Based on the observed variants found in similar subsystem patterns inferred using Simone, a text-based model clone detection tool, we propose a set of variability operators for Simulink models. By applying these operators to six example systems, we are able to represent the variability in their similar subsystem patterns as a single subsystem template directly in the Simulink environment. The product of our framework is a single consolidated subsystem model capable of expressing the observed variability across all instances of each inferred pattern. The process of pattern inference and variability analysis is largely automated and can be easily applied to other collections of Simulink models. The framework is aimed at providing assistance to engineers to identify, understand, and visualize patterns of subsystems in a large model set. This understanding may help reduce maintenance effort and support bug identification at an early stage of software development.
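The consolidation step — merging the instances of an inferred pattern into one template whose differences become explicit variation points — can be sketched in a few lines. This is an illustrative Python sketch under the simplifying assumption that instances are represented as aligned, equal-length block lists; the real framework operates on full Simulink subsystems.

```python
def consolidate(instances):
    """Merge block lists of similar subsystem instances into a single
    template: positions where all instances agree stay fixed, and
    positions that differ become variation points listing the
    observed alternatives."""
    template = []
    for blocks in zip(*instances):  # assumes aligned, equal-length instances
        alternatives = sorted(set(blocks))
        if len(alternatives) == 1:
            template.append(alternatives[0])             # invariant block
        else:
            template.append({"variants": alternatives})  # variation point
    return template

# Three instances of one inferred pattern, differing in the third block:
pattern_instances = [
    ["Inport", "Gain", "Saturation", "Outport"],
    ["Inport", "Gain", "Saturation", "Outport"],
    ["Inport", "Gain", "RateLimiter", "Outport"],
]
template = consolidate(pattern_instances)
# template -> ['Inport', 'Gain',
#              {'variants': ['RateLimiter', 'Saturation']}, 'Outport']
```

The single template expresses all observed instances: an engineer reads one consolidated subsystem with one marked variation point instead of inspecting three near-duplicate subsystems.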
international conference on software testing verification and validation workshops | 2014
Matthew Stephan; Manar H. Alalfi; James R. Cordy
A relatively new and important branch of Mutation Analysis involves model mutations. In our attempts to realize model-clone detector testing, we found that there was little mutation research on Simulink, which is a fairly prevalent modeling language, especially in embedded domains. Because Simulink model mutations are the crux of our model-clone detector testing framework, we want to ensure that we are selecting the appropriate mutations. In this paper, we propose a taxonomy of Simulink model mutations, based on our experience with Simulink thus far, that aims to inject model clones of various types and is fairly representative of realistic Simulink edit operations. We organize the mutations into categories based on the types of model clones they will inject, and further break them down into mutation classes. For each class, we define the characteristics of mutation operators belonging to that class and demonstrate an example operator. Lastly, in an attempt to validate our taxonomy, we perform a case study on multiple versions of three Simulink projects, including an industrial project, to ascertain whether the actual subsystem edit operations observed across versions can be classified using our taxonomy, and we present any interesting cases. While we developed the taxonomy with the specific goal of facilitating and guiding the injection of mutants for model clones, we believe it is fairly general and a solid foundation for future Simulink model mutation work.
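Two mutation operators of the kind the taxonomy organizes can be sketched as follows: a renaming edit that leaves structure intact (injecting a renamed, Type-2-style clone) and a block deletion that changes structure (injecting a near-miss, Type-3-style clone). This is an illustrative Python sketch over a toy subsystem representation, not operators from the taxonomy itself.

```python
import copy

def rename_block(subsystem, index, new_name):
    """Type-2-style mutation: rename one block; structure unchanged."""
    mutant = copy.deepcopy(subsystem)
    mutant["blocks"][index]["name"] = new_name
    return mutant

def delete_block(subsystem, index):
    """Type-3-style mutation: remove a block and drop any connection
    touching it (remaining indices are left as-is in this sketch)."""
    mutant = copy.deepcopy(subsystem)
    del mutant["blocks"][index]
    mutant["lines"] = [(s, d) for s, d in mutant["lines"]
                       if s != index and d != index]
    return mutant

original = {"blocks": [{"name": "In1"}, {"name": "K"}, {"name": "Out1"}],
            "lines": [(0, 1), (1, 2)]}
type2_mutant = rename_block(original, 1, "Gain1")  # same shape, new name
type3_mutant = delete_block(original, 1)           # structurally changed
```

Each mutant paired with its original forms a planted clone of a known type, which is what lets the detector-testing framework check whether a tool finds clones of that type.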
international conference on software maintenance | 2014
Matthew Stephan
Model Clone Detection is a growing area within the field of software model maintenance. New model clone detection techniques and tools for different types of models are being created; however, there is no clear way of objectively and quantitatively evaluating and comparing them. In this paper, we provide a synopsis of our work in devising and validating an evaluation framework that uses Mutation Analysis to provide such a facility. In order to demonstrate the framework's feasibility and also walk through its steps, we develop a framework implementation for evaluating Simulink model clone detectors. This includes a taxonomy of Simulink mutations, Simulink clone report transformations, and more. We outline how the framework calculates precision and recall, and demonstrate it on multiple Simulink model clone detectors. In addition, we discuss areas of future work, including semantic clone mutations and developing framework implementations for other model types, like UML. Lastly, we address some lessons we learned during the Ph.D. process, such as partitioning the work into logical, self-contained milestones, and being open and willing to engage in other research. We hope that our framework will help cultivate further research gains in Model Clone Detection.
model driven engineering languages and systems | 2015
Matthew Stephan; James R. Cordy
One challenge facing the Model-Driven Engineering community is the need for model quality assurance. Specifically, there should be better facilities for analyzing models automatically. One measure of quality is the presence or absence of good and bad properties, such as patterns and antipatterns, respectively. We elaborate on and validate our earlier idea of detecting patterns in model-based systems using model clone detection by devising a Simulink antipattern instance detector. We chose Simulink because it is prevalent in industry, has mature model clone detection techniques, and interests our industrial partners. We demonstrate our technique using near-miss cross-clone detection to find instances of Simulink antipatterns derived from the literature in four sets of public Simulink projects. We present our detection results, highlight interesting examples, and discuss potential improvements to our approach. We hope this work provides a first step in helping practitioners improve Simulink model quality and further research in the area.
Software Testing, Verification & Reliability | 2018
Matthew Stephan; James R. Cordy
Model-driven engineering is an increasingly prevalent approach in software engineering where models are the primary artifacts throughout a project's life cycle. A growing form of analysis and quality assurance in these projects is model clone detection, which identifies similar model elements. As model clone detection research and tools emerge, methods must be established to assess model clone detectors and techniques. In this paper, we describe the MuMonDE framework, which researchers and practitioners can use to evaluate model clone detectors using mutation analysis on the models each detector is geared towards. MuMonDE applies mutation testing in a novel way by randomly mutating model elements within existing projects to emulate various types of clones that can exist within that domain. It consists of two main phases. The mutation phase involves determining the mutation targets, selecting the appropriate mutation operations, and injecting mutants. The second phase, evaluation, involves detecting model clones, preprocessing clone reports, analyzing those reports to calculate recall and precision, and visualizing the data. We introduce MuMonDE by describing each phase in detail. We present our experiences and examples in successfully developing a MuMonDE implementation capable of evaluating Simulink model clone detectors. We validate MuMonDE by demonstrating its ability to answer evaluation questions and provide insights based on the data it generates. With this research using mutation analysis, our goal is to improve model clone detection and its analytical capabilities, thus improving model-driven engineering as a whole.
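The evaluation phase's core computation — comparing a detector's preprocessed clone report against the pairs planted by mutation to obtain recall and precision — can be sketched as follows. This is an illustrative Python sketch, not MuMonDE's implementation; representing clones as name pairs and normalizing pair order are assumptions standing in for the framework's clone report preprocessing.

```python
def evaluate(planted, reported):
    """Compare a detector's clone report against the clone pairs
    planted by mutation.  Pairs are normalized so that ordering
    within a pair does not matter."""
    norm = lambda pairs: {tuple(sorted(p)) for p in pairs}
    planted_set, reported_set = norm(planted), norm(reported)
    true_positives = planted_set & reported_set
    recall = len(true_positives) / len(planted_set) if planted_set else 1.0
    precision = len(true_positives) / len(reported_set) if reported_set else 1.0
    return recall, precision

planted  = [("SubA", "SubA_mutant"), ("SubB", "SubB_mutant")]
reported = [("SubA_mutant", "SubA"),   # found (order ignored)
            ("SubC", "SubD")]          # spurious report
r, p = evaluate(planted, reported)
# r -> 0.5 (one of two planted clones found), p -> 0.5
```

Recall measures how many planted clones the detector found; precision penalizes reports that do not correspond to any planted clone. Running the same computation over reports from different detectors gives the comparable numbers the framework uses to answer evaluation questions.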