Protocol for a Systematic Mapping Study on Collaborative Model-Driven Software Engineering
Mirco Franzago, Davide Di Ruscio, Ivano Malavolta, Henry Muccini
Technical Report TRCS 001/2016
Dipartimento di Ingegneria e Scienze dell’Informazione e Matematica
Università degli Studi dell'Aquila
Via Vetoio Loc. Coppito
I-67010 L'Aquila, Italy
University of L’Aquila
Via Giovanni Di Vincenzo 16/B - 67100 L'Aquila - Italy
GSSI Gran Sasso Science Institute
Viale Francesco Crispi, 7 - 67100 L’Aquila - Italy
Document style © Yuriy Zachia Lun
Abstract
This document describes the review protocol for the conduct of a systematic mapping study on collaborative model-driven software engineering.
Keywords: Systematic Mapping Study, Collaborative Modeling, Model-Driven Engineering
Background and rationale
Context.
Collaborative software engineering (CoSE) deals with methods, processes and tools for enhancing collaboration, communication, and co-ordination (3C) among team members [1]. The importance of CoSE is evidenced in a number of sources [2-4] and has recently been amplified by the prominence of agile methods, open source software projects, and global software development [1]. CoSE is not only about software development team members; it can also embrace external and non-technical stakeholders, such as customers and final users, as advised by current research on Participatory Design methods [5, 6].

When focusing on software design, considered to be one of the key aspects of software engineering [7], multiple stakeholders with different technical knowledge and backgrounds collaborate on the development of the system [8]. In this context, shared (abstract) models of the system are extremely valuable, since they allow each stakeholder to focus on domain-specific concepts and to abstract over the aspects of the system in which she is more expert. A model is "a reduced representation of some system that highlights the properties of interest from a given viewpoint" [9]; it is a specific design artifact that can be graphical, XML-based, or textual. Models have an unequivocally defined semantics, which allows precise information exchange and many additional usages. Modeling, as opposed to simply drawing, grants a large set of additional advantages and uses, including: syntactical validation, model analysis, model simulation, model transformations, model execution (either through code generation or model interpretation), and model debugging [4].

Nowadays, collaborative modeling performed by multiple stakeholders is gaining growing interest in both academia and practice [3, 4].
However, it poses a set of research challenges, such as the management of large and complex models, support for multi-user modeling environments, and synchronization mechanisms like model migration and merging, conflict management, model versioning, and rollback support [3]. A body of knowledge about collaborative model-driven software engineering (MDSE) exists in the scientific literature. Still, those studies are scattered across different independent research areas, such as software engineering, model-driven engineering languages and systems, model integrated computing, etc., and a study classifying and comparing the various approaches and methods for collaborative MDSE is still missing.

Under this perspective, a systematic mapping study (SMS) [10] can help researchers and practitioners in (i) having a complete, comprehensive and valid picture of the state of the art about collaborative MDSE, and (ii) identifying potential gaps in current research and future research directions.
Goal of this work.
We are interested in identifying and classifying approaches, methods and techniques that support collaborative MDSE. We focus on those approaches in which several distributed technical and/or non-technical stakeholders collaborate to produce models of a software system, working in a shared environment, either synchronously or asynchronously. Stakeholders can include, but are not limited to, technical figures (modelers, designers, developers), domain experts, non-technical managers, customers, and users of the software system. We are interested in identifying and analyzing the different approaches to support multi-user modeling tasks where the design models can be either domain-specific or domain-independent. In any case, studied approaches must consider models as first-class elements within the whole design process. Also, studied approaches must provide synchronization mechanisms, e.g., conflict management/resolution, conflict avoidance, versioning and rollback support.
Based on our knowledge and after a manual search, we did not find any existing systematic study on the topic. Nevertheless, in the following we report those studies that, even if they have different scopes and objectives, are interesting to compare with our research in order to better understand: (i) our research focus, and (ii) how other studies are not sufficient to answer the research questions of our study.

Table 1 shows the existing systematic studies, their focus, and their quality assessment. Based on the criteria explained in [11], we calculate the total score of each study by summing up the answers to the specific questions Q1-Q4 (Yes (Y) = 1, Partly (P) = 0.5, No (N) = 0):

Q1) Are the systematic study's inclusion and exclusion criteria described and appropriate?
Q2) Is the literature search likely to have covered all relevant studies?
Q3) Did the reviewers assess the quality/validity of the included studies?
Q4) Were the basic data/studies adequately described?
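The scoring rule above can be sketched as a small helper. This is an illustrative sketch (the function name and the answer encoding are our own, not part of the protocol):

```python
# Score contribution of each possible answer, as defined by the Q1-Q4 rule:
# Yes (Y) = 1, Partly (P) = 0.5, No (N) = 0.
SCORES = {"Y": 1.0, "P": 0.5, "N": 0.0}

def quality_score(answers):
    """answers: mapping of question id (Q1..Q4) to 'Y', 'P', or 'N'."""
    return sum(SCORES[a] for a in answers.values())

# Example: the answers reported for study [13] in Table 1.
print(quality_score({"Q1": "Y", "Q2": "P", "Q3": "P", "Q4": "Y"}))  # 3.0
```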
Study   Year   Q1   Q2   Q3   Q4   Total score   Focus
[12]    2011   Y    Y    Y    Y    4.0           Ways of collaboration used in DSD
[13]    2012   Y    P    P    Y    3.0           Tools supporting distributed teams in GSE
[14]    2008   P    Y    P    P    2.5           Collaborative modeling support systems

Table 1: Existing systematic studies on collaborative software engineering or collaborative modeling

A systematic literature review on the models of collaboration in the domain of distributed software development (DSD) is presented in [12]. This study focuses on the models and tools for DSD based on the life cycle of traditional software development (and its variations), where each phase of the cycle is performed. Differently, in our study we focus on the various aspects of collaboration in the model-driven software engineering domain, where models are treated as first-class artifacts.

In [13] a systematic mapping study is proposed with the objective of discovering all the tools available in the literature supporting global software engineering (GSE) activities. Our work will be more comprehensive because it will identify (in addition to tool support) also methods, techniques and approaches to support modeling activities in collaborative settings, with a focus on design activities. Moreover, in the literature there are other systematic studies in the DSD and GSE scope [15], but all of them focus on tools and/or approaches to address issues like collaboration process management, team member awareness, or collaboration tool support; there is no existing study specifically focusing on collaborative MDSE.

Finally, in [14] the authors identify challenges and best practices in the collaborative modeling activity, where modelers, end-users and experts are all involved in the model-based design of the system, and collaborate to create a shared understanding of the system under development (or part of it).
Among others, the main difference between that study and ours is that in [14] collaborative modeling is considered as the joint creation of a shared graphical representation of a system, i.e., a sketching activity where the created models are used as a communication means between team members. This kind of models (possibly conforming to some syntactical rules) is very different from our definition of models, where modeling is a complex activity based on precise models whose semantics is rigorously defined according to a specific modeling language. These models allow precise information exchange but also many additional usages, including: syntactical validation, model checking, model transformation, and code generation [4].

The need for a systematic mapping study on collaborative MDSE has been introduced in Section 1. This research complements the existing studies described in Section 1.1 to investigate the state of research about collaborative MDSE (cf. Table 1). So far, a large body of knowledge has been proposed in both modeling software systems (e.g., model-driven engineering techniques, domain-specific modeling languages, model transformations, etc.) and collaboration for software production (e.g., global software engineering methods, methods for participatory design of software systems, version control systems, etc.). Even if the progress of research in the above mentioned areas started more than a decade ago and the various research communities are still very active, we did not find any evidence that could help us in assessing the impact of existing research on collaborative MDSE. Thus, in this systematic work we aim to identify, classify, and understand existing research on collaborative MDSE. Those activities will help researchers and practitioners in identifying trends, limitations, and gaps of current research on collaborative MDSE and its future potential.

Figure 1: Overview of the whole review process
Figure 1 shows an overview of the whole process of our research. The overall process can be divided into three main phases, which are the classical ones for carrying out a systematic study [10, 16]: planning, conducting, and documenting. Each phase can have a number of output artifacts, e.g., the planning phase produces the review protocol described in this document. In order to mitigate potential threats to validity and possible biases, the review protocol and final report artifacts will be circulated to external experts for independent review. More specifically, we identified two classes of external experts: SLR experts and domain experts. SLR experts are contacted for getting feedback about the proposed review protocol, possible unidentified threats to validity, and possible problems in the overall construction of the review; whereas domain experts are contacted for getting feedback about whether the proposed review protocol and final reports can be effective with respect to the object of our mapping study (i.e., collaborative MDSE). Table 2 shows the external reviewers we contacted and who replied. External reviewers had a time span of ten days to provide their feedback about the proposed artifacts.
SLR experts          Domain experts
Muhammad Ali Babar   Dimitrios Kolovos
                     Patricia Lago

Table 2: Contacted external reviewers

In the following we go through each phase of the process, highlighting its main activities and produced artifacts.

Planning
Planning is the first step of our review process, and it aims at (i) establishing the need for performing a mapping study on collaborative MDSE (see Section 1.2), (ii) identifying the main research questions (see Section 3), and (iii) defining the protocol to be followed by the involved researchers while carrying out each step of the whole review process (see the remainder of this document). The output of our planning phase is a well-defined review protocol, which is this document itself. The produced review protocol will undergo an external evaluation by the previously identified SLR and domain experts.
Conducting

In this phase we will put the previously defined protocol into practice. More specifically, we will perform the following activities:

• Search and selection: we will (i) consider the search strings identified in Section 4 and apply them to electronic data sources, and (ii) apply the backward and forward snowballing techniques for expanding the set of considered studies. The output of this activity is a comprehensive list of all the candidate entries; each entry will have a set of metadata associated with it (e.g., title, authors, etc.). Duplicated entries will be identified and merged by matching them by title, authors, year, and venue of publication. Then, the potentially relevant studies identified in the previous activity will be filtered in order to obtain the final list of primary studies to be considered in later activities of the protocol. Section 4 will describe in detail the search strategy of this research.

• Comparison framework definition: in this activity we will define the set of parameters that will be used to compare the primary studies. The main outcome of this activity is the data extraction form, which is designed to collect the information needed for analyzing the primary studies. The data extraction form will be designed based on the research questions [10].

• Data extraction: in this activity we will go into the details of each primary study and fill a corresponding data extraction form, as defined in the previous activity. Filled forms will be collected and aggregated in order to be ready for analysis in the next activity. More details about this activity will be presented in Section 7.

• Data synthesis: this activity will focus on a comprehensive summary and analysis of the data extracted in the previous activity. The main goal of this activity is to elaborate on the extracted data in order to address each research question of our study (see Section 3). This activity will involve both quantitative and qualitative analysis of the extracted data. The details about this activity are in Section 8.
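The duplicate-merging rule of the search and selection activity (matching entries by title, authors, year, and venue) could be implemented along these lines. This is a minimal sketch: the normalization applied before matching, the function names, and the example entries are assumptions of ours, not prescriptions of the protocol:

```python
def _norm(s):
    """Case-fold and collapse whitespace before comparing metadata fields."""
    return " ".join(str(s).lower().split())

def merge_duplicates(entries):
    """entries: list of dicts with 'title', 'authors', 'year', 'venue' keys.
    Keeps the first occurrence of each (title, authors, year, venue) key."""
    seen = {}
    for e in entries:
        key = (_norm(e["title"]), _norm(e["authors"]), e["year"], _norm(e["venue"]))
        seen.setdefault(key, e)
    return list(seen.values())

# Illustrative candidate entries, as returned by two different data sources.
candidates = [
    {"title": "A Web-based Modeling Environment", "authors": "Author A, Author B",
     "year": 2013, "venue": "Some Conference"},
    {"title": "A  web-based modeling  environment", "authors": "Author A, Author B",
     "year": 2013, "venue": "Some Conference"},
    {"title": "Another Study", "authors": "Author C",
     "year": 2010, "venue": "Some Journal"},
]
print(len(merge_duplicates(candidates)))  # 2
```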
Documenting

This phase is fundamental for reasoning on the obtained findings and for evaluating the quality of the systematic study. The main activities performed in this phase are: (i) a thorough elaboration on the data extracted in the previous phase, with the main aim of setting the obtained results in their context from both the academic and the practitioners' point of view; (ii) the analysis of possible threats to validity, especially the ones identified during the definition of the review protocol (in this activity new threats to validity may also emerge); and (iii) the writing of a set of reports describing the performed mapping study to different audiences (see Section 9). Firstly, the produced reports will be evaluated by SLR and domain experts; secondly, some of them will be submitted to scientific journals, conferences and magazines for professionals, and thus they will also undergo a peer-reviewed evaluation by the community.
Four researchers will carry out this study, because a 'too small' team size (e.g., a single reviewer) may have difficulties in controlling potential biases [17]. Each researcher has a specific role within the team; the identified roles are:

- Principal researcher: PhD student with knowledge about model-driven engineering and development; he will perform the majority of activities, from planning the study to reporting.

- Secondary researcher: post-doctoral researcher with expertise in both SLR methodologies and model-driven engineering; he is mainly involved in (i) the planning phase of the study, and (ii) supporting the principal researcher during the whole study, e.g., by reviewing the data extraction form, selected primary studies, extracted data, produced reports, etc.

- MDE expert: senior researcher with several years of expertise in model-driven engineering methods and techniques; he is mainly involved in supporting the principal and secondary researchers with respect to any potential issue or discussion related to the MDE methodology.

- Advisor: senior researcher with many years of expertise in software engineering and model-based design and development; he makes the final decision on conflicts and options to 'avoid endless discussions' [17], and supports the other researchers during the data synthesis, findings synthesis, and report writing activities.
Specifying the research questions is the most crucial part of a systematic mapping study [18]. Before going into the details of the identified research questions, we formulate the goal of this research by using the Goal-Question-Metric perspectives (i.e., purpose, issue, object, viewpoint [19]). Table 3 shows the result of the above mentioned formulation.
Purpose     Identify, classify, and understand
Issue       the publication trends, characteristics, and challenges
Object      of existing collaborative MDSE approaches
Viewpoint   from a researcher's viewpoint.

Table 3: Goal of this research

In the following we present the research questions we translated from the above mentioned overall goal. For each research question we also provide its primary objective of investigation. The research questions are:
RQ1: What are the characteristics of collaborative MDSE approaches?

This research question has been decomposed into more detailed sub-questions in order for it to be addressed. Those sub-questions come from the three dimensions of collaborative MDSE; we also added a specific sub-question for investigating how collaborative MDSE approaches are integrated with software engineering activities.

- RQ1.1: What are the characteristics of the model management infrastructure of existing collaborative MDSE approaches?
- RQ1.2: What are the characteristics of the collaboration means of existing collaborative MDSE approaches?
- RQ1.3: What are the characteristics of the communication means of existing collaborative MDSE approaches?

Objective: to identify and classify existing collaborative MDSE approaches according to the three dimensions of collaborative MDSE.
Outcome: a map that classifies a set of collaborative MDSE approaches based on different categories (e.g., characteristics of collaborative model editing environments, model versioning mechanisms, model repositories, support for communication and decision making, etc.).
RQ2: What are the challenges of existing collaborative MDSE approaches?

Objective: to identify current limitations and challenges with respect to the state of the art in collaborative MDSE.
Outcome: a map that classifies collaborative MDSE approaches with respect to their limitations, faced challenges, and future work.
RQ3: What are the publication trends about collaborative MDSE approaches over time?

Objective: to identify and classify the interest of researchers in collaborative MDSE approaches and their various characteristics over time.
Outcome: a map that classifies the collected primary studies according to publication year, venue, applied research strategies, etc. Also, the map will classify the collected primary studies according to their focus on the various characteristics of collaborative MDSE approaches over time.

The classification resulting from our investigation of RQ1, RQ2, and RQ3 will provide a solid foundation for a thorough identification and comparison of existing and future solutions for supporting collaborative MDSE. This contribution is useful for both researchers and practitioners willing to further contribute with new collaborative MDSE approaches, or willing to better understand or refine existing ones. The above listed research questions will drive the whole systematic mapping study methodology, with a special influence on the primary studies search process, the data extraction process, and the data analysis process.
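As a minimal illustration of the kind of publication-trend map targeted by RQ3, studies can be counted per publication year. The study records below are hypothetical placeholders, not data from this protocol:

```python
from collections import Counter

# Hypothetical primary-study records; only the publication year matters here.
studies = [{"year": 2007}, {"year": 2009}, {"year": 2009}, {"year": 2013}]

# One axis of the RQ3 map: number of primary studies per publication year.
per_year = Counter(s["year"] for s in studies)
print(per_year[2009])  # 2
```

The same Counter-based grouping applies unchanged to venue or research strategy instead of year.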
The success of any systematic study is deeply rooted in the retrieval of a set of primary studies which is relevant and representative enough of the topic being considered [16]. More specifically, it is fundamental to achieve a good trade-off between covering existing research on the considered topic and having a manageable number of studies to analyze. In order to achieve the above mentioned trade-off, as specified in [20], an optimal search strategy for an SLR should answer the following main questions:

1. Which approach is to be used in the search process?
2. Where to search, and which parts of an article should be searched?
3. What is to be searched, and what queries are fed into the search engines?
4. When is the search carried out, and what time span is to be searched?
Our search strategy consists of two main steps: (i) an automatic search on Electronic Data Sources (EDS), and (ii) a snowballing procedure. During the first step, following the guidelines in [16], we composed the search string based on keywords identified from the research questions and the area of study. The search strings are used to retrieve a set of potential primary studies through the web search engines provided by digital libraries. The guidelines in [16] recommend the use of other complementary searches, necessary to extend the coverage of the topic. For this purpose, we applied a snowballing procedure to the results of the automatic search. Snowballing refers to using the reference list of a paper (backward snowballing) or the citations to the paper (forward snowballing) to identify additional papers [21]. The start set for the snowballing procedure is composed of the selected papers retrieved by the automatic search, namely the primary studies selected by applying the inclusion/exclusion criteria to the automatic search results. In any case, the inclusion/exclusion criteria (see Section 5) will be applied to each paper and, if the paper can be included, snowballing will be applied iteratively. The procedure ends when no new papers are found.

Where to search?
According to [16], it is important to search many different electronic sources, because no single source is able to find all relevant primary studies. Table 4 shows the electronic databases we will use for our study. These are considered the main sources of literature for potentially relevant studies on software engineering [22]. Also, these EDSs have been selected based on recommendations made by experts in the area of software engineering.

Library                      Website
ACM Digital Library http://dl.acm.org
IEEE Xplore Digital Library http://ieeexplore.ieee.org
Web of Science http://apps.webofknowledge.com
ScienceDirect http://www.sciencedirect.com
SpringerLink http://link.springer.com
Wiley Online Library http://onlinelibrary.wiley.com/
Table 4: Electronic data sources targeted with search strings
A suitable search string will be the input to the electronic data sources identified in the previous section, matching against paper titles, abstracts, and keywords. According to the guidelines provided in [16], we will use the following systematic strategy for constructing our search string:

1. derive the major aspects relevant to the study, according to the research questions and to a set of known relevant papers (pilots). Table 5 shows the considered pilots. Each aspect is represented by a "cluster" that groups a set of terms; the identified clusters are collaborative and MDSE;
2. add keywords (main terms) to each cluster, obtained from known primary studies and research questions;
3. identify and include in each cluster synonyms and related terms of the main terms;
4. incorporate alternative spellings and synonyms using Boolean OR;
5. link the cluster keywords using Boolean AND.

Authors               Title                                                                                                  Year
Maróti et al. [23]    Next Generation (Meta)Modeling: Web- and Cloud-based Collaborative Tool Infrastructure                 2014
Syriani et al. [24]   AToMPM: A Web-based Modeling Environment                                                               2013
Farwick et al. [25]   A web-based collaborative metamodeling environment with secure remote model access                     2010
Thum et al. [26]      SLIM - A Lightweight Environment for Synchronous Collaborative Modeling                                2009
Cataldo et al. [27]   CAMEL: a tool for collaborative distributed software design                                            2009
Bruegge et al. [28]   Unicase - an Ecosystem for Unified Software Engineering Research Tools                                 2008
De Lucia et al. [29]  Enhancing collaborative synchronous UML modelling with fine-grained versioning of software artefacts   2007
Kelly et al. [30]     MetaEdit+: a fully configurable multi-user and multi-tool CASE and CAME environment                    1996

Table 5: Pilots

Following this strategy, after a series of test executions and refinements, the resulting search string is shown in the listing below. Note that we do not use Google Scholar in the automatic search, since it may generate many irrelevant results and has considerable overlap with ACM and IEEE on software engineering literature [22]; however, we will use Google Scholar in the forward snowballing procedure [21].

    (collaborat* OR coordinat* OR cooperat* OR concur* OR global)
    AND
    (MDE OR MDD OR MDA OR MDS* OR EMF OR DSL OR DSML
     OR "model driven" OR "eclipse modeling framework"
     OR "domain specific language" OR "domain specific modeling language")

Listing 1: Query string used for the automatic studies search

Each electronic data source has a specific syntax for search strings, so we adapted our generic search string to the specific syntax and criteria of each electronic data source.
We will include in our search all the studies coming from the selection step without publication year constraints, i.e., we will not consider the publication year as a criterion in the search and selection steps.
As suggested in [16], we decided the selection criteria of this study during its protocol definition, so as to reduce the likelihood of bias. In the following we provide the inclusion and exclusion criteria of our study. In this context, a study will be selected as a primary study if it satisfies all inclusion criteria, and it will be discarded if it meets any exclusion criterion.
I1) Studies proposing an MDSE method or technique for supporting the collaborative work of multiple stakeholders on models.
I2) Studies in which models are the primary artifacts within the collaboration process.
I3) Studies providing some kind of validation or evaluation of the proposed method or technique (e.g., via a case study, a survey, an experiment, exploitation in industry, formal analysis, or example usage).
I4) Studies subject to peer review [10] (e.g., journal papers and papers published as part of conference proceedings will be considered, whereas white papers will be discarded).
I5) Studies written in English and available in full text.
E1) Studies discussing only business processes and collaboration practices, without proposing a specific method or technique.
E2) Secondary studies (e.g., systematic literature reviews, surveys, etc.).
E3) Studies in the form of tutorial papers, long abstract papers, poster papers, or editorials, because they do not provide enough information.

It is important to note that, even if secondary studies will be excluded (see the E2 exclusion criterion), we will consider them in our study as follows:

- for checking the completeness of our set of primary studies (i.e., whether any relevant paper is missing from our study);
- for providing a summary of what is already known about collaborative MDSE;
- for identifying any important issues to be considered in our study;
- for defining the contribution of our study to the literature.

The definition of the above mentioned criteria has been tested by considering the pilot studies defined in Section 4; the criteria have been incrementally refined until they covered all the pilot studies.
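The selection rule (all inclusion criteria satisfied and no exclusion criterion met) can be expressed as a predicate. The criteria shown here are simplified boolean stand-ins for I4, I5 and E2, an illustrative sketch rather than a full encoding of the lists above:

```python
def select(study, inclusion, exclusion):
    """A study is selected iff it satisfies every inclusion criterion
    and meets no exclusion criterion."""
    return all(c(study) for c in inclusion) and not any(c(study) for c in exclusion)

inclusion = [
    lambda s: s["peer_reviewed"],          # stand-in for I4
    lambda s: s["language"] == "English",  # stand-in for I5
]
exclusion = [
    lambda s: s["type"] == "secondary",    # stand-in for E2
]

study = {"peer_reviewed": True, "language": "English", "type": "primary"}
print(select(study, inclusion, exclusion))  # True
```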
Selection procedure
As suggested by [10], two researchers will assess a random sample of the studies; then the inter-researcher agreement will be measured using the Cohen Kappa statistic and reported as a quality assessment of this stage in the final report. To be successful, the result of the Cohen Kappa statistic must be above or equal to the agreed threshold; otherwise each disagreement must be discussed and resolved, with the intervention of the team administrator, if necessary.

Moreover, if a primary study is published in more than one paper (for example, if a conference paper is extended to a journal version), only one instance will be counted as a primary study. Mostly, the journal version will be preferred, as it is the most complete, but both versions will be used in the data extraction phase [10]. Moreover, if we encounter blocking issues in extracting relevant data, supporting technical reports or communication with the authors may also serve as data sources for the extraction [10].

The goal of this step is to identify and collect from the selected primary studies the appropriate and relevant information to answer our research questions (see Section 3). To achieve this goal, we will define a rigorous comparison framework to store the extracted data in a structured manner; the classification framework will be composed of a list of attributes representing the set of data items extracted from each primary study. The creation of an effective classification framework demands a detailed analysis of the contents of each primary study. In light of this, we will follow a systematic process called keywording [31] for defining our classification framework. Basically, keywording aims at reducing the time needed to develop the classification framework and at ensuring that it takes all the primary studies into account [31].

Figure 2: Overview of the keywording process

Figure 2 shows the keywording process we will follow. Keywording is done in two steps:
1. Collect keywords and concepts: researchers collect keywords and concepts by reading each primary study. When all primary studies have been analyzed, all keywords and concepts are combined together to clearly identify the context, nature, and contribution of the research. The output of this stage is the set of keywords extracted from the primary studies.
2. Cluster keywords and concepts: when keywords and concepts have been finalized, researchers can perform a clustering operation on them in order to obtain a set of representative clusters of keywords. The output of this stage is the finalized classification framework containing all the identified attributes, each of them representing a specific aspect of collaborative MDSE.

Depending on the resulting classification framework, researchers can decide to extend it with additional attributes that may be of interest in the context of this research. Unless clearly motivated and documented in the final report, attributes cannot be removed from the classification framework.

In order to have a rigorous data extraction process and to ease the management of the extracted data, a structured data extraction form will be designed. Once the data extraction form is set up, the principal researcher will consider each primary study and fill the data extraction form accordingly. In order to validate our data extraction strategy, we will perform a sensitivity analysis to analyze whether the results are consistent independently of the researcher performing the analysis [10]. More specifically, we will take a random sample of 10 primary studies, and both the principal and the secondary researcher will classify them independently, by filling the data extraction form for each study. Then, the Cohen Kappa statistic will be applied to the obtained results to assess the level of agreement among the researchers. The value of the obtained Cohen Kappa statistic will be documented in the final report, and must be above or equal to the agreed threshold. If the result of the Cohen Kappa statistic is below that threshold, each disagreement must be discussed and resolved, with the intervention of the team advisor, if necessary.

The data synthesis activity involves collating and summarizing the data extracted from the primary studies [16].

Vertical analysis.
We will analyze the extracted data to find trends and to collect information about each research question of our study. Depending on the parameters of the classification framework (see Section 7), in this research we will apply both quantitative and qualitative synthesis methods, separately. When considering quantitative data, we will first verify whether the synthesized studies are homogeneously distributed in order to perform a meta-analysis. Then, depending on the specific data to be analyzed, we will perform a specific kind of meta-analysis. When considering qualitative data, we will apply line-of-argument synthesis [10], that is: first we will analyze the primary studies individually in order to document each of them and tabulate their main features with respect to each specific parameter of the classification framework defined in Section 7; then we will analyze the set of studies as a whole, in order to reason on potential patterns and trends. When both quantitative and qualitative analyses are performed, we will integrate their results in order to explain the quantitative results by means of the qualitative ones [16].

Horizontal analysis. In this phase we will analyze the extracted data to explore possible relations across different questions and facets of our research. We will cross-tabulate and group the data, and make comparisons between two or more nominal variables. The main goal of the horizontal analysis is to investigate the existence of possible interesting relations between data pertaining to different facets of our research. In this context, we will use cross-tabulation as the strategy for evaluating the actual existence of those relations.
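Cross-tabulating two nominal variables of the classification framework, as done in the horizontal analysis, can be sketched as follows. The variable names and values are illustrative assumptions, not attributes already fixed by the protocol:

```python
from collections import Counter

def cross_tab(records, var_a, var_b):
    """Count co-occurrences of the values of two nominal variables."""
    return Counter((r[var_a], r[var_b]) for r in records)

# Hypothetical extracted data: two nominal attributes per primary study.
extracted = [
    {"collaboration": "synchronous",  "versioning": "yes"},
    {"collaboration": "synchronous",  "versioning": "no"},
    {"collaboration": "asynchronous", "versioning": "yes"},
    {"collaboration": "synchronous",  "versioning": "yes"},
]

table = cross_tab(extracted, "collaboration", "versioning")
print(table[("synchronous", "yes")])  # 2
```

Each counter cell corresponds to one cell of the cross-tabulation table comparing the two variables.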
We are planning to report our systematic study to different audiences. In the following we list the actions we will undertake in our dissemination strategy:

1. we will report our main research-oriented findings and a detailed description of this study in an academic publication in a top-level academic journal; possible targets are the Information and Software Technology journal (IST), Transactions on Software Engineering (TSE), and ACM Computing Surveys (CSUR, http://csur.acm.org/);

2. as suggested in [10], an accompanying technical report will be published on-line; the technical report will present this protocol, the complete lists of included and excluded primary studies, and the raw data of this study; the chief aim of the technical report is to make our study replicable by interested researchers;

3. depending on the nature of the results, we will also target a practitioner-oriented magazine, with the goal of (hopefully) impacting and enhancing the current state of the practice of collaborative MDSE.

References

[1] I. Mistrík, J. Grundy, A. van der Hoek, J. Whitehead, Collaborative software engineering: Challenges and prospects, in: I. Mistrík, J. Grundy, A. van der Hoek, J. Whitehead (Eds.), Collaborative Software Engineering, Springer Berlin Heidelberg, 2010, pp. 389–403. doi:10.1007/978-3-642-10294-3_19.

[2] I. Mistrík, J. Grundy, A. van der Hoek, J. Whitehead (Eds.), Collaborative Software Engineering, Springer Berlin Heidelberg, 2010.

[3] D. S. Kolovos, L. M. Rose, N. Matragkas, R. F. Paige, E. Guerra, J. S. Cuadrado, J. De Lara, I. Ráth, D. Varró, M. Tisi, J. Cabot, A research roadmap towards achieving scalability in model driven engineering, in: Proceedings of the Workshop on Scalability in Model Driven Engineering, BigMDE '13, ACM, New York, NY, USA, 2013, pp. 2:1–2:10. doi:10.1145/2487766.2487768.

[4] M. Brambilla, J. Cabot, M. Wimmer, Model-Driven Software Engineering in Practice, Vol.
1, Morgan & Claypool Publishers, 2012.

[5] D. Schuler, A. Namioka (Eds.), Participatory Design: Principles and Practices, L. Erlbaum Associates Inc., Hillsdale, NJ, USA, 1993.

[6] K. Vredenburg, J.-Y. Mao, P. W. Smith, T. Carey, A survey of user-centered design practice, in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '02, ACM, New York, NY, USA, 2002, pp. 471–478. doi:10.1145/503376.503460.

[7] B. Bruegge, A. H. Dutoit, Object-Oriented Software Engineering Using UML, Patterns and Java, Prentice Hall, 2004.

[8] P. J. Denning, Design thinking, Commun. ACM 56 (12) (2013) 29–31. doi:10.1145/2535915.

[9] B. Selic, The pragmatics of model-driven development, IEEE Softw. 20 (5) (2003) 19–25. doi:10.1109/MS.2003.1231146.

[10] C. Wohlin, P. Runeson, M. Höst, M. Ohlsson, B. Regnell, A. Wesslén, Experimentation in Software Engineering, Springer, 2012.

[11] B. Kitchenham, R. Pretorius, D. Budgen, O. P. Brereton, M. Turner, M. Niazi, S. Linkman, Systematic literature reviews in software engineering – A tertiary study, Information and Software Technology 52 (8) (2010) 792–805. doi:10.1016/j.infsof.2010.03.006.

[12] R. G. C. Rocha, C. Costa, C. M. de Oliveira Rodrigues, R. R. de Azevedo, I. H. de Farias Junior, S. R. de Lemos Meira, R. Prikladnicki, Collaboration models in distributed software development: a systematic review, CLEI Electronic Journal 14 (2).

[13] J. Portillo-Rodríguez, A. Vizcaíno, M. Piattini, S. Beecham, Tools used in global software engineering: A systematic mapping review, Information and Software Technology 54 (7) (2012) 663–685. doi:10.1016/j.infsof.2012.02.006.

[14] M. Renger, G. L. Kolfschoten, G.-J. De Vreede, Challenges in collaborative modelling: a literature review and research agenda, International Journal of Simulation and Process Modelling 4 (3) (2008) 248–263. doi:10.1504/IJSPM.2008.023686.

[15] doi:10.1109/ICGSE.2012.29.

[16] B. A.
Kitchenham, S. Charters, Guidelines for performing systematic literature reviews in software engineering (2007).

[17] H. Zhang, M. A. Babar, Systematic reviews in software engineering: An empirical investigation, Information and Software Technology 55 (7) (2013) 1341–1354.

[18] P. Brereton, B. A. Kitchenham, D. Budgen, M. Turner, M. Khalil, Lessons from applying the systematic literature review process within the software engineering domain, Journal of Systems and Software 80 (4) (2007) 571–583.

[19] V. R. Basili, G. Caldiera, H. D. Rombach, The goal question metric approach, in: Encyclopedia of Software Engineering, Wiley, 1994.

[20] H. Zhang, M. A. Babar, P. Tell, Identifying relevant studies in software engineering, Information and Software Technology 53 (6) (2011) 625–637. doi:10.1016/j.infsof.2010.12.010.

[21] C. Wohlin, Guidelines for snowballing in systematic literature studies and a replication in software engineering, in: 18th International Conference on Evaluation and Assessment in Software Engineering, EASE '14, London, England, United Kingdom, May 13-14, 2014, p. 38. doi:10.1145/2601248.2601268.

[22] L. Chen, M. A. Babar, H. Zhang, Towards an evidence-based understanding of electronic data sources, in: Proceedings of the 14th International Conference on Evaluation and Assessment in Software Engineering, EASE '10, British Computer Society, Swinton, UK, 2010, pp. 135–138. URL http://dl.acm.org/citation.cfm?id=2227057.2227074

[23] M. Maróti, T. Kecskés, R. Kereskényi, B. Broll, P. Völgyesi, L. Jurácz, T. Levendovszky, Á. Lédeczi, Next generation (meta)modeling: Web- and cloud-based collaborative tool infrastructure, in: Proceedings of MPM, 2014, p. 41.

[24] E. Syriani, H. Vangheluwe, R. Mannadiar, C. Hansen, S. Van Mierlo, H.
Ergin, AToMPM: A web-based modeling environment, in: Demos/Posters/StudentResearch @ MoDELS, Citeseer, 2013, pp. 21–25.

[25] M. Farwick, B. Agreiter, J. White, S. Forster, N. Lanzanasto, R. Breu, A web-based collaborative metamodeling environment with secure remote model access, Springer, 2010.

[26] C. Thum, M. Schwind, M. Schader, SLIM – a lightweight environment for synchronous collaborative modeling, in: Model Driven Engineering Languages and Systems, Springer, 2009, pp. 137–151.

[27] M. Cataldo, C. Shelton, Y. Choi, Y.-Y. Huang, V. Ramesh, D. Saini, L.-Y. Wang, CAMEL: a tool for collaborative distributed software design, in: Fourth IEEE International Conference on Global Software Engineering, ICGSE 2009, IEEE, 2009, pp. 83–92.

[28] B. Bruegge, O. Creighton, J. Helming, M. Kögel, UNICASE – an ecosystem for unified software engineering research tools, in: Third IEEE International Conference on Global Software Engineering, ICGSE 2008, Citeseer, 2008.

[29] A. De Lucia, F. Fasano, G. Scanniello, G. Tortora, Enhancing collaborative synchronous UML modelling with fine-grained versioning of software artefacts, Journal of Visual Languages & Computing 18 (5) (2007) 492–503.

[30] S. Kelly, K. Lyytinen, M. Rossi, MetaEdit+: a fully configurable multi-user and multi-tool CASE and CAME environment, in: Advanced Information Systems Engineering, Springer, 1996, pp. 1–21.

[31] K. Petersen, R. Feldt, S. Mujtaba, M. Mattsson, Systematic mapping studies in software engineering, in: Proceedings of the 12th International Conference on Evaluation and Assessment in Software Engineering, EASE '08, British Computer Society, Swinton, UK, 2008, pp. 68–77. URL http://dl.acm.org/citation.cfm?id=2227115.2227123