Emilia Mendes
Blekinge Institute of Technology
Publications
Featured research published by Emilia Mendes.
IEEE Transactions on Software Engineering | 2007
Barbara A. Kitchenham; Emilia Mendes; Guilherme Horta Travassos
The objective of this paper is to determine under what circumstances individual organizations would be able to rely on cross-company-based estimation models. We performed a systematic review of studies that compared predictions from cross-company models with predictions from within-company models based on analysis of project data. Ten papers compared cross-company and within-company estimation models; however, only seven presented independent results. Of those seven, three found that cross-company models were not significantly different from within-company models, and four found that cross-company models were significantly worse than within-company models. Experimental procedures used by the studies differed, making it impossible to undertake formal meta-analysis of the results. The main trend distinguishing study results was that studies with small within-company data sets (i.e., fewer than 20 projects) that used leave-one-out cross-validation all found that the within-company model was significantly different from (better than) the cross-company model. The results of this review are inconclusive. It is clear that some organizations would be ill-served by cross-company models whereas others would benefit. Further studies are needed, but they must be independent (i.e., based on different databases or at least different single-company data sets) and should address specific hypotheses concerning the conditions that would favor cross-company or within-company models. In addition, experimenters need to standardize their experimental procedures to enable formal meta-analysis, and recommendations are made in Section 3.
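The review's core comparison is easy to sketch. Assuming a simple linear size-to-effort model and numpy arrays of project features and actual efforts (all names here are illustrative, not taken from any reviewed study), the two figures being contrasted look roughly like this:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def within_company_loocv_mre(X, y):
    """Leave-one-out cross-validation: fit on all of a company's
    projects but one, predict the held-out project, and record its
    magnitude of relative error, MRE = |actual - predicted| / actual."""
    mres = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        model = LinearRegression().fit(X[mask], y[mask])
        predicted = model.predict(X[i:i + 1])[0]
        mres.append(abs(y[i] - predicted) / y[i])
    return float(np.mean(mres))

def cross_company_mre(X_cc, y_cc, X_wc, y_wc):
    """Fit on the cross-company projects, then predict the single
    company's projects and report their mean MRE."""
    model = LinearRegression().fit(X_cc, y_cc)
    predicted = model.predict(X_wc)
    return float(np.mean(np.abs(y_wc - predicted) / y_wc))
```

The reviewed studies then test whether the within-company errors are significantly lower than the cross-company ones, typically with a paired non-parametric test.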
Empirical Software Engineering | 2003
Emilia Mendes; Ian D. Watson; C.M. Triggs; Nile Mosley; Steve Counsell
Software cost models and effort estimates help project managers allocate resources, control costs and schedules, and improve current practices, leading to projects finished on time and within budget. In the context of Web development these issues are also crucial, and very challenging, given that Web projects have short schedules and a fluid scope. In the context of Web engineering, few studies have compared the accuracy of different types of cost estimation techniques, with emphasis placed on linear and stepwise regressions and case-based reasoning (CBR). To date only one type of CBR technique has been employed in Web engineering. We believe results obtained from that study may have been biased, given that other CBR techniques can also be used for effort prediction. Consequently, the first objective of this study is to compare the prediction accuracy of three CBR techniques to estimate the effort to develop Web hypermedia applications, and to choose the one with the best estimates. The second objective is to compare the prediction accuracy of the best CBR technique against two commonly used prediction models, namely stepwise regression and regression trees. One dataset was used in the estimation process, and the results showed that the best predictions were obtained with stepwise regression.
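This summary does not say which three CBR variants were compared, but CBR for effort estimation generally varies either the similarity measure or the adaptation rule. Below is a minimal sketch, assuming feature-scaled Euclidean similarity and three adaptation rules commonly studied in this literature (closest analogue, mean of k analogues, inverse-rank-weighted mean); every name is illustrative:

```python
import numpy as np

def cbr_estimate(cases_X, cases_y, target, k=3, adaptation="mean"):
    """Case-based reasoning: retrieve the k past projects most similar
    to the target and adapt their known efforts into an estimate."""
    # Scale each attribute to [0, 1] so no single one dominates the distance.
    lo, hi = cases_X.min(axis=0), cases_X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)
    scaled_cases = (cases_X - lo) / span
    scaled_target = (target - lo) / span
    distances = np.sqrt(((scaled_cases - scaled_target) ** 2).sum(axis=1))
    nearest = np.argsort(distances)[:k]
    if adaptation == "closest":  # effort of the single closest analogue
        return cases_y[nearest[0]]
    if adaptation == "irwm":     # inverse-rank weighted mean: closer analogues count more
        weights = 1.0 / np.arange(1, len(nearest) + 1)
        return np.average(cases_y[nearest], weights=weights)
    return cases_y[nearest].mean()  # unweighted mean of the k analogues
```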
empirical software engineering and measurement | 2009
Mehwish Riaz; Emilia Mendes; Ewan D. Tempero
This paper presents the results of a systematic review conducted to collect evidence on software maintainability prediction and metrics. The study was targeted at the software quality attribute of maintainability, as opposed to the process of software maintenance. The evidence was gathered from the selected studies against a set of meaningful and focused questions. A total of 710 studies were initially retrieved; of these, only 15 were selected. Their quality was assessed, data extraction was performed, and the data were synthesized against the research questions. Our results suggest that there is little evidence on the effectiveness of software maintainability prediction techniques and models.
IEEE MultiMedia | 2001
Emilia Mendes; Nile Mosley; Steve Counsell
Like any software process, Web application development would benefit from early-stage effort estimates. Using an undergraduate university course as a case study, we collected metrics corresponding to Web applications, developers and tools. Then we used those metrics to generate models for predicting design and authoring effort for future Web applications.
ieee international software metrics symposium | 2004
Emilia Mendes; Barbara A. Kitchenham
This paper extends a previous study, using data on 67 Web projects from the Tukutuku database, investigating to what extent a cross-company cost model can be successfully employed to estimate effort for projects that belong to a single company, where no projects from this company were used to build the cross-company model. Our within-company model employed data on 14 Web projects from a single Web company. Our results were similar to those from the previous study, showing that predictions based on the within-company model were significantly more accurate than those based on the cross-company model. We also found that predictions were very poor when the within-company cost model was used to estimate effort for 53 Web projects from different companies. We analysed the data using two techniques, forward stepwise regression and case-based reasoning, and found that stepwise regression produced better estimates for the within-company model, while case-based reasoning produced better predictions for the cross-company model.
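Forward stepwise regression, one of the two techniques used here, grows a model greedily. A generic sketch follows (the study's exact entry criterion and tooling are not given in this summary, and the 0.05 threshold is an assumption):

```python
import numpy as np
import statsmodels.api as sm

def forward_stepwise(X, y, names, alpha=0.05):
    """Greedily add the candidate predictor whose coefficient is most
    significant when added to the current model; stop when no candidate
    enters with a p-value below alpha."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining:
        p_values = {}
        for j in remaining:
            design = sm.add_constant(X[:, selected + [j]])
            p_values[j] = sm.OLS(y, design).fit().pvalues[-1]  # p of the new term
        best = min(p_values, key=p_values.get)
        if p_values[best] >= alpha:
            break
        selected.append(best)
        remaining.remove(best)
    return [names[j] for j in selected]
```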
IEEE Transactions on Software Engineering | 2004
Barbara A. Kitchenham; Emilia Mendes
Productivity measures based on a simple ratio of product size to project effort assume that size can be determined as a single measure. If there are many possible size measures in a data set and no obvious model for aggregating the measures into a single measure, we propose using the expression AdjustedSize/Effort to measure productivity. AdjustedSize is defined as the output of the most appropriate regression-based effort estimation model, where all the size measures selected for inclusion in the estimation model have a regression parameter significantly different from zero (p < 0.05). This productivity measurement method ensures that each project has an expected productivity value of one. Values between zero and one indicate lower than expected productivity; values greater than one indicate higher than expected productivity. We discuss the assumptions underlying this productivity measurement method and present an example of its use for Web application projects. We also explain the relationship between effort prediction models and productivity models.
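A minimal sketch of the measure, assuming the size measures have already been screened so that only significant ones (p < 0.05) remain; the screening step itself is omitted:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def adjusted_size_productivity(size_measures, effort):
    """Productivity = AdjustedSize / Effort.  AdjustedSize is the effort
    the regression model predicts for a project of this size, so a ratio
    below one means the project consumed more effort than expected
    (lower productivity), and a ratio above one means it consumed less
    (higher productivity)."""
    model = LinearRegression().fit(size_measures, effort)
    adjusted_size = model.predict(size_measures)  # model's expected effort
    return adjusted_size / effort
```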
quality of information and communications technology | 2007
Angelica Caro; Coral Calero; Emilia Mendes; Mario Piattini
Advances in technology and the use of the Internet have favoured the emergence of a large number of Web applications, including Web portals. Web portals provide the means to obtain large amounts of information, so it is crucial that the information they provide is of high quality. In recent years, several research projects have investigated Web data quality; however, none has focused on data quality within the context of Web portals. The contribution of this research is therefore a framework, centred on the data consumer's point of view, that uses a probabilistic approach to evaluate the data quality of Web portals. This paper presents the definition of the framework's operational model, building on our previous work.
IEEE Transactions on Software Engineering | 2008
Emilia Mendes; Nile Mosley
The objective of this paper is to compare, using a cross-company dataset, several Bayesian network (BN) models for Web effort estimation. Eight BNs were built; four automatically using Hugin and PowerSoft tools with two training sets, each with 130 Web projects from the Tukutuku database; four using a causal graph elicited by a domain expert, with parameters automatically fit using the same training sets used in the automated elicitation (hybrid models). Their accuracy was measured using two validation sets, each containing data on 65 projects, and point estimates. As a benchmark, the BN-based estimates were also compared to estimates obtained using manual stepwise regression (MSWR), case-based reasoning (CBR), mean- and median-based effort models. MSWR presented significantly better predictions than any of the BN models built herein, and in addition was the only technique to provide significantly superior predictions to a median-based effort model. This paper investigated data-driven and hybrid BN models using project data from the Tukutuku database. Our results suggest that the use of simpler models, such as the median effort, can outperform more complex models, such as BNs. In addition, MSWR seemed to be the only effective technique for Web effort estimation.
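Two of the benchmarks here are simple to state: the median-based model predicts the training projects' median effort for every new project, and accuracy in this literature is conventionally summarized by MMRE and MdMRE (the specific point-estimate statistics used in the paper are not named in this summary, so the definitions below are the conventional ones):

```python
import numpy as np

def mmre(actual, predicted):
    """Mean magnitude of relative error across projects."""
    return float(np.mean(np.abs(actual - predicted) / actual))

def mdmre(actual, predicted):
    """Median magnitude of relative error (robust to outlier projects)."""
    return float(np.median(np.abs(actual - predicted) / actual))

def median_baseline(train_effort, n_test):
    """Median-based effort model: predict the median training effort
    for every test project."""
    return np.full(n_test, np.median(train_effort))
```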
ieee international software metrics symposium | 2002
Emilia Mendes; Ian D. Watson; C.M. Triggs; Nile Mosley; Steve Counsell
Several studies have compared the prediction accuracy of different types of techniques, with emphasis placed on linear and stepwise regressions and case-based reasoning (CBR). We believe the use of only one type of CBR technique may bias the results, as there are others that can also be used for effort prediction. This paper has two objectives. The first is to compare the prediction accuracy of three CBR techniques to estimate the effort to develop Web hypermedia applications. The second is to compare the prediction accuracy of the best CBR technique, according to our findings, against three commonly used prediction models, namely multiple linear regression, stepwise regression and regression trees. One dataset was used in the estimation process, and different measures of prediction accuracy gave different results: MMRE and MdMRE favoured the multiple regression models, whereas box plots favoured CBR.
international symposium on empirical software engineering | 2005
Emilia Mendes