Publication


Featured research published by Nile Mosley.


Empirical Software Engineering | 2003

A Comparative Study of Cost Estimation Models for Web Hypermedia Applications

Emilia Mendes; Ian D. Watson; C.M. Triggs; Nile Mosley; Steve Counsell

Software cost models and effort estimates help project managers allocate resources, control costs and schedules, and improve current practices, leading to projects finished on time and within budget. In the context of Web development these issues are also crucial, and very challenging, given that Web projects have short schedules and a very fluid scope. In the context of Web engineering, few studies have compared the accuracy of different types of cost estimation techniques, with emphasis placed on linear and stepwise regression and case-based reasoning (CBR). To date only one type of CBR technique has been employed in Web engineering. We believe results obtained from that study may have been biased, given that other CBR techniques can also be used for effort prediction. Consequently, the first objective of this study is to compare the prediction accuracy of three CBR techniques to estimate the effort to develop Web hypermedia applications and to choose the one with the best estimates. The second objective is to compare the prediction accuracy of the best CBR technique against two commonly used prediction models, namely stepwise regression and regression trees. One dataset was used in the estimation process, and the results showed that the best predictions were obtained with stepwise regression.
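The stepwise regression that comes out best in this study can be illustrated with a small forward-selection sketch. Everything below is an assumption made for illustration only: the size measures, the synthetic project data, and the 5% improvement threshold (a crude stand-in for the F-tests or p-values a statistics package would use).

```python
import numpy as np

# Hypothetical size measures (columns: page count, media count, program count)
# and effort values for a handful of Web hypermedia projects.
# All numbers are synthetic, for illustration only.
X = np.array([
    [25, 10, 3],
    [40, 22, 5],
    [12,  4, 1],
    [55, 30, 8],
    [33, 15, 4],
    [20,  8, 2],
], dtype=float)
y = np.array([110.0, 190.0, 55.0, 260.0, 150.0, 90.0])  # effort in person-hours

def rss(features):
    """Residual sum of squares of an OLS fit on the selected feature columns."""
    A = np.column_stack([np.ones(len(y))] + [X[:, j] for j in features])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sum((y - A @ coef) ** 2))

# Forward stepwise selection: greedily add the predictor that most reduces
# the RSS, stopping when the relative improvement drops below 5%.
selected, remaining = [], list(range(X.shape[1]))
current = rss(selected)  # intercept-only model
while remaining:
    best_j = min(remaining, key=lambda j: rss(selected + [j]))
    improvement = (current - rss(selected + [best_j])) / current
    if improvement < 0.05:
        break
    selected.append(best_j)
    remaining.remove(best_j)
    current = rss(selected)

print("selected predictor columns:", selected)
```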


IEEE MultiMedia | 2001

Web metrics - estimating design and authoring effort

Emilia Mendes; Nile Mosley; Steve Counsell

Like any software process, Web application development would benefit from early-stage effort estimates. Using an undergraduate university course as a case study, we collected metrics corresponding to Web applications, developers and tools. Then we used those metrics to generate models for predicting design and authoring effort for future Web applications.
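A minimal sketch of the idea of turning collected metrics into effort models, under assumed inputs: three invented per-application metrics are mapped to separate design-effort and authoring-effort models with ordinary least squares, then used to estimate a new application. The metric names and numbers are not the ones collected in the case study.

```python
import numpy as np

# Hypothetical per-application metrics (page count, reused-media count, link count);
# illustrative values only, not the case-study data.
metrics = np.array([
    [18,  5,  40],
    [30, 12,  75],
    [10,  2,  22],
    [45, 20, 120],
    [25,  9,  60],
], dtype=float)
design_effort    = np.array([30.0, 52.0, 18.0, 80.0, 44.0])   # person-hours
authoring_effort = np.array([55.0, 95.0, 30.0, 150.0, 78.0])  # person-hours

def fit_linear(X, y):
    """Ordinary least squares with an intercept term."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, x_new):
    return float(coef[0] + coef[1:] @ x_new)

design_model    = fit_linear(metrics, design_effort)
authoring_model = fit_linear(metrics, authoring_effort)

new_app = np.array([22.0, 7.0, 50.0])
print("predicted design effort:   ", round(predict(design_model, new_app), 1))
print("predicted authoring effort:", round(predict(authoring_model, new_app), 1))
```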


IEEE Transactions on Software Engineering | 2008

Bayesian Network Models for Web Effort Prediction: A Comparative Study

Emilia Mendes; Nile Mosley

The objective of this paper is to compare, using a cross-company dataset, several Bayesian network (BN) models for Web effort estimation. Eight BNs were built: four automatically, using the Hugin and PowerSoft tools with two training sets, each containing 130 Web projects from the Tukutuku database; and four using a causal graph elicited by a domain expert, with parameters automatically fitted using the same training sets as the automated elicitation (hybrid models). Their accuracy was measured using two validation sets, each containing data on 65 projects, and point estimates. As a benchmark, the BN-based estimates were also compared to estimates obtained using manual stepwise regression (MSWR), case-based reasoning (CBR), and mean- and median-based effort models. MSWR presented significantly better predictions than any of the BN models built herein and, in addition, was the only technique to provide significantly superior predictions to a median-based effort model. This paper investigated data-driven and hybrid BN models using project data from the Tukutuku database. Our results suggest that simpler models, such as the median effort, can outperform more complex models, such as BNs. In addition, MSWR seemed to be the only effective technique for Web effort estimation.
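The data-driven side of the BN approach can be sketched in a few lines by discretising project attributes and effort into bands, learning the conditional probability table P(effort | size, team) from frequency counts, and reading off a point estimate for a new project. The variables, bands, and records below are invented for illustration; they are not the Tukutuku variables, and real tools such as Hugin handle structure learning and inference far more generally.

```python
from collections import Counter, defaultdict

# Hypothetical training records: (size band, team-experience band, effort band).
# Synthetic data for illustration; a real model would discretise numeric
# project attributes instead.
projects = [
    ("small",  "high", "low"),
    ("small",  "low",  "medium"),
    ("medium", "high", "medium"),
    ("medium", "low",  "high"),
    ("large",  "high", "high"),
    ("large",  "low",  "high"),
    ("small",  "high", "low"),
    ("medium", "high", "medium"),
]

# Maximum-likelihood CPT: P(effort | size, team) estimated from joint counts.
counts = defaultdict(Counter)
for size, team, effort in projects:
    counts[(size, team)][effort] += 1

def effort_distribution(size, team):
    c = counts[(size, team)]
    total = sum(c.values())
    return {band: n / total for band, n in c.items()} if total else {}

# Point prediction for a new project: the most probable effort band given the evidence.
dist = effort_distribution("medium", "high")
print("P(effort | size=medium, team=high):", dist)
print("point estimate:", max(dist, key=dist.get))
```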


IEEE International Software Metrics Symposium | 2002

A comparison of development effort estimation techniques for Web hypermedia applications

Emilia Mendes; Ian D. Watson; C.M. Triggs; Nile Mosley; Steve Counsell

Several studies have compared the prediction accuracy of different types of techniques, with emphasis placed on linear and stepwise regression and case-based reasoning (CBR). We believe the use of only one type of CBR technique may bias the results, as there are others that can also be used for effort prediction. This paper has two objectives. The first is to compare the prediction accuracy of three CBR techniques to estimate the effort to develop Web hypermedia applications. The second is to compare the prediction accuracy of the best CBR technique, according to our findings, against three commonly used prediction models, namely multiple linear regression, stepwise regression, and regression trees. One dataset was used in the estimation process, and different measures of prediction accuracy gave different results: MMRE and MdMRE showed better prediction accuracy for the multiple regression models, whereas boxplots of the residuals showed better accuracy for CBR.
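MMRE and MdMRE, the summary measures referred to above, are simple to compute: the MRE of a project is |actual - estimated| / actual, MMRE is its mean over all projects and MdMRE its median. The efforts and estimates below are invented purely to show the calculation; they are not results from the paper.

```python
import statistics

def mre(actual, estimated):
    """Magnitude of Relative Error for a single project."""
    return abs(actual - estimated) / actual

# Invented actual efforts and two sets of estimates (e.g. regression vs. CBR).
actual     = [100.0, 250.0, 60.0, 180.0, 90.0]
regression = [120.0, 230.0, 75.0, 150.0, 95.0]
cbr        = [ 90.0, 300.0, 58.0, 200.0, 70.0]

for name, estimates in [("regression", regression), ("CBR", cbr)]:
    mres = [mre(a, e) for a, e in zip(actual, estimates)]
    print(f"{name:10s}  MMRE={statistics.mean(mres):.3f}  "
          f"MdMRE={statistics.median(mres):.3f}")
```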


IEE Proceedings - Software | 2002

Comparison of Web size measures for predicting Web design and authoring effort

Emilia Mendes; Nile Mosley; Steve Counsell

Software practitioners recognise the importance of realistic effort estimates for the successful management of software projects, the Web being no exception. Estimates are necessary throughout the whole development life cycle. They are fundamental when bidding for a contract or when determining a project's feasibility in terms of cost-benefit analysis. In addition, they allow project managers and development organisations to manage resources effectively. Size, which can be described in terms of length, functionality and complexity, is often a major determinant of effort. Most effort prediction models to date concentrate on functional measures of size, although length and complexity are also essential aspects of size. A case study evaluation is described, in which size metrics characterising length, complexity and functionality are obtained and used to generate effort prediction models for Web authoring and design. These size metrics are compared as effort predictors by generating corresponding prediction models and comparing their accuracy using boxplots of the residuals. Results suggest that, in general, all categories present similar prediction accuracy.
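The comparison strategy described above can be sketched as follows, under assumed data: fit one single-predictor model per size category (length, complexity, functionality) and compare the absolute residuals with boxplots. The example measures and numbers are illustrative, not the case-study metrics.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic per-project size measures (one representative per category) and effort.
length        = np.array([20, 35, 12, 50, 28, 16], dtype=float)   # e.g. page count
complexity    = np.array([40, 80, 25, 130, 60, 30], dtype=float)  # e.g. link count
functionality = np.array([ 5, 11,  3,  16,  8,  4], dtype=float)  # e.g. feature count
effort        = np.array([90, 160, 55, 240, 130, 70], dtype=float)

def abs_residuals(x, y):
    """Absolute residuals of a single-predictor OLS fit y ~ a + b*x."""
    A = np.column_stack([np.ones(len(x)), x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.abs(y - A @ coef)

data = [abs_residuals(m, effort) for m in (length, complexity, functionality)]
plt.boxplot(data)
plt.xticks([1, 2, 3], ["length", "complexity", "functionality"])
plt.ylabel("absolute residual (person-hours)")
plt.savefig("residual_boxplots.png")
```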


Web Engineering | 2006

The Need for Web Engineering: An Introduction

Emilia Mendes; Nile Mosley; Steve Counsell

The objective of this chapter is three-fold. First, it provides an overview of differences between Web and software development with respect to their development processes, technologies, quality factors, and measures. Second, it provides definitions for terms used throughout the book. Third, it discusses the need for empirical investigations in Web engineering and presents the three main types of empirical investigations — surveys, case studies, and formal experiments.


IEEE International Software Metrics Symposium | 2003

Early Web size measures and effort prediction for Web costimation

Emilia Mendes; Nile Mosley; Steve Counsell

Size measures for Web costimation proposed in the literature are invariably related to implemented Web applications. Even when targeted at measuring functionality based on function point analysis, researchers have only considered the final Web application, rather than requirements documentation generated using any existing Web development method. This makes their usefulness as early effort predictors questionable. In addition, it is believed that company-specific data provide a better basis for accurate estimates. Many software engineering researchers have compared the accuracy of company-specific data with multi-organisation databases; however, the datasets employed comprised data from conventional applications, and to date no similar comparison has been carried out for Web project datasets. This paper has two objectives. The first is to present a survey in which early size measures for Web costimation were identified using data collected from 133 Web companies worldwide; all companies included in the survey used Web forms to give quotes on Web development projects, based on gathered size measures. The second is to compare the prediction accuracy obtained with a Web company-specific dataset against that obtained with data from a multi-organisation database. Both datasets were obtained via Web forms used as part of a research project called Tukutuku. Our results show that the best predictions were obtained with the company-specific dataset, for both estimation techniques employed.
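The company-specific versus cross-company comparison can be sketched with entirely invented numbers: estimate a company's holdout projects once from its own history and once from a multi-organisation pool, then compare MMRE. For brevity the estimator here is simply the median effort of the training pool, a deliberately crude stand-in for the techniques used in the study.

```python
import statistics

def mmre(actual, estimated):
    return statistics.mean(abs(a - e) / a for a, e in zip(actual, estimated))

# Invented effort data (person-hours).
company_history      = [60.0, 75.0, 90.0, 70.0]            # one company's past projects
cross_company_pool   = [30.0, 200.0, 45.0, 350.0, 120.0,
                        80.0, 500.0, 65.0]                  # multi-organisation data
company_new_projects = [80.0, 68.0, 95.0]                   # holdout to be estimated

# Median-based estimator trained on each pool.
own_estimate   = statistics.median(company_history)
cross_estimate = statistics.median(cross_company_pool)

print("MMRE, company-specific model:",
      round(mmre(company_new_projects, [own_estimate] * 3), 3))
print("MMRE, cross-company model:   ",
      round(mmre(company_new_projects, [cross_estimate] * 3), 3))
```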


Computer Software and Applications Conference | 2002

The application of case-based reasoning to early Web project cost estimation

Emilia Mendes; Nile Mosley; Steve Counsell

The literature shows that over the years numerous techniques for estimating development effort have been suggested, derived from late project measures. However, for the successful management of software projects, estimates are necessary throughout the whole development life cycle. The objective of this paper is twofold. First, we describe the application of case-based reasoning (CBR) for estimating Web hypermedia development effort using measures collected at different stages in the development cycle. Second, we compare the prediction accuracy of those measures, obtained using different CBR configurations. Contrary to expectations, late measures did not show statistically significantly better predictions than early measures.


International Symposium on Empirical Software Engineering | 2002

Further investigation into the use of CBR and stepwise regression to predict development effort for Web hypermedia applications

Emilia Mendes; Nile Mosley

To date, studies using CBR for Web hypermedia effort prediction have not applied adaptation rules to adjust effort according to a given criterion. In addition, when applying n-fold cross-validation, their analysis has been limited to a maximum of three training sets, which, according to recent studies, may lead to untrustworthy results. This paper therefore has two objectives. The first is to further investigate the use of CBR for Web hypermedia effort prediction by comparing the prediction accuracy of eight CBR techniques, of which three have previously been compared. The second is to compare the prediction accuracy of the best CBR technique against stepwise regression, using a twenty-fold cross-validation. All prediction accuracies were measured using the Mean Magnitude of Relative Error (MMRE), the Median Magnitude of Relative Error (MdMRE), Prediction at level l (l = 25%), and boxplots of the residuals. One dataset was used in the estimation process and, according to all measures of prediction accuracy, stepwise regression showed the best prediction accuracy.
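The accuracy measures and the cross-validation scheme mentioned above can be sketched as follows. Pred(l) is the fraction of projects whose MRE does not exceed l (here 25%), and each fold is held out in turn and estimated from the rest. The data are invented and the mean-effort predictor is only a stand-in for CBR or stepwise regression.

```python
import statistics

# Invented project efforts (person-hours); in the study each project also carries
# size measures, omitted here to keep the cross-validation loop clear.
efforts = [60.0, 75.0, 90.0, 70.0, 120.0, 55.0, 200.0, 85.0,
           65.0, 140.0, 95.0, 110.0, 50.0, 160.0, 80.0, 72.0,
           105.0, 88.0, 130.0, 68.0]

def pred_at(l, mres):
    """Pred(l): fraction of projects with MRE <= l (e.g. l = 0.25)."""
    return sum(m <= l for m in mres) / len(mres)

# Twenty-fold cross-validation: hold out each fold, estimate it from the rest.
k = 20
mres = []
for i in range(k):
    test  = efforts[i::k]
    train = [e for j, e in enumerate(efforts) if j % k != i]
    estimate = statistics.mean(train)   # stand-in predictor
    mres += [abs(a - estimate) / a for a in test]

print("MMRE:    ", round(statistics.mean(mres), 3))
print("MdMRE:   ", round(statistics.median(mres), 3))
print("Pred(25):", round(pred_at(0.25, mres), 3))
```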


International World Wide Web Conferences | 2002

A comparison of case-based reasoning approaches

Emilia Mendes; Nile Mosley; Ian D. Watson

Over the years software engineering researchers have suggested numerous techniques for estimating development effort. These techniques have been classified mainly as algorithmic, machine learning, and expert judgement. Several studies have compared the prediction accuracy of those techniques, with emphasis placed on linear regression, stepwise regression, and case-based reasoning (CBR). To date no converging results have been obtained, and we believe they may be influenced by the use of the same CBR configuration. The objective of this paper is twofold: first, to describe the application of case-based reasoning for estimating the effort of developing Web hypermedia applications; second, to compare the prediction accuracy of different CBR configurations, using two Web hypermedia datasets. Results show that for both datasets the best estimates were obtained with a weighted Euclidean distance, using either one analogy (dataset 1) or three analogies (dataset 2). We therefore suggest that case-based reasoning is a candidate technique for effort estimation and, with the aid of an automated environment, can be applied to Web hypermedia development effort prediction.
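The winning configuration, weighted Euclidean distance with one or three analogies, reduces to a few lines. The case base, feature weights, and values below are assumptions for illustration; real setups typically normalise the features before computing distances.

```python
import math

# Hypothetical case base: (size-measure vector, known effort in person-hours).
case_base = [
    ((25.0, 10.0, 3.0), 110.0),
    ((40.0, 22.0, 5.0), 190.0),
    ((12.0,  4.0, 1.0),  55.0),
    ((55.0, 30.0, 8.0), 260.0),
    ((33.0, 15.0, 4.0), 150.0),
]
weights = (1.0, 0.5, 2.0)   # assumed feature weights

def weighted_euclidean(a, b, w):
    return math.sqrt(sum(wi * (ai - bi) ** 2 for ai, bi, wi in zip(a, b, w)))

def cbr_estimate(new_case, k=1):
    """Estimate effort as the mean effort of the k most similar past cases."""
    ranked = sorted(case_base,
                    key=lambda c: weighted_euclidean(new_case, c[0], weights))
    return sum(effort for _, effort in ranked[:k]) / k

new_project = (30.0, 12.0, 4.0)
print("1 analogy:  ", cbr_estimate(new_project, k=1))
print("3 analogies:", cbr_estimate(new_project, k=3))
```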

Collaboration


Dive into Nile Mosley's collaborations.

Top Co-Authors

Emilia Mendes, Blekinge Institute of Technology

Steve Counsell, Auckland University of Technology

C.M. Triggs, University of Auckland

Carmel Pollino, Commonwealth Scientific and Industrial Research Organisation