Stephen G. MacDonell
University of Otago
Publication
Featured research published by Stephen G. MacDonell.
Information & Software Technology | 1997
Andrew Gray; Stephen G. MacDonell
The use of regression analysis to derive predictive equations for software metrics has recently been complemented by increasing numbers of studies using non-traditional methods, such as neural networks, fuzzy logic models, case-based reasoning systems, and regression trees. There has also been an increasing level of sophistication in the regression-based techniques used, including robust regression methods, factor analysis, and more effective validation procedures. This paper examines the implications of using these methods and provides some recommendations as to when they may be appropriate. A comparison of the various techniques is also made in terms of their modelling capabilities with specific reference to software metrics.
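The regression-based approach discussed above is often applied by fitting a log-linear relationship between size and effort. As a minimal sketch, with invented size/effort data (not drawn from the paper), an ordinary least-squares fit on log-transformed values recovers a power-law model effort = a * size^b:

```python
import math

def fit_log_linear(sizes, efforts):
    """OLS on log-transformed data, yielding effort = a * size^b."""
    xs = [math.log(s) for s in sizes]
    ys = [math.log(e) for e in efforts]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Invented size (KLOC) / effort (person-month) pairs, for illustration only.
sizes = [10, 25, 40, 80, 120]
efforts = [24, 70, 110, 230, 340]
a, b = fit_log_linear(sizes, efforts)
print(f"effort ~= {a:.2f} * size^{b:.2f}")
```

The log transform is what makes the multiplicative model tractable with linear least squares; robust-regression variants, as the paper notes, address the influence of outlying projects on such fits.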
ACM Computing Surveys | 2011
Laurie McLeod; Stephen G. MacDonell
Determining the factors that have an influence on software systems development and deployment project outcomes has been the focus of extensive and ongoing research for more than 30 years. We provide here a survey of the research literature that has addressed this topic in the period 1996–2006, with a particular focus on empirical analyses. On the basis of this survey we present a new classification framework that represents an abstracted and synthesized view of the types of factors that have been asserted as influencing project outcomes.
IEEE International Software Metrics Symposium | 1999
Andrew Gray; Stephen G. MacDonell; Martin J. Shepperd
Estimation of project development effort is most often performed by expert judgment rather than by using an empirically derived model (although such a model may be used by the expert to assist their decision). One question that can be asked about these estimates is how stable they are with respect to characteristics of the development process and product. This stability can be assessed in relation to the degree to which the project has advanced over time, the type of module for which the estimate is being made, and the characteristics of that module. In this paper we examine a set of expert-derived estimates for the effort required to develop a collection of modules from a large health-care system. Statistical tests are used to identify relationships between the type (screen or report) and characteristics of modules and the likelihood of the associated development effort being underestimated, approximately correct, or overestimated. Distinct relationships are found which suggest that the estimation process being examined was not unbiased with respect to such characteristics. This is a potentially useful finding in that it provides an opportunity for estimators to improve their prediction performance.
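A standard way to test for the kind of association described above (module type versus estimation outcome) is a chi-square test on a contingency table. The sketch below uses invented counts, not the paper's data, and computes only the Pearson statistic; comparison against a critical value or p-value would complete the test:

```python
def chi_square(table):
    """Pearson chi-square statistic for a contingency table (list of rows)."""
    row_totals = [sum(r) for r in table]
    col_totals = [sum(c) for c in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (obs - expected) ** 2 / expected
    return stat

# Invented counts: rows = module type (screen, report);
# columns = estimate outcome (under, about right, over).
table = [[30, 40, 10],
         [10, 35, 25]]
stat = chi_square(table)
print(round(stat, 2))
```

With (2-1)*(3-1) = 2 degrees of freedom, a large statistic here would indicate that estimation bias is not independent of module type, which is the shape of finding the paper reports.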
North American Fuzzy Information Processing Society | 1997
Andrew Gray; Stephen G. MacDonell
Software metrics are measurements of the software development process and product that can be used as variables (both dependent and independent) in models for project management. The most common types of these models are those used for predicting the development effort for a software system based on size, complexity, developer characteristics, and other metrics. Despite the financial benefits from developing accurate and usable models, there are a number of problems that have not been overcome using the traditional techniques of formal and linear regression models. These include the nonlinearities and interactions inherent in complex real-world development processes, the lack of stationarity in such processes, over-commitment to precisely specified values, the small quantities of data often available, and the inability to use whatever knowledge is available where exact numerical values are unknown. The use of alternative techniques, especially fuzzy logic, is investigated and some usage recommendations are made.
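The fuzzy logic approach advocated above lets imprecise knowledge ("a small system needs low effort") enter the model directly. A minimal sketch, with wholly invented membership functions and rule consequents, using triangular memberships and weighted-centre defuzzification:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def estimate_effort(size):
    """Two illustrative rules (invented for this sketch):
       IF size is small THEN effort is low  (centre 50 person-hours)
       IF size is large THEN effort is high (centre 400 person-hours)"""
    small = tri(size, 0, 10, 50)
    large = tri(size, 10, 80, 200)
    weights = [(small, 50.0), (large, 400.0)]
    total = sum(w for w, _ in weights)
    if total == 0:
        return None  # size falls outside the modelled range
    # Weighted-centre defuzzification over the fired rules.
    return sum(w * c for w, c in weights) / total

print(estimate_effort(30))
```

Because rules and memberships can be elicited from experts rather than fitted from data, this style of model sidesteps the small-sample and exact-value problems the abstract identifies, at the cost of precision.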
Project Management Journal | 2012
Laurie McLeod; Bill Doolin; Stephen G. MacDonell
Answering the call for alternative approaches to researching project management, we explore the evaluation of project success from a subjectivist perspective. An in-depth, longitudinal case study of information systems development in a large manufacturing company was used to investigate how various project stakeholders subjectively perceived the project outcome and what evaluation criteria they drew on in doing so. A conceptual framework is developed for understanding and analyzing evaluations of project success, both formal and informal. The framework highlights how different stakeholder perspectives influence the perceived outcome(s) of a project, and how project evaluations may differ between stakeholders and across time.
IEEE Transactions on Software Engineering | 2010
Stephen G. MacDonell; Martin J. Shepperd; Barbara A. Kitchenham; Emilia Mendes
Background: The systematic review is becoming a more commonly employed research instrument in empirical software engineering. Before undue reliance is placed on the outcomes of such reviews, it would seem useful to consider the robustness of the approach in this particular research context. Objective: The aim of this study is to assess the reliability of systematic reviews as a research instrument. In particular, we wish to investigate the consistency of process and the stability of outcomes. Method: We compare the results of two independent reviews undertaken with a common research question. Results: The two reviews find similar answers to the research question, although the means of arriving at those answers vary. Conclusions: In addressing a well-bounded research question, groups of researchers with similar domain experience can arrive at the same review outcomes, even though they may do so in different ways. This provides evidence that, in this context at least, the systematic review is a robust research method.
IEEE International Software Metrics Symposium | 1997
Stephen G. MacDonell; Martin J. Shepperd; Philip J. Sallis
An important task for any software project manager is to be able to predict and control project size and development effort. Unfortunately, there is comparatively little work, other than function points, that tackles the problem of building prediction systems for software that is dominated by data considerations, in particular systems developed using 4GLs. We describe an empirical investigation of 70 such systems. Various easily obtainable counts were extracted from data models (e.g. number of entities) and from specifications (e.g. number of screens). Using simple regression analysis, a prediction system of implementation size with an accuracy of MMRE = 21% was constructed. This approach offers several advantages. First, there tend to be fewer counting problems than with function points, since the metrics we used were based upon simple counts. Second, the prediction systems were calibrated to specific local environments rather than being based upon industry weights. We believe this enhanced their accuracy. Our work shows that it is possible to develop simple and useful local prediction systems based upon metrics easily derived from functional specifications and data models, without recourse to overly complex metrics or analysis techniques. We conclude that this type of use of metrics can provide valuable support for the management and control of 4GL and database projects.
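The accuracy figure quoted above, MMRE, is the mean magnitude of relative error: the average of |actual - predicted| / actual over the evaluated projects. A sketch of the calculation, with invented values rather than the study's data:

```python
def mmre(actuals, predictions):
    """Mean Magnitude of Relative Error: mean of |actual - predicted| / actual."""
    return sum(abs(a - p) / a for a, p in zip(actuals, predictions)) / len(actuals)

# Invented actual vs. predicted implementation sizes, for illustration only.
actual = [100, 200, 150, 300]
predicted = [110, 180, 160, 250]
print(f"MMRE = {mmre(actual, predicted):.0%}")
```

Lower is better; an MMRE of 21% means predictions deviated from actual values by about a fifth on average. Note that MMRE penalises over- and under-estimates asymmetrically, since the denominator is the actual value.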
International Conference on E-Business and Telecommunication Networks | 2004
Georgia Frantzeskou; Stefanos Gritzalis; Stephen G. MacDonell
Cybercrime has increased in severity and frequency in recent years and, because of this, has become a major concern for companies, universities and organizations. The anonymity offered by the Internet has made the task of tracing criminal identity difficult. One field of study that has contributed to tracing criminals is authorship analysis of e-mails, messages and programs. This paper contains a study of source code authorship analysis. The aim of the research efforts in this area is to identify the author of a particular piece of code by examining its programming style characteristics. Borrowing extensively from the existing fields of linguistics and software metrics, this field attempts to investigate various aspects of computer program authorship. Source code authorship analysis could be applied in cases of cyber attacks, plagiarism and computer fraud. In this paper we present the set of tools and techniques used to achieve the goal of authorship identification, a review of the research efforts in the area, and a new taxonomy of source code authorship analysis.
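Programming style characteristics of the kind mentioned above are typically layout- and lexical-level measurements. As a sketch (this feature set is hypothetical, not the paper's), a few such metrics extracted from raw source text:

```python
def style_metrics(source: str) -> dict:
    """Simple layout-level style measurements of the kind used in
    source code authorship analysis (hypothetical feature set)."""
    lines = source.splitlines()
    nonblank = [l for l in lines if l.strip()]
    return {
        "mean_line_length": sum(len(l) for l in nonblank) / len(nonblank),
        "blank_line_ratio": (len(lines) - len(nonblank)) / len(lines),
        "comment_ratio": sum(1 for l in nonblank
                             if l.lstrip().startswith("#")) / len(nonblank),
        "tab_indented": any(l.startswith("\t") for l in lines),
    }

sample = "def f(x):\n\t# double the input\n\treturn 2 * x\n\nprint(f(21))\n"
m = style_metrics(sample)
print(m)
```

In practice such feature vectors, computed per author over known code samples, feed a classifier that attributes an unseen fragment to its most likely author.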
Software Engineering Journal | 1994
Stephen G. MacDonell
Budgetary constraints are placing increasing pressure on project managers to effectively estimate development effort requirements at the earliest opportunity. With the rising impact of automation on commercial software development, the attention of researchers developing effort estimation models has recently been focused on functional representations of systems, in response to the assertion that development effort is a function of specification content. A number of such models exist; several, however, have received almost no research or industry attention. Project managers wishing to implement a functional assessment and estimation programme are therefore unlikely to be aware of the various methods or how they compare. This paper therefore provides this information, as well as forming a basis for the development and improvement of new methods.
Empirical Software Engineering | 1999
Andrew Gray; Stephen G. MacDonell
Whilst some software measurement research has been unquestionably successful, other research has struggled to enable expected advances in project and process management. Contributing to this lack of advancement has been the incidence of inappropriate or non-optimal application of various model-building procedures. This obviously raises questions over the validity and reliability of any results obtained as well as the conclusions that may have been drawn regarding the appropriateness of the techniques in question. In this paper we investigate the influence of various data set characteristics and the purpose of analysis on the effectiveness of four model-building techniques—three statistical methods and one neural network method. In order to illustrate the impact of data set characteristics, three separate data sets, drawn from the literature, are used in this analysis. In terms of predictive accuracy, it is shown that no one modeling method is best in every case. Some consideration of the characteristics of data sets should therefore occur before analysis begins, so that the most appropriate modeling method is then used. Moreover, issues other than predictive accuracy may have a significant influence on the selection of model-building methods. These issues are also addressed here and a series of guidelines for selecting among and implementing these and other modeling techniques is discussed.