Gada F. Kadoda
Bournemouth University
Publications
Featured research published by Gada F. Kadoda.
Journal of Systems and Software | 2000
Carolyn Mair; Gada F. Kadoda; Martin Lefley; Keith Phalp; Chris Schofield; Martin J. Shepperd; Steve Webster
Traditionally, researchers have used either off-the-shelf models such as COCOMO, or developed local models using statistical techniques such as stepwise regression, to obtain software effort estimates. More recently, attention has turned to a variety of machine learning methods such as artificial neural networks (ANNs), case-based reasoning (CBR) and rule induction (RI). This paper outlines some comparative research into the use of these three machine learning methods to build software effort prediction systems. We briefly describe each method and then apply the techniques to a dataset of 81 software projects derived from a Canadian software house in the late 1980s. We compare the prediction systems in terms of three factors: accuracy, explanatory value and configurability. We show that ANN methods have superior accuracy and that RI methods are least accurate. However, this view is somewhat counteracted by problems with explanatory value and configurability. For example, we found that considerable effort was required to configure the ANN and that this compared very unfavourably with the other techniques, particularly CBR and least squares regression (LSR). We suggest that further work be carried out, both to further explore interaction between the end-user and the prediction system, and also to facilitate configuration, particularly of ANNs.
Lecture Notes in Computer Science | 2001
Alan F. Blackwell; Carol Britton; Anna L. Cox; Thomas R. G. Green; Corin A. Gurr; Gada F. Kadoda; Maria Kutar; Martin J. Loomes; Chrystopher L. Nehaniv; Marian Petre; Chris Roast; Chris P. Roe; Allan Wong; Richard M. Young
The Cognitive Dimensions of Notations framework has been created to assist the designers of notational systems and information artifacts to evaluate their designs with respect to the impact that they will have on the users of those designs. The framework emphasizes the design choices available to such designers, including characterization of the users' activity, and the inevitable tradeoffs that will occur between potential design options. The resulting framework has been under development for over 10 years, and now has an active community of researchers devoted to it. This paper first introduces Cognitive Dimensions. It then summarizes the current activity, especially the results of a one-day workshop devoted to Cognitive Dimensions in December 2000, and reviews the ways in which it applies to the field of Cognitive Technology.
Empirical Software Engineering | 2000
Martin J. Shepperd; Michelle Cartwright; Gada F. Kadoda
Building and evaluating prediction systems is an important activity for software engineering researchers. Increasing numbers of techniques and datasets are now being made available. Unfortunately, systematic comparison is hindered by the use of different accuracy indicators and evaluation processes. We argue that these indicators are statistics that describe properties of the estimation errors or residuals and that the sensible choice of indicator is largely governed by the goals of the estimator. For this reason it may be helpful for researchers to provide a range of indicators. We also argue that it is useful to formally test for significant differences between competing prediction systems and note that where only a few cases are available this can be problematic; in other words, the research instrument may have insufficient power. We demonstrate that this is the case for a well known empirical study of cost models. Simulation, however, could be one means of overcoming this difficulty.
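The abstract's point that accuracy indicators are just statistics over the residuals can be made concrete with a minimal sketch. MMRE (mean magnitude of relative error) and Pred(25) are indicators commonly used in the effort estimation literature; the numeric data below is purely illustrative and not taken from the paper.

```python
# Illustrative sketch: two common residual-based accuracy indicators for
# effort prediction systems. Each summarises a different property of the
# residuals, so the sensible choice depends on the estimator's goals.

def mmre(actual, predicted):
    """Mean magnitude of relative error: mean of |actual - predicted| / actual."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

def pred(actual, predicted, level=0.25):
    """Pred(l): proportion of estimates within `level` of the actual value."""
    hits = sum(1 for a, p in zip(actual, predicted) if abs(a - p) / a <= level)
    return hits / len(actual)

# Hypothetical actual vs. predicted effort values (person-months)
actual = [100.0, 250.0, 80.0, 400.0]
predicted = [110.0, 200.0, 85.0, 390.0]

print(round(mmre(actual, predicted), 3))   # 0.097
print(pred(actual, predicted))             # 1.0
```

MMRE penalises large relative errors heavily, while Pred(25) rewards consistency; two prediction systems can be ranked differently by the two indicators, which is exactly why the authors recommend reporting a range of them.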
International Conference on Case-Based Reasoning | 2001
Gada F. Kadoda; Michelle Cartwright; Martin J. Shepperd
This paper explores some of the practical issues associated with the use of case-based reasoning (CBR) or estimation by analogy for software project effort prediction. Different research teams have reported varying experiences with this technology. We take the view that the problems hindering the effective use of CBR technology are twofold. First, the underlying characteristics of the datasets play a major role in determining which prediction technique is likely to be most effective. Second, when CBR is that technique, we find that configuring a CBR system can also have a significant impact upon predictive capabilities. In this paper we examine the performance of CBR when applied to various datasets using stepwise regression (SWR) as a benchmark. We also explore the impact of the choice of number of analogies and the size of the training dataset when making predictions.
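Estimation by analogy, as discussed above, predicts the effort of a new project from its most similar past projects. A minimal sketch, assuming Euclidean distance over project features and a simple mean of the analogies' efforts; the features, case values, and choice of k are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of CBR / estimation by analogy: predict effort for a new
# project by averaging the effort of its k nearest analogies in feature space.
import math

def k_nearest_effort(cases, target, k=2):
    """cases: list of (feature_tuple, effort); target: feature tuple."""
    def dist(features):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(features, target)))
    nearest = sorted(cases, key=lambda c: dist(c[0]))[:k]
    return sum(effort for _, effort in nearest) / k

# Hypothetical historical projects: (size in KLOC, team size) -> effort (person-months)
cases = [((10.0, 3.0), 24.0), ((12.0, 4.0), 30.0), ((50.0, 10.0), 120.0)]
print(k_nearest_effort(cases, (11.0, 3.0), k=2))   # 27.0
```

The two configuration decisions the paper examines map directly onto this sketch: the number of analogies is the `k` parameter, and the size of the training dataset is the length of `cases`.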
IEEE Transactions on Software Engineering | 2001
Martin J. Shepperd; Gada F. Kadoda
Archive | 2000
Gada F. Kadoda; Michelle Cartwright; Li-Guang Chen; Martin J. Shepperd
IEEE International Software Metrics Symposium | 2001
Martin J. Shepperd; Gada F. Kadoda
Proceedings Seventh International Software Metrics Symposium | 2001
Martin J. Shepperd; Gada F. Kadoda
PPIG | 2000
Gada F. Kadoda
Archive | 2000
Martin J. Shepperd; Gada F. Kadoda