Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Barbara A. Kitchenham is active.

Publication


Featured research published by Barbara A. Kitchenham.


IEEE Transactions on Software Engineering | 2002

Preliminary guidelines for empirical research in software engineering

Barbara A. Kitchenham; Shari Lawrence Pfleeger; Lesley Pickard; Peter Jones; D.C. Hoaglin; K. El Emam; J. Rosenberg

Empirical software engineering research needs research guidelines to improve the research and reporting processes. We propose a preliminary set of research guidelines aimed at stimulating discussion among software researchers. They are based on a review of research guidelines developed for medical researchers and on our own experience in doing and reviewing software engineering research. The guidelines are intended to assist researchers, reviewers, and meta-analysts in designing, conducting, and evaluating empirical studies. Editorial boards of software engineering journals may wish to use our recommendations as a basis for developing guidelines for reviewers and for framing policies for dealing with the design, data collection, and analysis and reporting of empirical studies.


IEEE Transactions on Software Engineering | 1995

Towards a framework for software measurement validation

Barbara A. Kitchenham; Shari Lawrence Pfleeger; Norman E. Fenton

In this paper we propose a framework for validating software measurement. We start by defining a measurement structure model that identifies the elementary component of measures and the measurement process, and then consider five other models involved in measurement: unit definition models, instrumentation models, attribute relationship models, measurement protocols and entity population models. We consider a number of measures from the viewpoint of our measurement validation framework and identify a number of shortcomings; in particular we identify a number of problems with the construction of function points. We also compare our view of measurement validation with ideas presented by other researchers and identify a number of areas of disagreement. Finally, we suggest several rules that practitioners and researchers can use to avoid measurement problems, including the use of measurement vectors rather than artificially contrived scalars.
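To make the closing recommendation concrete, here is a minimal illustration (hypothetical measures, units, and weights, not taken from the paper) of why keeping a vector of measures preserves information that an artificially contrived scalar discards.

```python
# Hypothetical module measurements (names, units, and values invented for illustration).
defect_density = 0.8      # defects per KLOC
change_frequency = 12.0   # changes per month
module_size = 4.2         # KLOC

# A measurement vector keeps each attribute separate and interpretable.
measurement_vector = (defect_density, change_frequency, module_size)

# A contrived scalar such as an arbitrary weighted sum mixes incompatible units and
# hides which attribute is responsible for a high or low score.
contrived_score = 0.5 * defect_density + 0.3 * change_frequency + 0.2 * module_size

print(measurement_vector, contrived_score)
```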


International Conference on Software Engineering | 2004

Evidence-based software engineering

Barbara A. Kitchenham; Tore Dybå; Magne Jørgensen

Our objective is to describe how software engineering might benefit from an evidence-based approach and to identify the potential difficulties associated with the approach. We compared the organisation and technical infrastructure supporting evidence-based medicine (EBM) with the situation in software engineering. We considered the impact that factors peculiar to software engineering (i.e. the skill factor and the lifecycle factor) would have on our ability to practice evidence-based software engineering (EBSE). EBSE promises a number of benefits by encouraging integration of research results with a view to supporting the needs of many different stakeholder groups. However, we do not currently have the infrastructure needed for widespread adoption of EBSE. The skill factor means software engineering experiments are vulnerable to subject and experimenter bias. The lifecycle factor means it is difficult to determine how technologies will behave once deployed. Software engineering would benefit from adopting what it can of the evidence approach provided that it deals with the specific problems that arise from the nature of software engineering.


IEEE Software | 1995

Case studies for method and tool evaluation

Barbara A. Kitchenham; Lesley Pickard; Shari Lawrence Pfleeger

Case studies help industry evaluate the benefits of methods and tools and provide a cost-effective way to ensure that process changes provide the desired results. However, unlike formal experiments and surveys, case studies do not have a well-understood theoretical basis. This article provides guidelines for organizing and analyzing case studies so that they produce meaningful results.


IEEE Software | 1996

Software quality: the elusive target [special issues section]

Barbara A. Kitchenham; Shari Lawrence Pfleeger

If you are a software developer, manager, or maintainer, quality is often on your mind. But what do you really mean by software quality? Is your definition adequate? Is the software you produce better or worse than you would like it to be? We put software quality on trial, examining both the definition and evaluation of our software products and processes.


IEEE Transactions on Software Engineering | 2003

A simulation study of the model evaluation criterion MMRE

Tron Foss; Erik Stensrud; Barbara A. Kitchenham; Ingunn Myrtveit

The mean magnitude of relative error, MMRE, is probably the most widely used evaluation criterion for assessing the performance of competing software prediction models. One purpose of MMRE is to assist us to select the best model. In this paper, we have performed a simulation study demonstrating that MMRE does not always select the best model. Our findings cast some doubt on the conclusions of any study of competing software prediction models that use MMRE as a basis of model comparison. We therefore recommend not using MMRE to evaluate and compare prediction models. At present, we do not have any universal replacement for MMRE. Meanwhile, we therefore recommend using a combination of theoretical justification of the models that are proposed together with other metrics proposed in this paper.
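For reference, MMRE is the mean of the magnitudes of relative error, MMRE = (1/n) * sum_i |actual_i - predicted_i| / actual_i. The sketch below (hypothetical data, not the paper's simulation) shows the conventional computation for two competing models.

```python
def mmre(actuals, predictions):
    """Mean Magnitude of Relative Error over paired actual/predicted values."""
    assert len(actuals) == len(predictions) and actuals, "need paired, non-empty data"
    relative_errors = [abs(a - p) / a for a, p in zip(actuals, predictions)]
    return sum(relative_errors) / len(relative_errors)

# Hypothetical example: two competing effort-prediction models on the same projects.
actual_effort = [100.0, 250.0, 40.0]   # person-hours (made up)
model_a = [120.0, 230.0, 44.0]
model_b = [90.0, 260.0, 70.0]
print(mmre(actual_effort, model_a))    # ~0.13
print(mmre(actual_effort, model_b))    # ~0.30
```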


International Conference on Software Engineering | 1996

Effort estimation using analogy

Martin J. Shepperd; Chris Schofield; Barbara A. Kitchenham

The staff resources or effort required for a software project are notoriously difficult to estimate in advance. To date most work has focused upon algorithmic cost models such as COCOMO and Function Points. These can suffer from the disadvantage of the need to calibrate the model to each individual measurement environment coupled with very variable accuracy levels even after calibration. An alternative approach is to use analogy for estimation. We demonstrate that this method has considerable promise in that we show it to outperform traditional algorithmic methods for six different datasets. A disadvantage of estimation by analogy is that it requires a considerable amount of computation. The paper describes an automated environment known as ANGEL that supports the collection, storage and identification of the most analogous projects in order to estimate the effort for a new project. ANGEL is based upon the minimisation of Euclidean distance in n-dimensional space. The software is flexible and can deal with differing datasets both in terms of the number of observations (projects) and in the variables collected. Our analogy approach is evaluated with six distinct datasets drawn from a range of different environments and is found to outperform other methods. It is widely accepted that effective software effort estimation demands more than one technique. We have shown that estimating by analogy is a candidate technique and that, with the aid of an automated environment, it is an eminently practical technique.
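As a rough illustration of the distance-based idea behind ANGEL (a hedged sketch with invented feature names and values, not the ANGEL implementation), the effort for a new project can be taken from the most analogous completed project, where analogy is measured by Euclidean distance over normalised project features.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def estimate_by_analogy(new_features, history):
    """history: list of (feature_vector, actual_effort) for completed projects.
    Returns the effort of the single most analogous (closest) past project."""
    closest_features, closest_effort = min(
        history, key=lambda project: euclidean(new_features, project[0]))
    return closest_effort

# Hypothetical, already-normalised features: (size, team size, interface count).
past_projects = [
    ((0.20, 0.30, 0.10), 120.0),
    ((0.70, 0.80, 0.60), 410.0),
    ((0.35, 0.40, 0.20), 180.0),
]
print(estimate_by_analogy((0.30, 0.35, 0.15), past_projects))  # -> 180.0
```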


IEEE Transactions on Software Engineering | 2007

Cross versus Within-Company Cost Estimation Studies: A Systematic Review

Barbara A. Kitchenham; Emilia Mendes; Guilherme Horta Travassos

The objective of this paper is to determine under what circumstances individual organizations would be able to rely on cross-company-based estimation models. We performed a systematic review of studies that compared predictions from cross-company models with predictions from within-company models based on analysis of project data. Ten papers compared cross-company and within-company estimation models; however, only seven presented independent results. Of those seven, three found that cross-company models were not significantly different from within-company models, and four found that cross-company models were significantly worse than within-company models. Experimental procedures used by the studies differed, making it impossible to undertake a formal meta-analysis of the results. The main trend distinguishing study results was that studies with small within-company data sets (i.e., fewer than 20 projects) that used leave-one-out cross validation all found that the within-company model was significantly better than the cross-company model. The results of this review are inconclusive. It is clear that some organizations would be ill-served by cross-company models whereas others would benefit. Further studies are needed, but they must be independent (i.e., based on different databases or at least different single-company data sets) and should address specific hypotheses concerning the conditions that would favor cross-company or within-company models. In addition, experimenters need to standardize their experimental procedures to enable formal meta-analysis, and recommendations are made in Section 3.
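For readers unfamiliar with the validation procedure mentioned above, the following hedged sketch (a toy mean-productivity model and invented data, standing in for the regression and analogy models used in the reviewed studies) shows leave-one-out cross validation on a small within-company data set.

```python
def predict_effort(size, training):
    """Toy within-company model: effort = size * mean productivity of the training set."""
    mean_productivity = sum(effort / s for s, effort in training) / len(training)
    return size * mean_productivity

def leave_one_out_mmre(projects):
    """projects: list of (size, actual_effort). Hold each project out in turn,
    fit on the rest, predict the held-out project, and average the relative errors."""
    errors = []
    for i, (size, actual) in enumerate(projects):
        training = projects[:i] + projects[i + 1:]
        predicted = predict_effort(size, training)
        errors.append(abs(actual - predicted) / actual)
    return sum(errors) / len(errors)

# Hypothetical within-company data set: (size in function points, effort in hours).
print(leave_one_out_mmre([(100, 900), (200, 2000), (50, 420), (300, 3300)]))
```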


Journal of Software Maintenance and Evolution: Research and Practice | 1999

Towards an Ontology of software maintenance

Barbara A. Kitchenham; Guilherme Horta Travassos; Anneliese von Mayrhauser; Frank Niessink; Norman F. Schneidewind; Janice Singer; Shingo Takada; Risto Vehvilainen; Hongji Yang

We suggest that empirical studies of maintenance are difficult to understand unless the context of the study is fully defined. We developed a preliminary ontology to identify a number of factors that influence maintenance. The purpose of the ontology is to identify factors that would affect the results of empirical studies. We present the ontology in the form of a UML model. Using the maintenance factors included in the ontology, we define two common maintenance scenarios and consider the industrial issues associated with them.


Information & Software Technology | 2013

A systematic review of systematic review process research in software engineering

Barbara A. Kitchenham; Pearl Brereton

Collaboration


Dive into Barbara A. Kitchenham's collaborations.

Top Co-Authors

D. Ross Jeffery

University of New South Wales
