
Publication


Featured research published by Lesley Pickard.


IEEE Transactions on Software Engineering | 2002

Preliminary guidelines for empirical research in software engineering

Barbara A. Kitchenham; Shari Lawrence Pfleeger; Lesley Pickard; Peter Jones; D.C. Hoaglin; K. El Emam; J. Rosenberg

Empirical software engineering research needs research guidelines to improve the research and reporting processes. We propose a preliminary set of research guidelines aimed at stimulating discussion among software researchers. They are based on a review of research guidelines developed for medical researchers and on our own experience in doing and reviewing software engineering research. The guidelines are intended to assist researchers, reviewers, and meta-analysts in designing, conducting, and evaluating empirical studies. Editorial boards of software engineering journals may wish to use our recommendations as a basis for developing guidelines for reviewers and for framing policies for dealing with the design, data collection, and analysis and reporting of empirical studies.


IEEE Software | 1995

Case studies for method and tool evaluation

Barbara A. Kitchenham; Lesley Pickard; Shari Lawrence Pfleeger

Case studies help industry evaluate the benefits of methods and tools and provide a cost-effective way to ensure that process changes provide the desired results. However, unlike formal experiments and surveys, case studies do not have a well-understood theoretical basis. This article provides guidelines for organizing and analyzing case studies so that they produce meaningful results.


Information & Software Technology | 1998

Combining empirical results in software engineering

Lesley Pickard; Barbara A. Kitchenham; Peter Jones

In this paper we investigate the techniques used in medical research to combine results from independent empirical studies of a particular phenomenon: meta-analysis and vote-counting. We use an example to illustrate the benefits and limitations of each technique and to indicate the criteria that should guide the choice of technique. Meta-analysis is appropriate for homogeneous studies when raw data or quantitative summary information, e.g. a correlation coefficient, are available. It can also be used for heterogeneous studies where the heterogeneity is due to well-understood partitions in the subject population. In other circumstances, meta-analysis is usually invalid. Although intuitively appealing, vote-counting has a number of serious limitations and should usually be avoided. We suggest that combining study results is unlikely to solve all the problems encountered in empirical software engineering studies, but some of the infrastructure and controls used by medical researchers to improve the quality of their empirical studies would be useful in the field of software engineering.
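The contrast between the two techniques can be made concrete with a small sketch. The example below uses hypothetical correlations and sample sizes (not data from the paper) and pools them with Fisher's z transform, a standard fixed-effect approach for correlation coefficients, then applies naive vote-counting for comparison.

# A minimal sketch (hypothetical study data, not from the paper) contrasting
# fixed-effect meta-analysis of correlation coefficients with vote-counting.
import math

# Hypothetical studies: (correlation between a metric and fault count, sample size)
studies = [(0.45, 30), (0.30, 55), (0.52, 24), (0.10, 40)]

# --- Meta-analysis: pool correlations via Fisher's z transform ---
# z_i = atanh(r_i) has approximate variance 1/(n_i - 3), so weight w_i = n_i - 3.
weights = [n - 3 for _, n in studies]
z_scores = [math.atanh(r) for r, _ in studies]
z_bar = sum(w * z for w, z in zip(weights, z_scores)) / sum(weights)
pooled_r = math.tanh(z_bar)

# 95% confidence interval on the pooled correlation.
se = 1.0 / math.sqrt(sum(weights))
ci = (math.tanh(z_bar - 1.96 * se), math.tanh(z_bar + 1.96 * se))
print(f"pooled r = {pooled_r:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")

# --- Vote-counting: count studies whose correlation is individually significant ---
# This discards effect sizes and sample sizes, which is why the paper warns against it.
def significant(r, n, z_crit=1.96):
    return abs(math.atanh(r)) * math.sqrt(n - 3) > z_crit

votes = sum(significant(r, n) for r, n in studies)
print(f"{votes} of {len(studies)} studies individually significant")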


Software Engineering Journal | 1987

Statistical techniques for modelling software quality in the ESPRIT REQUEST project

Barbara A. Kitchenham; Lesley Pickard

This paper discusses the type of statistical techniques that will be required to formulate, evaluate and use the proposed REQUEST constructive quality model (COQGAMO). Statistical techniques are evaluated in terms of the probable nature and mode of use of COQGAMO, the requirements for model and metric validation, and the problems associated with software metrics data. Examples of preliminary evaluation of the proposed statistical techniques are given using genuine software metrics data.
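As a rough illustration only (hypothetical data, not the REQUEST COQGAMO model itself), the sketch below shows the kind of regression technique such an evaluation might cover: fitting a power-law quality model in log-log space to cope with the skewed distributions typical of software metrics data.

# A minimal sketch (hypothetical module-level data) of regression-based quality modelling.
import numpy as np

# Hypothetical module metrics: size in KLOC and observed defect counts.
kloc    = np.array([1.2, 3.5, 0.8, 6.1, 2.4, 4.8, 9.0, 1.9])
defects = np.array([3,   9,   2,   20,  6,   14,  31,  5])

# Fit defects = a * size^b by ordinary least squares in log-log space.
b, log_a = np.polyfit(np.log(kloc), np.log(defects), 1)
a = np.exp(log_a)
print(f"fitted model: defects ~ {a:.2f} * KLOC^{b:.2f}")

# Evaluate the fit: a crude R^2 in log space.
pred = a * kloc ** b
ss_res = np.sum((np.log(defects) - np.log(pred)) ** 2)
ss_tot = np.sum((np.log(defects) - np.log(defects).mean()) ** 2)
print(f"R^2 (log space) = {1 - ss_res / ss_tot:.2f}")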


IEEE Transactions on Software Engineering | 2003

Modeling software bidding risks

Barbara A. Kitchenham; Lesley Pickard; Stephen G. Linkman; Peter Jones

We discuss a method of developing a software bidding model that allows users to visualize the uncertainty involved in pricing decisions and make appropriate bid/no bid decisions. We present a generic bidding model developed using the modeling method. The model elements were identified after a review of bidding research in software and other industries. We describe the method we developed to validate our model and report the main results of our model validation, including the results of applying the model to four bidding scenarios.


Information & Software Technology | 2005

A framework for evaluating a software bidding model

Barbara A. Kitchenham; Lesley Pickard; Stephen G. Linkman; Peter Jones

This paper discusses the issues involved in evaluating a software bidding model. We found it difficult to assess the appropriateness of any model evaluation activities without a baseline or standard against which to assess them. This paper describes our attempt to construct such a baseline. We reviewed evaluation criteria used to assess cost models and an evaluation framework that was intended to assess the quality of requirements models. We developed an extended evaluation framework and an associated evaluation process that will be used to evaluate our bidding model. Furthermore, we suggest the evaluation framework might be suitable for evaluating other models derived from expert-opinion based influence diagrams.


ACM Sigsoft Software Engineering Notes | 1998

Evaluating software engineering methods and tools: part 9: quantitative case study methodology

Barbara A. Kitchenham; Lesley Pickard

This article is the first of three articles describing how to undertake a quantitative case study based on work done as part of the DESMET project [1], [2]. In the context of methods and tool evaluations, case studies are a means of evaluating methods and tools as part of the normal software development activities undertaken by an organisation. The main benefit of such case studies is that they allow the effect of new methods and tools to be assessed in realistic situations. Thus, case studies provide a cost-effective means of ensuring that process changes provide the desired results. However, unlike formal experiments and surveys, case studies do not have a well-understood theoretical basis. This series of articles provides guidelines for organising and analysing case studies so that your investigations of new technologies will produce meaningful results.


ACM Sigsoft Software Engineering Notes | 1998

Evaluating software engineering methods and tools: part 10: designing and running a quantitative case study

Barbara A. Kitchenham; Lesley Pickard

In the last article we considered how to identify the context for a case study and how to define and validate a case study hypothesis. In this article, we continue our discussion of the eight steps involved in a quantitative case study by considering the remaining six steps: selecting the host projects; identifying the method of comparison; minimising the effect of confounding factors; planning the case study; monitoring the case study; and analysing the results.


IEEE Transactions on Software Engineering | 1999

Comments on: evaluating alternative software production functions

Lesley Pickard; Barbara A. Kitchenham; Peter Jones; Qing Hu

Software development projects are notorious for cost overruns and schedule delays. While dozens of software cost models have been proposed, few of them seem to have any degree of consistent accuracy. One major factor contributing to this persistent and widespread problem is an inadequate understanding of the real behavior of software development processes. We believe that software development could be studied as an economic production process and that established economic theories and methods could be used to develop and validate software production and cost models. We present the results of evaluating four alternative software production models using the P-test, a statistical procedure developed specifically for testing the truth of a hypothesis in the presence of alternatives in econometric studies. We found that the truth of the widely used Cobb-Douglas type of software production and cost models (e.g., COCOMO) cannot be maintained in the presence of quadratic or translog models. Overall, the quadratic software production function is shown to be the most plausible model for representing software production processes. Limitations of this study and future directions are also discussed.
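As a rough illustration only (hypothetical project data, and a simple goodness-of-fit comparison rather than the P-test procedure used in the paper), the sketch below contrasts a Cobb-Douglas form with a translog-style quadratic-in-logs form for a single size input.

# A minimal sketch comparing two software production function forms on hypothetical data.
import numpy as np

# Hypothetical projects: size (function points) and effort (person-months).
size   = np.array([120, 300, 450, 800, 1500, 2300, 4000, 6000])
effort = np.array([ 10,  28,  40,  90,  200,  340,  700, 1200])

x, y = np.log(size), np.log(effort)

# Cobb-Douglas: ln(effort) = b0 + b1*ln(size)            (linear in logs)
cd = np.polyfit(x, y, 1)
# Translog-style: ln(effort) = b0 + b1*ln(size) + b2*ln(size)^2   (quadratic in logs)
tl = np.polyfit(x, y, 2)

def r2(coeffs):
    resid = y - np.polyval(coeffs, x)
    return 1 - np.sum(resid**2) / np.sum((y - y.mean())**2)

print(f"Cobb-Douglas R^2 = {r2(cd):.3f}, exponent b1 = {cd[0]:.2f}")
print(f"Translog     R^2 = {r2(tl):.3f}")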


ACM Sigsoft Software Engineering Notes | 1998

Evaluating software engineering methods and tools, part 11: analysing quantitative case studies

Barbara A. Kitchenham; Lesley Pickard

This article considers the issue of analysing and reporting case study results. It considers the problem of identifying an organisation profile for host project selection and the approaches you can take to analysing the three different case study designs: company-baseline designs, within-project component comparison designs, and sister project designs. However, analysing and reporting the results of a case study is not the end of the evaluation exercise. The purpose of an evaluation exercise is to allow someone to make an informed decision about adoption of a new technology. Thus, you will need to ensure not only that you analyse your data appropriately but also that you present your conclusions in a manner that allows the recipients of your report to make their decisions effectively. So although this article is full of statistical terminology, remember to keep your report to your case study sponsor as straightforward and jargon-free as possible!

Collaboration


Dive into Lesley Pickard's collaborations.

Top Co-Authors


K. El Emam

National Research Council
