Stephen G. Linkman
Keele University
Publications
Featured research published by Stephen G. Linkman.
IEEE Transactions on Software Engineering | 2001
Barbara A. Kitchenham; Robert T. Hughes; Stephen G. Linkman
This paper proposes a method for specifying models of software data sets in order to capture the definitions and relationships among software measures. We believe a method of defining software data sets is necessary to ensure that software data are trustworthy. Software companies introducing a measurement program need to establish procedures to collect and store trustworthy measurement data. Without appropriate definitions it is difficult to ensure data values are repeatable and comparable. Software metrics researchers need to maintain collections of software data sets. Such collections allow researchers to assess the generality of software engineering phenomena. Without appropriate safeguards, it is difficult to ensure that data from different sources are analyzed correctly. These issues imply the need for a standard method of specifying software data sets so they are fully documented and can be exchanged with confidence. We suggest our method of defining data sets can be used as such a standard. We present our proposed method in terms of a conceptual entity-relationship data model that allows complex software data sets to be modeled and their data values stored. The standard can, therefore, contribute both to the definition of a company measurement program and to the exchange of data sets among researchers.
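The entity-relationship approach the abstract describes can be illustrated with a minimal sketch. The entity and attribute names below are hypothetical illustrations of the idea (precise measure definitions make stored values repeatable and comparable), not the paper's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical entities sketching an entity-relationship model for
# software measurement data: each Measure carries a unit and a precise
# definition, and each DataPoint ties a value to an entity and measure.

@dataclass(frozen=True)
class Measure:
    name: str          # e.g. "effort"
    unit: str          # e.g. "person-hours"
    definition: str    # counting rule, so values are repeatable

@dataclass
class DataPoint:
    entity_id: str     # the project, module, etc. being measured
    measure: Measure
    value: float

@dataclass
class DataSet:
    name: str
    measures: list[Measure] = field(default_factory=list)
    points: list[DataPoint] = field(default_factory=list)

    def values_for(self, measure_name: str) -> list[float]:
        """All stored values for a named measure."""
        return [p.value for p in self.points
                if p.measure.name == measure_name]

effort = Measure("effort", "person-hours", "total logged staff hours")
ds = DataSet("company-A", [effort], [DataPoint("proj-1", effort, 120.0)])
```

Because each value is linked back to a fully defined measure, two data sets exchanged between researchers can be checked for compatible definitions before being analyzed together.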
IEEE Software | 1997
Barbara A. Kitchenham; Stephen G. Linkman
The authors discuss the sources of uncertainty and risk, their implications for software organizations, and how risk and uncertainty can be managed. Specifically, they assert that uncertainty and risk cannot be managed effectively at the individual project level. These factors must be considered in an organizational context.
Empirical Software Engineering and Measurement | 2007
John Bailey; David Budgen; Mark Turner; Barbara A. Kitchenham; Pearl Brereton; Stephen G. Linkman
Case studies are an important research methodology for software engineering. We identified the need for checklists to support researchers and reviewers in conducting and reviewing case studies. Using systematic qualitative procedures, we derived checklists for researchers and reviewers respectively. Based on nine sources on case study research, the checklists were derived and validated, and are presented here for further use and improvement.

There is little empirical knowledge of the effectiveness of the object-oriented paradigm. To address this, we conducted a systematic review of the literature describing empirical studies of this paradigm, undertaking a Mapping Study. 138 papers were identified and classified by topic, form of study involved, and source. The majority of empirical studies of OO concentrate on metrics; relatively few consider effectiveness.
Empirical Software Engineering | 2008
David Budgen; Barbara A. Kitchenham; Stuart M. Charters; Mark Turner; Pearl Brereton; Stephen G. Linkman
When conducting a systematic literature review, researchers usually determine the relevance of primary studies on the basis of the title and abstract. However, experience indicates that the abstracts for many software engineering papers are of too poor a quality to be used for this purpose. A solution adopted in other domains is to employ structured abstracts to improve the quality of information provided. This study consists of a formal experiment to investigate whether structured abstracts are more complete and easier to understand than non-structured abstracts for papers that describe software engineering experiments. We constructed structured versions of the abstracts for a random selection of 25 papers describing software engineering experiments. The 64 participants were each presented with one abstract in its original unstructured form and one in a structured form, and for each one were asked to assess its clarity (measured on a scale of 1 to 10) and completeness (measured with a questionnaire that used 18 items). Based on a regression analysis that adjusted for participant, abstract, type of abstract seen first, knowledge of structured abstracts, software engineering role, and preference for conventional or structured abstracts, the use of structured abstracts increased the completeness score by 6.65 (SE 0.37, p < 0.001) and the clarity score by 2.98 (SE 0.23, p < 0.001). 57 participants reported their preferences regarding structured abstracts: 13 (23%) had no preference; 40 (70%) preferred structured abstracts; four preferred conventional abstracts. Many conventional software engineering abstracts omit important information. Our study is consistent with studies from other disciplines and confirms that structured abstracts can improve both information content and readability. Although care must be taken to develop appropriate structures for different types of article, we recommend that Software Engineering journals and conferences adopt structured abstracts.
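The core idea of a structured abstract is that it makes required information explicit and its omissions visible. A minimal sketch of that check, using an illustrative set of headings (not necessarily the exact structure used in the study):

```python
# Required headings for a structured abstract. These names are an
# illustrative assumption, not the study's exact template.
REQUIRED_SECTIONS = ["Background", "Aim", "Method", "Results", "Conclusions"]

def missing_sections(abstract: dict[str, str]) -> list[str]:
    """Return the required sections an abstract omits or leaves empty."""
    return [s for s in REQUIRED_SECTIONS if not abstract.get(s, "").strip()]

draft = {
    "Background": "Abstracts are often incomplete.",
    "Method": "Formal experiment with 64 participants.",
    "Results": "Structured abstracts scored higher on completeness.",
}
# missing_sections(draft) flags the omitted "Aim" and "Conclusions"
```

A conventional free-text abstract offers no such hook: its completeness can only be judged by reading, which is exactly the gap the 18-item completeness questionnaire in the study had to measure by hand.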
Software Quality Journal | 1997
Barbara A. Kitchenham; Stephen G. Linkman; Alberto Pasquini; Vincenzo Nanni
This paper describes an attempt to use the approach developed by the SQUID project, which was part of the ESPRIT 3 programme, to define the software quality requirements of the Telescience project. The SQUID project developed its approach to quality modelling in parallel with ongoing feedback from testing that approach on the Telescience project, which was both large and software intensive. As part of this exercise we used the ISO software quality standard ISO 9126. It was an assessment of this and other existing quality models that caused us to re-assess what was meant by a quality model, and led to a decomposition of existing ‘quality models’ into a composite model reflecting the different aspects of the model and its mapping onto a specific project or product. We break existing quality models into components which reflect the structure and content of the model. This composite model must then be customized for an individual product or project; we call this customized model a ‘Product Quality Model’. Application of this approach to the Telescience project identified a number of practical problems that the SQUID project needed to address. It also indicated a number of problems inherent in the current version of ISO 9126.
IEEE Transactions on Software Engineering | 2003
Barbara A. Kitchenham; Lesley Pickard; Stephen G. Linkman; Peter Jones
We discuss a method of developing a software bidding model that allows users to visualize the uncertainty involved in pricing decisions and make appropriate bid/no bid decisions. We present a generic bidding model developed using the modeling method. The model elements were identified after a review of bidding research in software and other industries. We describe the method we developed to validate our model and report the main results of our model validation, including the results of applying the model to four bidding scenarios.
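One way to visualize pricing uncertainty, as the abstract describes, is to sample an uncertain cost distribution and only bid when the probability of profit clears a threshold. The sketch below is a hypothetical Monte Carlo illustration of that idea; the uniform cost distribution, threshold, and function names are assumptions, not the paper's actual model:

```python
import random

def bid_decision(price: float, cost_low: float, cost_high: float,
                 min_profit_prob: float = 0.8, trials: int = 10_000,
                 seed: int = 1) -> bool:
    """Bid only if the estimated probability of profit at this price
    meets the threshold, given costs uniform on [cost_low, cost_high]."""
    rng = random.Random(seed)  # fixed seed for a repeatable estimate
    profitable = sum(
        1 for _ in range(trials)
        if price > rng.uniform(cost_low, cost_high)  # profit on this draw?
    )
    return profitable / trials >= min_profit_prob
```

Plotting the sampled margin (`price - cost`) as a histogram would give the kind of uncertainty visualization the model is meant to support, rather than a single point estimate of cost.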
Empirical Software Engineering and Measurement | 2009
Barbara A. Kitchenham; Pearl Brereton; Mark Turner; Mahmood Niazi; Stephen G. Linkman; Rialette Pretorius; David Budgen
This study aims to compare the use of targeted manual searches with broad automated searches, and to assess the importance of grey literature and breadth of search on the outcomes of SLRs. We used a participant-observer multi-case embedded case study. Our two cases were a tertiary study of systematic literature reviews published between January 2004 and June 2007 based on a manual search of selected journals and conferences and a replication of that study based on a broad automated search. Broad searches find more papers than restricted searches, but the papers may be of poor quality. Researchers undertaking SLRs may be justified in using targeted manual searches if they intend to omit low quality papers; if publication bias is not an issue; or if they are assessing research trends in research methodologies.
Empirical Software Engineering | 2010
Barbara A. Kitchenham; Pearl Brereton; Mark Turner; Mahmood Niazi; Stephen G. Linkman; Rialette Pretorius; David Budgen
Systematic literature reviews (SLRs) are a major tool for supporting evidence-based software engineering. Adapting the procedures involved in such a review to meet the needs of software engineering and its literature remains an ongoing process. As part of this process of refinement, we undertook two case studies which aimed 1) to compare the use of targeted manual searches with broad automated searches and 2) to compare different methods of reaching a consensus on quality. For Case 1, we compared a tertiary study of systematic literature reviews published between January 1, 2004 and June 30, 2007 which used a manual search of selected journals and conferences and a replication of that study based on a broad automated search. We found that broad automated searches find more studies than manual restricted searches, but they may be of poor quality. Researchers undertaking SLRs may be justified in using targeted manual searches if they intend to omit low quality papers, or they are assessing research trends in research methodologies. For Case 2, we analyzed the process used to evaluate the quality of SLRs. We conclude that if quality evaluation of primary studies is a critical component of a specific SLR, assessments should be based on three independent evaluators incorporating at least two rounds of discussion.
Information & Software Technology | 1997
Stephen G. Linkman; H.D. Rombach
This paper presents a short report on the invited lecture given by Dr. H.D. Rombach at the EASE-97 conference. In this lecture Dr. Rombach described his view of the importance of experimentation to the introduction of new techniques and methods. He demonstrated how a series of experimental activities was used to take a reading technique (i.e. reading by stepwise abstraction) from initial conception into widespread use. In providing this report, I will use a selection of the information from the slides presented by Dr. Rombach, linked by my own summaries. As such, any misrepresentations of Dr. Rombach's views are my mistakes.
Software Engineering Journal | 1994
Barbara A. Kitchenham; Stephen G. Linkman; D. T. Law
The paper discusses several empirical studies reported in the literature aimed at evaluating the benefits of using software engineering methods and tools. The discussion highlights a number of problems associated with the methodology of the studies. The main problems concerned the difficulty of formulating the hypothesis to be tested, using surrogate measures, defining a control and minimising the effect of personalities. Most of these problems are found in many experimental situations, but the problem associated with the proper definition of a control group seems to be a particular issue for software experiments. The paper concludes with some guidelines for improving the organisation of empirical studies.