Kevin McDaid
Dundalk Institute of Technology
Publications
Featured research published by Kevin McDaid.
The Statistician | 2001
Kevin McDaid; Simon P. Wilson
Testing software before its release is an important stage of the software development process. We propose a decision-theoretic solution to the problem of deciding the optimal length of the testing period. We make use of a well-known error detection model and a sensible utility function. Three testing plans are described: the single-stage, the two-stage and the next-fixed-time look-ahead plans. A comparative study shows the relative performance of each plan under a variety of assumptions about the quality of the software under test. All the plans are illustrated using the well-known Naval Tactical Data System data set.
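The single-stage plan can be sketched numerically. The Python snippet below is a hedged illustration only: it assumes an exponential (Goel-Okumoto style) fault detection model and invented cost figures, none of which come from the paper, and simply searches for the test length that maximises expected utility.

```python
import numpy as np

# Illustrative single-stage plan. The detection model and every parameter
# value here are assumptions made for the example, not taken from the paper.
a, b = 30.0, 0.05        # expected total faults, detection rate per day (hypothetical)
c_test = 50.0            # cost of one day of testing (hypothetical)
c_fault = 500.0          # cost of a fault escaping into the field (hypothetical)

def expected_utility(t):
    """Expected utility of releasing after t days of testing."""
    remaining_faults = a * np.exp(-b * t)   # faults still undetected at time t
    return -c_test * t - c_fault * remaining_faults

days = np.arange(0, 201)
utilities = expected_utility(days)
best = days[np.argmax(utilities)]
print(f"Optimal single-stage testing length: {best} days "
      f"(expected utility {utilities.max():.1f})")
```

The two-stage and look-ahead plans would re-run this kind of calculation as test results arrive, trading a longer decision horizon for more information.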
global software development for the practitioner | 2006
Philip S. Taylor; Des Greer; Paul Sage; Gerry Coleman; Kevin McDaid; Frank Keenan
Agile software development has steadily gained momentum and acceptability as a viable approach to software development. As software development continues to take advantage of the global market, agile methods are also being attempted in geographically distributed settings. In this paper, the authors discuss the usefulness of published research on agile global software development (GSD) for the practitioner. It is contended that such published work is of minimal value to the practitioner and adds little to the guidance that was available before current agile methods existed. A survey of agile GSD publications from XP/Agile conferences between 2001 and 2005 is used to support this claim. The paper ends with a number of proposals that aim to improve the usefulness of future agile GSD research and experience reports.
engineering of computer-based systems | 2008
Kevin Logue; Kevin McDaid
Release planning is a critical activity in the software development process. The creation of a clear and realistic plan is extremely difficult, as key factors such as the time and cost to develop chosen functionality and the likely return are subject to a high level of uncertainty. As such, the management decision as to which stories to develop and which to ignore can be an extremely difficult one, requiring an expert balancing of competing benefits and risks. This paper proposes a relatively simple statistical methodology that allows for the uncertainty in story size, value and project velocity. In so doing it provides key stakeholders with the opportunity to manage uncertainty in the planning of future releases. The technique is lightweight in nature and consistent with existing agile planning practices.
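As a rough illustration of what such a lightweight statistical treatment might look like, the Monte Carlo sketch below samples triangular story-size estimates, uniform value estimates and an uncertain velocity. All figures are invented; this is one plausible reading of "allowing for uncertainty in story size, value and project velocity", not the authors' own method.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical backlog: (low, mode, high) size estimates in story points and
# a (low, high) business-value range per story. None of these figures are from the paper.
stories = {
    "login":     {"size": (3, 5, 8),  "value": (8, 12)},
    "reporting": {"size": (5, 8, 13), "value": (10, 20)},
    "export":    {"size": (2, 3, 5),  "value": (3, 6)},
}
velocity = (18, 22, 28)   # triangular estimate of points the team can deliver
n = 10_000                # number of Monte Carlo samples

sizes = np.column_stack([rng.triangular(*s["size"], n) for s in stories.values()])
values = np.column_stack([rng.uniform(*s["value"], n) for s in stories.values()])
capacity = rng.triangular(*velocity, n)

fits = sizes.sum(axis=1) <= capacity   # does the whole plan fit within the sampled velocity?
print(f"P(all three stories fit): {fits.mean():.2f}")
print(f"Expected value if delivered: {values.sum(axis=1).mean():.1f}")
```

Stakeholders could then drop or swap stories until the estimated probability of fitting the release reaches a level they are comfortable with.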
software engineering and advanced applications | 2012
Gilbert Regan; Fergal McCaffery; Kevin McDaid; Derek Flood
Traceability of software artifacts, from requirements to design and through implementation and quality assurance, has long been promoted by the research and expert practitioner communities. However, evidence indicates that, with the exception of those operating in the safety-critical domain, few software companies choose to implement traceability processes, often due to the associated cost and complexity. This paper presents a review of the traceability literature, including the implementation of traceability in real organizations. Through analysis of case studies and research published by leading traceability researchers, the paper synthesizes the barriers organizations face when implementing traceability, along with proposed solutions to those barriers. Additionally, given the importance of traceability in the regulated domain of safety-critical software, the paper compares the barriers for organizations operating inside and outside this domain.
international conference on software engineering | 2009
Leslie Bradley; Kevin McDaid
Spreadsheets are ubiquitous, with evidence that Microsoft Excel, the leading application in the area, has an install base of 90% on end-user desktops. Nowhere is the use of spreadsheets more extensive or more critical than in the financial sector, where regulations such as the Sarbanes-Oxley Act of 2002 have placed added pressure on organisations to ensure that spreadsheets are error-free. This paper outlines research into the use of Bayesian statistical methods to estimate the level of error in large spreadsheets based on expert knowledge and spreadsheet test data. The resulting estimate can aid the decision to accept a large, time-consuming spreadsheet or to re-examine it.
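A minimal sketch of the Bayesian idea, assuming a Beta prior on the per-cell error rate elicited from an expert and a Binomial likelihood for the inspection data; the conjugate-prior choice and all figures are illustrative assumptions, not details from the paper.

```python
from scipy import stats

# Expert prior and inspection data are invented for the example.
prior_alpha, prior_beta = 2, 98      # expert believes roughly 2% of cells are wrong
cells_tested, errors_found = 150, 4  # results of inspecting part of the spreadsheet

# Beta prior + Binomial test data gives a Beta posterior on the error rate.
post = stats.beta(prior_alpha + errors_found,
                  prior_beta + cells_tested - errors_found)

lo, hi = post.ppf([0.05, 0.95])
print(f"Posterior mean error rate: {post.mean():.3f}")
print(f"90% credible interval: ({lo:.3f}, {hi:.3f})")

# Expected number of errors among, say, 2,000 cells not yet examined.
print(f"Expected remaining errors: {post.mean() * 2000:.0f}")
```

If the credible interval on remaining errors is too wide or too high, the decision would be to re-examine the spreadsheet rather than accept it.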
agile conference | 2008
Kevin Logue; Kevin McDaid
In creating a release plan, developers must attempt to maximise business value while maintaining a high degree of certainty that the product will be completed on time and to budget. As a result of these constraints, management is often forced to make the difficult decision as to which stories to complete and which to ignore. This decision is made harder by a high degree of uncertainty in the business value of each story, its size and the resources available. This paper proposes a relatively simple statistical methodology that allows for uncertainty in these areas. In so doing it provides key stakeholders with the opportunity to manage uncertainty when deciding what functionality to include in upcoming releases. The technique is lightweight in nature and consistent with existing agile planning practices. A case study is provided to demonstrate how the method may be used.
Proceedings of the 4th international workshop on End-user software engineering | 2008
Kevin McDaid; Alan Rust; Brian Bishop
It is widely documented that the absence of a structured approach to spreadsheet engineering is a key factor in the high level of spreadsheet errors. In this paper we propose and investigate the application of Test-Driven Development to the creation of spreadsheets. Test-Driven Development is an emerging technique in software engineering that has been shown to result in better-quality code that requires less testing and is easier to maintain. Through a set of case studies we demonstrate that Test-Driven Development can be applied to the development of spreadsheets. We present the detail of these studies, preceded by a clear explanation of the technique and its application to spreadsheet engineering. A supporting tool under development by the authors is also documented, along with proposed research to determine the effectiveness of the methodology and the associated tool.
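The test-first rhythm the paper applies to spreadsheets can be illustrated with a plain-Python analogue. The loan-repayment example below is purely hypothetical and stands in for a spreadsheet cell formula; the test is written before the "formula" it exercises, mirroring the red-green cycle rather than reproducing the authors' tool.

```python
import unittest

# Test written first (the "red" step): it pins down what the spreadsheet's
# repayment cell should return before the formula itself is entered.
class TestRepaymentCell(unittest.TestCase):
    def test_monthly_repayment(self):
        # 100,000 over 240 months at 6% annual interest; the expected value
        # is computed independently, as a TDD oracle would be.
        self.assertAlmostEqual(monthly_repayment(100_000, 0.06, 240), 716.43, places=2)

# The formula added afterwards (the "green" step), mirroring a PMT-style calculation.
def monthly_repayment(principal, annual_rate, months):
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

if __name__ == "__main__":
    unittest.main()
```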
Proceedings of the 4th international workshop on End-user software engineering | 2008
Brian Bishop; Kevin McDaid
In recent years the reliability of end-user-developed spreadsheet programs has been shown to be very poor. Surprisingly, relatively little research has been carried out in the areas of spreadsheet testing and debugging. With the aim of recording and analysing end-user behaviour and performance in spreadsheet error detection and correction, an experiment was conducted with 13 industry-based professionals and 34 accounting and finance students. The work used a novel approach for acquiring experimental data: the unobtrusive recording of participants' debugging actions using a custom-built VBA tool. Based on the findings from the analysis of debugging behaviour, a simple debugging tool was developed by the authors, and its effects on debugging performance were investigated by means of a controlled experiment.
international conference on software process improvement and capability determination | 2014
Gilbert Regan; Fergal McCaffery; Kevin McDaid; Derek Flood
Regulation normally requires critical systems to be certified before entering service. This involves submission of a safety case: a reasoned argument, with supporting evidence, that stringent requirements have been met and that the system is acceptably safe. A good safety case encompasses an effective risk mitigation process, which is highly dependent on requirements traceability. However, despite its many benefits and the regulatory requirements, most existing software systems lack explicit traceability links between artefacts. Reasons for this include cost, complexity and a lack of guidance on how to implement traceability. To help medical device organisations address the lack of guidance, this paper presents the development and validation of a traceability process assessment model, together with the actions to be taken as a result of the validation. The assessment model will allow organisations to identify strengths and weaknesses in their existing traceability process and pinpoint areas for improvement.
international conference on software process improvement and capability determination | 2013
Gilbert Regan; Fergal McCaffery; Kevin McDaid; Derek Flood
Requirements traceability helps to ensure software quality. It supports quality assurance activities such as impact analysis, regression test selection, compliance verification and the validation of requirements. Its implementation has long been promoted by the research and expert practitioner communities. However, evidence indicates that few software organizations choose to implement traceability processes, for the most part due to cost and complexity issues. Organizations operating within safety-critical domains are mandated to implement traceability, yet find the implementation and maintenance of an efficient and compliant traceability process difficult and complex. Through interviews with a medical device SME, this paper seeks to determine how traceability is implemented within the organization, the difficulties it faces in implementing traceability, how compliant it is with medical device standards and guidelines, and what changes could be made to improve the efficiency of its traceability implementation and maintenance.