Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Philip Woodall is active.

Publication


Featured research published by Philip Woodall.


Journal of Systems and Software | 2005

An investigation of software engineering curricula

Barbara A. Kitchenham; David Budgen; Pearl Brereton; Philip Woodall

We adapted a survey instrument developed by Timothy Lethbridge to assess the extent to which the education delivered by four UK universities matches the requirements of the software industry. We propose a survey methodology that we believe addresses the research question more appropriately than the one used by Lethbridge. In particular, we suggest that restricting the scope of the survey to the question of whether the curricula of a specific university addressed the needs of its own students allowed us to identify an appropriate target population. However, our own survey suffered from several problems: in particular, the questions used in the survey are not ideal, and the response rate was poor. Although the poor response rate reduces the value of our results, our survey appears to confirm several of Lethbridge's observations with respect to the over-emphasis on mathematical topics and the under-emphasis on business topics. We also found close agreement with respect to the relative importance of different software engineering topics. However, the set of topics that we found were taught far less than their importance would suggest was quite different from the topics identified by Lethbridge.


Information & Management | 2013

Data quality assessment: The Hybrid Approach

Philip Woodall; Alexander Borek; Ajith Kumar Parlikad

Various techniques have been proposed to enable organisations to assess the current quality level of their data. Unfortunately, organisations have many different requirements related to data quality (DQ) assessment. For example, some organisations may need to focus on ensuring regulations are met rather than on reducing costs. As a result, organisations may be forced to follow an assessment technique that does not wholly fit their needs and current situation. Therefore, we propose and evaluate the Hybrid Approach to assessing DQ, which demonstrates how to dynamically configure an assessment technique as needed while leveraging the best practices from existing assessment techniques.
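The Hybrid Approach itself is not detailed in the abstract; purely as a loose, hypothetical illustration of what "dynamically configuring an assessment technique" could mean, the Python sketch below selects assessment activities from a small catalogue based on an organisation's stated requirements. The activity names and requirement tags are invented for this example.

```python
# Hedged illustration only: the Hybrid Approach itself is not reproduced here.
# The idea sketched is selecting assessment activities to match an organisation's
# requirements rather than following a single fixed technique.

ACTIVITY_CATALOGUE = {
    "identify critical data items": {"cost_reduction", "regulatory_compliance"},
    "measure against regulation-derived rules": {"regulatory_compliance"},
    "survey user satisfaction with data": {"cost_reduction"},
}

def configure_assessment(requirements: set[str]) -> list[str]:
    """Pick only the catalogue activities relevant to the stated requirements."""
    return [activity for activity, tags in ACTIVITY_CATALOGUE.items()
            if tags & requirements]

if __name__ == "__main__":
    # An organisation focused on compliance gets a different activity set
    # than one focused on cost reduction.
    print(configure_assessment({"regulatory_compliance"}))
```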


Computers in Industry | 2014

A risk based model for quantifying the impact of information quality

Alexander Borek; Ajith Kumar Parlikad; Philip Woodall; Maurizio Tomasella

Information quality is one of the key determinants of information system success. When information quality is poor, it can cause a variety of risks in an organization. To manage resources for information quality improvement effectively, it is necessary to understand where, how, and how much information quality impacts an organization's ability to successfully deliver its objectives. So far, existing approaches have mostly focused on the measurement of information quality, but not adequately on the impact that poor information quality causes. This paper presents a model to quantify the business impact that arises through poor information quality in an organization by using a risk-based approach. It hence addresses the inherent uncertainty in the relationship between information quality and organizational impact. The model can help information managers to obtain quantitative figures that can be used to build reliable and convincing business cases for information quality improvement.
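The paper's actual model is not reproduced in the abstract; as a rough, hypothetical illustration of a risk-based view, the sketch below computes expected loss as probability times consequence, summed over business processes. The process names, probabilities, and monetary figures are invented.

```python
# Illustrative sketch only, not the paper's model: a simple
# "risk = probability x consequence" view of information quality impact.

from dataclasses import dataclass

@dataclass
class ProcessRisk:
    name: str
    failure_probability: float   # P(process failure due to poor information quality), 0..1
    consequence: float           # estimated monetary loss if the failure occurs

def expected_iq_impact(processes: list[ProcessRisk]) -> float:
    """Aggregate expected loss across business processes."""
    return sum(p.failure_probability * p.consequence for p in processes)

if __name__ == "__main__":
    # Hypothetical portfolio of processes affected by poor information quality
    portfolio = [
        ProcessRisk("invoice processing", 0.05, 120_000.0),
        ProcessRisk("asset maintenance scheduling", 0.10, 450_000.0),
    ]
    print(f"Expected impact of poor information quality: {expected_iq_impact(portfolio):,.0f}")
```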


International Journal of Information Quality | 2014

A classification of data quality assessment and improvement methods

Philip Woodall; Martin Oberhofer; Alexander Borek

Data quality (DQ) assessment and improvement in larger information systems would often not be feasible without using suitable ‘DQ methods’, which are algorithms that can be automatically executed by computer systems to detect and/or correct problems in datasets. Currently, these methods are already essential, and they will be of even greater importance as the quantity of data in organisational systems grows. This paper provides a review of existing methods for both DQ assessment and improvement and classifies them according to the DQ problem and problem context. Six gaps have been identified in the classification, where no current DQ methods exist, and these show where new methods are required as a guide for future research and DQ tool development.
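As a concrete, hypothetical example of what a "DQ method" in this sense might look like, the Python snippet below is a small routine that automatically detects two common problems, missing values and exact duplicate records, in a list of records; it is not taken from the paper's classification.

```python
# Hypothetical example of a simple 'DQ method': an algorithm that can be executed
# automatically to detect problems in a dataset. Real DQ tools implement far richer checks.

from collections import Counter

def detect_problems(records: list[dict]) -> dict:
    """Return row indices with missing values and any exact duplicate records."""
    missing = [i for i, row in enumerate(records)
               if any(v is None or v == "" for v in row.values())]
    counts = Counter(tuple(sorted(row.items())) for row in records)
    duplicates = [dict(key) for key, n in counts.items() if n > 1]
    return {"rows_with_missing_values": missing, "duplicate_records": duplicates}

if __name__ == "__main__":
    data = [
        {"asset_id": "A1", "location": "Cambridge"},
        {"asset_id": "A2", "location": ""},           # missing location
        {"asset_id": "A1", "location": "Cambridge"},  # duplicate record
    ]
    print(detect_problems(data))
```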


Archive | 2015

Classifying Data Quality Problems in Asset Management

Philip Woodall; Jing Gao; Ajith Kumar Parlikad; Andy Koronios

Making sound asset management decisions, such as whether to replace or maintain an ageing underground water pipe, is critical to ensure that organisations maximise the performance of their assets. These decisions are only as good as the data that supports them, and hence many asset management organisations are in desperate need to improve the quality of their data. This chapter reviews the key academic research on data quality (DQ) and information quality (IQ) (used interchangeably in this chapter) in asset management, combines this with the current DQ problems faced by asset management organisations in various business sectors, and presents a classification of the most important DQ problems that need to be tackled by asset management organisations. In this research, eleven semi-structured interviews were carried out with asset management professionals in a range of business sectors in the UK. The problems described in the academic literature were cross-checked against the problems found in industry. In order to support asset management professionals in solving these problems, we categorised them into seven different DQ dimensions used in the academic literature, so that it is clear how these problems fit within the standard frameworks for assessing and improving data quality. Asset management professionals can therefore now use these frameworks to underpin their DQ improvement initiatives while focussing on the most critical DQ problems.


Archive | 2012

Benchmarking Information Quality Performance in Asset Intensive Organisations in the UK

Philip Woodall; Ajith Kumar Parlikad; Lucas Lebrun

Maintaining good quality information is a difficult task and many leading asset management (AM) organisations have difficulty planning and executing successful information quality management (IQM) practices. The aim of this work is, therefore, to provide guidance on how organisations can improve IQM practices within the AM unit of the business. Using the case study methodology, the current level of IQM maturity was benchmarked for ten AM organisations in the UK by focussing on the AM unit of the organisation. By understanding how the most mature organisations approach the task of IQM, specific guidelines for how organisations with lower maturity levels can improve their IQM practices are presented. Five ‘critical success factors’ from the IQM-CMM maturity model were identified as being significant for improving IQM maturity: IQ management team and project management, IQ requirements analysis, IQ requirements management, information product visualisation and meta-information management.


Journal of Data and Information Quality | 2017

The Data Repurposing Challenge: New Pressures from Data Analytics

Philip Woodall

When data is collected for the first time, the data collector has in mind the data quality requirements that must be satisfied before it can be used successfully—that is, the data collector ensures “fitness for use”—the commonly agreed upon definition of data quality [Wang and Strong 1996]. However, data that is repurposed [Woodall and Wainman 2015], as opposed to reused, must be managed with multiple different fitness for use requirements in mind, which complicates any data quality enhancements [Ballou and Pazer 1985]. While other work has considered context in relation to data quality requirements, including the need to meet multiple fitness for use requirements [Watts et al. 2009; Bertossi et al. 2011], in the current fast-paced environment of data repurposing for analytics and business intelligence, there are new challenges for dealing with multiple fitness for use requirements in the context of:


Service Orientation in Holonic and Multi-agent Manufacturing | 2015

Evaluating the Applicability of Multi-agent Software for Implementing Distributed Industrial Data Management Approaches

Torben Jess; Philip Woodall; Duncan McFarlane

Distributed approaches to industrial control or information management problems are often tackled using multi-agent methods. Multi-agent systems (solutions resulting from taking a multi-agent-based approach) often come with a certain amount of "overhead", such as communication systems, but can provide a helpful tool for design and implementation. In this paper, a distributed data management problem is addressed with both a bespoke approach developed specifically for this problem and a more general multi-agent approach. The two approaches are compared using architecture and software metrics. The software metrics show similar results for both approaches, although overall the bespoke approach was more appropriate for the particular application examined. The architectural analysis indicates that the main reason for this difference is the communication and computation overhead associated with the agent-based system. It was not within the scope of this study to compare the two approaches under multiple application scenarios.


International Conference on Industrial Informatics | 2014

A framework for detecting unnecessary industrial data in ETL processes

Philip Woodall; Torben Jess; Mark Harrison; Duncan McFarlane; Amar Shah; William E. Krechel; Eric Nicks

Extract, transform and load (ETL) is a critical process used by industrial organisations to shift data from one database to another, such as from an operational system to a data warehouse. With the increasing amount of data stored by industrial organisations, some ETL processes can take in excess of 12 hours to complete; this can leave decision makers stranded while they wait for the data needed to support their decisions. After the ETL processes have been designed, data requirements can inevitably change, and much of the data that goes through the ETL process may never be used or needed. This paper therefore proposes a framework for dynamically detecting and predicting unnecessary data and preventing it from slowing down ETL processes, either by removing it entirely or by deprioritising it. Other advantages of the framework include being able to prioritise data cleansing tasks and to determine which data should be processed first and placed into fast access memory. We show existing example algorithms that can be used for each component of the framework, and present some initial testing results as part of our research to determine whether the framework can help to reduce ETL time.
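The abstract does not give the framework's algorithms; purely as an illustrative sketch of the general idea, the Python snippet below uses (assumed) downstream usage counts to decide which columns to load immediately and which to defer or drop before an ETL run. Column names, counts, and the threshold are hypothetical.

```python
# Minimal sketch, not the paper's framework: illustrates using downstream usage
# statistics to deprioritise or skip data before an ETL load.

def partition_columns(all_columns: list[str],
                      usage_counts: dict[str, int],
                      min_uses: int = 1) -> tuple[list[str], list[str]]:
    """Split columns into those worth loading now and those to defer or drop."""
    load_now, defer = [], []
    for col in all_columns:
        (load_now if usage_counts.get(col, 0) >= min_uses else defer).append(col)
    return load_now, defer

if __name__ == "__main__":
    columns = ["asset_id", "purchase_date", "legacy_code", "last_service"]
    # Hypothetical counts of how often each column appears in downstream queries
    usage = {"asset_id": 120, "last_service": 45, "purchase_date": 3}
    now, later = partition_columns(columns, usage)
    print("Load first:", now)      # frequently used columns
    print("Defer/skip:", later)    # candidates for removal from the ETL run
```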


Total Information Risk Management: Maximizing the Value of Data and Information Assets | 2014

How Data and Information Create Risk

Alexander Borek; Ajith Kumar Parlikad; Jela Webb; Philip Woodall

This chapter presents a model of how data and information create risk in an organization. It explains how poor information management leads to data and information quality problems, which in turn lower business process performance and eventually create risks that affect core business objectives.

Collaboration


Dive into Philip Woodall's collaborations.

Top Co-Authors


Jela Webb

University of Brighton


Wenrong Lu

University of Cambridge


Jing Gao

University of South Australia


Torben Jess

University of Cambridge
