Christopher D. Buckingham
Aston University
Publication
Featured research published by Christopher D. Buckingham.
Medical Informatics and The Internet in Medicine | 2002
Christopher D. Buckingham
Effective clinical decision making depends upon identifying possible outcomes for a patient, selecting relevant cues, and processing the cues to arrive at accurate judgements of each outcome's probability of occurrence. These activities can be considered as classification tasks. This paper describes a new model of psychological classification that explains how people use cues to determine class or outcome likelihoods. It proposes that clinicians respond to conditional probabilities of outcomes given cues and that these probabilities compete with each other for influence on classification. The model explains why people appear to respond to base rates inappropriately, thereby overestimating the occurrence of rare categories, and a clinical example is provided for predicting suicide risk. The model provides an effective representation of expert clinical judgements, and its psychological validity enables it to generate explanations in a form that is comprehensible to clinicians. It is a strong candidate for incorporation within a decision support system for mental-health risk assessment, where it can link with statistical and pattern recognition tools applied to a database of patients. The symbiotic combination of empirical evidence and clinical expertise can provide an important web-based resource for risk assessment, including multi-disciplinary education and training.
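To make the competing-probabilities idea concrete, here is a minimal sketch (not the paper's model) in which each observed cue contributes its conditional probability P(outcome | cue) to every outcome, and the outcomes then compete for influence through normalisation. All cue names and probability values are invented for illustration.

```python
def outcome_likelihoods(observed_cues, cond_probs):
    """cond_probs maps (cue, outcome) -> P(outcome | cue); values here are illustrative only."""
    outcomes = {o for (_, o) in cond_probs}
    scores = {o: 0.0 for o in outcomes}
    for cue in observed_cues:
        for o in outcomes:
            scores[o] += cond_probs.get((cue, o), 0.0)
    total = sum(scores.values()) or 1.0
    # Competition: each outcome's likelihood is its share of the total cue support,
    # so a rare outcome with strong cue-conditional support can still dominate.
    return {o: s / total for o, s in scores.items()}

probs = {
    ("hopelessness", "high risk"): 0.7,
    ("hopelessness", "low risk"): 0.3,
    ("stable support network", "high risk"): 0.2,
    ("stable support network", "low risk"): 0.8,
}
print(outcome_likelihoods({"hopelessness"}, probs))  # high risk 0.7, low risk 0.3
```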
Medical Informatics and The Internet in Medicine | 2007
Christopher D. Buckingham; Abu Ahmed; Ann Adams
Current tools for assessing risks associated with mental-health problems require assessors to make high-level judgements based on clinical experience. This paper describes how new technologies can enhance qualitative research methods to identify lower-level cues underlying these judgements, which can be collected by people without a specialist mental-health background. Content analysis of interviews with 46 multidisciplinary mental-health experts exposed the cues and their interrelationships, which were represented by a mind map using software that stores maps as XML. All 46 mind maps were integrated into a single XML knowledge structure and analysed by a Lisp program to generate quantitative information about the numbers of experts associated with each part of it. The knowledge was refined by the experts, using software developed in Flash to record their collective views within the XML itself. These views specified how the XML should be transformed by XSLT, a technology for rendering XML, which resulted in a validated hierarchical knowledge structure associating patient cues with risks. Changing knowledge elicitation requirements were accommodated by flexible transformations of XML data using XSLT, which also facilitated generation of multiple data-gathering tools suiting different assessment circumstances and levels of mental-health knowledge.
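A hedged sketch of the kind of pipeline described, using Python's lxml to apply an XSLT stylesheet to an XML knowledge structure. The element names (concept, expert-count) and the filtering rule are assumptions made purely for illustration; they are not the GRiST schema or its stylesheets.

```python
from lxml import etree

# Toy knowledge structure; element and attribute names are invented for this example.
knowledge_xml = etree.XML(
    b"<knowledge>"
    b"<concept name='suicide' expert-count='12'>"
    b"<concept name='hopelessness' expert-count='3'/>"
    b"</concept>"
    b"</knowledge>")

# An identity transform plus one rule that drops concepts endorsed by fewer than 5 experts.
stylesheet = etree.XML(b"""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>
  <xsl:template match="concept[@expert-count &lt; 5]"/>
</xsl:stylesheet>""")

transform = etree.XSLT(stylesheet)
print(str(transform(knowledge_xml)))  # 'hopelessness' is filtered out of the rendered XML
```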
Supply Chain Management | 2015
Elisabeth Ilie-Zudor; Anikó Ekárt; Zsolt Kemény; Christopher D. Buckingham; Philip Welch; László Monostori
Purpose – The purpose of this paper is to examine the challenges and potential of big data in heterogeneous business networks and relate these to an implemented logistics solution.
Design/methodology/approach – The paper establishes an overview of challenges and opportunities of current significance in the area of big data, specifically in the context of transparency and processes in heterogeneous enterprise networks. Within this context, the paper presents how existing components and purpose-driven research were combined for a solution implemented in a nationwide network for less-than-truckload consignments.
Findings – Aside from providing an extended overview of today's big data situation, the findings have shown that technical means and methods available today can comprise a feasible process-transparency solution in a large heterogeneous network where legacy practices, reporting lags and incomplete data exist, yet processes are sensitive to inadequate policy changes.
Practical implications – The means introduced in the paper were found to be of utility value in improving process efficiency, transparency and planning in logistics networks. The particular system design choices in the presented solution allow an incremental introduction or evolution of resource-handling practices, incorporating existing fragmentary, unstructured or tacit knowledge of experienced personnel into the theoretically founded overall concept.
Originality/value – The paper extends previous high-level views on the potential of big data, and presents new applied research and development results in a logistics application.
International Conference on eHealth, Telemedicine, and Social Medicine | 2009
Naomi Wrighton; Christopher D. Buckingham
In the field of mental health risk assessment, there is no standardisation between the data used in different systems. As a first step towards the possible interchange of data between assessment tools, an ontology has been constructed for a particular one, GRiST (Galatean Risk Screening Tool). We briefly introduce GRiST and its data structures, then describe the ontology and the benefits that have already been realised from the construction process. For example, the ontology has been used to check the consistency of the various trees used in the model. We then consider potential uses in integration of data from other sources.
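As an illustration of the kind of consistency check an ontology makes possible (not the actual GRiST ontology or its checks), the sketch below flags any concept that appears in more than one tree with different sets of children. The trees and concept names are invented.

```python
def child_sets(tree, out=None):
    """Collect, for each concept, the child-name tuples it has anywhere in a tree."""
    out = {} if out is None else out
    name, children = tree
    out.setdefault(name, set()).add(tuple(sorted(child[0] for child in children)))
    for child in children:
        child_sets(child, out)
    return out

def inconsistent_concepts(trees):
    merged = {}
    for tree in trees:
        for concept, variants in child_sets(tree).items():
            merged.setdefault(concept, set()).update(variants)
    # A concept is inconsistent if it has different child sets in different trees.
    return {c: v for c, v in merged.items() if len(v) > 1}

suicide_tree = ("suicide", [("hopelessness", [("low mood", [])]), ("past attempts", [])])
self_harm_tree = ("self-harm", [("hopelessness", []), ("impulsivity", [])])
print(inconsistent_concepts([suicide_tree, self_harm_tree]))
# -> 'hopelessness' is flagged: it has children in one tree and none in the other
```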
NICSO | 2014
Jan Chircop; Christopher D. Buckingham
Ant colony optimisation algorithms model the way ants use pheromones for marking paths to important locations in their environment. Pheromone traces are picked up, followed, and reinforced by other ants but also evaporate over time. Optimal paths attract more pheromone and less useful paths fade away. The main innovation of the proposed Multiple Pheromone Ant Clustering Algorithm (MPACA) is to mark objects using many pheromones, one for each value of each attribute describing the objects in multidimensional space. Every object has one or more ants assigned to each attribute value and the ants then try to find other objects with matching values, depositing pheromone traces that link them. Encounters between ants are used to determine when ants should combine their features to look for conjunctions and whether they should belong to the same colony. This paper explains the algorithm and explores its potential effectiveness for cluster analysis.
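The sketch below is a greatly simplified, illustrative rendering of the multiple-pheromone idea rather than the published MPACA: each matching attribute value between two objects maintains its own pheromone trail, all trails evaporate every iteration, and objects joined by strong combined trails end up in the same cluster.

```python
from itertools import combinations
from collections import defaultdict

def pheromone_trails(objects, iterations=10, deposit=1.0, evaporation=0.1):
    trails = defaultdict(float)                      # (i, j, attribute, value) -> pheromone
    for _ in range(iterations):
        for key in list(trails):
            trails[key] *= (1.0 - evaporation)       # every trail evaporates each step
        for (i, a), (j, b) in combinations(enumerate(objects), 2):
            for attr, value in a.items():
                if b.get(attr) == value:             # matching value found: deposit pheromone
                    trails[(i, j, attr, value)] += deposit
    return trails

def clusters(objects, trails, threshold):
    parent = list(range(len(objects)))               # union-find over object indices
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    strength = defaultdict(float)
    for (i, j, _attr, _value), level in trails.items():
        strength[(i, j)] += level                    # combine all pheromones linking i and j
    for (i, j), total in strength.items():
        if total >= threshold:                       # strong link: same colony/cluster
            parent[find(i)] = find(j)
    groups = defaultdict(list)
    for idx in range(len(objects)):
        groups[find(idx)].append(idx)
    return list(groups.values())

data = [{"colour": "red", "size": "small"},
        {"colour": "red", "size": "small"},
        {"colour": "blue", "size": "large"}]
print(clusters(data, pheromone_trails(data), threshold=5.0))   # -> [[0, 1], [2]]
```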
IEEE Region 10 Conference | 2008
Rozlina Mohamed; Christopher D. Buckingham
In this paper, we present the construction of location-based schema caching for query routing at the client peer in super-peer networks. This cached information is used to route subsequent repeated queries directly towards their actual resource locations without going via the super-peer. Instead of storing a previous query with its result, our proposed approach caches the query with its routing information, which records the locations where the query data were previously found. This helps avoid outdated results. The paper describes the main processes, including how the input query is decomposed, rewritten, and cached at the client peer.
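A minimal sketch of the caching idea, assuming a toy query decomposition and invented peer identifiers: the client peer remembers where data for each sub-query was previously found and routes repeats straight to those peers, falling back to the super-peer only on a cache miss.

```python
class LocationCache:
    """Client-peer cache of routing information (not query results); illustrative only."""
    def __init__(self):
        self.locations = {}                        # sub-query key -> set of peer ids

    def record(self, subquery, peers):
        self.locations.setdefault(subquery, set()).update(peers)

    def route(self, query, ask_super_peer):
        plan = {}
        for subquery in decompose(query):
            if subquery in self.locations:         # cache hit: bypass the super-peer
                plan[subquery] = self.locations[subquery]
            else:                                  # cache miss: consult the super-peer index
                peers = ask_super_peer(subquery)
                self.record(subquery, peers)
                plan[subquery] = peers
        return plan

def decompose(query):
    # Toy decomposition: one sub-query per referenced relation.
    return [part.strip() for part in query.split("JOIN")]

cache = LocationCache()
lookup = lambda sq: {"peer-7"}                     # stand-in for a super-peer index lookup
print(cache.route("patients JOIN assessments", lookup))
print(cache.route("patients JOIN assessments", lambda sq: set()))  # repeat: served from cache
```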
2010 Third International Conference on Advances in Human-Oriented and Personalized Mechanisms, Technologies and Services | 2010
Sherif E. Hegazy; Christopher D. Buckingham
This paper deals with a very important issue in any knowledge engineering discipline: the accurate representation and modelling of real-life data and its processing by human experts. The work is applied to the GRiST Mental Health Risk Screening Tool for assessing risks associated with mental-health problems. The complexity of risk data and the wide variations in clinicians’ expert opinions make it difficult to elicit representations of uncertainty that are an accurate and meaningful consensus. Doing so requires integrating each expert’s estimation of a continuous distribution of uncertainty across a range of values. This paper describes an algorithm that generates a consensual distribution at the same time as measuring the consistency of inputs. Hence it provides a measure of confidence in a particular data item’s risk contribution at the input stage and can help indicate the quality of subsequent risk predictions.
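Purely as an illustration of the general approach (not the published algorithm), the sketch below pools each expert's distribution over a cue's value range by averaging and reports a simple consistency score based on the pairwise overlap of the distributions; the numbers are made up.

```python
import numpy as np

def consensus(distributions):
    """Pool expert distributions and score their agreement; illustrative method only."""
    d = np.asarray(distributions, dtype=float)
    d = d / d.sum(axis=1, keepdims=True)            # normalise each expert's distribution
    pooled = d.mean(axis=0)                         # consensual distribution
    # Consistency: mean shared area between pairs of expert distributions, in [0, 1].
    n = len(d)
    overlaps = [np.minimum(d[i], d[j]).sum() for i in range(n) for j in range(i + 1, n)]
    consistency = float(np.mean(overlaps)) if overlaps else 1.0
    return pooled, consistency

expert_inputs = [[0.0, 0.2, 0.5, 0.3],
                 [0.0, 0.3, 0.4, 0.3],
                 [0.6, 0.3, 0.1, 0.0]]              # the third expert is an outlier
pooled, confidence = consensus(expert_inputs)
print(pooled, confidence)                           # outlier lowers the consistency score
```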
International Conference on eHealth, Telemedicine, and Social Medicine | 2009
S.E. Hegazy; Christopher D. Buckingham
This paper proposes an incremental approach to solving node weightings in a tree structure. The tree represents expertise used to quantify risks associated with mental health problems and is incorporated within a web-based decision support system called GRiST. The aim of the algorithm is to find the set of relative node weightings in the tree that helps GRiST simulate the clinical risk judgements given by mental health experts. This paper extends the solution presented in our earlier ARRIVE algorithm to incorporate a larger pool of data and previous cases, hence producing better elicitation results. It is also useful for incorporating new cases into the GRiST tree parameter estimation process, one by one as they are encountered. The original ARRIVE algorithm showed that a very large number of nodes (several thousand for GRiST) can have their weights calculated from the clinical judgements associated with a few hundred cases (about 200 for GRiST). The new algorithm, iARRiVE, allows GRiST to learn by updating the node weightings to account for new cases. The results show that it can provide the best fit to an unlimited set of cases and thus ensure GRiST parameters provide the optimal solution for all the cases in its memory. Its solution can be applied to similar knowledge engineering domains where relative weightings of node siblings are part of the parameter space.
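The following is a simplified, hypothetical take on incremental weight fitting rather than iARRiVE itself: a parent node's risk is modelled as a weighted sum of its children, and each newly judged case nudges the relative sibling weights to reduce the prediction error, so cases can be absorbed one at a time.

```python
import numpy as np

def incremental_update(weights, child_values, judged_risk, learning_rate=0.05):
    """One-case update of relative sibling weights; illustrative, not the iARRiVE method."""
    weights = np.asarray(weights, dtype=float)
    x = np.asarray(child_values, dtype=float)
    predicted = weights @ x                          # parent risk as weighted sum of children
    error = judged_risk - predicted
    weights += learning_rate * error * x             # gradient step on the squared error
    weights = np.clip(weights, 0.0, None)
    return weights / weights.sum()                   # keep the weights relative (sum to 1)

w = np.full(3, 1 / 3)                                # start from equal sibling weights
cases = [([0.9, 0.2, 0.1], 0.80),
         ([0.1, 0.8, 0.2], 0.30),
         ([0.9, 0.1, 0.3], 0.85)]
for child_values, judged_risk in cases:              # cases are absorbed one by one
    w = incremental_update(w, child_values, judged_risk)
print(w)
```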
International Symposium on Information Technology | 2008
Rozlina Mohamed; Christopher D. Buckingham
Super-peer P2P systems strike a balance between the searching efficiency of centralized P2P systems and the autonomy, load balancing and robustness provided by pure P2P systems. A super-peer is a node that maintains the central index for the information shared by a set of peers within the same cluster. The central index handles search requests on behalf of the connected set of peers and also passes requests on to neighboring super-peers in order to access additional indices and peers. In this paper, we study the behavior of query answering in super-peer P2P systems with the aim of understanding the issues and tradeoffs in designing a scalable super-peer system. We focus on where to post queries in order to retrieve results and investigate the implications for four different architectures: caching queries at the super-peer; caching query results at the super-peer; caching the data locations of previous queries; and an ordinary P2P system without any caching facilities. The paper discusses the tradeoff parameters between architectures with respect to caching and highlights the effect of key parameters on system performance.
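A back-of-envelope cost model can illustrate the kind of trade-off the paper studies; the message counts, repeat-query rate, and architecture behaviours below are assumed for illustration only and are not figures from the paper.

```python
def messages_per_query(architecture, repeat_rate, peers_holding_data=2):
    """Rough expected messages per query under assumed behaviours; illustrative only."""
    fresh = 1 + peers_holding_data           # ask the super-peer, then fetch from data peers
    if architecture == "no-cache":
        return fresh
    if architecture == "super-peer-result-cache":
        # A repeated query is answered directly from the super-peer's cached result.
        return repeat_rate * 1 + (1 - repeat_rate) * fresh
    if architecture == "super-peer-query-cache":
        # The super-peer remembers the query and routes it without re-searching neighbours.
        return repeat_rate * (1 + peers_holding_data) + (1 - repeat_rate) * fresh
    if architecture == "client-location-cache":
        # The client peer routes straight to the data peers, bypassing the super-peer.
        return repeat_rate * peers_holding_data + (1 - repeat_rate) * fresh
    raise ValueError(architecture)

for arch in ["no-cache", "super-peer-result-cache",
             "super-peer-query-cache", "client-location-cache"]:
    print(arch, round(messages_per_query(arch, repeat_rate=0.4), 2))
```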
Intelligent Agents | 2016
Ali Rezaei-Yazdi; Christopher D. Buckingham
The success of intelligent agents in clinical care depends on the degree to which they represent and work with human decision makers. This is particularly important in the domain of clinical risk assessment where such agents either conduct the task of risk evaluation or support human clinicians with the task. This paper provides insights into how to understand and capture the cognitive processes used by clinicians when collecting the most important data about a person’s risks. It attempts to create some theoretical foundations for developing clinically justifiable and reliable decision support systems for initial risk screening. The idea is to direct an assessor to the most informative next question depending on what has already been asked using a mixture of probabilities and heuristics. The method was tested on anonymous mental health data collected by the GRiST risk and safety tool (www.egrist.org).
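One way to make the "most informative next question" idea concrete is an information-gain calculation over previously assessed cases, sketched below with an invented miniature dataset. The paper's own method combines probabilities with heuristics learned from GRiST data, which this sketch does not reproduce.

```python
import math
from collections import Counter

def entropy(labels):
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def best_next_question(cases, outcome, answered, candidates):
    # Keep only past cases consistent with the answers given so far in this assessment.
    matching = [c for c in cases if all(c.get(q) == a for q, a in answered.items())]
    base = entropy([c[outcome] for c in matching])
    gains = {}
    for q in candidates:
        split = Counter(c[q] for c in matching)
        remainder = sum((n / len(matching)) *
                        entropy([c[outcome] for c in matching if c[q] == v])
                        for v, n in split.items())
        gains[q] = base - remainder                 # expected reduction in outcome uncertainty
    return max(gains, key=gains.get), gains

# Tiny invented case base: which question best separates the risk levels?
cases = [{"ideation": 1, "plans": 1, "risk": "high"},
         {"ideation": 1, "plans": 0, "risk": "medium"},
         {"ideation": 0, "plans": 0, "risk": "low"},
         {"ideation": 0, "plans": 0, "risk": "low"}]
print(best_next_question(cases, "risk", answered={}, candidates=["ideation", "plans"]))
```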