Paul Compton
University of New South Wales
Publications
Featured research published by Paul Compton.
Knowledge Acquisition | 1990
Paul Compton; R. Jansen
Knowledge acquisition for expert systems is a purely practical problem to be solved by experiment, independent of philosophy. However, the experiments one chooses to conduct will be influenced by one's implicit or explicit philosophy of knowledge, particularly if this philosophy is taken as axiomatic rather than as a hypothesis. We argue that practical experience of knowledge engineering, particularly in the long-term maintenance of expert systems, suggests that knowledge does not necessarily have a rigorous structure built up from primitive concepts and their relationships. The knowledge engineer finds that the expert's knowledge is not so much recalled as, to a greater or lesser degree, “made up” by the expert as the occasion demands. The knowledge the expert provides varies with the context and gets its validity from its ability to explain data and justify the expert's judgement in that context. We argue that the physical symbol hypothesis, with its implication that some underlying knowledge structure can be found, is a misleading philosophical underpinning for knowledge acquisition and representation. We suggest that the “insight” hypothesis of Lonergan (1958) better explains the flexibility and relativity of knowledge that the knowledge engineer experiences, and may provide a more suitable philosophical environment for developing knowledge acquisition and representation tools. We outline the features desirable in tools based on this philosophy and the progress we have made towards developing such tools.
Intelligent Information Systems | 1995
Brian R. Gaines; Paul Compton
A methodology for the modeling of large data sets is described which results in rule sets having minimal inter-rule interactions and being simple to maintain. An algorithm for developing such rule sets automatically is described and its efficacy shown on standard test data sets. Comparative studies of manual and automatic modeling of a data set of some nine thousand five hundred cases are reported. A study is also reported in which ten years of patient data were modeled on a month-by-month basis to determine how well a diagnostic system developed by automated induction would have performed had it been in use throughout the project.
Australian Joint Conference on Artificial Intelligence | 1990
Paul Compton; R. Jansen
Knowledge engineering, obtaining knowledge from experts and incorporating it into expert systems, is difficult and time-consuming. We suggest that these difficulties arise because experts never report on how they reach a decision; rather, they justify why the decision is correct. These justifications vary markedly with the context in which they are required, but in context they are accurate and adequate; the difficulties in knowledge engineering arise from taking the justification out of context. We therefore hypothesise that knowledge engineering may be obviated, particularly in the long-term maintenance of expert systems, if the rules experts provide are used in the context in which they are given. This paper describes work in progress to test this hypothesis.
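The idea of using a rule only in the context in which it was given was later realised as ripple-down rules, where each correction is stored as a local exception to the rule that made the error. A minimal sketch of that structure, with illustrative names and conditions (not the authors' implementation):

```python
class RDRNode:
    """A single ripple-down rule: a condition, a conclusion, and the
    'cornerstone' case that was being corrected when the rule was added."""

    def __init__(self, condition, conclusion, cornerstone=None):
        self.condition = condition      # predicate over a case (a dict)
        self.conclusion = conclusion
        self.cornerstone = cornerstone  # the case that justified this rule
        self.if_true = None             # exception branch: rule fired but was wrong
        self.if_false = None            # alternative branch: rule did not fire

    def classify(self, case):
        """Return the conclusion of the last satisfied rule on the path."""
        if self.condition(case):
            refined = self.if_true.classify(case) if self.if_true else None
            return refined if refined is not None else self.conclusion
        return self.if_false.classify(case) if self.if_false else None

# Rules are only ever added at the point where the system erred, so each
# correction applies in the context of the case that exposed the error.
root = RDRNode(lambda c: True, "normal")
root.if_true = RDRNode(lambda c: c["tsh"] > 4.0, "possible hypothyroidism")
print(root.classify({"tsh": 6.2}))  # -> possible hypothyroidism
print(root.classify({"tsh": 1.5}))  # -> normal
```

Because a new rule is reached only along the path of conditions that led to the original mistake, the expert's justification is never taken out of the context in which it was given.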
Pathology | 1993
Glenn Edwards; Paul Compton; R. Malor; Ashwin Srinivasan; L. Lazarus
Summary Provision of a comprehensive interpretative service is an important challenge facing chemical pathologists. Attempts to automate report interpretation using expert systems have been limited in the past by the difficulties of rule base maintenance. We have applied a novel knowledge acquisition technique, ripple down rules, in the development of PEIRS (Pathology Expert Interpretative Reporting System), a user‐maintained expert system for automating chemical pathology report interpretation. We created over 950 rules for thyroid function tests, arterial blood gases and other test sub‐groups in 9 months of operation. A staff pathologist performed all maintenance tasks as part of his routine duties without any need for computer programming skills. No clerical staff involvement was required. Duplication of rule addition for reports requiring multiple comments was the only limitation to coverage of other high-volume test groups. PEIRS is the first expert system for the automated interpretation of a range of chemical pathology reports which operates in routine use without extra staffing requirements. PEIRS does not require “knowledge engineering” expertise. Thus, the knowledge base is flexible and can be easily maintained and updated by the pathologist. Expert systems based on ripple down rules should enable pathologists to provide a comprehensive automated interpretative service within the context of the total testing process.
The New England Journal of Medicine | 1988
Mark W. Duncan; Paul Compton; L. Lazarus; George A. Smythe
We compared the value of plasma samples with that of 24-hour urine samples in identifying patients with pheochromocytoma among those with hypertension. We employed specific gas chromatographic-mass spectrometric analysis of both urine and plasma for simultaneous assay of norepinephrine and its neuronal metabolite 3,4-dihydroxyphenylglycol (DHPG). The study population consisted of 1086 patients with hypertension, among them 25 patients with proved pheochromocytoma. Reference ranges for free norepinephrine and DHPG in plasma and urine were established. Measurement of free norepinephrine in 24-hour urine samples provided the best index of a pheochromocytoma. This technique had 100 percent sensitivity and 98 percent specificity among 1192 urine samples, as compared with 82 percent sensitivity and 95 percent specificity among 358 plasma samples. Simultaneous measurement of norepinephrine and DHPG in urine further improved specificity (to 99 percent), but the use of the ratio of norepinephrine to DHPG reduced sensitivity (to 95 percent), since some patients with pheochromocytoma secrete large amounts of DHPG. We therefore recommend measurement of 24-hour urinary levels of free norepinephrine for the diagnosis of pheochromocytoma and suggest that simultaneous analysis for DHPG may sometimes prove useful in reducing the rate of false positive results.
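The reported figures follow from the standard definitions of sensitivity (true positives detected among all true cases) and specificity (true negatives among all non-cases). A quick sketch; the confusion counts below are illustrative reconstructions, since the abstract reports rates rather than the raw table:

```python
def sensitivity(tp, fn):
    """Fraction of true cases the test detects: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of non-cases the test correctly rules out: TN / (TN + FP)."""
    return tn / (tn + fp)

# Illustrative counts only: 25 pheochromocytoma patients, all detected by
# 24-hour urinary free norepinephrine, and roughly 98% of the remaining
# 1167 negative urine samples correctly ruled out.
print(sensitivity(tp=25, fn=0))               # -> 1.0  (100% sensitivity)
print(round(specificity(tn=1143, fp=24), 2))  # -> 0.98 (approx. 98% specificity)
```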
Australasian Joint Conference on Artificial Intelligence | 2010
Xiongcai Cai; Michael Bain; Alfred Krzywicki; Wayne Wobcke; Yang Sok Kim; Paul Compton; Ashesh Mahidadia
Predicting the people that other people may like has recently become an important task in many online social networks. Traditional collaborative filtering approaches are popular in recommender systems to effectively predict user preferences for items. However, in online social networks people have a dual role as both “users” and “items”, e.g., both initiating and receiving contacts. Here the assumption of active users and passive items in traditional collaborative filtering is inapplicable. In this paper we propose a model that fully captures the bilateral role of user interactions within a social network and formulate collaborative filtering methods to enable people-to-people recommendation. In this model users can be similar to other users in two ways: either having similar “taste” for the users they contact, or having similar “attractiveness” for the users who contact them. We develop SocialCollab, a novel neighbour-based collaborative filtering algorithm to predict, for a given user, other users they may like to contact, based on user similarity in terms of both attractiveness and taste. In social networks this goes beyond traditional, merely taste-based, collaborative filtering for item selection. Evaluation of the proposed recommender system on datasets from a commercial online social network shows improvements over traditional collaborative filtering.
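The dual-role similarity described above can be sketched on a toy interaction graph: two users have similar “taste” if they contact similar people (outgoing edges), and similar “attractiveness” if similar people contact them (incoming edges). Jaccard overlap is used here only as a simple stand-in metric; the abstract does not prescribe the exact measure:

```python
def jaccard(a, b):
    """Set overlap as a simple similarity measure (an assumption for
    illustration; not necessarily the metric used by SocialCollab)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def dual_role_similarity(u, v, sent, received):
    """Similarity of users u and v in both roles:
    - taste: they contact similar people (sent edges),
    - attractiveness: similar people contact them (received edges)."""
    taste = jaccard(sent.get(u, set()), sent.get(v, set()))
    attractiveness = jaccard(received.get(u, set()), received.get(v, set()))
    return taste, attractiveness

# Toy interaction graph: who each user contacted / was contacted by.
sent = {"alice": {"carol", "dave"}, "bob": {"carol", "dave"}}
received = {"alice": {"erin"}, "bob": {"erin", "frank"}}
print(dual_role_similarity("alice", "bob", sent, received))  # -> (1.0, 0.5)
```

A neighbour-based recommender would then aggregate the contacts of users who are similar to the target user in both roles, rather than in taste alone.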
Applied Artificial Intelligence | 1997
Byeong Ho Kang; Kenichi Yoshida; Hiroshi Motoda; Paul Compton
Automated help desk systems should retrieve exactly the information required to assist a user as quickly and as easily as possible, be it for a lay user who knows little about the domain or for an advanced user who requires more specialized information. Automated help desk systems should also be easily maintainable, as knowledge in domains where help is required often changes very rapidly, for example, help for computer users. The aim of this study was to develop a help desk information retrieval mechanism suitable for a wide range of users and to provide a way of easily maintaining the system. The prototype developed for use over the World Wide Web combines keyword search and case-based reasoning to provide both rapid focusing on a part of the help information and guided interaction when the user is unclear about appropriate keywords. Ease of maintenance is provided by using multiple classification ripple down rules (MCRDR) to maintain the domain knowledge in the system. Further issues that arise include ...
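The two retrieval stages described above can be sketched as a pipeline: a keyword search first narrows the case base, then a case-based step ranks the remaining candidates by attribute overlap with the query. All names and the data structure are illustrative, not the prototype's actual design:

```python
def keyword_filter(cases, keywords):
    """Keyword stage: keep cases whose text mentions every query keyword."""
    kws = {k.lower() for k in keywords}
    return [c for c in cases if kws <= {w.lower() for w in c["text"].split()}]

def rank_by_overlap(cases, query_attrs):
    """Case-based stage: order candidates by shared attribute values,
    a crude stand-in for case similarity."""
    def score(case):
        return sum(1 for k, v in query_attrs.items() if case["attrs"].get(k) == v)
    return sorted(cases, key=score, reverse=True)

# Toy help-desk case base.
cases = [
    {"text": "printer driver install windows", "attrs": {"os": "windows"}},
    {"text": "printer jam paper tray", "attrs": {"os": "any"}},
]
hits = keyword_filter(cases, ["printer"])       # rapid focusing
best = rank_by_overlap(hits, {"os": "windows"})[0]  # guided refinement
print(best["attrs"]["os"])  # -> windows
```

When the user cannot supply good keywords, the case-based stage can instead drive a dialogue, asking for the attributes that best discriminate among the remaining candidates.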
Artificial Intelligence in Medicine | 1997
Tim Menzies; Paul Compton
It is difficult to assess hypothetical models in poorly measured domains such as neuroendocrinology. Without a large library of observations to constrain inference, the execution of such incomplete models implies making assumptions. Mutually exclusive assumptions must be kept in separate worlds. We define a general abductive multiple-worlds engine that assesses such models by (i) generating the worlds and (ii) testing whether these worlds contain known behaviour. World generation is constrained via the use of relevant envisionment. We describe QCM, a modeling language for compartmental models that can be processed by this inference engine. This tool has been used to find faults in theories published in international refereed journals; i.e., QCM can detect faults which are invisible to other methods. The generality and computational limits of this approach are discussed. In short, this approach is applicable to any representation that can be compiled into an and-or graph, provided the graphs are not too big or too intricate (fanout < 7).
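The two-step engine, generate worlds, then test them against known behaviour, can be illustrated in miniature. Here each group holds mutually exclusive assumptions, a world picks exactly one assumption per group, and a world survives only if it reproduces the known behaviour. This is a much-simplified formulation of the idea, with invented assumption names; the actual engine constrains generation with relevant envisionment over an and-or graph:

```python
from itertools import product

def generate_worlds(assumption_groups):
    """Step (i): each group is mutually exclusive, so a world chooses
    exactly one assumption from every group."""
    return [frozenset(choice) for choice in product(*assumption_groups)]

def viable_worlds(assumption_groups, explains):
    """Step (ii): keep only worlds whose assumptions reproduce the
    known behaviour."""
    return [w for w in generate_worlds(assumption_groups) if explains(w)]

# Invented neuroendocrine-style assumptions, for illustration only.
groups = [["hormone_up", "hormone_down"],
          ["receptor_active", "receptor_blocked"]]
known = {"hormone_up", "receptor_active"}  # the observed behaviour
worlds = viable_worlds(groups, lambda w: known <= w)
print(len(worlds), sorted(worlds[0]))  # -> 1 ['hormone_up', 'receptor_active']
```

The exhaustive product over groups shows why fanout matters: the number of candidate worlds grows multiplicatively with the alternatives per choice point, which is why the approach is limited to graphs that are not too big or intricate.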
Knowledge Acquisition, Modeling and Management | 1993
Paul Compton; Byeong Ho Kang; Phillip Preston; Mary Mulholland
This paper suggests that a distinction between knowledge acquisition methods should be made. On the one hand there are methods which aim to help the expert and knowledge engineer analyse what knowledge is involved in solving a particular type of problem and how this problem solving is carried out. These methods are concerned with classifying the different types of problem solving and providing tools and methods to help the knowledge engineer identify the appropriate approach and ensure nothing is omitted. A different approach to knowledge acquisition focuses on ensuring incremental addition of validated knowledge as mistakes are discovered (validated knowledge here means only that the earlier performance of the system is not degraded by the addition of new knowledge). The organisation of this knowledge is managed by the system rather than the expert and knowledge engineer. This would seem to correspond to human incremental development of expertise. From this perspective task analysis is a secondary activity related to explanation and justification, not knowledge acquisition. Ripple Down Rules is a limited example of this approach. The paper considers the possibility of extending this approach to make it more generally applicable.
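The narrow sense of “validated” used above, that new knowledge must not degrade earlier performance, can be checked mechanically by replaying previously solved cases before accepting a rule. A minimal sketch under that reading, with illustrative names and a deliberately simple rule representation:

```python
def validate_addition(knowledge_base, new_rule, cornerstones, classify):
    """Accept a new rule only if every previously solved (cornerstone)
    case keeps its original conclusion, i.e. performance is not degraded."""
    before = [classify(knowledge_base, c) for c in cornerstones]
    after = [classify(knowledge_base + [new_rule], c) for c in cornerstones]
    return before == after

def classify(rules, case):
    """Rules as (predicate, conclusion) pairs; later rules override earlier."""
    result = None
    for cond, concl in rules:
        if cond(case):
            result = concl
    return result

kb = [(lambda c: True, "normal")]
cornerstones = [{"glucose": 5.0}]                    # already solved correctly
bad_rule = (lambda c: c["glucose"] > 4.0, "high")    # would relabel a solved case
good_rule = (lambda c: c["glucose"] > 10.0, "high")  # leaves solved cases alone
print(validate_addition(kb, bad_rule, cornerstones, classify))   # -> False
print(validate_addition(kb, good_rule, cornerstones, classify))  # -> True
```

Because the check is automatic, the organisation and validation of the knowledge stay with the system, leaving the expert free to state rules case by case.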
International Conference on Knowledge Capture | 2001
Hendra Suryanto; Paul Compton
Current approaches to building knowledge-based systems propose the development of an ontology as a precursor to building the problem-solver. This paper outlines an attempt to do the reverse and discover interesting ontologies from systems built without the ontology being explicit. In particular, the paper considers large classification knowledge bases used for the interpretation of medical chemical pathology results and built using Ripple-Down Rules (RDR). The rule conclusions in these knowledge bases provide free-text interpretations of the results rather than explicit classes. The goal is to discover implicit ontological relationships between these interpretations as the system evolves. RDR allows for incremental development, so the ontology can emerge as the system evolves. The results suggest that this approach has potential, but further investigation is required before strong claims can be made.