
Publication


Featured research published by Alun D. Preece.


Knowledge Engineering Review | 1992

Principles and practice in verifying rule-based systems

Alun D. Preece; Rajjan Shinghal; Aïda Batarekh

This paper surveys the verification of expert system knowledge bases by detecting anomalies. Such anomalies are highly indicative of errors in the knowledge base. The paper is in two parts. The first part describes four types of anomaly: redundancy, ambivalence, circularity, and deficiency. We consider rule bases which are based on first-order logic, and explain the anomalies in terms of the syntax and semantics of logic. The second part presents a review of five programs which have been built to detect various subsets of the anomalies. The four anomalies provide a framework for comparing the capabilities of the five tools, and we highlight the strengths and weaknesses of each approach. This paper therefore provides not only a set of underlying principles for performing knowledge base verification through anomaly detection, but also a survey of the state-of-the-art in building practical tools for carrying out such verification. The reader of this paper is expected to be familiar with first-order logic.
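The four anomaly classes surveyed above lend themselves to purely syntactic checking. The sketch below is a hypothetical illustration, not COVER or any of the five reviewed tools: a rule is a (premises, conclusion) pair, negation is written `~p`, and each anomaly is detected by simple inspection of the rule set. The rule encoding and function name are assumptions made for this example.

```python
from itertools import combinations

def find_anomalies(rules, inputs):
    """Detect the four classic anomalies in a toy rule base.

    rules  -- list of (frozenset_of_premises, conclusion) pairs
    inputs -- literals the system is declared to accept as input
    """
    anomalies = []

    # Redundancy: the same rule appears more than once.
    seen = set()
    for rule in rules:
        if rule in seen:
            anomalies.append(("redundancy", rule))
        seen.add(rule)

    # Ambivalence: identical premises derive both p and ~p.
    for (p1, c1), (p2, c2) in combinations(rules, 2):
        if p1 == p2 and (c1 == "~" + c2 or c2 == "~" + c1):
            anomalies.append(("ambivalence", (c1, c2)))

    # Circularity: a cycle in the premise -> conclusion dependency graph.
    graph = {}
    for premises, conclusion in rules:
        for p in premises:
            graph.setdefault(p, set()).add(conclusion)

    def on_cycle(start):
        stack, visited = [start], set()
        while stack:
            node = stack.pop()
            for nxt in graph.get(node, ()):
                if nxt == start:
                    return True
                if nxt not in visited:
                    visited.add(nxt)
                    stack.append(nxt)
        return False

    for node in graph:
        if on_cycle(node):
            anomalies.append(("circularity", node))
            break  # report one representative cycle member

    # Deficiency: a declared input is never used in any premise.
    used = {p for premises, _ in rules for p in premises}
    for literal in inputs:
        if literal not in used:
            anomalies.append(("deficiency", literal))

    return anomalies
```

A seeded rule base containing one instance of each anomaly (a duplicated rule, `c -> d` alongside `c -> ~d`, the cycle `e -> f -> e`, and an unused input `g`) exercises all four checks.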


Expert Systems With Applications | 1994

State of the art in automated validation of knowledge-based systems

Neli P. Zlatareva; Alun D. Preece

Validation of Knowledge-Based Systems (KBS) is an important aspect of the overall KBS development process, which aims to assure the system's ability to reach correct conclusions. The objective of this paper is to discuss the desirable functionality of an automated validation tool and to provide a survey of existing methods and tools supporting that functionality. The scope of our discussion is limited to validating the level of performance of the KBS as a problem solver, since this is the aspect in which KBS differ most from conventional software; more conventional aspects of system evaluation, such as assessing the “usability” of the system, are not covered. Automated tools are considered in two categories: dynamic and static. Dynamic validation tools are those that measure and, in some cases, refine the level of performance of a KBS using a suite of test cases. Use of such tools assumes that an adequate set of real test cases is available. Static validation tools are used to create test cases by making use of domain knowledge already embodied in the KBS or meta-knowledge. Such tools are used when an inadequate set of test cases is available.
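The dynamic category can be illustrated with a toy sketch: run a minimal forward-chaining rule engine over a suite of test cases with known correct conclusions and report the fraction derived correctly. This is a hypothetical example; the rule format, the engine, and the function names are assumptions for illustration, not any tool surveyed in the paper.

```python
def forward_chain(rules, facts):
    """Derive all conclusions by repeatedly firing satisfied rules.

    rules -- list of (frozenset_of_premises, conclusion) pairs
    facts -- initial set of known facts
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def dynamic_validate(rules, test_cases):
    """Fraction of test cases whose expected conclusion is derived."""
    passed = sum(
        1 for facts, expected in test_cases
        if expected in forward_chain(rules, facts)
    )
    return passed / len(test_cases)
```

On a toy diagnostic rule base with three test cases, two of which the rules can solve, the measured level of performance would be 2/3; the static category, by contrast, would generate such cases from the rule base itself rather than require them up front.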


Expert Systems With Applications | 1992

Verifying expert systems: A logical framework and a practical tool

Alun D. Preece; Rajjan Shinghal; Aïda Batarekh

The first step in establishing the reliability of an expert system is to verify that its knowledge base is free from anomalies such as redundant, conflicting or missing knowledge. Such anomalies are suggestive of errors in the knowledge base; empirical evidence suggests that errors revealed by verification may be hard to detect by conventional system testing. Verification can be performed automatically by a domain-independent anomaly checking tool because the anomalies can be formally defined in terms of logic and detected by syntactic inspection of logic-based knowledge bases. This paper provides not only a formal framework for verification in the form of a set of anomaly definitions expressed in first-order predicate logic, but also detailed descriptions and algorithms for an automatic verification program called COVER. COVER offers a number of advantages over previous anomaly detection tools: firstly, it detects a wider range of anomalies; secondly, it incorporates a number of novel features which afford its users more flexibility to make the checking task more practical for large knowledge bases. A familiarity with first-order predicate logic is assumed for the reader of this paper.


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 1993

A new approach to detecting missing knowledge in expert system rule bases

Alun D. Preece

Two of the most important and difficult tasks in building expert systems are knowledge acquisition (KA) and quality assurance (QA). QA involves verification and validation (V&V). This paper presents a new approach to detecting missing knowledge, implemented in the COVER verification tool, which focuses the search for meaningful deficiencies; integrates closely with checks for redundancy, conflicts and circularity; maximizes user control over deficiency detection; and overcomes the combinatorial explosion traditionally associated with the deficiency check. The paper describes how COVER uses heuristics about the nature of likely deficiencies to improve its performance and clarify reporting of deficiencies to the user. COVER's performance is analysed in detail, both theoretically and on real-world expert system knowledge bases.


Expert Systems With Applications | 1991

Specifying an expert system

Aïda Batarekh; Alun D. Preece; Anne Bennett; Peter Grogono

The success of numerous expert systems in practical applications warrants a more formal approach to their development and evaluation. Reliability assurance of expert systems requires a methodology for the specification and evaluation of these systems. Expert systems are a new class of software system, but some traditional techniques of software development may be adapted to their construction. However, the specification of an expert system differs from that of a more traditional software program in that parts of the specification are permitted to be only partially described when development starts. Specifications have two important purposes: as contracts between suppliers and clients, and as blueprints for implementation. A specification consists of a problem specification and a solution specification. The problem specification plays the role of contract and states explicitly what the problem to be solved is, and the constraints that the final product must satisfy. The solution specification plays the role of blueprint and has two major aspects: analyzing how a human expert solves the problem, and proposing an equivalent automated solution. We propose an approach to the specification of expert systems that is flexible, yet rigorous enough to cover the important features of a wide range of potential expert system applications. We describe fully each of the components of an expert system specification and we relate specification to the issues of evaluation and maintenance of expert systems.


Applications of Artificial Intelligence IX | 1991

Practical approach to knowledge base verification

Alun D. Preece; Rajjan Shinghal

We consider verifying knowledge bases to three levels of rigor: detection of anomalies, verification of safety properties, and verification of full correctness. We present formal definitions for four classes of anomalies which may be present in knowledge bases expressed using first order logic: redundancy, ambivalence, circularity and deficiency. The definitions are initially given for rule-based systems without uncertainty, but we extend them to consider uncertainty and frame-based knowledge representations. We demonstrate that, although verification of full correctness will not usually be feasible for knowledge-based systems, it is important that their safety properties be verified, and we present a method for doing this based on our definitions of logical anomalies. We demonstrate the validity of this framework by presenting the results of a verification performed on the knowledge base of a working expert system.


Knowledge Based Systems | 1992

Empirical study of expert system development

Alun D. Preece; L. Moseley

Although a wide range of different kinds of tool is now available for building expert systems, there is little published guidance on how to select appropriate tools for an application. An experiment is described to measure the effectiveness of three different approaches to expert system development, using different types of tool: use of a commercial expert system shell, use of a general-purpose high-level programming language, or use of a custom-built shell. By recording the development effort in each case, and applying metrics to these data, the advantages and disadvantages of each approach are identified.


Expert Systems With Applications | 1992

A survey of evaluation techniques used for expert systems in telecommunications

Peter Grogono; Alun D. Preece; Rajjan Shinghal; Ching Y. Suen

Hardware and software used within the telecommunications industry must combine great complexity with high reliability. Production and maintenance of communications equipment requires many different kinds of human expertise. There is growing interest in the potential of expert systems to assist, or perhaps to replace, human experts. It is important to ensure that the expert systems are reliable and accurate; consequently, they must be evaluated. We review published experience with expert systems in the telecommunications industry and we propose some principles that we feel could usefully be adopted for their evaluation.


IEEE Transactions on Applications and Industry | 1990

Specification of expert systems

Aïda Batarekh; Alun D. Preece; Anne Bennett; Peter Grogono

The authors focus on the problems of specification of an expert system: namely, what needs to be specified, what can be specified, and how. Two distinct major roles for a software specification are identified: as a contract between parties involved in system development and as a blueprint for the design and implementation of the system. It is shown that these purposes require quite different specifications. The role of specification as a contract is taken by the problem specification, which essentially describes what system is to be built. The blueprint specification is complementary, and describes how the system is to be built, including a description of the knowledge to be used and a description of how to represent and reason with that knowledge.


Expert Systems | 1990

Towards a methodology for evaluating expert systems

Alun D. Preece

Collaboration

Top co-author: Neli P. Zlatareva, Central Connecticut State University.