Roy Leitch
Heriot-Watt University
Publication
Featured research published by Roy Leitch.
Systems, Man and Cybernetics | 1993
Qiang Shen; Roy Leitch
An approach is described that utilizes fuzzy sets to develop a fuzzy qualitative simulation algorithm that allows a semiquantitative extension to qualitative simulation, providing three significant advantages over existing techniques. Firstly, it allows a more detailed description of physical variables, through an arbitrary, but finite, discretisation of the quantity space. The adoption of fuzzy sets also allows common-sense knowledge to be represented in defining values through the use of graded membership, enabling the subjective element in system modelling to be incorporated and reasoned with in a formal way. Secondly, the fuzzy quantity space allows a more detailed description of functional relationships, in that both strength and sign information can be represented by fuzzy relations holding over two or more variables. Thirdly, the quantity space allows ordering information on rates of change to be used to compute temporal durations of the state and the possible transitions. Thus, an ordering of the evolution of the states and the associated temporal durations are obtained. This knowledge is used to develop an effective temporal filter that significantly reduces the number of spurious behaviors.
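As a rough illustration of the fuzzy quantity space idea in the abstract, the sketch below defines qualitative values as trapezoidal fuzzy numbers with graded, overlapping membership; the labels, break-points and four-parameter form are assumptions made for the example, not the paper's actual quantity space.

```python
# Sketch of a fuzzy quantity space: each qualitative value is a fuzzy number
# given as a trapezoid (a, b, alpha, beta) -- full membership on [a, b],
# linear shoulders of width alpha (left) and beta (right).
# The labels and break-points below are illustrative only.

def trapezoid(a, b, alpha, beta):
    """Return a membership function for the trapezoidal fuzzy number (a, b, alpha, beta)."""
    def mu(x):
        if a <= x <= b:
            return 1.0
        if a - alpha < x < a:
            return (x - (a - alpha)) / alpha
        if b < x < b + beta:
            return ((b + beta) - x) / beta
        return 0.0
    return mu

# An arbitrary, finite discretisation of the quantity space for a variable
# such as a flow rate; graded membership lets neighbouring labels overlap.
QUANTITY_SPACE = {
    "zero":       trapezoid(0.0, 0.0, 0.0, 0.5),
    "small-pos":  trapezoid(0.5, 2.0, 0.5, 1.0),
    "medium-pos": trapezoid(2.5, 5.0, 1.0, 1.0),
    "large-pos":  trapezoid(6.0, 10.0, 1.0, 0.0),
}

def fuzzify(x):
    """Map a numeric reading to the graded memberships of each qualitative label."""
    return {label: mu(x) for label, mu in QUANTITY_SPACE.items() if mu(x) > 0.0}

if __name__ == "__main__":
    print(fuzzify(2.2))  # partial membership in both 'small-pos' and 'medium-pos'
```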
Artificial Intelligence in Engineering | 1998
Mike J. Chantler; G.M. Coghill; Qiang Shen; Roy Leitch
We present a methodology for the selection of candidate generation and prediction techniques for model-based diagnostic systems (MBDS). We start by describing our taxonomy of the solution space based upon the three main functional blocks of a top-level MBDS architecture (the predictor, the candidate generator and the diagnostic strategist). We divide the corresponding problem space into user requirements and system constraints, which are further subdivided into task and fault requirements, and plant and domain knowledge constraints, respectively. Finally, we propose a set of guidelines for selecting tools and techniques in the solution space given descriptions of diagnostic tasks in the problem space.
Artificial Intelligence in Engineering | 1989
Roy Leitch; Alberto Stefanini
This paper describes the development of a composite software environment, termed Toolkit, for supporting the design and development of Knowledge Based Systems in the domain of Industrial Automation. It presents the design of the tools and the motivation behind them, and the supporting methodology that allows generic problems to be associated with the functionality provided by the respective tools. The Toolkit allows both empirical and theoretical knowledge to be represented, the latter by an implementation of qualitative modelling techniques based on a component-centred ontology. Further, languages are provided for representing (empirical) knowledge in either a declarative or a procedural format. The Toolkit is organized as a task-dependent architecture consisting of five conceptual layers: strategic, tactical, teleological, functional and object. The tools are defined by a systematic task classification and are constructed from a set of tool components consisting of the representation languages and their associated inference mechanisms. In addition, other tool components include the provision of truth maintenance and causal ordering. An overview of the representation languages is given together with a description of the current tools of the Toolkit. An example is given of using two of the tools to build a model-based diagnostic reasoner, using the component-based language, and constraint propagation and assumption-based truth maintenance. Finally, a discussion of the current work on the Toolkit is presented.
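The layered organisation and tool components described above can be pictured as plain data; the sketch below lists the five layers named in the abstract and assembles a hypothetical diagnostic tool from components, with the exact composition being an assumption rather than the Toolkit's actual configuration.

```python
# Sketch of the Toolkit's task-dependent layering and tool components as data.
# The component groupings and the example tool's composition are illustrative.

CONCEPTUAL_LAYERS = ["strategic", "tactical", "teleological", "functional", "object"]

TOOL_COMPONENTS = {
    "representation": ["declarative rules", "procedural language", "component-centred models"],
    "inference":      ["constraint propagation", "assumption-based truth maintenance"],
    "support":        ["causal ordering"],
}

# A model-based diagnostic reasoner assembled from components, echoing the
# example mentioned in the abstract (this exact composition is a guess).
DIAGNOSTIC_TOOL = {
    "representation": "component-centred models",
    "inference": ["constraint propagation", "assumption-based truth maintenance"],
}

if __name__ == "__main__":
    print(CONCEPTUAL_LAYERS)
    print(DIAGNOSTIC_TOOL)
```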
Artificial Intelligence in Engineering | 1995
Qiang Shen; Roy Leitch
This paper presents several innovations in the development of model-based diagnostic systems for diagnosing faults in continuous dynamic physical systems. The approach utilises recent developments in qualitative simulation techniques to cope with the inherent lack of modelling knowledge and to provide a qualitative description of the dynamic behaviour. In particular, techniques for the synchronous tracking of the model-based predictions and the evolution of the physical system between equilibria are developed. A discrepancy metric is defined that allows for the continuous degradation of the system behaviour from normal to faulty to be detected. And, most fundamentally, a method for iteratively searching through the space of possible model variations is presented. This provides explicit feedback from detected discrepancies to model adjustments and has the important advantage of reducing the sensitivity to modelling errors and approximate fault models. In the limit, no fault models are required. However, if available these can be used to initialise the search. An example is included which outlines the basic approach discussed in this paper.
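A minimal sketch of the discrepancy-driven search described above, assuming a simple maximum-difference metric, a hypothetical simulate function and an illustrative list of candidate model variations; none of these names or thresholds come from the paper.

```python
# Sketch of discrepancy-driven model adjustment: predictions from the current
# model are compared against observations; when the discrepancy exceeds a
# threshold, candidate model variations are searched until one restores
# agreement.  Model representation, metric and search order are illustrative.

def discrepancy(predicted, observed):
    """A simple metric over matching variable names; the paper's metric differs."""
    return max(abs(predicted[v] - observed[v]) for v in observed)

def diagnose(model, observed, variations, simulate, threshold=0.1):
    """Return the first model variation whose predictions match the observations.

    model      -- current (presumed normal) model
    observed   -- dict of measured variable values
    variations -- iterable of candidate model adjustments (fault hypotheses)
    simulate   -- function mapping a model to a dict of predicted values
    """
    if discrepancy(simulate(model), observed) <= threshold:
        return model, "no fault detected"
    for candidate in variations:          # iterative search through model space
        if discrepancy(simulate(candidate), observed) <= threshold:
            return candidate, "fault hypothesis accepted"
    return None, "no candidate explains the observations"

if __name__ == "__main__":
    # Toy single-parameter 'model': a tank whose outflow coefficient may have drifted.
    simulate = lambda k: {"outflow": k * 2.0}
    normal, faults = 1.0, [0.9, 0.7, 0.5]      # hypothetical degraded coefficients
    print(diagnose(normal, {"outflow": 1.4}, faults, simulate))
```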
intelligent tutoring systems | 1992
Julie-Ann Sime; Roy Leitch
This paper describes an intelligent learning environment based on multiple models, both quantitative and qualitative, of a complex physical system. A trainee can learn the use of multiple models, in reasoning about the behaviour of the system, through a process of cognitive apprenticeship. The trainee can solve problems or observe the expert demonstrate problem solving using multiple models, switching between them as and when necessary. The dimensions along which these models vary are defined and example training scenarios provided.
Artificial Intelligence in Engineering | 1999
Mohan Ravindranathan; Roy Leitch
This paper demonstrates the use of multiple models in intelligent control systems where models are organised within a model space of three primitive modelling dimensions: precision, scope and generality. This approach generates a space of models to extend the operating range of control systems. Within this model space, the selection of the most appropriate model to use in a given situation is determined through a reasoning strategy consisting of a set of model switching rules. These are based on using the most efficient, but least general, models first and then incrementally increasing the generality and scope until a satisfactory model is found. This methodology has culminated in a multi-model intelligent control system architecture that trades off efficiency with generality, an approach apparent in human problem solving. The architecture allows learning of successful adaptations through model refinement and the subsequent direct use of refined models in similar situations in the future. Examples using models of a laboratory-scale process rig illustrate the adaptive reasoning and learning process of multi-model intelligent control systems.
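The switching rule, trying the most efficient and least general model first and escalating generality and scope until a satisfactory model is found, can be sketched as below; the model names, ranks and adequacy test are invented for illustration.

```python
# Sketch of multi-model switching: try the most efficient (least general,
# narrowest-scope) model first, and escalate generality and scope until one
# is judged satisfactory for the current situation.  The models, ranks and
# adequacy test here are placeholders, not the paper's control system.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Model:
    name: str
    generality: int      # higher = more general (and usually more costly)
    scope: int           # higher = covers a wider operating region
    predict: Callable[[dict], float]

def select_model(models, situation, satisfactory):
    """Apply the switching rule: least general and narrowest scope first."""
    for m in sorted(models, key=lambda m: (m.generality, m.scope)):
        if satisfactory(m, situation):
            return m
    return None          # no model in the space is adequate

if __name__ == "__main__":
    models = [
        Model("linearised-local", 1, 1, lambda s: 2.0 * s["u"]),
        Model("nonlinear-regional", 2, 2, lambda s: 2.0 * s["u"] - 0.1 * s["u"] ** 2),
        Model("full-range", 3, 3, lambda s: 2.0 * s["u"] - 0.1 * s["u"] ** 2 + 0.5),
    ]
    # Hypothetical adequacy test: the local model only holds for small inputs.
    adequate = lambda m, s: not (m.name == "linearised-local" and abs(s["u"]) > 5)
    print(select_model(models, {"u": 8.0}, adequate).name)   # -> nonlinear-regional
```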
Artificial Intelligence in Engineering | 1992
Qiang Shen; Roy Leitch
Recently, much interest has been generated in developing less abstract quantity spaces for qualitative reasoners in an attempt to reduce the fundamental, and essential, ambiguities at source. Several researchers have utilised the theory of non-standard analysis (NSA) as a mathematical underpinning to establish techniques for ‘order of magnitude reasoning’ (OMR). These are significant developments and result in the elimination of many spurious behaviours. However, several problems exist with these approaches, in particular, when the qualitative description is taken to be an abstraction of an underlying real-valued representation. This research note discusses some limitations preventing the consistent use of OMR when compared to the real-valued case. Further, it is argued that the intuition behind the OMR approach implies graded set membership for representing quantities and the relations between them, rather than the crisp sets supporting NSA. Finally, the note indicates how the theory of fuzzy sets can be used to consolidate and extend the advances made by OMR.
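A small sketch of the contrast between a crisp, NSA-style 'negligible with respect to' relation and a graded, fuzzy one; the cut-off and shoulder values are arbitrary and only illustrate the point about graded membership.

```python
# Crisp versus graded membership for an order-of-magnitude relation such as
# "x is negligible with respect to y".  The cut-off and shoulder below are
# arbitrary illustrations of the contrast, not values from the note.

def negligible_crisp(x, y, cutoff=0.01):
    """Crisp relation: either negligible or not, with a hard boundary."""
    return abs(x) / abs(y) < cutoff

def negligible_fuzzy(x, y, full=0.01, none=0.1):
    """Fuzzy relation: membership grades from 1 (clearly negligible) down to 0."""
    r = abs(x) / abs(y)
    if r <= full:
        return 1.0
    if r >= none:
        return 0.0
    return (none - r) / (none - full)     # linear shoulder between the two

if __name__ == "__main__":
    for x in (0.005, 0.02, 0.15):
        print(x, negligible_crisp(x, 1.0), round(negligible_fuzzy(x, 1.0), 2))
```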
Annals of Mathematics and Artificial Intelligence | 1994
Roy Leitch; Mike J. Chantler; Qiang Shen; G.M. Coghill
This paper sets out to provide a basis for a specification methodology for model-based diagnostic systems (MBDS). The purpose of the methodology is to provide a mapping from the problem space of possible diagnostic applications to the solution space provided by the various approaches to MBDS. Therefore, given the major characteristics of a diagnostic problem, the methodology should provide guidelines by which the specification of a suitable MBDS may be determined. As a first stage in the development of this methodology we provide taxonomies of the problem and solution spaces. The former is characterised via a set of Problem requirements, divided into Task, Fault and Model requirements, while the latter is classified as a set of System specifications, with reference to the major functional blocks of MBDS: namely the Diagnostic Strategist, the Predictor and the Candidate Proposer. The last part of this paper proposes a mapping between these two multi-dimensional spaces. This mapping is preliminary, and therefore neither exhaustive nor exclusive, but together with the taxonomies of the problem requirements and system specification will provide a catalyst for the development of a more extensive methodology.
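One way to picture the two taxonomies and the preliminary mapping is as plain data, as in the sketch below; the specific requirement labels and the guideline table are illustrative assumptions, not the paper's actual entries.

```python
# Sketch of the problem-space / solution-space taxonomies as plain data,
# plus a toy mapping from one problem requirement to a predictor technique.
# All labels are illustrative placeholders.

PROBLEM_REQUIREMENTS = {
    "task":  ["detection", "localisation", "identification"],
    "fault": ["abrupt", "incipient", "multiple"],
    "model": ["precise numeric", "qualitative", "semi-quantitative"],
}

SYSTEM_SPECIFICATION_BLOCKS = ["diagnostic strategist", "predictor", "candidate proposer"]

# Hypothetical guideline: given a model requirement, suggest a predictor technique.
PREDICTOR_GUIDELINES = {
    "precise numeric":   "numerical simulation",
    "qualitative":       "qualitative simulation",
    "semi-quantitative": "fuzzy qualitative simulation",
}

def specify(model_requirement):
    """Return a (partial) system specification for the given model requirement."""
    return {"predictor": PREDICTOR_GUIDELINES.get(model_requirement, "unspecified")}

if __name__ == "__main__":
    print(specify("semi-quantitative"))   # -> {'predictor': 'fuzzy qualitative simulation'}
```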
Computer Education | 1993
Julie-Ann Sime; Roy Leitch
This paper describes the basis of a specification methodology for building Intelligent Training Systems in Industrial Environments (ESPRIT project 2615, ITSIE). The specification methodology determines the mapping of specific training requirements onto tools and techniques for implementation, based upon an analysis of the training requirements and of the nature of the domain knowledge. A task analysis is carried out on the training requirements to determine a number of specific training objectives. These objectives are described in terms of the required level of behaviour and knowledge necessary to achieve the objective. This description leads to the determination of a primitive mode of instruction, which promotes rote, inductive or deductive learning. This decomposition is used to identify relevant tools and techniques for domain knowledge representations and for didactics and diagnosis within the tutor of the training system.
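The step of selecting a primitive mode of instruction from an objective's required level of behaviour and knowledge might look like the sketch below; the level names and decision rules are assumptions for illustration only.

```python
# Sketch: choose a primitive mode of instruction (rote, inductive, deductive)
# from a training objective described by required behaviour and knowledge.
# The level names and the decision rules are illustrative placeholders.

def mode_of_instruction(behaviour, knowledge):
    """behaviour: 'skill' | 'rule' | 'knowledge'; knowledge: 'empirical' | 'theoretical'."""
    if behaviour == "skill":
        return "rote"                      # drill and practice of fixed procedures
    if behaviour == "rule" and knowledge == "empirical":
        return "inductive"                 # generalise from worked examples
    return "deductive"                     # reason from theoretical domain models

if __name__ == "__main__":
    for b, k in [("skill", "empirical"), ("rule", "empirical"), ("knowledge", "theoretical")]:
        print(b, k, "->", mode_of_instruction(b, k))
```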
Decision Support Systems | 1995
G.J. Wyatt; Roy Leitch; A.D. Steele
Traditional quantitative methods of analysis and simulation are compared with recently developed techniques in qualitative simulation, using as a case study a simple dynamic model of the interacting markets for housing and mortgages. Analysis by the different techniques shows that while qualitative simulation requires only the less detailed models of the precision normally available in practice, it results in ambiguous descriptions of behaviour that, for certain initial conditions, can obscure the true behaviour. By contrast, quantitative simulation produces a unique, precise behaviour, but, in requiring excessively specific information from the modeller, it may produce a precise yet inaccurate outcome.
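The contrast drawn above can be illustrated on a toy two-variable system (not the paper's housing and mortgage model): a numeric Euler step yields one precise trajectory, while a sign-only qualitative step becomes ambiguous when influences of opposite sign combine.

```python
# Toy contrast between quantitative and qualitative simulation.  The coupled
# system and its coefficients are invented for illustration only.

def sign_add(s1, s2):
    """Qualitative addition over signs {-1, 0, +1}; '?' when the result is ambiguous."""
    if s1 == 0:
        return s2
    if s2 == 0 or s1 == s2:
        return s1
    return "?"                             # opposite signs of unknown magnitude

def quantitative_step(h, m, a=0.5, b=0.8, c=0.3, dt=0.1):
    """One Euler step of a toy coupled system: dh/dt = b*h - a*m, dm/dt = c*h."""
    return h + (b * h - a * m) * dt, m + c * h * dt

def qualitative_dh(sign_h, sign_m):
    """Qualitative derivative of h: sign of b*h combined with sign of -a*m."""
    return sign_add(sign_h, -sign_m)

if __name__ == "__main__":
    print(quantitative_step(1.0, 1.0))   # unique numeric result, here (1.03, 1.03)
    print(qualitative_dh(+1, +1))        # '?': the qualitative model cannot resolve the sign
```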