Hoon Heng Teh
National University of Singapore
Publications
Featured research published by Hoon Heng Teh.
Computational Science and Engineering | 1996
Chew Lim Tan; T.S. Quah; Hoon Heng Teh
The Neural Logic Network (Neulonet) system models a wide range of human decision-making behaviors by combining the strengths of rule-based expert systems and neural networks. Neulonet differs from other neural networks by associating an ordered pair of numbers with each node and connection. Let Q be the output node and P_1, P_2, ..., P_N the input nodes. Let the values associated with node P_i be denoted by (a_i, b_i), and the weight of the connection from P_i to Q by (α_i, β_i). Each node's ordered pair takes one of three values: (1,0) for true, (0,1) for false, or (0,0) for don't know; (1,1) is undefined.
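A small sketch can make the node semantics concrete. The thresholded activation below (net input of at least 1 yields true, at most -1 yields false, anything in between yields don't know, with net = Σ(α_i·a_i − β_i·b_i)) is the rule commonly quoted for neural logic networks; treat it, and the example weights, as assumptions rather than a verbatim reproduction of the paper's definition.

```python
# Minimal sketch of a Neulonet-style node evaluation. The activation rule is an
# assumption: net >= 1 -> true, net <= -1 -> false, otherwise "don't know",
# with net = sum(alpha_i * a_i - beta_i * b_i).

TRUE, FALSE, UNKNOWN = (1, 0), (0, 1), (0, 0)

def evaluate_node(inputs, weights):
    """inputs: list of (a_i, b_i) truth pairs; weights: list of (alpha_i, beta_i) pairs."""
    net = sum(alpha * a - beta * b for (a, b), (alpha, beta) in zip(inputs, weights))
    if net >= 1:
        return TRUE
    if net <= -1:
        return FALSE
    return UNKNOWN

# Illustrative weights that make the node behave like a two-input conjunction.
print(evaluate_node([TRUE, TRUE], [(0.5, 2), (0.5, 2)]))     # (1, 0) -> true
print(evaluate_node([TRUE, FALSE], [(0.5, 2), (0.5, 2)]))    # (0, 1) -> false
print(evaluate_node([TRUE, UNKNOWN], [(0.5, 2), (0.5, 2)]))  # (0, 0) -> don't know
```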
Fuzzy Sets and Systems | 1996
Liya Ding; Hoon Heng Teh; Pei-Zhuang Wang; Ho Chung Lui
Research under the name of Neural Logic Networks is an attempt to integrate connectionist models and logical reasoning [8, 9]. With a Neural Logic Network, a simple neural network structure with suitable weights can represent a set of flexible operations, which offers increased possibilities for inference in real-world problem solving. These operations also possess useful properties in an extended logic system called Neural Logic. One of the important features of Neural Logic is that all its operations can be defined and realized by neural networks, which form Neural Logic Networks. As one part of the research on Neural Logic Networks, fuzzy neural logic programming has been proposed [6]. This paper introduces a Prolog-like inference system based on Neural Logic as an implementation of fuzzy neural logic programming. In this system, fuzzy reasoning is carried out by the Neural Logic inference engine over incomplete or uncertain knowledge. The framework of the system and its inference mechanism are described.
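To picture what a Prolog-like inference step over such rules might look like, the following hypothetical sketch backward-chains over rules whose antecedents carry (α, β) weights and whose truth values are (a, b) pairs; the rule syntax, the example rules, and the threshold rule are all assumptions made for illustration, not the paper's actual engine.

```python
# Hypothetical sketch of a Prolog-like inference step over neural-logic rules.
# Rule syntax, weights and the threshold rule are illustrative assumptions.

RULES = {
    # head: list of (antecedent, (alpha, beta)) pairs
    "take_umbrella": [("raining", (2.0, 2.0)), ("long_walk", (0.5, 0.5))],
}

FACTS = {"raining": (1, 0), "long_walk": (0, 0)}   # long_walk is "don't know"

def truth(prop):
    """Backward-chain: evaluate a proposition to an (a, b) truth pair."""
    if prop in FACTS:
        return FACTS[prop]
    if prop in RULES:
        net = 0.0
        for antecedent, (alpha, beta) in RULES[prop]:
            a, b = truth(antecedent)
            net += alpha * a - beta * b
        if net >= 1:
            return (1, 0)
        if net <= -1:
            return (0, 1)
        return (0, 0)
    return (0, 0)   # unknown proposition: "don't know"

print(truth("take_umbrella"))  # (1, 0): "raining" alone is decisive under these weights
```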
Fuzzy Sets and Systems | 1994
Shaohua Tan; Hoon Heng Teh; Pei-Zhuang Wang
Matrices are commonly used to represent fuzzy similarity relations in the study of fuzzy relations. Here, we introduce an efficient and compact way to represent similarity relations in the form of a sequence. We first introduce the notion of the normal form of a membership matrix, and then establish an algorithm for obtaining the normal form of any given membership matrix of a similarity relation. From the normal form of the matrix, a sequential representation of the similarity relation can be constructed. We further discuss the relationship between the matrix representation and the sequential representation of fuzzy similarity relations. Both the algorithm and the construction method are illustrated by examples.
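For concreteness, the object being represented is a fuzzy similarity relation, i.e. a reflexive, symmetric, max-min transitive fuzzy relation given by a membership matrix. The sketch below checks those three properties for a small, made-up matrix; the paper's normal-form algorithm and sequential construction are not reproduced here.

```python
import numpy as np

# A fuzzy similarity relation is a reflexive, symmetric, max-min transitive
# fuzzy relation given by a membership matrix. The matrix below is illustrative
# data only.
R = np.array([
    [1.0, 0.8, 0.4],
    [0.8, 1.0, 0.4],
    [0.4, 0.4, 1.0],
])

def max_min_composition(A, B):
    # (A o B)[i, j] = max_k min(A[i, k], B[k, j])
    return np.max(np.minimum(A[:, :, None], B[None, :, :]), axis=1)

reflexive  = bool(np.all(np.diag(R) == 1.0))
symmetric  = bool(np.allclose(R, R.T))
transitive = bool(np.all(max_min_composition(R, R) <= R))
print(reflexive, symmetric, transitive)   # True True True
```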
IEEE Transactions on Applications and Industry | 1990
T. J. Reynolds; Hoon Heng Teh; Boon Toh Low
The authors propose a programming system that combines the pattern matching of Prolog with a novel approach to logic and the control of resolution. A network of nodes and arcs, together with a three-valued logic, is used to indicate the connections between predicates and their consequents, and to express the flow from the facts and propositions of a theory to its theorems. In this way, uncertainty and negation can be handled properly in this neural logic network. A neural logic program consists of a specification of network fragments, labeled with predicates and arc weights, which can be joined dynamically to form a tree of reasoning chains. The architecture of the neural logic computational model is left open, and the authors do not intend the model to be interpreted literally as a physical architecture.
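The role of the three-valued logic can be illustrated with Kleene-style connectives over (a, b) truth pairs, where negation simply swaps the two components; whether the paper defines its connectives in exactly this way is not shown here, so the definitions below are assumptions.

```python
# Kleene-style three-valued connectives over (a, b) truth pairs, assuming
# (1, 0) = true, (0, 1) = false, (0, 0) = don't know, with negation as a
# component swap. The paper's exact definitions are not reproduced here.

TRUE, FALSE, UNKNOWN = (1, 0), (0, 1), (0, 0)

def neg(p):
    a, b = p
    return (b, a)                                 # negation swaps the pair

def conj(p, q):
    return (min(p[0], q[0]), max(p[1], q[1]))     # three-valued AND

def disj(p, q):
    return (max(p[0], q[0]), min(p[1], q[1]))     # three-valued OR

print(neg(UNKNOWN))          # (0, 0): negating "don't know" stays unknown
print(conj(FALSE, UNKNOWN))  # (0, 1): false dominates a conjunction
print(disj(TRUE, UNKNOWN))   # (1, 0): true dominates a disjunction
```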
International Workshop on Variable Structure Systems | 1988
Hoon Heng Teh; Wellington C. P. Yu
Some basic features of multi-layer perceptrons are sketched. Some of the shortcomings of these perceptrons are pointed out, and ways of retaining their strengths while overcoming their shortcomings are proposed. A class of networks called inference networks is introduced in order to demonstrate that logical reasoning capability can also be modeled using networks. The class of multi-layer perceptrons and the class of inference networks are then unified into a single class of networks, called enhanced perceptrons. One important theorem obtained is that for any given pair of pattern sets, there always exists an enhanced perceptron with only one hidden layer that matches the given patterns. The patterns can be image patterns, attribute patterns, or logical patterns. The proof of this theorem is by a constructive algorithm. Once a solution is obtained, other solutions with a controlled degree of error tolerance can be generated through learning algorithms.
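The flavour of such an existence result can be seen in the textbook construction that allocates one hidden threshold unit per stored pattern; the sketch below is that generic construction for binary patterns with binary labels, not the paper's enhanced-perceptron algorithm.

```python
import numpy as np

# Generic "one hidden threshold unit per stored pattern" construction: for any
# finite set of distinct binary patterns with binary labels, a single hidden
# layer reproduces the labelling exactly. A textbook existence argument, not
# the enhanced-perceptron construction of the paper.

def build_network(patterns, labels):
    P = np.asarray(patterns, dtype=float)
    W_hidden = 2 * P - 1                       # +1 where p_i = 1, -1 where p_i = 0
    b_hidden = -(P.sum(axis=1) - 0.5)          # each unit fires only on its own pattern
    w_out = np.asarray(labels, dtype=float)    # pass through the positively labelled units

    def predict(x):
        h = (W_hidden @ np.asarray(x, dtype=float) + b_hidden > 0).astype(float)
        return int(w_out @ h > 0.5)

    return predict

net = build_network([[0, 0, 1], [1, 0, 1], [1, 1, 0]], labels=[1, 0, 1])
print([net(p) for p in ([0, 0, 1], [1, 0, 1], [1, 1, 0])])   # [1, 0, 1]
```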
International Journal of Intelligent Systems | 1998
Wangming Wu; Hoon Heng Teh; Bo Yuan
In this article, a new kind of reasoning for propositional knowledge, based on the fuzzy neural logic initiated by Teh, is introduced. A fundamental theorem is presented showing that any fuzzy neural logic network can be represented by three operations: bounded sum, complement, and scalar product. The propositional calculus of fuzzy neural logic is also investigated. Linear programming problems arising from the propositional calculus of fuzzy neural logic show a great advantage in applying fuzzy neural logic to answer imprecise questions in knowledge-based systems. An example is reconsidered here to illustrate the theory.
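The three operations named in the theorem have standard fuzzy-arithmetic definitions, which the sketch below uses (bounded sum min(1, x + y), complement 1 − x, scalar product λx clipped to [0, 1]); these definitions, and the composition shown, are illustrative and may differ in detail from the paper's formulation.

```python
# Usual fuzzy-arithmetic definitions of the three operations named in the
# theorem; how they are composed to represent a given fuzzy neural logic
# network is the subject of the paper and is not reproduced here.

def bounded_sum(x, y):
    return min(1.0, x + y)

def complement(x):
    return 1.0 - x

def scalar_product(lam, x):
    return max(0.0, min(1.0, lam * x))   # clipped to the unit interval

# Example of composition: a bounded difference max(0, x - y) built from
# bounded sum and complement alone.
x, y = 0.7, 0.5
print(complement(bounded_sum(complement(x), y)))   # prints roughly 0.2
```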
Discrete Mathematics | 1988
Hoon Heng Teh; Meng Fang Foo
This chapter presents large-scale network analysis with applications to transportation, communication, and inference networks. The study of large-scale networks has been mainly motivated by practical problems, such as transportation and reliability problems. These problems usually involve finding optimal paths in the networks and are rather similar in nature. The different networks can be unified into a more general form of network, the semiring network. Shier has described an algebraic structure for studying the reliability problem, and Carré has given an excellent description of semiring networks and their properties using matrices. This chapter describes semiring networks and shows how to deal with large matrices. The latter is particularly important because, in large-scale networks, even computation on a computer presents difficulties: the amount of random access memory in any computer is limited, and the computation time may be long.
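The unifying semiring-network idea can be illustrated with a generic algebraic-path computation; the choice of semiring below (min-plus for shortest paths, with max-min for bottleneck-style problems mentioned in a comment) is illustrative, and the chapter's techniques for handling large matrices are not reproduced.

```python
import math

# Generalized Floyd-Warshall over a semiring (oplus, otimes): with (min, +) it
# computes shortest paths, with (max, min) it computes maximum-capacity
# (bottleneck) paths. An illustrative sketch of the unifying semiring view.

def algebraic_paths(weights, oplus, otimes):
    n = len(weights)
    d = [row[:] for row in weights]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = oplus(d[i][j], otimes(d[i][k], d[k][j]))
    return d

INF = math.inf
W = [[0,   3,   INF],
     [INF, 0,   4],
     [2,   INF, 0]]

shortest = algebraic_paths(W, oplus=min, otimes=lambda a, b: a + b)
print(shortest[0][2])   # 7: path 0 -> 1 -> 2
```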
Pattern Recognition Letters | 1996
Joo-Hwee Lim; Hoon Heng Teh; Ho-Chung Lui; Pei-Zhuang Wang
We propose a novel approach called Stochastic Topology with Elastic Matching (STEM) for off-line handwritten character recognition. Fitting characters as topological maps, STEM incrementally learns stochastic prototypes from examples using elastic matching. Experimental results on the NIST digit database and a connection to deformable models are also presented.
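Elastic matching in general can be pictured as a dynamic-programming alignment that tolerates local stretching; the sketch below is a generic DTW-style matcher between two 2-D point sequences and should not be read as the STEM algorithm itself.

```python
import math

# Generic elastic matching between two 2-D point sequences via a DTW-style
# dynamic program; a stand-in illustration of elastic matching, not the STEM
# matcher itself.

def elastic_match_cost(seq_a, seq_b):
    n, m = len(seq_a), len(seq_b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(seq_a[i - 1], seq_b[j - 1])   # point-to-point distance
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

a = [(0, 0), (1, 1), (2, 2)]
b = [(0, 0), (1, 1), (1, 1), (2, 2)]
print(elastic_match_cost(a, b))   # 0.0: b is an elastically stretched copy of a
```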
IEEE Transactions on Applications and Industry | 1990
Hoon Heng Teh; Loke-Soo Hsu; Sing-Chai Chan; Kia-Fock Loe
Neural logic networks are generalized to cater to logical systems in which the validity of rules and facts changes with time. To construct a temporal network, the validity of the rules and facts is sampled at a selection of time instances to determine the connecting weights at the respective instances. The weights of the temporal network are then defined as functions that produce the known values when the proper time is substituted. Three theorems on temporal pattern recognition are proved.
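The idea of a weight defined as a function of time that reproduces sampled values can be pictured with simple interpolation; the polynomial scheme below is an assumption made for illustration, not the paper's construction.

```python
import numpy as np

# Sketch of a temporal connection weight: weight values sampled at chosen time
# instances are turned into a function of time that reproduces them exactly.
# Polynomial interpolation is one way to do this.

times   = np.array([0.0, 1.0, 2.0, 3.0])
weights = np.array([1.0, 0.5, -0.5, -1.0])    # weight sampled at each instance

coeffs = np.polyfit(times, weights, deg=len(times) - 1)   # exact-fit polynomial
weight_fn = np.poly1d(coeffs)

print(np.allclose(weight_fn(times), weights))   # True: known values recovered
print(float(weight_fn(1.5)))                    # weight used when evaluating the network at t = 1.5
```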
Conference on Tools with Artificial Intelligence | 1993
Tong-Seng Quah; Chew Lim Tan; Hoon Heng Teh
Presents the architecture of a hybrid neural network expert system shell. The system, structured around the concept of network elements, aims to preserve the semantic structure of the expert system rules while incorporating the learning capability of neural networks into the inferencing mechanism. Using this architecture, every rule of the knowledge base is represented by a one- or two-layer neural network element. These network elements are dynamically linked up to form the rule tree during the inferencing process. The system is also able to adjust its inference strategy according to different users and situations. An editor is provided to enable easy maintenance of the neural network rule elements, and the shell is housed in a user-friendly rule-based interface. Two applications built upon this shell are discussed; they demonstrate the strengths of the network-element architecture over conventional rule-based systems.
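The notion of a rule stored as a small network element and linked into a rule tree at inference time might look like the hypothetical sketch below; the class, the threshold rule, and the example rule are illustrative assumptions, not the shell's actual design.

```python
# Hypothetical sketch of a rule stored as a small network element and linked
# into a rule tree at inference time.

class NetworkElement:
    def __init__(self, head, antecedents, weights, threshold=1.0):
        self.head = head                  # consequent of the rule
        self.antecedents = antecedents    # propositions feeding this element
        self.weights = weights            # one (trainable) weight per antecedent
        self.threshold = threshold

    def fire(self, values):
        net = sum(w * values[a] for a, w in zip(self.antecedents, self.weights))
        return 1.0 if net >= self.threshold else 0.0

RULES = {
    "approve_loan": NetworkElement(
        "approve_loan", ["good_credit", "stable_income"], weights=[0.6, 0.6]),
}

def infer(goal, facts):
    """Backward-chain: dynamically link the element for `goal` into the rule tree."""
    if goal in facts:
        return facts[goal]
    element = RULES.get(goal)
    if element is None:
        return 0.0
    values = {a: infer(a, facts) for a in element.antecedents}
    return element.fire(values)

print(infer("approve_loan", {"good_credit": 1.0, "stable_income": 1.0}))   # 1.0
```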