Publication


Featured research published by Teck-Hou Teng.


Expert Systems With Applications | 2009

Modelling situation awareness for Context-aware Decision Support

Yu-Hong Feng; Teck-Hou Teng; Ah-Hwee Tan

Situation awareness modelling is widely used in the command and control domain for situation assessment and decision support. However, situation models in real-world applications are typically complex and not easy to use. This paper presents a Context-aware Decision Support (CaDS) system, which consists of a situation model for shared situation awareness modelling and a group of entity agents, one for each individual user, for focused and customized decision support. By incorporating a rule-based inference engine, the entity agents provide functions including event classification, action recommendation, and proactive decision making. The implementation and performance of the proposed system are demonstrated through a case study on a simulated command and control application.
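
The abstract only sketches the architecture. As a rough illustration of the idea, the following minimal Python sketch shows how an entity agent might apply a small rule base to a shared situation for event classification and action recommendation. All class names, rule formats, and attribute names are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of a CaDS-style entity agent: a shared situation model
# feeds per-user agents that apply simple if-then rules for event
# classification and action recommendation. Names and rules are illustrative.

from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

Situation = Dict[str, float]          # e.g. {"threat_level": 0.8, "distance_km": 3.0}

@dataclass
class Rule:
    condition: Callable[[Situation], bool]
    event: str                        # classification label
    action: str                       # recommended action

@dataclass
class EntityAgent:
    user: str
    rules: List[Rule] = field(default_factory=list)

    def assess(self, situation: Situation) -> List[Tuple[str, str]]:
        """Return (event, action) pairs for every rule whose condition fires."""
        return [(r.event, r.action) for r in self.rules if r.condition(situation)]

# Example usage with two toy rules for a single user.
agent = EntityAgent(
    user="operator-1",
    rules=[
        Rule(lambda s: s.get("threat_level", 0) > 0.7, "hostile_contact", "raise_alert"),
        Rule(lambda s: s.get("distance_km", 99) < 5.0, "proximity_warning", "reposition"),
    ],
)
print(agent.assess({"threat_level": 0.8, "distance_km": 3.0}))
```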


IEEE Transactions on Neural Networks | 2015

Self-Organizing Neural Networks Integrating Domain Knowledge and Reinforcement Learning

Teck-Hou Teng; Ah-Hwee Tan; Jacek M. Zurada

The use of domain knowledge in learning systems is expected to improve learning efficiency and reduce model complexity. However, due to incompatibility with the knowledge structures of the learning systems and the real-time, exploratory nature of reinforcement learning (RL), domain knowledge cannot be inserted directly. In this paper, we show how self-organizing neural networks designed for online and incremental adaptation can integrate domain knowledge and RL. Specifically, symbol-based domain knowledge is translated into numeric patterns before being inserted into the self-organizing neural networks. To ensure effective use of domain knowledge, we present an analysis of how the inserted knowledge is used by the self-organizing neural networks during RL. To this end, we propose a vigilance adaptation and greedy exploitation strategy to maximize exploitation of the inserted domain knowledge while retaining the plasticity to learn and use new knowledge. Our experimental results based on the pursuit-evasion and minefield navigation problem domains show that such self-organizing neural networks can make effective use of domain knowledge to improve learning efficiency and reduce model complexity.
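
As a rough illustration of translating symbol-based knowledge into numeric patterns, the sketch below one-hot encodes the symbols of a rule and complement-codes them, the standard encoding in ART-style networks such as FALCON. The attribute vocabulary, the rule, and the node layout are invented for illustration and are not taken from the paper.

```python
# Illustrative sketch (not the authors' code): turning a symbolic rule such as
# "IF obstacle_ahead AND target_left THEN turn_left (reward 1.0)" into the
# complement-coded numeric vectors a FALCON-style network could store as a node.

import numpy as np

STATE_ATTRS = ["obstacle_ahead", "target_left", "target_right"]
ACTIONS = ["turn_left", "turn_right", "move_forward"]

def encode(symbols, vocabulary):
    """One-hot encode the symbols, then complement-code ([x, 1-x]) as in ART/FALCON."""
    x = np.array([1.0 if v in symbols else 0.0 for v in vocabulary])
    return np.concatenate([x, 1.0 - x])

def rule_to_node(state_symbols, action_symbol, reward):
    """Pack the encoded state, action, and reward fields into one inserted node."""
    return {
        "state": encode(state_symbols, STATE_ATTRS),
        "action": encode({action_symbol}, ACTIONS),
        "reward": np.array([reward, 1.0 - reward]),
    }

node = rule_to_node({"obstacle_ahead", "target_left"}, "turn_left", 1.0)
print(node["state"], node["action"], node["reward"])
```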


International Symposium on Neural Networks | 2012

Self-organizing neural networks for learning air combat maneuvers

Teck-Hou Teng; Ah-Hwee Tan; Yuan-Sin Tan; Adrian Yeo

This paper reports on an agent-oriented approach to modeling an adaptive, doctrine-equipped computer generated force (CGF) using a commercial-grade simulation platform known as CAE STRIVE®CGF. A self-organizing neural network is used by the adaptive CGF to learn and generalize knowledge in an online manner during the simulation. The challenges of defining the state space and action space, and the lack of domain knowledge to initialize the adaptive CGF, are addressed using the doctrine that drives the non-adaptive CGF. The doctrine contains a set of specialized knowledge for conducting 1-v-1 dogfights. The hierarchical structure and symbolic representation of its propositional rules are incompatible with the self-organizing neural network, so the doctrine has to be flattened and then translated into vector patterns before it can be inserted into the network. The state space and action space are also extracted automatically from the flattened doctrine. Experiments are conducted using several initial conditions in a round-robin fashion. The experimental results show that the self-organizing neural network is able to make good use of domain knowledge with a complex knowledge structure to discover knowledge that consistently and efficiently out-maneuvers the doctrine-driven CGF.
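
The flattening step can be pictured as a depth-first walk over a nested doctrine that emits flat condition-action rules, from which the state and action spaces are read off. The doctrine structure below is entirely hypothetical; the paper does not describe its actual format.

```python
# Hypothetical sketch of "flattening" a hierarchical doctrine: nested decision
# branches are walked depth-first and emitted as flat (conditions -> action)
# rules, from which the state and action spaces can be extracted.

doctrine = {
    "bandit_detected": {
        "within_weapon_range": {True: "fire_missile", False: "pursue"},
    },
    "bandit_behind": {
        "energy_high": {True: "break_turn", False: "extend"},
    },
}

def flatten(tree, conditions=()):
    """Yield (conditions, action) pairs from a nested doctrine dictionary."""
    for key, value in tree.items():
        if isinstance(value, dict):
            yield from flatten(value, conditions + (key,))
        else:
            yield conditions + (key,), value

rules = list(flatten(doctrine))
state_space = sorted({c for conds, _ in rules for c in conds if not isinstance(c, bool)})
action_space = sorted({a for _, a in rules})
print(rules, state_space, action_space, sep="\n")
```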


International Symposium on Neural Networks | 2008

Self-organizing neural models integrating rules and reinforcement learning

Teck-Hou Teng; Zhong-Ming Tan; Ah-Hwee Tan

Traditional approaches to integrating knowledge into neural networks are concerned mainly with supervised learning. This paper presents how a family of self-organizing neural models known as fusion architecture for learning, cognition and navigation (FALCON) can incorporate a priori knowledge and perform knowledge refinement and expansion through reinforcement learning. Symbolic rules are formulated based on pre-existing know-how and inserted into FALCON as a priori knowledge. The availability of knowledge enables FALCON to start performing earlier in the initial learning trials. Through a temporal-difference (TD) learning method, the inserted rules can be refined and expanded according to the evaluative feedback signals received from the environment. Our experimental results based on a minefield navigation task show that FALCON is able to learn much faster and attain a higher level of performance earlier when inserted with appropriate a priori knowledge.
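
The refinement step can be illustrated with a plain tabular TD update standing in for FALCON's learning rule: the value attached to an inserted rule is adjusted according to the evaluative feedback. The constants, state names, and table layout below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of refining inserted rules with a temporal-difference update.
# Plain tabular Q-learning is used here as a stand-in; FALCON's actual bounded
# TD rule and network representation are not reproduced.

ALPHA, GAMMA = 0.5, 0.9

q_table = {  # a priori rules inserted with hand-assigned initial Q-values
    ("mine_ahead", "turn_left"): 0.8,
    ("mine_ahead", "move_forward"): 0.2,
}

def td_update(state, action, reward, next_state, actions):
    """Nudge Q(state, action) towards reward + gamma * max_a Q(next_state, a)."""
    best_next = max(q_table.get((next_state, a), 0.0) for a in actions)
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

# One step of refinement after negative feedback from the environment.
td_update("mine_ahead", "move_forward", -1.0, "hit_mine", ["turn_left", "move_forward"])
print(q_table)
```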


Web Intelligence | 2012

Knowledge-Based Exploration for Reinforcement Learning in Self-Organizing Neural Networks

Teck-Hou Teng; Ah-Hwee Tan

Exploration is necessary during reinforcement learning to discover new solutions in a given problem space. Most reinforcement learning systems, however, adopt a simple strategy, by randomly selecting an action among all the available actions. This paper proposes a novel exploration strategy, known as Knowledge-based Exploration, for guiding the exploration of a family of self-organizing neural networks in reinforcement learning. Specifically, exploration is directed towards unexplored and favorable action choices while steering away from those negative action choices that are likely to fail. This is achieved by using the learned knowledge of the agent to identify prior action choices leading to low Q-values in similar situations. Consequently, the agent is expected to learn the right solutions in a shorter time, improving overall learning efficiency. Using a Pursuit-Evasion problem domain, we evaluate the efficacy of the knowledge-based exploration strategy, in terms of task performance, rate of learning and model complexity. Comparison with random exploration and three other heuristic-based directed exploration strategies shows that Knowledge-based Exploration is significantly more effective and robust for reinforcement learning in real time.
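
As a rough sketch of the idea, an exploratory step can consult the learned values and mask out actions already known to lead to low Q-values, sampling among the unexplored or acceptable ones. The threshold and data structures below are assumptions, not the paper's implementation.

```python
# Illustrative sketch of knowledge-based exploration: even when exploring, the
# agent uses its learned Q-values to avoid actions it already knows are poor,
# and samples among unexplored or promising ones instead.

import random

Q_THRESHOLD = 0.3  # illustrative cutoff for "known to be poor"

def knowledge_based_explore(state, actions, q_table):
    known = {a: q_table.get((state, a)) for a in actions}
    # Candidates: actions never tried, or tried with an acceptable Q-value.
    candidates = [a for a, q in known.items() if q is None or q >= Q_THRESHOLD]
    return random.choice(candidates or actions)   # fall back if everything looks poor

q = {("cornered", "move_forward"): 0.1, ("cornered", "turn_left"): 0.6}
print(knowledge_based_explore("cornered", ["move_forward", "turn_left", "turn_right"], q))
```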


Web Intelligence | 2008

Cognitive Agents Integrating Rules and Reinforcement Learning for Context-Aware Decision Support

Teck-Hou Teng; Ah-Hwee Tan

While context-awareness has been found to be effective for decision support in complex domains, most such decision support systems are hard-coded, incurring significant development effort. To ease the knowledge acquisition bottleneck, this paper presents a class of cognitive agents based on a self-organizing neural model known as TD-FALCON that integrates rules and learning for supporting context-aware decision making. Besides the ability to incorporate a priori knowledge in the form of symbolic propositional rules, TD-FALCON performs reinforcement learning (RL), enabling knowledge refinement and expansion through interaction with its environment. The efficacy of the developed Context-Aware Decision Support (CaDS) system is demonstrated through a case study of command and control in a virtual environment.


Expert Systems With Applications | 2013

Adaptive computer-generated forces for simulator-based training

Teck-Hou Teng; Ah-Hwee Tan; Loo-Nin Teow

Simulator-based training is in constant pursuit of an increasing level of realism. The transition from doctrine-driven computer-generated forces (CGF) to adaptive CGF represents one such effort. The use of doctrine-driven CGF is fraught with challenges such as modeling complex expert knowledge and adapting to the trainees' progress in real time. Therefore, this paper reports on how the use of adaptive CGF can overcome these challenges. Using a self-organizing neural network to implement the adaptive CGF, air combat maneuvering strategies are learned incrementally and generalized in real time. The state space and action space are extracted from the same hierarchical doctrine used by the rule-based CGF. In addition, this hierarchical doctrine is used to bootstrap the self-organizing neural network to improve learning efficiency and reduce model complexity. Two case studies are conducted. The first case study shows how the adaptive CGF can converge to effective air combat maneuvers against the rule-based CGF. The subsequent case study replaces the rule-based CGF with human pilots as the opponent to the adaptive CGF. The results from these two case studies show how positive outcomes from learning against a rule-based CGF can differ markedly from learning against human subjects on the same tasks. With a better understanding of the existing constraints, an adaptive CGF that performs well against both rule-based CGF and human subjects can be designed.


Procedia Computer Science | 2012

Self-Regulating Action Exploration in Reinforcement Learning

Teck-Hou Teng; Ah-Hwee Tan; Yuan-Sin Tan

The basic tenet of a learning process is for an agent to learn for only as much and as long as is necessary. With reinforcement learning, the learning process is divided between exploration and exploitation. Given the complexity of the problem domain and the randomness of the learning process, the exact duration of the reinforcement learning process can never be known with certainty. Using an inaccurate number of training iterations leads either to non-convergence or over-training of the learning agent. This work addresses such issues by proposing a technique to self-regulate the exploration rate and training duration, leading to convergence efficiently. The idea originates from an intuitive understanding that exploration is only necessary when the success rate is low. This means exploration should be conducted in inverse proportion to the rate of success. In addition, the change in exploration-exploitation rates alters the duration of the learning process. Using this approach, the duration of the learning process becomes adaptive to the updated status of the learning process. Experimental results from the K-Armed Bandit and Air Combat Maneuver scenarios show that optimal action policies can be discovered using the right amount of training iterations. In essence, the proposed method eliminates the guesswork on the amount of exploration needed during reinforcement learning.
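
A minimal sketch of the self-regulating idea, assuming a sliding window of recent outcomes: the exploration rate is set in inverse proportion to the success rate, and training can stop once exploration has decayed to a floor. The window size, floor, and stopping rule are illustrative choices, not the paper's.

```python
# Sketch of self-regulated exploration: epsilon falls as the recent success
# rate rises, and training can end once exploration has effectively ceased.

from collections import deque

class SelfRegulatingExplorer:
    def __init__(self, window=50, epsilon_floor=0.01):
        self.outcomes = deque(maxlen=window)   # 1 = success, 0 = failure
        self.epsilon_floor = epsilon_floor

    def record(self, success: bool):
        self.outcomes.append(1 if success else 0)

    @property
    def success_rate(self):
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 0.0

    @property
    def epsilon(self):
        # Explore in inverse proportion to the success rate.
        return max(self.epsilon_floor, 1.0 - self.success_rate)

    def should_stop(self):
        return self.epsilon <= self.epsilon_floor

explorer = SelfRegulatingExplorer()
for outcome in [False, True, True, True, True]:
    explorer.record(outcome)
print(explorer.success_rate, explorer.epsilon, explorer.should_stop())
```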


International Symposium on Neural Networks | 2015

A comparative study between motivated learning and reinforcement learning

James T. Graham; Janusz A. Starzyk; Zhen Ni; Haibo He; Teck-Hou Teng; Ah-Hwee Tan


International Symposium on Neural Networks | 2014

Integrating self-organizing neural network and Motivated Learning for coordinated multi-agent reinforcement learning in multi-stage stochastic game

Teck-Hou Teng; Ah-Hwee Tan; Janusz A. Starzyk; Yuan-Sin Tan; Loo-Nin Teow


Collaboration


Dive into Teck-Hou Teng's collaboration.

Top Co-Authors

Ah-Hwee Tan, Nanyang Technological University
Yuan-Sin Tan, DSO National Laboratories
Loo-Nin Teow, DSO National Laboratories
Ee-Luang Ang, Nanyang Technological University
Wee-Sze Ong, DSO National Laboratories
Yu-Hong Feng, Nanyang Technological University
Zhong-Ming Tan, Nanyang Technological University
Haibo He, University of Rhode Island