Publication


Featured research published by Eu Jin Teoh.


IEEE Transactions on Neural Networks | 2008

Hybrid Multiobjective Evolutionary Design for Artificial Neural Networks

Chi Keong Goh; Eu Jin Teoh; Kay Chen Tan

Evolutionary algorithms are a class of stochastic search methods that attempt to emulate the biological process of evolution, incorporating concepts of selection, reproduction, and mutation. In recent years, there has been an increase in the use of evolutionary approaches in the training of artificial neural networks (ANNs). While evolutionary techniques for neural networks have been shown to provide superior performance over conventional training approaches, the simultaneous optimization of network performance and architecture will almost always result in a slow training process due to the added algorithmic complexity. In this paper, we present a geometrical measure based on the singular value decomposition (SVD) to estimate the necessary number of neurons to be used in training a single-hidden-layer feedforward neural network (SLFN). In addition, we develop a new hybrid multiobjective evolutionary approach that includes a variable-length representation that allows for easy adaptation of neural network structures, an architectural recombination procedure based on the geometrical measure that adapts the number of necessary hidden neurons and facilitates the exchange of neuronal information between candidate designs, and a microhybrid genetic algorithm (μHGA) with an adaptive local-search intensity scheme for local fine-tuning. Finally, the performances of well-known algorithms as well as the effectiveness and contributions of the proposed approach are analyzed and validated through a variety of data set types.
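
To make the geometrical measure concrete, here is a minimal sketch in Python (not the authors' code; the function name, the tanh activation, and the tolerance are illustrative assumptions): the hidden-layer activation matrix of a candidate SLFN is decomposed with the SVD, and the number of significant singular values serves as an estimate of how many hidden neurons are actually necessary.

import numpy as np

def estimate_hidden_neurons(X, W, b, tol=1e-3):
    """Estimate the necessary hidden-neuron count for a candidate SLFN.

    X   : (n_samples, n_inputs) training inputs
    W   : (n_inputs, n_hidden) input-to-hidden weights of the candidate
    b   : (n_hidden,) hidden biases
    tol : relative threshold below which singular values count as zero
    """
    H = np.tanh(X @ W + b)                  # hidden-layer activation matrix
    s = np.linalg.svd(H, compute_uv=False)  # singular values, descending
    # Near-zero singular values indicate near-linearly-dependent hidden
    # neurons that contribute little to the network's mapping.
    return int(np.sum(s > tol * s[0]))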


IEEE Transactions on Neural Networks | 2006

Dynamics analysis and analog associative memory of networks with LT neurons

Huajin Tang; Kay Chen Tan; Eu Jin Teoh

The additive recurrent network structure of linear threshold neurons represents a class of biologically motivated models, where nonsaturating transfer functions are necessary for representing neuronal activities, such as those of cortical neurons. This paper extends the existing results on the dynamics of such linear threshold networks by establishing new and milder conditions for boundedness and asymptotic stability, while allowing for multistability. As a condition for asymptotic stability, it is found that boundedness does not require a deterministic matrix to be symmetric or possess positive off-diagonal entries. The conditions put forward an explicit way to design and analyze such networks. Based on the established theory, an alternate approach to studying such networks is through permitted and forbidden sets. An application of the linear threshold (LT) network is analog associative memory, for which a simple design method is suggested in this paper. The proposed design method is similar to a generalized Hebbian approach, but with the distinction of additional network parameters for normalization, excitation, and inhibition, on both global and local scales. The computational abilities of the network depend on its nonlinear dynamics, which in turn rely on the sparsity of the memory vectors.
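
As a rough illustration of the dynamics being analyzed, the following Python sketch simulates an additive recurrent network with the nonsaturating linear threshold activation σ(x) = max(0, x) by forward-Euler integration of dx/dt = -x + Wσ(x) + h (the notation, step size, and function name are assumptions, not the paper's code):

import numpy as np

def simulate_lt_network(W, h, x0, dt=0.01, steps=5000):
    """Forward-Euler simulation of dx/dt = -x + W @ max(0, x) + h."""
    x = x0.astype(float).copy()
    for _ in range(steps):
        x += dt * (-x + W @ np.maximum(x, 0.0) + h)
    return x  # approximate equilibrium if the dynamics are bounded and stable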


Soft Computing | 2009

A hybrid evolutionary approach for heterogeneous multiprocessor scheduling

Chi Keong Goh; Eu Jin Teoh; Kay Chen Tan

This article investigates the assignment of tasks with interdependencies in a heterogeneous multiprocessor environment; specific to this problem, task execution time varies with the nature of the task as well as with the processing element assigned. The solution to this heterogeneous multiprocessor scheduling problem involves optimizing the complete task assignment and the processing order on the assigned processors to arrive at a minimum makespan, subject to precedence constraints. To solve this NP-hard combinatorial optimization problem, this paper presents a hybrid evolutionary algorithm that incorporates two local search heuristics, which exploit the intrinsic structure of the solution, as well as specialized genetic operators that promote exploration of the search space. The effectiveness and contribution of the proposed features are subsequently validated on a set of benchmark problems characterized by different degrees of communication times and task and processor heterogeneities. Preliminary simulation results demonstrate the effectiveness of the proposed algorithm in finding useful schedule sets on the new benchmark problems.
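
The fitness function such a hybrid evolutionary algorithm must evaluate can be sketched as follows (a minimal Python illustration under assumed data structures; communication delays, which the benchmarks vary, are omitted for brevity):

def makespan(order, assign, exec_time, preds):
    """Makespan of a schedule on a heterogeneous multiprocessor.

    order     : task ids in a precedence-feasible order
    assign    : assign[t] = processor that executes task t
    exec_time : exec_time[t][p] = runtime of task t on processor p
    preds     : preds[t] = tasks that must finish before t starts
    """
    n_proc = max(assign.values()) + 1
    proc_free = [0.0] * n_proc      # time at which each processor goes idle
    finish = {}                     # finish time of each scheduled task
    for t in order:
        p = assign[t]
        ready = max((finish[q] for q in preds[t]), default=0.0)
        start = max(ready, proc_free[p])
        finish[t] = start + exec_time[t][p]
        proc_free[p] = finish[t]
    return max(finish.values())

An evolutionary algorithm would evolve the assignment and per-processor ordering while using such an evaluation to minimize the makespan.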


Neurocomputing | 2008

An asynchronous recurrent linear threshold network approach to solving the traveling salesman problem

Eu Jin Teoh; Kay Chen Tan; H. J. Tang; Cheng Xiang; Chi Keong Goh

In this paper, an approach to solving the classical Traveling Salesman Problem (TSP) using a recurrent network of linear threshold (LT) neurons is proposed. It maps the classical TSP onto a single-layered recurrent neural network by embedding the constraints of the problem directly into the dynamics of the network. The proposed method differs from the classical Hopfield network in its state update dynamics as well as in its network activation function. Furthermore, parameter settings for the proposed network are obtained using a genetic algorithm, which ensures stable convergence of the network across different problems. Simulation results illustrate that the proposed network performs better than the classical Hopfield network for optimization.
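
A minimal sketch of the asynchronous update scheme (assumed formulation; the weights W and biases b are taken to already encode the TSP constraints and distances, as the paper embeds them directly into the network dynamics):

import numpy as np

def async_lt_sweep(V, W, b, rng):
    """One asynchronous sweep over an N*N recurrent LT network.

    V : (N*N,) neuron states on a flattened city-by-position grid
    W : (N*N, N*N) recurrent weights;  b : (N*N,) biases
    """
    for i in rng.permutation(V.size):   # visit neurons one at a time
        u = W[i] @ V + b[i]             # local field of neuron i
        V[i] = max(u, 0.0)              # nonsaturating LT activation
    return V

Unlike a synchronous update, each neuron here sees the most recent states of all other neurons within the same sweep.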


International Joint Conference on Neural Networks | 2006

A Columnar Competitive Model with Simulated Annealing for Solving Combinatorial Optimization Problems

Eu Jin Teoh; Huajin Tang; Kay Chen Tan

One of the major drawbacks of the Hopfield network is that when it is applied to certain classes of combinatorial problems, such as the traveling salesman problem (TSP), the obtained solutions are often invalid, requiring numerous trial-and-error settings of the network parameters and thus resulting in low computational efficiency. With this in mind, this article presents a columnar competitive model (CCM) which incorporates a winner-takes-all (WTA) learning rule for solving the TSP. Theoretical analysis of the convergence of the CCM shows that the competitive computational neural network guarantees convergence to valid states and avoids the tedious procedure of determining penalty parameters. In addition, its intrinsic competitive learning mechanism enables fast and effective evolution of the network. Simulation results illustrate that the competitive model offers more and better valid solutions than the original Hopfield network.
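
The winner-takes-all rule can be sketched as follows (assumed encoding: cities as rows and tour positions as columns of the state matrix; names are illustrative). Because exactly one neuron per column fires, every state the network settles into is a valid permutation, which is why no penalty-parameter tuning is needed for that constraint:

import numpy as np

def wta_column_update(U, V):
    """U : (N, N) net inputs; V : (N, N) binary outputs, [city, position]."""
    for col in range(U.shape[1]):
        winner = np.argmax(U[:, col])   # most strongly driven city wins
        V[:, col] = 0.0
        V[winner, col] = 1.0            # exactly one city per tour position
    return V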


International Symposium on Neural Networks | 2006

Estimating the number of hidden neurons in a feedforward network using the singular value decomposition

Eu Jin Teoh; Cheng Xiang; Kay Chen Tan

In this letter, we attempt to quantify the significance of increasing the number of neurons in the hidden layer of a feedforward neural network architecture using the singular value decomposition (SVD). Through this, we extend some well-known properties of the SVD to evaluating the generalizability of single-hidden-layer feedforward networks (SLFNs) with respect to the number of hidden-layer neurons. The generalization capability of the SLFN is measured by the degree of linear independence of the patterns in hidden-layer space, which can be indirectly quantified from the singular values obtained from the SVD in a post-learning step. A pruning/growing technique based on these singular values is then used to estimate the necessary number of neurons in the hidden layer. More importantly, we describe in detail properties of the SVD in determining the structure of a neural network, particularly with respect to the robustness of the selected model.
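
A minimal sketch of the post-learning pruning/growing decision (hypothetical threshold and interface, complementing the estimation sketch given earlier): after training, the singular value spectrum of the hidden-layer output matrix indicates how many neurons are effectively independent, and the width is adjusted toward that number:

import numpy as np

def adjust_width(H, n_hidden, tol=1e-3):
    """H : (n_samples, n_hidden) hidden-layer outputs after training."""
    s = np.linalg.svd(H, compute_uv=False)
    effective = int(np.sum(s > tol * s[0]))   # degree of linear independence
    if effective < n_hidden:
        return effective      # redundant neurons present: prune down
    return n_hidden + 1       # spectrum saturated: try growing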


International Symposium on Neural Networks | 2006

A fast learning algorithm based on layered hessian approximations and the pseudoinverse

Eu Jin Teoh; Cheng Xiang; Kay Chen Tan

In this article, we present a simple, effective learning method for an MLP that is based on approximating the Hessian using only local information, specifically, the correlations of output activations from previous layers of hidden neurons. This approach of training the hidden-layer weights with the Hessian approximation, combined with training the final output layer of weights using the pseudoinverse [1], yields improved performance at a fraction of the computational and structural complexity of conventional learning algorithms.
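
The pseudoinverse step for the final layer is standard least squares and can be written in closed form (a sketch of that step only, not of the layered Hessian approximation itself; names are illustrative):

import numpy as np

def solve_output_layer(H, T):
    """H : (n_samples, n_hidden) last-hidden-layer activations
       T : (n_samples, n_outputs) target outputs
    Returns the minimum-norm least-squares output weights W_out
    such that H @ W_out approximates T."""
    return np.linalg.pinv(H) @ T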


Advances in Evolutionary Computing for System Design | 2007

Designing a Recurrent Neural Network-based Controller for Gyro-Mirror Line-of-Sight Stabilization System using an Artificial Immune Algorithm

Ji Hua Ang; Chi Keong Goh; Eu Jin Teoh; Kay Chen Tan


Lecture Notes in Computer Science | 2006

Estimating the Number of Hidden Neurons in a Feedforward Network Using the Singular Value Decomposition

Eu Jin Teoh; Cheng Xiang; Kay Chen Tan


Lecture Notes in Computer Science | 2006

A Fast Learning Algorithm Based on Layered Hessian Approximations and the Pseudoinverse

Eu Jin Teoh; Cheng Xiang; Kay Chen Tan

Collaboration


Dive into Eu Jin Teoh's collaborations.

Top Co-Authors

Kay Chen Tan
National University of Singapore

Cheng Xiang
National University of Singapore

Ji Hua Ang
National University of Singapore

H. J. Tang
University of Queensland