Publication


Featured research published by C.L. Giles.


Neural Computation | 1992

Learning and extracting finite state automata with second-order recurrent neural networks

C.L. Giles; C. B. Miller; D. Chen; H. H. Chen; Guo-Zheng Sun; Y. C. Lee

We show that a recurrent, second-order neural network using a real-time, forward training algorithm readily learns to infer small regular grammars from positive and negative string training samples. We present simulations that show the effect of initial conditions, training set size and order, and neural network architecture. All simulations were performed with random initial weight strengths and usually converge after approximately a hundred epochs of training. We discuss a quantization algorithm for dynamically extracting finite state automata during and after training. For a well-trained neural net, the extracted automata constitute an equivalence class of state machines that are reducible to the minimal machine of the inferred grammar. We then show through simulations that many of the neural net state machines are dynamically stable, that is, they correctly classify many long unseen strings. In addition, some of these extracted automata actually outperform the trained neural network for classification of unseen strings.
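
For orientation, here is a minimal NumPy sketch of the kind of second-order state update described above: the next state vector is a sigmoid of a bilinear form in the current state and a one-hot input symbol, and a designated response neuron is read out after the last symbol. The network size, weights, and 0.5 threshold below are illustrative assumptions, not the authors' trained model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def run_second_order_rnn(W, b, s0, symbols, alphabet_size):
    """W: (n_states, n_states, alphabet) second-order weights, b: biases,
    s0: initial state vector, symbols: sequence of input symbol indices."""
    s = s0.copy()
    for k in symbols:
        x = np.zeros(alphabet_size)
        x[k] = 1.0                                   # one-hot input symbol
        # s_i <- g( sum_{j,m} W[i, j, m] * s[j] * x[m] + b[i] )
        s = sigmoid(np.einsum('ijm,j,m->i', W, s, x) + b)
    return s

# toy usage: 4 state neurons, binary alphabet, random (untrained) weights;
# state neuron 0 plays the role of the accept/response neuron
rng = np.random.default_rng(0)
n_states, alphabet = 4, 2
W = rng.normal(size=(n_states, n_states, alphabet))
b = np.zeros(n_states)
s0 = np.zeros(n_states); s0[0] = 1.0
final = run_second_order_rnn(W, b, s0, [0, 1, 1, 0], alphabet)
print("accept" if final[0] > 0.5 else "reject")
```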


IEEE Transactions on Signal Processing | 1997

A delay damage model selection algorithm for NARX neural networks

Tsung-Nan Lin; C.L. Giles; Bill G. Horne; Sun-Yuan Kung

Recurrent neural networks have become popular models for system identification and time series prediction. Nonlinear autoregressive models with exogenous inputs (NARX) neural network models are a popular subclass of recurrent networks and have been used in many applications. Although embedded memory can be found in all recurrent network models, it is particularly prominent in NARX models. We show that using intelligent memory order selection through pruning and good initial heuristics significantly improves the generalization and predictive performance of these nonlinear systems on problems as diverse as grammatical inference and time series prediction.
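
To make the role of the memory orders concrete, here is a toy NARX-style predictor (an illustrative sketch, not the paper's model): the next output is a function of the last d_x inputs and the last d_y fed-back outputs, and d_x and d_y are exactly the embedded memory orders that a pruning-based selection procedure would adjust. The stand-in network f below is an assumption for demonstration.

```python
import numpy as np

def narx_predict(f, x, d_x=3, d_y=3):
    """f maps the tapped delays [x_t ... x_{t-d_x+1}, y_{t-1} ... y_{t-d_y}] to y_t."""
    y = []
    for t in range(len(x)):
        x_taps = [x[t - i] if t - i >= 0 else 0.0 for i in range(d_x)]
        y_taps = [y[t - i] if t - i >= 0 else 0.0 for i in range(1, d_y + 1)]
        y.append(f(np.array(x_taps + y_taps)))
    return np.array(y)

# stand-in "network": a fixed random linear map squashed by tanh
rng = np.random.default_rng(1)
w = rng.normal(size=6)                               # d_x + d_y weights
f = lambda taps: float(np.tanh(w @ taps))
print(narx_predict(f, np.sin(np.linspace(0, 6, 20)))[:5])
```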


international symposium on neural networks | 1991

Second-order recurrent neural networks for grammatical inference

C.L. Giles; D. Chen; C.B. Miller; H. H. Chen; Guo-Zheng Sun; Y. C. Lee

It is shown that a recurrent, second-order neural network using a real-time, forward training algorithm readily learns to infer regular grammars from positive and negative string training samples. Numerous simulations which show the effect of initial conditions, training set size and order, and neuron architecture are presented. All simulations were performed with random initial weight strengths and usually converge after approximately a hundred epochs of training. The authors discuss a quantization algorithm for dynamically extracting finite-state automata during and after training. For a well-trained neural net, the extracted automata constitute an equivalence class of state machines that are reducible to the minimal machine of the inferred grammar. It is then shown through simulations that many of the neural net state machines are dynamically stable and correctly classify long unseen strings.


international symposium on neural networks | 1993

Rule refinement with recurrent neural networks

C.L. Giles; C.W. Omlin

Recurrent neural networks can be trained to behave like deterministic finite-state automata (DFAs), and methods have been developed for extracting grammatical rules from trained networks. Using a simple method for inserting prior knowledge of a subset of the DFA state transitions into recurrent neural networks, it is shown that recurrent neural networks are able to perform rule refinement. The results from training a recurrent neural network to recognize a known, nontrivial, randomly generated regular grammar show that the networks not only preserve correct prior knowledge, but are also able to correct, through training, inserted prior knowledge that was wrong. By wrong, it is meant that the inserted rules were not the ones in the randomly generated grammar.
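
A simplified sketch of the insertion step (illustrative values only; the exact encoding and weight strength H used in the paper may differ): a known transition delta(q_j, a_k) = q_i is programmed into the second-order weights by driving the target state neuron toward 1 and the source state neuron toward 0, while all unspecified weights stay small and remain free for training to adjust.

```python
import numpy as np

def insert_rules(n_states, alphabet, known_transitions, H=6.0, seed=0):
    """known_transitions: {(j, k): i} meaning delta(q_j, a_k) = q_i (partial DFA)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_states, n_states, alphabet))  # free weights
    for (j, k), i in known_transitions.items():
        W[i, j, k] = +H            # push the target state neuron on
        if i != j:
            W[j, j, k] = -H        # push the source state neuron off
    return W

# toy usage: 3 state neurons, binary alphabet, two known transitions inserted
W = insert_rules(3, 2, {(0, 1): 2, (2, 0): 1})
print(W[2, 0, 1], W[1, 2, 0])      # the programmed +H entries
```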


international symposium on neural networks | 1992

Heuristics for the extraction of rules from discrete-time recurrent neural networks

C.W. Omlin; C.L. Giles; C.B. Miller

It is pointed out that discrete recurrent neural networks can learn to classify long strings of a regular language correctly when trained on a small finite set of positive and negative example strings. Rules defining the learned grammar can be extracted from networks by applying clustering heuristics in the output space of recurrent state neurons. Empirical evidence that there exists a correlation between the generalization performance of recurrent neural networks for regular language recognition and the rules that can be extracted from a neural network is presented. A heuristic that makes it possible to extract good rules from trained networks is given, and the method is tested on networks that are trained to recognize a simple regular language.
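
A toy version of the extraction idea (one possible heuristic, not the paper's exact procedure): run the trained network over sample strings, quantize each visited state vector into a discrete cell, and record which cell each input symbol leads to, yielding a candidate transition table that can then be checked and minimized. The random step function below is a placeholder for a trained network.

```python
import numpy as np
from collections import defaultdict

def extract_transitions(step, s0, strings, q=2):
    """step(s, k) -> next state vector; each coordinate is binned into q intervals."""
    cell = lambda s: tuple(np.minimum((s * q).astype(int), q - 1))
    table = defaultdict(dict)
    for string in strings:
        s = s0.copy()
        for k in string:
            src = cell(s)
            s = step(s, k)
            table[src][k] = cell(s)          # observed transition src --k--> dst
    return dict(table)

# toy usage with a random (untrained) second-order step function
rng = np.random.default_rng(2)
n, m = 3, 2
W = rng.normal(size=(n, n, m))
sig = lambda z: 1.0 / (1.0 + np.exp(-z))
step = lambda s, k: sig(W[:, :, k] @ s)
s0 = np.zeros(n); s0[0] = 1.0
print(extract_transitions(step, s0, [[0, 1, 0], [1, 1]]))
```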


international symposium on neural networks | 1994

Constructing deterministic finite-state automata in sparse recurrent neural networks

C.W. Omlin; C.L. Giles

Presents an algorithm for encoding deterministic finite-state automata in sparse recurrent neural networks with sigmoidal discriminant functions and second-order weights. The authors prove that for particular weight strength values the regular languages accepted by DFAs and the constructed networks are identical.


Neural Networks for Signal Processing VII. Proceedings of the 1997 IEEE Signal Processing Society Workshop | 1997

Remembering the past: the role of embedded memory in recurrent neural network architectures

C.L. Giles; Tsung-Nan Lin; Bill G. Horne

There has been much interest in learning long-term temporal dependencies with neural networks. Adequately learning such long-term information can be useful in many problems in signal processing, control and prediction. A class of recurrent neural networks (RNNs), NARX neural networks, were shown to perform much better than other recurrent neural networks when learning simple long-term dependency problems. The intuitive explanation is that the output memories of a NARX network can be manifested as jump-ahead connections in the time-unfolded network. Here we show that similar improvements in learning long-term dependencies can be achieved with other classes of recurrent neural network architectures simply by increasing the order of the embedded memory. Experiments with locally recurrent networks and NARX (output feedback) networks show that all of these classes of network architectures can have a significant improvement on learning long-term dependencies as the orders of embedded memory are increased, all other things being held constant. These results can be important to a user comfortable with a specific recurrent neural network architecture because simply increasing the embedded memory order of that architecture will make it more robust to the problem of long-term dependency learning.


international symposium on neural networks | 1990

Recurrent neural networks, hidden Markov models and stochastic grammars

Guo-Zheng Sun; H. H. Chen; Y. C. Lee; C.L. Giles

A discussion is presented of the advantage of using a linear recurrent network to encode and recognize sequential data. The hidden Markov model (HMM) is shown to be a special case of such linear recurrent second-order neural networks. The Baum-Welch reestimation formula, which has proved very useful in training HMMs, can also be used to learn a linear recurrent network. As an example, a network has successfully learned the stochastic Reber grammar with only a few hundred sample strings in about 14 iterations. The relative merits and limitations of the Baum-Welch optimal ascent algorithm in comparison with the error-correction gradient-descent learning algorithm are discussed.
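
The correspondence described above can be written down directly: the standard HMM forward recursion is a purely linear state update whose second-order weights combine the transition and emission probabilities. The sketch below illustrates that identity only; it is not the paper's training code, and the toy HMM parameters are made up.

```python
import numpy as np

def forward_as_linear_rnn(A, B, pi, obs):
    """A: (N, N) transition matrix, B: (N, M) emission matrix, pi: (N,) initial
    distribution, obs: list of observed symbol indices. Returns P(obs)."""
    # second-order weights W[j, i, k] = A[i, j] * B[j, k]
    W = np.einsum('ij,jk->jik', A, B)
    alpha = pi * B[:, obs[0]]                # alpha_1[j] = pi[j] * B[j, o_1]
    for o in obs[1:]:
        alpha = W[:, :, o] @ alpha           # linear update: no nonlinearity needed
    return float(alpha.sum())

# toy 2-state, 2-symbol HMM
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.5, 0.5])
print(forward_as_linear_rnn(A, B, pi, [0, 1, 0]))
```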


international symposium on neural networks | 1998

The past is important: a method for determining memory structure in NARX neural networks

C.L. Giles; Tsung-Nan Lin; B.G. Horne; S.Y. Kung

Recurrent networks have become popular models for system identification and time series prediction. NARX (nonlinear autoregressive models with exogenous inputs) network models are a popular subclass of recurrent networks and have been used in many applications. Though embedded memory can be found in all recurrent network models, it is particularly prominent in NARX models. We show that the use of intelligent memory order selection through pruning and good initial heuristics significantly improves the generalization and predictive performance of these nonlinear systems on problems as diverse as grammatical inference and time series prediction.


IEEE Transactions on Neural Networks | 1996

Learning long-term dependencies in NARX recurrent neural networks

Tsung-Nan Lin; Bill G. Horne; Peter Tino; C.L. Giles

Collaboration


Dive into C.L. Giles's collaboration.

Top Co-Authors

Tsung-Nan Lin

National Taiwan University


Peter Tino

University of Birmingham
