Publications


Featured research published by Guo-Zheng Sun.


Neural Computation | 1992

Learning and extracting finite state automata with second-order recurrent neural networks

C.L. Giles; C. B. Miller; D. Chen; H. H. Chen; Guo-Zheng Sun; Y. C. Lee

We show that a recurrent, second-order neural network using a real-time, forward training algorithm readily learns to infer small regular grammars from positive and negative string training samples. We present simulations that show the effect of initial conditions, training set size and order, and neural network architecture. All simulations were performed with random initial weight strengths and usually converge after approximately a hundred epochs of training. We discuss a quantization algorithm for dynamically extracting finite state automata during and after training. For a well-trained neural net, the extracted automata constitute an equivalence class of state machines that are reducible to the minimal machine of the inferred grammar. We then show through simulations that many of the neural net state machines are dynamically stable, that is, they correctly classify many long unseen strings. In addition, some of these extracted automata actually outperform the trained neural network for classification of unseen strings.
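
The second-order update at the heart of this model multiplies the current state by the current input symbol before the weights are applied. Below is a minimal sketch of that state transition, assuming sigmoid units, one-hot input symbols, and a designated response neuron; the function names and details are illustrative, not the paper's code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def second_order_step(W, s, x):
    """One state update of a second-order recurrent network:
    s_j(t+1) = g( sum_{i,k} W[j, i, k] * s_i(t) * x_k(t) ).
    W: (n_states, n_states, n_symbols) weight tensor,
    s: current state vector, x: one-hot input symbol."""
    return sigmoid(np.einsum('jik,i,k->j', W, s, x))

def classify_string(W, symbols, n_symbols, s0):
    """Run a symbol sequence through the network and read the first
    state unit as an accept/reject response neuron (an assumption)."""
    s = s0.copy()
    for k in symbols:
        x = np.zeros(n_symbols)
        x[k] = 1.0
        s = second_order_step(W, s, x)
    return s[0] > 0.5
```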


International Symposium on Neural Networks | 1991

Second-order recurrent neural networks for grammatical inference

C.L. Giles; D. Chen; C.B. Miller; H. H. Chen; Guo-Zheng Sun; Y. C. Lee

It is shown that a recurrent, second-order neural network using a real-time, feedforward training algorithm readily learns to infer regular grammars from positive and negative string training samples. Numerous simulations which show the effect of initial conditions, training set size and order, and neuron architecture are presented. All simulations were performed with random initial weight strengths and usually converge after approximately a hundred epochs of training. The authors discuss a quantization algorithm for dynamically extracting finite-state automata during and after training. For a well-trained neural net, the extracted automata constitute an equivalence class of state machines that are reducible to the minimal machine of the inferred grammar. It is then shown through simulations that many of the neural net state machines are dynamically stable and correctly classify long unseen strings.
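
The quantization-based extraction mentioned in both of these abstracts can be sketched as follows: bin each state unit's activation, then follow transitions from the quantized start state. The bin count, the worklist order, and the `step` interface are assumptions; `step(s, x)` can be the second-order update sketched above.

```python
import numpy as np

def extract_automaton(step, s0, n_symbols, q=2, max_states=1000):
    """Extract a finite automaton from a trained recurrent net by
    quantizing each state unit's activation into q bins and following
    transitions from the start state. The first continuous state that
    lands in a bin serves as that bin's representative."""
    quantize = lambda s: tuple(np.minimum((s * q).astype(int), q - 1))
    start = quantize(s0)
    reps = {start: s0}            # representative continuous state per bin
    transitions = {}
    worklist = [start]
    while worklist and len(reps) < max_states:
        key = worklist.pop()
        for k in range(n_symbols):
            x = np.zeros(n_symbols)
            x[k] = 1.0
            s_next = step(reps[key], x)
            nkey = quantize(s_next)
            transitions[(key, k)] = nkey
            if nkey not in reps:
                reps[nkey] = s_next
                worklist.append(nkey)
    return transitions
```

The resulting transition table can then be minimized with a standard DFA-minimization pass, matching the abstract's claim that the extracted machines reduce to the minimal machine of the inferred grammar.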


International Symposium on Neural Networks | 1993

Constructive learning of recurrent neural networks

D. Chen; C.L. Giles; Guo-Zheng Sun; H. H. Chen; Y. C. Lee; M.W. Goudreau

It is difficult to determine the minimal neural network structure for a particular automaton, and a large recurrent network is in practice very difficult to train. Constructive or destructive recurrent methods might offer a solution to this problem. It is proved that one current method, recurrent cascade correlation, has fundamental limitations in representation and thus in its learning capabilities. A preliminary approach to circumventing these limitations is given: a simple constructive training method that adds neurons during training while still preserving the powerful fully recurrent structure. Through simulations it is shown that such a method can learn many types of regular grammars which the recurrent cascade correlation method is unable to learn.
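
The constructive step the abstract describes, adding a neuron while preserving the fully recurrent structure, might look like the sketch below. The near-zero initialization of the new connections, which keeps the enlarged network's initial behavior close to the old one, is an assumption; the paper's actual growth schedule may differ.

```python
import numpy as np

def add_neuron(W_rec, W_in, W_out, rng, scale=0.01):
    """Grow a fully recurrent network by one hidden unit.
    W_rec: (n, n) recurrent weights, W_in: (n, d) input weights,
    W_out: (m, n) output weights. The new row and column start near
    zero, so the enlarged net initially behaves like the old one while
    staying fully recurrent (unlike cascade correlation, which freezes
    earlier units and only feeds them forward)."""
    n = W_rec.shape[0]
    W_rec2 = np.pad(W_rec, ((0, 1), (0, 1)))
    W_rec2[n, :] = rng.normal(0, scale, n + 1)   # new unit listens to all units
    W_rec2[:, n] = rng.normal(0, scale, n + 1)   # all units listen to the new unit
    W_in2 = np.vstack([W_in, rng.normal(0, scale, W_in.shape[1])])
    W_out2 = np.hstack([W_out, rng.normal(0, scale, (W_out.shape[0], 1))])
    return W_rec2, W_in2, W_out2
```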


Physica D: Nonlinear Phenomena | 1991

Adaptive stochastic cellular automata: theory

Y. C. Lee; S. Qian; R.D. Jones; C.W. Barnes; G.W. Flake; M.K. O'Rourke; K. Lee; H. H. Chen; Guo-Zheng Sun; Y.Q. Zhang; D. Chen; C.L. Giles

The mathematical concept of cellular automata has been generalized to allow for the possibility that the uniform local interaction rules governing conventional cellular automata are replaced by nonuniform local interaction rules drawn from the same probability distribution function, in order to guarantee the statistical homogeneity of the cellular automata system. Adaptation and learning in such a system can be accomplished by evolving the probability distribution function along the steepest descent direction of some objective function in a statistically unbiased way, so that the cellular automata's dynamical behavior approaches the desired behavior asymptotically. The proposed CA model has been shown mathematically to possess the requisite convergence property under general conditions.
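
One hedged way to render this in code: each cell samples its local rule from a shared distribution, and a score-function (likelihood-ratio) estimator gives the statistically unbiased gradient for evolving that distribution. The binary states, radius-1 neighborhoods, sigmoid parameterization, and toy objective below are all assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 64, 20       # cells, time steps per episode
n_nbh = 8           # 2^3 neighborhood patterns for a radius-1 binary CA

# Shared rule distribution: p[k] = P(rule outputs 1 | neighborhood pattern k),
# parameterized through a sigmoid so the gradient is unconstrained.
theta = np.zeros(n_nbh)

def run_episode(theta):
    p = 1.0 / (1.0 + np.exp(-theta))
    state = rng.integers(0, 2, N)
    grad = np.zeros(n_nbh)
    for _ in range(T):
        # index of each cell's (left, self, right) neighborhood pattern
        k = np.roll(state, 1) * 4 + state * 2 + np.roll(state, -1)
        out = (rng.random(N) < p[k]).astype(int)   # nonuniform stochastic rules
        for j in range(n_nbh):                     # d log P / d theta for a
            grad[j] += np.sum(out[k == j] - p[j])  # sigmoid-Bernoulli is (x - p)
        state = out
    return state, grad

# Toy objective (illustrative): drive the CA toward the all-ones configuration.
for it in range(200):
    state, grad = run_episode(theta)
    reward = state.mean()
    theta += 0.1 * reward * grad   # unbiased ascent on E[reward]
```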


Physica D: Nonlinear Phenomena | 1991

Adaptive stochastic cellular automata: applications

S. Qian; Y. C. Lee; R.D. Jones; C.W. Barnes; G.W. Flake; M.K. O'Rourke; K. Lee; H. H. Chen; Guo-Zheng Sun; Y.Q. Zhang; D. Chen; C.L. Giles

The stochastic learning cellular automata model has been applied to the problem of controlling unstable systems. Two example unstable systems studied are controlled by an adaptive stochastic cellular automata algorithm with an adaptive critic. The reinforcement learning algorithm and the architecture of the stochastic CA controller are presented. Learning to balance a single pole is discussed in detail. Balancing an inverted double pendulum highlights the power of the stochastic CA approach. The stochastic CA model is compared to conventional adaptive control and artificial neural network approaches.
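
The adaptive critic mentioned here is, in the actor-critic tradition, a learned value estimate whose temporal-difference error serves as the reinforcement signal for the controller. A minimal sketch, assuming a linear critic over state features; the feature map, discount, and learning rate are illustrative:

```python
import numpy as np

def td_critic_update(v, phi_t, phi_t1, r, gamma=0.95, beta=0.05):
    """One temporal-difference update of a linear critic v . phi(s).
    phi_t, phi_t1: feature vectors of successive states; r: reward.
    Returns the updated weights and the TD error, which would be fed
    to the stochastic CA controller as its reinforcement signal."""
    delta = r + gamma * v @ phi_t1 - v @ phi_t
    return v + beta * delta * phi_t, delta
```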


International Symposium on Neural Networks | 1992

Speech recognition using dynamic time warping with neural network trained templates

Ya-Dong Liu; Y. C. Lee; H. H. Chen; Guo-Zheng Sun

A dynamic time warping based speech recognition system with neural network trained templates is proposed. The algorithm for training the templates is derived based on minimizing the classification error of the speech classifier. A speaker-independent isolated digit recognition experiment is conducted and achieves a 0.89% average recognition error rate with only one template for each digit, indicating that the derived templates are able to capture the speaker-invariant features of speech signals. Both nondiscriminative and discriminative versions of the neural net template training algorithm are considered. The former is based on maximum likelihood estimation; the latter is based on minimizing classification error. It is demonstrated through experiments that the discriminative training algorithm is far superior to the nondiscriminative one, providing both a smaller recognition error rate and greater discrimination power. Experiments using different feature representation schemes are considered. It is demonstrated that the combination of the feature vector and the delta feature vector yields the best recognition result.
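
Classification in such a system scores an utterance against each digit's template with dynamic time warping and picks the cheapest alignment. A standard DTW cost sketch follows; the step pattern and squared-Euclidean local distance are assumptions, and the template-training gradient step is omitted.

```python
import numpy as np

def dtw_distance(x, tmpl):
    """DTW alignment cost between an utterance x, shape (T, d), and a
    template tmpl, shape (M, d). Classification picks the template with
    the smallest cost; discriminative training would adjust tmpl to
    widen the cost gap to competing classes."""
    T, M = len(x), len(tmpl)
    D = np.full((T + 1, M + 1), np.inf)
    D[0, 0] = 0.0
    for t in range(1, T + 1):
        for m in range(1, M + 1):
            cost = np.sum((x[t - 1] - tmpl[m - 1]) ** 2)
            D[t, m] = cost + min(D[t - 1, m], D[t, m - 1], D[t - 1, m - 1])
    return D[T, M]
```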


International Symposium on Neural Networks | 1992

Time warping recurrent neural networks and trajectory classification

Guo-Zheng Sun; H. H. Chen; Y. C. Lee; Ya-Dong Liu

The authors propose a model of a time warping recurrent neural network (TWRNN) to handle temporal pattern classification where severely time-warped and deformed data may occur. This model is shown to have built-in time warping ability. The authors analyze the properties of TWRNN and show that for trajectory classification it has several advantages over such schemes as dynamic programming, hidden Markov models, time-delay neural networks, and neural network finite automata. A numerical example of trajectory classification is presented. This problem, which features variable sampling rates, internal states, continuous dynamics, heavily time-warped data, and deformed phase-space trajectories, is shown to be difficult for the other schemes; the TWRNN learned it easily, whereas a time-delay neural network trained on the same task failed.


International Symposium on Neural Networks | 1992

Discriminative training algorithm for predictive neural network models

Ya-Dong Liu; Y. C. Lee; H. H. Chen; Guo-Zheng Sun

A discriminative training algorithm for predictive neural network models is proposed. The algorithm is applied to a speaker-independent isolated digit recognition experiment. The recognition error rate is reduced from 2.52% when the classifier is trained with a nondiscriminative algorithm to 0.58% when the discriminative algorithm is applied. The increase in classifier discrimination ability is also demonstrated.
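
A common smooth form of such a discriminative criterion compares the correct class's prediction error with the best competing class's error; minimizing it pushes the two apart. A sketch, assuming one predictive model per class and a sigmoid loss (the exact functional form used in the paper may differ):

```python
import numpy as np

def mce_loss(errors, label, xi=1.0):
    """Minimum-classification-error style loss over per-class
    prediction errors. errors[c] is class c's prediction residual on
    the utterance; label is the true class. Nondiscriminative training
    would instead minimize errors[label] alone."""
    competing = np.delete(errors, label)
    d = errors[label] - competing.min()   # misclassification measure
    return 1.0 / (1.0 + np.exp(-xi * d))  # smooth, differentiable 0/1 loss
```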


International Symposium on Neural Networks | 1990

Recurrent neural networks, hidden Markov models and stochastic grammars

Guo-Zheng Sun; H. H. Chen; Y. C. Lee; C.L. Giles

A discussion is presented of the advantage of using a linear recurrent network to encode and recognize sequential data. The hidden Markov model (HMM) is shown to be a special case of such linear recurrent second-order neural networks. The Baum-Welch reestimation formula, which has proved very useful in training HMMs, can also be used to learn a linear recurrent network. As an example, a network successfully learned the stochastic Reber grammar with only a few hundred sample strings in about 14 iterations. The relative merits and limitations of the Baum-Welch optimal ascent algorithm are discussed in comparison with the error-correction gradient-descent learning algorithm.
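
The correspondence is visible in the HMM forward recursion, which is a linear recurrence whose effective weight A[i, j] * B[j, k] is second order in the previous state and the current observation symbol. A sketch of that recursion, with standard HMM notation assumed:

```python
import numpy as np

def forward_pass(pi, A, B, obs):
    """HMM forward recursion written as a linear second-order recurrent
    net: alpha(t+1)_j = sum_i alpha(t)_i * A[i, j] * B[j, obs_t], i.e.
    the effective weight W[j, i, k] = A[i, j] * B[j, k] is bilinear in
    the state and the input symbol. pi: initial distribution (n,),
    A: transition matrix (n, n), B: emission matrix (n, m)."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()   # P(observation sequence | model)
```

Baum-Welch reestimation of A and B is then, in this view, a training rule for the linear recurrent network's weights.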


Neural Networks for Signal Processing: Proceedings of the 1991 IEEE Workshop | 1991

Nonlinear resampling transformation for automatic speech recognition

Ya-Dong Liu; Y. C. Lee; H. H. Chen; Guo-Zheng Sun

A new technique for speech signal processing called nonlinear resampling transformation (NRT) is proposed. The representation of a speech pattern derived from this technique has two important features: first, it reduces redundancy; second, it effectively removes the nonlinear variations of speech signals in time. The authors have applied NRT to the TI isolated-word database, achieving a 99.66% recognition rate on a 10-digit, multispeaker task with a linear predictive neural net classifier. In their experiment, the authors have also found that discriminative training is superior to nondiscriminative training for linear predictive neural network classifiers.
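
One standard realization of such a nonlinear resampling is trace segmentation: resample the feature trajectory at points equally spaced in feature-space arc length rather than in time, so steady segments are compressed and transitions are preserved. Whether NRT matches this exactly is an assumption; the sketch below only illustrates the idea.

```python
import numpy as np

def resample_by_arclength(feats, n_out):
    """Resample a feature trajectory, shape (T, d), at n_out points
    equally spaced in cumulative feature-space arc length rather than
    in time. Removes much of the nonlinear temporal variation between
    utterances of the same word (trace-segmentation assumption)."""
    steps = np.linalg.norm(np.diff(feats, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(steps)])      # arc length per frame
    targets = np.linspace(0.0, s[-1], n_out)
    out = np.empty((n_out, feats.shape[1]))
    for j in range(feats.shape[1]):
        out[:, j] = np.interp(targets, s, feats[:, j])
    return out
```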

Collaboration


Dive into Guo-Zheng Sun's collaborations.

Top Co-Authors

C. Lee Giles (Pennsylvania State University)
C.L. Giles (University of Maryland)
C.W. Barnes (Los Alamos National Laboratory)
G.W. Flake (Los Alamos National Laboratory)
Gary D. Doolen (Los Alamos National Laboratory)
K. Lee (Los Alamos National Laboratory)
M.K. O'Rourke (Los Alamos National Laboratory)
Mark W. Goudreau (University of Central Florida)
R.D. Jones (Los Alamos National Laboratory)
S. Qian (Los Alamos National Laboratory)