
Publication


Featured research published by Y. C. Lee.


Neural Computation | 1992

Learning and extracting finite state automata with second-order recurrent neural networks

C.L. Giles; C. B. Miller; D. Chen; H. H. Chen; Guo-Zheng Sun; Y. C. Lee

We show that a recurrent, second-order neural network using a real-time, forward training algorithm readily learns to infer small regular grammars from positive and negative string training samples. We present simulations that show the effect of initial conditions, training set size and order, and neural network architecture. All simulations were performed with random initial weight strengths and usually converge after approximately a hundred epochs of training. We discuss a quantization algorithm for dynamically extracting finite state automata during and after training. For a well-trained neural net, the extracted automata constitute an equivalence class of state machines that are reducible to the minimal machine of the inferred grammar. We then show through simulations that many of the neural net state machines are dynamically stable, that is, they correctly classify many long unseen strings. In addition, some of these extracted automata actually outperform the trained neural network for classification of unseen strings.
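
The second-order architecture is compact enough to sketch directly. The following Python/NumPy fragment shows the state update, in which a single weight tensor couples the current state vector with a one-hot input symbol; the function names, the sigmoid nonlinearity, and the acceptance rule are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def second_order_step(W, state, symbol):
    # One step of a second-order recurrent network:
    #   S_i(t+1) = sigmoid( sum_{j,k} W[i,j,k] * S_j(t) * I_k(t) )
    # W: (n_states, n_states, n_symbols), state: (n_states,),
    # symbol: one-hot vector of shape (n_symbols,).
    pre = np.einsum('ijk,j,k->i', W, state, symbol)
    return 1.0 / (1.0 + np.exp(-pre))

def accepts(W, s0, symbols, threshold=0.5):
    # Run the string through the network, then read out a designated
    # response unit (here unit 0) to classify the string.
    state = s0
    for sym in symbols:
        state = second_order_step(W, state, sym)
    return state[0] > threshold
```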


International Symposium on Neural Networks | 1990

Function approximation and time series prediction with neural networks

R. D. Jones; Y. C. Lee; C. W. Barnes; Gary William Flake; K. Lee; P. S. Lewis

Neural networks are examined in the context of function approximation and the related field of time series prediction. A natural extension of radial basis nets is introduced. It is found that the use of an adaptable gradient and normalized basis functions can significantly reduce the amount of data necessary to train the net while maintaining the speed advantage of a net that is linear in the weights. The local nature of the network permits the use of simple learning algorithms with short memories of earlier training data. In particular, it is shown that a one-dimensional Newton method is quite fast and reasonably accurate.
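
A minimal sketch of the normalized basis-function idea, assuming Gaussian radial basis functions: normalization makes the responses form a partition of unity, and since only the output weights enter linearly, they can be fit by ordinary least squares. All names and shapes here are illustrative, not taken from the paper.

```python
import numpy as np

def design_matrix(X, centers, widths):
    # Gaussian responses phi_i(x) = exp(-|x - c_i|^2 / (2 s_i^2)),
    # normalized so each row sums to one (a partition of unity).
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    Phi = np.exp(-d2 / (2.0 * widths ** 2))
    return Phi / Phi.sum(axis=1, keepdims=True)

def fit_output_weights(X, y, centers, widths):
    # The net is linear in the output weights, so a single
    # least-squares solve finds them; no gradient descent is needed.
    Phi = design_matrix(X, centers, widths)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict(X, centers, widths, w):
    return design_matrix(X, centers, widths) @ w
```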


International Symposium on Neural Networks | 1991

Second-order recurrent neural networks for grammatical inference

C.L. Giles; D. Chen; C.B. Miller; H. H. Chen; Guo-Zheng Sun; Y. C. Lee

It is shown that a recurrent, second-order neural network using a real-time, feedforward training algorithm readily learns to infer regular grammars from positive and negative string training samples. Numerous simulations which show the effect of initial conditions, training set size and order, and neuron architecture are presented. All simulations were performed with random initial weight strengths and usually converge after approximately a hundred epochs of training. The authors discuss a quantization algorithm for dynamically extracting finite-state automata during and after training. For a well-trained neural net, the extracted automata constitute an equivalence class of state machines that are reducible to the minimal machine of the inferred grammar. It is then shown through simulations that many of the neural net state machines are dynamically stable and correctly classify long unseen strings.
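
The quantization step behind the automaton extraction can be sketched as a breadth-first search over a trained network's quantized state space: divide each state coordinate into q bins and record the transition each input symbol induces between quantized states. This is an illustrative reconstruction under those assumptions, not the authors' code; the resulting machine would still need to be minimized (e.g., with Hopcroft's algorithm).

```python
from collections import deque
import numpy as np

def extract_automaton(step_fn, s0, alphabet, q=4):
    # step_fn: maps (state, symbol) -> next state, e.g. a trained net.
    # s0: initial state vector with entries in [0, 1].
    # alphabet: list of one-hot input symbols; q: bins per coordinate.
    quantize = lambda s: tuple(np.minimum((s * q).astype(int), q - 1))
    start = quantize(s0)
    transitions = {}
    frontier = deque([(start, s0)])
    seen = {start}
    while frontier:
        qstate, state = frontier.popleft()
        for k, sym in enumerate(alphabet):
            nxt = step_fn(state, sym)
            qnxt = quantize(nxt)
            transitions[(qstate, k)] = qnxt
            if qnxt not in seen:
                seen.add(qnxt)
                frontier.append((qnxt, nxt))
    return start, transitions
```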


International Symposium on Neural Networks | 1993

Constructive learning of recurrent neural networks

D. Chen; C.L. Giles; Guo-Zheng Sun; H. H. Chen; Y. C. Lee; M.W. Goudreau

It is difficult to determine the minimal neural network structure for a particular automaton, and a large recurrent network is in practice very difficult to train. Constructive or destructive recurrent methods might offer a solution to this problem. It is proved that one current method, recurrent cascade correlation, has fundamental limitations in representation and thus in its learning capabilities. A preliminary approach to circumventing these limitations is given: a simple constructive training method that adds neurons during training while still preserving the powerful fully recurrent structure. Through simulations it is shown that such a method can learn many types of regular grammars that the recurrent cascade correlation method is unable to learn.
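
The constructive step can be pictured as growing a fully recurrent second-order weight tensor by one neuron: the trained weights are copied verbatim and the new rows and columns start as small random values, so the grown network initially behaves close to the one it replaces. A minimal sketch under those assumptions (the initialization scale is not specified in the abstract):

```python
import numpy as np

def add_neuron(W, scale=0.1, seed=0):
    # Expand a second-order weight tensor from (n, n, k) to
    # (n+1, n+1, k) while preserving the trained sub-network.
    n, _, k = W.shape
    rng = np.random.default_rng(seed)
    W_grown = scale * rng.standard_normal((n + 1, n + 1, k))
    W_grown[:n, :n, :] = W  # keep all previously learned weights
    return W_grown
```

Training then resumes on the full tensor, so the new unit participates in the fully recurrent dynamics rather than being frozen, which is the representational point of difference from recurrent cascade correlation.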


International Symposium on Neural Networks | 1990

Function approximation with an orthogonal basis net

Y. C. Lee; R.D. Jones; C.W. Barnes; K. Lee

An orthogonal basis net (OrthoNet) is studied for function approximation. The network transforms the input space into a new space in which orthogonal basis functions are easy to construct. This net has the advantages of fast and accurate learning and the ability to deal with high-dimensional systems, and its error surface has only one minimum, so local minima are not attractors for the learning algorithm. The speed and accuracy advantages of the OrthoNet are illustrated for some 1-D and 2-D problems.
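
Because such a net is linear in its weights, the squared-error surface is quadratic with a single global minimum, which is the property the abstract highlights. The sketch below illustrates this with NumPy's Chebyshev basis as a stand-in orthogonal basis; the abstract does not say which basis the OrthoNet actually constructs.

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

# Noisy samples of a 1-D target function on [-1, 1].
x = np.linspace(-1.0, 1.0, 200)
y = np.sin(3.0 * x) + 0.05 * np.random.default_rng(0).standard_normal(x.size)

# Linear-in-weights fit in an orthogonal basis: one quadratic
# minimum, so the solve cannot be trapped by local minima.
coef = cheb.chebfit(x, y, deg=10)
y_hat = cheb.chebval(x, coef)
print("max abs error:", np.max(np.abs(y_hat - np.sin(3.0 * x))))
```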


Physics of Fluids B: Plasma Physics | 1989

A study of nonlinear dynamical models of plasma turbulence

S. Qian; Y. C. Lee; H. H. Chen

A dissipative Benjamin–Ono equation is introduced to study turbulence of a magnetic two-fluid system. An exact nonlinear mode truncation method is applied in which a finite number of poles are used to represent completely the asymptotic behavior of the system. The pole dynamics exhibit a wide range of nonlinear phenomena, including periodic and chaotic orbits. Statistical properties of the resultant turbulent flows, such as correlation functions and energy spectra, are also studied. The results show that the system can indeed reach a strongly turbulent state.
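
For reference, the underlying model can be written down schematically. The standard Benjamin–Ono equation with a generic dissipation/driving operator D[u] (the abstract does not specify the exact terms used) and the classical pole representation of its solutions are

\[
\partial_t u + 2u\,\partial_x u + \mathcal{H}\!\left[\partial_x^2 u\right] = \mathcal{D}[u],
\qquad
u(x,t) = i\sum_{j=1}^{N}\left(\frac{1}{x-a_j(t)} - \frac{1}{x-\bar a_j(t)}\right),
\]

where \mathcal{H} is the Hilbert transform and the poles satisfy \operatorname{Im} a_j(t) < 0. The truncation replaces the PDE by ordinary differential equations for the finitely many pole positions a_j(t), whose motion then carries the asymptotic dynamics described above.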


International Symposium on Neural Networks | 1992

Speech recognition using dynamic time warping with neural network trained templates

Ya-Dong Liu; Y. C. Lee; H. H. Chen; Guo-Zheng Sun

A dynamic time warping based speech recognition system with neural network trained templates is proposed. The algorithm for training the templates is derived based on minimizing classification error of the speech classifier. A speaker-independent isolated digit recognition experiment is conducted and achieves a 0.89% average recognition error rate with only one template for each digit, indicating that the derived templates are able to capture the speaker-invariant features of speech signals. Both nondiscriminative and discriminative versions of the neural net template training algorithm are considered. The former is based on maximum likelihood estimation. The latter is based on minimizing classification error. It is demonstrated through experiments that the discriminative training algorithm is far superior to the nondiscriminative one, providing both smaller recognition error rate and greater discrimination power. Experiments using different feature representation schemes are considered. It is demonstrated that the combination of the feature vector and the delta feature vector yields the best recognition result.
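
The dynamic time warping core of the recognizer is a classic dynamic program: align the utterance against each digit template and pick the template with the smallest alignment cost. The sketch below is textbook DTW under that framing, not the authors' trained-template variant; names and feature shapes are illustrative.

```python
import numpy as np

def dtw_distance(x, t):
    # Minimal alignment cost between feature sequences
    # x (n frames, d dims) and t (m frames, d dims).
    n, m = len(x), len(t)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(x[i - 1] - t[j - 1])   # local frame distance
            D[i, j] = cost + min(D[i - 1, j],            # stretch template
                                 D[i, j - 1],            # stretch utterance
                                 D[i - 1, j - 1])        # step both
    return D[n, m]

def recognize(x, templates):
    # templates: dict mapping digit label -> template feature sequence.
    return min(templates, key=lambda label: dtw_distance(x, templates[label]))
```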


Journal of Mathematical Physics | 1990

A turbulence model with stochastic soliton motion

S. Qian; H. H. Chen; Y. C. Lee

A dissipative Benjamin–Ono equation is used to study fluid and plasma turbulence. The system is studied by an exact nonlinear mode truncation method in which a finite number of poles are used to represent the solution. The justification of the pole expansion approach is discussed with the proof of a completeness theorem. The stability and spectrum analysis show that the asymptotic behavior of the system is completely represented by a finite number of nonlinear modes. The behavior of those nonlinear modes resembles solitons, and exhibits a wide range of bifurcation phenomena and routes to turbulence.


International Symposium on Neural Networks | 1992

Time warping recurrent neural networks and trajectory classification

Guo-Zheng Sun; H. H. Chen; Y. C. Lee; Ya-Dong Liu

The authors propose a time warping recurrent neural network (TWRNN) model to handle temporal pattern classification where severely time-warped and deformed data may occur. This model is shown to have built-in time warping ability. The authors analyze the properties of the TWRNN and show that for trajectory classification it has several advantages over schemes such as dynamic programming, hidden Markov models, time-delay neural networks, and neural network finite automata. A numerical example of trajectory classification is presented. This problem, which features variable sampling rates, internal states, continuous dynamics, heavily time-warped data, and deformed phase-space trajectories, is shown to be difficult for the other schemes, whereas the TWRNN learns it easily. A time-delay neural network (TDNN) trained on the same task failed to learn it.


Physics of Fluids B: Plasma Physics | 1991

The effect of induced spatial incoherence on the absolute Raman instability

P. N. Guzdar; W. Tan; Y. C. Lee; C. S. Liu; R. H. Lehmberg

A numerical and analytical study of the Raman instability in a homogeneous plasma is presented in which the pump has been modeled to include the effects of broad bandwidth and the induced spatial incoherence (ISI) method of beam smoothing. For a time-averaged homogeneous growth rate γ₀ and a bandwidth σ, there is a significant reduction in Raman backscattering when σ ≳ 2γ₀, provided γ₀² is near the threshold intensity. However, for γ₀² very large compared to the threshold, neither ISI nor bandwidth affects Raman scattering.

Collaboration


Dive into Y. C. Lee's collaborations.

Top Co-Authors

C. Lee Giles (Pennsylvania State University)
Gary D. Doolen (Los Alamos National Laboratory)
C.W. Barnes (Los Alamos National Laboratory)
Hudong Chen (Los Alamos National Laboratory)
Mark W. Goudreau (University of Central Florida)
R. H. Lehmberg (United States Naval Research Laboratory)
R.D. Jones (Los Alamos National Laboratory)
T. Maxwell (United States Naval Research Laboratory)