Publication


Featured research published by Christian W. Omlin.


Journal of the ACM | 1996

Constructing deterministic finite-state automata in recurrent neural networks

Christian W. Omlin; C. Lee Giles

Recurrent neural networks that are trained to behave like deterministic finite-state automata (DFAs) can show deteriorating performance when tested on long strings. This deteriorating performance can be attributed to the instability of the internal representation of the learned DFA states. The use of a sigmoidal discriminant function together with the recurrent structure contributes to this instability. We prove that a simple algorithm can construct second-order recurrent neural networks with a sparse interconnection topology and sigmoidal discriminant function such that the internal DFA state representations are stable, that is, the constructed network correctly classifies strings of arbitrary length. The algorithm is based on encoding weight strengths directly into the neural network. We derive a relationship between the weight strength and the number of DFA states for robust string classification. For a DFA with n states and m input alphabet symbols, the constructive algorithm generates a “programmed” neural network with O(n) neurons and O(mn) weights. We compare our algorithm to other methods proposed in the literature.
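For a feel of how such a programmed encoding can work, here is a minimal Python sketch. It assumes one-hot state and input encodings, the second-order update S_j <- sigmoid(b_j + sum_{i,k} W[j,i,k] S_i I_k), and a single programming strength H; the weight values and function names are illustrative, not the paper's exact construction.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def encode_dfa(n_states, n_symbols, delta, H=8.0):
    """Program a second-order recurrent network to mimic a DFA.

    delta maps (current state, input symbol) -> next state.  The tensor
    layout W[j, i, k] and the weight values (+H, -H, bias -H/2) are
    illustrative choices, not the paper's exact encoding.
    """
    W = np.zeros((n_states, n_states, n_symbols))
    b = np.full(n_states, -H / 2.0)       # keep neurons low unless driven
    for (i, k), j in delta.items():
        W[j, i, k] = +H                   # drive the target state neuron high
        if j != i:
            W[i, i, k] = -H               # switch the current state neuron off
    return W, b

def run(W, b, start, symbols):
    """Iterate S_j <- sigmoid(b_j + sum_{i,k} W[j,i,k] * S_i * I_k)."""
    S = np.zeros(W.shape[0])
    S[start] = 1.0                        # one-hot initial state
    for k in symbols:
        I = np.zeros(W.shape[2])
        I[k] = 1.0
        S = sigmoid(b + np.einsum('jik,i,k->j', W, S, I))
    return S

# Toy DFA: parity of 1s over the alphabet {0, 1}; state 1 means "odd".
delta = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
W, b = encode_dfa(2, 2, delta)
print(run(W, b, start=0, symbols=[1, 1, 0, 1]).round(2))  # neuron 1 ends near 1
```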


Neural Networks | 1996

Extraction of rules from discrete-time recurrent neural networks

Christian W. Omlin; C. Lee Giles

The extraction of symbolic knowledge from trained neural networks and the direct encoding of (partial) knowledge into networks prior to training are important issues. They allow the exchange of information between symbolic and connectionist knowledge representations. The focus of this paper is on the quality of the rules that are extracted from recurrent neural networks. Discrete-time recurrent neural networks can be trained to correctly classify strings of a regular language. Rules defining the learned grammar can be extracted from networks in the form of deterministic finite-state automata (DFAs) by applying clustering algorithms in the output space of recurrent state neurons. Our algorithm can extract from the same network different finite-state automata that are all consistent with a training set. We compare the generalization performance of these different models and of the trained network, and we introduce a heuristic that permits us to choose, among the consistent DFAs, the model which best approximates the learned regular grammar.
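As a rough illustration of the extraction step, the sketch below partitions each state neuron's output into q equal intervals and breadth-first searches the quantized state space, a simple stand-in for the clustering described above; the function names and equal-width partitioning are assumptions.

```python
import numpy as np
from collections import deque

def extract_dfa(step, s0, n_symbols, q=2):
    """Extract a DFA from a trained recurrent network by quantization.

    step(state_vector, symbol) -> next state_vector is the trained network's
    transition; partitioning each state neuron's output into q equal intervals
    is a simple stand-in for clustering in the state-neuron output space.
    """
    def cluster(s):
        return tuple(np.minimum((np.asarray(s) * q).astype(int), q - 1))

    start = cluster(s0)
    reps = {start: s0}                  # one representative vector per cluster
    transitions = {}                    # (cluster, symbol) -> cluster
    frontier = deque([start])
    while frontier:
        c = frontier.popleft()
        for a in range(n_symbols):
            nxt_vec = step(reps[c], a)
            nxt = cluster(nxt_vec)
            transitions[(c, a)] = nxt
            if nxt not in reps:         # unseen cluster becomes a new DFA state
                reps[nxt] = nxt_vec
                frontier.append(nxt)
    # Accepting states would be read off the network's output neuron per cluster.
    return start, transitions
```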


Connection Science | 1993

Extraction, Insertion and Refinement of Symbolic Rules in Dynamically Driven Recurrent Neural Networks

C. Lee Giles; Christian W. Omlin

Recurrent neural networks readily process, learn and generate temporal sequences. In addition, they have been shown to have impressive computational power. Recurrent neural networks can be trained with symbolic string examples encoded as temporal sequences to behave like sequential finite-state recognizers. We discuss methods for extracting, inserting and refining symbolic grammatical rules for recurrent networks. This paper discusses various issues: how rules are inserted into recurrent networks, how they affect training and generalization, and how those rules can be checked and corrected. The capability of exchanging information between a symbolic representation (grammatical rules) and a connectionist representation (trained weights) has interesting implications. After partially known rules are inserted, recurrent networks can be trained to preserve inserted rules that were correct and to correct through training inserted rules that were 'incorrect', that is, inconsistent with the training data.


IEEE Transactions on Knowledge and Data Engineering | 1996

Rule revision with recurrent neural networks

Christian W. Omlin; C.L. Giles

Recurrent neural networks readily process, recognize and generate temporal sequences. By encoding grammatical strings as temporal sequences, recurrent neural networks can be trained to behave like deterministic sequential finite-state automata. Algorithms have been developed for extracting grammatical rules from trained networks. Using a simple method for inserting prior knowledge (or rules) into recurrent neural networks, we show that recurrent neural networks are able to perform rule revision. Rule revision is performed by comparing the inserted rules with the rules in the finite-state automata extracted from trained networks. The results from training a recurrent neural network to recognize a known non-trivial, randomly generated regular grammar show not only that the networks preserve correct rules but also that they are able to correct, through training, inserted rules which were initially incorrect (i.e. rules that were not in the randomly generated grammar).
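The bookkeeping behind such a comparison might look like the hypothetical sketch below; it assumes the extracted automaton's states have already been put in correspondence with the states referenced by the inserted rules (establishing that correspondence is a separate step not shown here).

```python
def compare_rules(inserted, extracted):
    """Split inserted rules into those preserved by training and those revised.

    Both arguments map (state, symbol) -> next state, with extracted states
    already aligned to the inserted ones (an assumption of this sketch).
    """
    preserved = {t: s for t, s in inserted.items() if extracted.get(t) == s}
    revised = {t: (s, extracted[t]) for t, s in inserted.items()
               if t in extracted and extracted[t] != s}
    return preserved, revised

# e.g. rule (state 0, symbol 1) -> 2 was kept, (1, 0) -> 1 was revised to 3
print(compare_rules({(0, 1): 2, (1, 0): 1}, {(0, 1): 2, (1, 0): 3}))
```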


international conference on machine learning | 1992

Training second-order recurrent neural networks using hints

Christian W. Omlin; C. Lee Giles

We investigate a method for inserting rules into discrete-time second-order recurrent neural networks which are trained to recognize regular languages. The rules defining regular languages can be expressed in the form of transitions in the corresponding deterministic finite-state automaton. Inserting these rules as hints into networks with second-order connections is straightforward. Our simulation results show that even weak hints seem to improve the convergence time by an order of magnitude.
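A hedged sketch of what inserting transition rules as hints into second-order weights could look like before training; the tensor layout W[j, i, k], the hint strength H, and the bias treatment are assumptions made for illustration.

```python
import numpy as np

def insert_hints(W, b, known_transitions, H=4.0):
    """Bias randomly initialized second-order weights with partial DFA knowledge.

    W[j, i, k] couples state neuron i and input symbol k to state neuron j;
    known_transitions maps (state i, symbol k) -> state j.  The hint strength H
    and which weights are touched are assumptions of this sketch.
    """
    for (i, k), j in known_transitions.items():
        W[j, i, k] += H          # encourage the hinted transition
        if j != i:
            W[i, i, k] -= H      # discourage remaining in the current state
        b[j] = -H / 2.0          # hinted neuron stays off unless driven
    return W, b

rng = np.random.default_rng(0)
n_states, n_symbols = 5, 2
W = rng.normal(scale=0.1, size=(n_states, n_states, n_symbols))
b = rng.normal(scale=0.1, size=n_states)
W, b = insert_hints(W, b, {(0, 1): 2, (2, 0): 3})   # two known transitions
```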


Neural Computation | 1996

Stable encoding of large finite-state automata in recurrent neural networks with sigmoid discriminants

Christian W. Omlin; C. Lee Giles

We propose an algorithm for encoding deterministic finite-state automata (DFAs) in second-order recurrent neural networks with sigmoidal discriminant function and we prove that the languages accepted by the constructed network and the DFA are identical. The desired finite-state network dynamics is achieved by programming a small subset of all weights. A worst case analysis reveals a relationship between the weight strength and the maximum allowed network size, which guarantees finite-state behavior of the constructed network. We illustrate the method by encoding random DFAs with 10, 100, and 1000 states. While the theory predicts that the weight strength scales with the DFA size, we find empirically the weight strength to be almost constant for all the random DFAs. These results can be explained by noting that the generated DFAs represent average cases. We empirically demonstrate the existence of extreme DFAs for which the weight strength scales with DFA size.
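The role of the weight strength can be illustrated with a toy one-dimensional iteration (not the paper's actual worst-case analysis): for sufficiently large H, the sigmoid map below has well-separated stable low and high fixed points, the kind of bistability that keeps 'on' and 'off' state neurons from drifting together on long strings.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fixed_points(H, iters=500):
    """Iterate x -> sigmoid(H * (x - 0.5)) from a low and a high start."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        lo = sigmoid(H * (lo - 0.5))
        hi = sigmoid(H * (hi - 0.5))
    return lo, hi

for H in (2.0, 6.0, 10.0):
    print(H, fixed_points(H))
# small H: both starts collapse to 0.5; large H: well-separated low/high points
```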


international symposium on neural networks | 1996

Representation of fuzzy finite state automata in continuous recurrent neural networks

Christian W. Omlin; K.K. Thornber; C.L. Giles

Based on previous work on encoding deterministic finite-state automata (DFA) in discrete-time, second-order recurrent neural networks with sigmoidal discriminant functions, we propose an algorithm that constructs an augmented recurrent neural network that encodes fuzzy finite-state automata (FFA). Given an arbitrary FFA, we apply an algorithm which transforms the FFA into an equivalent deterministic acceptor which computes the fuzzy string membership function. The neural network can be constructed such that it recognizes strings of fuzzy regular languages with arbitrary accuracy.
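Once the FFA has been transformed into a deterministic acceptor whose states carry membership degrees, evaluating the fuzzy membership of a string reduces to an ordinary DFA run; the sketch below illustrates that final step with made-up names (delta, degree).

```python
def fuzzy_membership(delta, degree, start, string):
    """Run the deterministic acceptor and return the membership degree of the
    state reached (delta: (state, symbol) -> state; degree: state -> [0, 1])."""
    q = start
    for a in string:
        q = delta[(q, a)]
    return degree[q]

# toy acceptor: strings ending in symbol 1 have membership 0.7, others 0.2
delta = {(0, 0): 0, (0, 1): 1, (1, 0): 0, (1, 1): 1}
print(fuzzy_membership(delta, degree={0: 0.2, 1: 0.7}, start=0, string=[0, 1, 1]))  # 0.7
```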


IEEE Transactions on Fuzzy Systems | 1998

Fuzzy finite-state automata can be deterministically encoded into recurrent neural networks

Christian W. Omlin; Karvel K. Thornber; Clyde Lee Giles


Archive | 1994

Extraction and insertion of symbolic information in recurrent neural networks

Christian W. Omlin; C. Lee Giles


Archive | 1995

Deterministic encoding of fuzzy finite state automata in continuous recurrent neural networks

C. Lee Giles; Christian W. Omlin; Karvel K. Thornber

Collaboration


Dive into Christian W. Omlin's collaboration.

Top Co-Authors

C. Lee Giles

Pennsylvania State University
