
Publication


Featured research published by Geoffrey G. Towell.


Artificial Intelligence | 1994

Knowledge-based artificial neural networks

Geoffrey G. Towell; Jude W. Shavlik

Hybrid learning methods use theoretical knowledge of a domain and a set of classified examples to develop a method for accurately classifying examples not seen during training. The challenge for hybrid learning systems is to use the information provided by one source to offset information missing from the other; by doing so, a hybrid system should learn more effectively than systems that use only one of the information sources. KBANN (Knowledge-Based Artificial Neural Networks) is a hybrid learning system built on top of connectionist learning techniques. It maps problem-specific “domain theories”, represented in propositional logic, into neural networks and then refines this reformulated knowledge using backpropagation. KBANN is evaluated by extensive empirical tests on two problems from molecular biology. Among other results, these tests show that the networks created by KBANN generalize better than a wide variety of learning systems, as well as several techniques proposed by biologists.
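The core construction this abstract describes, mapping a propositional rule into a network unit, can be sketched briefly. The sketch below assumes the standard KBANN convention of a fixed weight magnitude ω for positive antecedents, −ω for negated ones, and a bias that makes the unit act as a soft conjunction; the constant, helper names, and toy rule are illustrative, not taken from the paper.

```python
# Minimal sketch of KBANN-style rules-to-network translation (assumptions
# noted in the text above). A rule "head :- a, b, not c" becomes one sigmoid
# unit: +OMEGA per positive antecedent, -OMEGA per negated one, and a bias
# placed so the unit fires only when the whole conjunction holds.

import math

OMEGA = 4.0  # assumed fixed link-weight magnitude

def unit_from_rule(positive, negated):
    """Return (weights, bias) for a conjunctive rule unit.

    With p positive antecedents, the bias -(p - 0.5) * OMEGA puts the
    sigmoid threshold between 'all positive antecedents true' and
    'one antecedent missing'.
    """
    weights = {a: OMEGA for a in positive}
    weights.update({a: -OMEGA for a in negated})
    bias = -(len(positive) - 0.5) * OMEGA
    return weights, bias

def activation(weights, bias, inputs):
    net = bias + sum(w * inputs.get(a, 0.0) for a, w in weights.items())
    return 1.0 / (1.0 + math.exp(-net))  # sigmoid unit

# Toy rule: promoter :- contact, conformation
w, b = unit_from_rule(positive=["contact", "conformation"], negated=[])
print(activation(w, b, {"contact": 1, "conformation": 1}))  # near 1
print(activation(w, b, {"contact": 1, "conformation": 0}))  # near 0
```

Multiple rules sharing a consequent would each get their own unit feeding a disjunctive (OR-like) unit, and the resulting network is then refined with backpropagation as the abstract describes.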


Machine Learning | 1993

Extracting Refined Rules from Knowledge-Based Neural Networks

Geoffrey G. Towell; Jude W. Shavlik

Neural networks, despite their empirically proven abilities, have been little used for the refinement of existing knowledge because this task requires a three-step process. First, knowledge must be inserted into a neural network. Second, the network must be refined. Third, the refined knowledge must be extracted from the network. We have previously described a method for the first step of this process. Standard neural learning techniques can accomplish the second step. In this article, we propose and empirically evaluate a method for the final, and possibly most difficult, step. Our method efficiently extracts symbolic rules from trained neural networks. The four major results of empirical tests of this method are that the extracted rules 1) closely reproduce the accuracy of the network from which they are extracted; 2) are superior to the rules produced by methods that directly refine symbolic rules; 3) are superior to those produced by previous techniques for extracting rules from trained neural networks; and 4) are “human comprehensible.” Thus, this method demonstrates that neural networks can be used to effectively refine symbolic knowledge. Moreover, the rule-extraction technique developed herein contributes to the understanding of how symbolic and connectionist approaches to artificial intelligence can be profitably integrated.
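The extraction step can be pictured, in simplified form, for a single trained unit. The sketch below is a loose reconstruction in the spirit of the method described here (group similar incoming weights, discard insignificant ones, then read off how many of the surviving antecedents must hold); the tolerance, names, and toy weights are assumptions, not the paper's exact procedure.

```python
# Simplified sketch of extracting an M-of-N style rule from one trained unit.
# Idea: keep only significant weights, collapse them to a common magnitude,
# then find the smallest count m of active antecedents that drives the unit
# past its threshold. All constants here are illustrative assumptions.

def extract_m_of_n(weights, bias, tol=0.25):
    """weights: dict antecedent -> trained weight; bias: trained bias.
    Returns (m, antecedents): the unit fires when at least m of the
    antecedents are true. Assumes one dominant positive weight cluster."""
    # 1. Keep only weights significant relative to the largest magnitude.
    w_max = max(abs(w) for w in weights.values())
    significant = {a: w for a, w in weights.items() if abs(w) > tol * w_max}

    # 2. Collapse surviving weights to their average magnitude.
    avg = sum(abs(w) for w in significant.values()) / len(significant)

    # 3. A sigmoid unit is "on" roughly when bias + m * avg > 0;
    #    find the smallest such m.
    n = len(significant)
    for m in range(1, n + 1):
        if bias + m * avg > 0:
            return m, sorted(significant)
    return n, sorted(significant)

# Toy trained unit: three strong antecedents plus one near-zero leftover.
w = {"minus35": 3.8, "minus10": 4.1, "spacing": 3.9, "noise": 0.2}
m, ants = extract_m_of_n(w, bias=-6.0)
print(f"{m}-of-{ants}")  # prints: 2-of-['minus10', 'minus35', 'spacing']
```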


Readings in knowledge acquisition and learning | 1993

Symbolic and neural learning algorithms: an experimental comparison

Jude W. Shavlik; Raymond J. Mooney; Geoffrey G. Towell

Despite the fact that many symbolic and neural network (connectionist) learning algorithms address the same problem of learning from classified examples, very little is known regarding their comparative strengths and weaknesses. Experiments comparing the ID3 symbolic learning algorithm with the perceptron and backpropagation neural learning algorithms have been performed using five large, real-world data sets. Overall, backpropagation performs slightly better than the other two algorithms in terms of classification accuracy on new examples, but takes much longer to train. Experimental results suggest that backpropagation can work significantly better on data sets containing numerical data. Also analyzed empirically are the effects of (1) the amount of training data, (2) imperfect training examples, and (3) the encoding of the desired outputs. Backpropagation occasionally outperforms the other two systems when given relatively small amounts of training data. It is slightly more accurate than ID3 when examples are noisy or incompletely specified. Finally, backpropagation more effectively utilizes a “distributed” output encoding.
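The shape of this experimental setup can be approximated in a few lines. The sketch below uses scikit-learn's CART-style decision tree as a stand-in for ID3 and an MLP trained with backpropagation, on a bundled dataset rather than the paper's five real-world sets; it illustrates the comparison methodology, not a reproduction of the reported results.

```python
# Rough analogue of the symbolic-vs-neural comparison described above.
# DecisionTreeClassifier (CART) stands in for ID3; MLPClassifier
# (backpropagation) for the neural learner; the bundled breast-cancer
# dataset stands in for the paper's five real-world data sets.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

learners = {
    "decision tree (ID3 stand-in)": DecisionTreeClassifier(random_state=0),
    "backprop MLP": make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
    ),
}

for name, clf in learners.items():
    # Classification accuracy on held-out folds, as in the paper's metric.
    scores = cross_val_score(clf, X, y, cv=10)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```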


Machine Learning | 1991

Symbolic and Neural Learning Algorithms: An Experimental Comparison

Jude W. Shavlik; Raymond J. Mooney; Geoffrey G. Towell

Despite the fact that many symbolic and neural network (connectionist) learning algorithms address the same problem of learning from classified examples, very little is known regarding their comparative strengths and weaknesses. Experiments comparing the ID3 symbolic learning algorithm with the perceptron and backpropagation neural learning algorithms have been performed using five large, real-world data sets. Overall, backpropagation performs slightly better than the other two algorithms in terms of classification accuracy on new examples, but takes much longer to train. Experimental results suggest that backpropagation can work significantly better on data sets containing numerical data. Also analyzed empirically are the effects of (1) the amount of training data, (2) imperfect training examples, and (3) the encoding of the desired outputs. Backpropagation occasionally outperforms the other two systems when given relatively small amounts of training data. It is slightly more accurate than ID3 when examples are noisy or incompletely specified. Finally, backpropagation more effectively utilizes a “distributed” output encoding.


Connection Science | 1989

An Approach to Combining Explanation-based and Neural Learning Algorithms

Jude W. Shavlik; Geoffrey G. Towell

Machine learning is an area where both symbolic and neural approaches to artificial intelligence have been heavily investigated. However, there has been little research into the synergies achievable by combining these two learning paradigms. A hybrid system that combines the symbolically oriented explanation-based learning paradigm with the neural backpropagation algorithm is described. In the presented EBL-ANN algorithm, the initial neural network configuration is determined by the generalized explanation of the solution to a specific classification task. This approach overcomes problems that arise when using imperfect theories to build explanations and addresses the problem of choosing a good initial neural network configuration. Empirical results show that the hybrid system learns a concept more accurately than the explanation-based system by itself and learns faster and generalizes better than the neural learning system by itself.
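The refinement half of this hybrid can be sketched as ordinary gradient descent that starts from theory-derived weights rather than random ones. The sketch below assumes a single theory-initialized sigmoid unit (as in the rules-to-network sketch earlier) and refines it on labeled examples; the learning rate, epoch count, and toy data are illustrative.

```python
# Sketch of the "refine with backpropagation" half of the hybrid described
# above: initialize from the (imperfect) theory, then run gradient descent
# on labeled examples. A single sigmoid unit is used for brevity; a real
# network would stack such units.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def refine(w, b, X, y, lr=0.5, epochs=2000):
    """Gradient descent on cross-entropy loss, starting from (w, b)."""
    for _ in range(epochs):
        p = sigmoid(X @ w + b)          # forward pass
        grad = p - y                    # dLoss/dnet for sigmoid + cross-entropy
        w -= lr * X.T @ grad / len(y)   # backward pass: weight update
        b -= lr * grad.mean()           # bias update
    return w, b

# Theory-initialized unit: "fire if both features hold". The theory is
# imperfect: the labels below say feature 0 alone suffices, so refinement
# must shift the weights.
w = np.array([4.0, 4.0])
b = -6.0
X = np.array([[1, 1], [1, 0], [0, 1], [0, 0]], dtype=float)
y = np.array([1, 1, 0, 0], dtype=float)

w, b = refine(w, b, X, y)
print(np.round(sigmoid(X @ w + b), 2))  # should approach [1, 1, 0, 0]
```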


international conference on machine learning | 1989

Processing Issues in Comparisons of Symbolic and Connectionist Learning Systems

Douglas H. Fisher; Kathleen B. McKusick; Raymond J. Mooney; Jude W. Shavlik; Geoffrey G. Towell

Symbolic and connectionist learning strategies are receiving much attention. Comparative studies should qualify the advantages of systems from each paradigm. However, these systems make differing assumptions along several dimensions, thus complicating the design of ‘fair’ experimental comparisons. This paper describes our comparative studies of ID3 and back-propagation and suggests experimental dimensions that may be useful in cross-paradigm experimental design.


international conference on machine learning | 1989

Combining explanation-based learning and artificial neural networks

Jude W. Shavlik; Geoffrey G. Towell

Explanation-based and neural learning algorithms each have their own strengths and weaknesses. An approach for combining these two styles of learning is presented. Rather than being arbitrarily chosen, initial neural network configurations are determined by the generalized explanation of the solutions to specific tasks. If these generalized explanations are not completely correct, the neural network refines them. Empirical tests with an initial implementation of the approach demonstrate that the hybrid out-performs each of the individual algorithms alone.


Proceedings of the 2nd International Conference | 1993

Using knowledge-based neural networks to refine existing biological theories

Jude W. Shavlik; Geoffrey G. Towell; Michiel O. Noordewier

Artificial neural networks have proven to be a useful technique for analyzing biological data and automatically producing accurate pattern recognizers. However, most applications of neural networks have not taken advantage of existing knowledge about the task at hand. This paper presents a method for using such problem-specific knowledge. The KBANN algorithm uses inference rules about the current biological problem, which need only be approximately correct, to initially configure a neural network. This network is then refined by analyzing examples and counterexamples of the concept being learned. An application of KBANN to the prediction of E. coli transcriptional promoters demonstrates its superiority to alternative techniques, taken from both the machine-learning and molecular-biology literatures. In addition, since KBANN uses a human-comprehensible “theory” of the current problem to define the initial neural network topology, it is possible to extract a refined set of inference rules following training. A refined theory for promoter recognition is presented; the extracted rules are roughly as accurate on novel data as the trained neural network from which they came.
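A brief note on how sequence data typically reaches such a network: each nucleotide position in a fixed window is expanded into indicator inputs, one per base, so rule antecedents like "position −35 is T" map directly onto single input units. The sketch below assumes this standard one-hot encoding; the window contents are illustrative, not the paper's exact representation.

```python
# Sketch of the input encoding typically used for promoter recognition:
# each nucleotide in a fixed window becomes four Boolean features (one per
# base). One-hot scheme assumed; not necessarily the paper's exact setup.

BASES = "ACGT"

def one_hot(sequence):
    """Encode a DNA string as a flat list of 0/1 indicator features."""
    features = []
    for base in sequence.upper():
        features.extend(1 if base == b else 0 for b in BASES)
    return features

window = "TTGACA"        # toy window: the classic -35 consensus hexamer
x = one_hot(window)
print(len(x), x[:8])     # 24 features; first two positions shown
```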


Neural Networks for Perception: Human and Machine Perception | 1992

Hybrid Symbolic-Neural Methods for Improved Recognition Using High-Level Visual Features

Geoffrey G. Towell; Jude W. Shavlik

This chapter discusses hybrid symbolic-neural methods for improved recognition using high-level visual features. Knowledge-based artificial neural networks (KBANN) is a hybrid learning system that makes use of both symbolic and neural learning techniques. The approach taken by KBANN is to create knowledge-based neural networks (KNNs) that initially encode, in their units and weights, the information contained in a set of symbolic rules. KNNs are made by establishing a mapping between domain theories, composed of hierarchical sets of non-recursive propositional rules, and feedforward neural networks. This mapping defines the topology of the KNN as well as its initial link weights. By defining KNNs in this way, problems such as the choice of an initial network topology and the sensitivity of the network to its initial conditions are either eliminated or significantly reduced. Furthermore, unlike ANNs, KNNs have their attention initially focused upon features and combinations of features. Thus, they are less susceptible to spurious correlations in the training data. The experimental results indicate that KBANN can be used to improve roughly correct information about the recognition of objects given high-level features. The results further indicate that while this roughly correct information can result in initial performance that is worse than random guessing, its use as a basis for learning can significantly improve generalization by trained networks while reducing the time required for training. No claims about neurological plausibility are made, but it is expected that the approach characterized by KBANN will prove useful in interpreting the information derived by low-level vision systems.


national conference on artificial intelligence | 1990

Refinement of approximate domain theories by knowledge-based neural networks

Geoffrey G. Towell; Jude W. Shavlik; Michiel O. Noordewier

Collaboration


Dive into Geoffrey G. Towell's collaborations.

Top Co-Authors

Jude W. Shavlik (University of Wisconsin-Madison)

Raymond J. Mooney (University of Texas at Austin)

Alan Gove (University of Texas at Austin)

Mark Craven (University of Wisconsin-Madison)