Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where George L. Rudolph is active.

Publication


Featured research published by George L. Rudolph.


international conference on pattern recognition | 1994

A VLSI implementation of a parallel, self-organizing learning model

Matthew G. Stout; Linton G. Salmon; George L. Rudolph; Tony R. Martinez

This paper presents a VLSI implementation of the priority adaptive self-organizing concurrent system (PASOCS) learning model that is built using a multichip module (MCM) substrate. Many current hardware implementations of neural network learning models are direct implementations of classical neural network structures: a large number of simple computing nodes densely connected by weighted links. PASOCS is one of a class of ASOCS (adaptive self-organizing concurrent system) connectionist models whose overall goal is the same as that of classical neural network models, but whose functional mechanisms differ significantly. This model has potential application in areas such as pattern recognition, robotics, logical inference, and dynamic control.


ieee multi chip module conference | 1994

A multi-chip module implementation of a neural network

Matthew G. Stout; Linton G. Salmon; George L. Rudolph; Tony R. Martinez

The requirement for dense interconnect in artificial neural network systems has led researchers to seek high-density interconnect technologies. This paper reports an implementation using multi-chip modules (MCMs) as the interconnect medium. The specific system described is a self-organizing, parallel, and dynamic learning model which requires a dense interconnect technology for effective implementation; this requirement is fulfilled by exploiting MCM technology. The ideas presented in this paper regarding an MCM implementation of artificial neural networks are versatile and can be adapted to apply to other neural network and connectionist models.


International Journal on Artificial Intelligence Tools | 2014

Finding the Real Differences Between Learning Algorithms

George L. Rudolph; Tony R. Martinez

In the process of selecting a machine learning algorithm to solve a problem, questions like the following commonly arise: (1) Are some algorithms basically the same, or are they fundamentally different? (2) How different? (3) How do we measure that difference? (4) If we want to combine algorithms, which algorithms and combinators should be tried? This research proposes COD (Classifier Output Difference) distance as a diversity metric. COD separates difference from accuracy: rather than comparing accuracies, it treats differences in output behavior as the basis for comparison. The paper extends earlier work on COD by giving a basic comparison to other diversity metrics, and by giving an example of using COD data as a predictive model from which to select algorithms for an ensemble. COD may fill a niche in metalearning as a predictive aid to selecting algorithms for ensembles and hybrid systems, providing a simple, straightforward, computationally reasonable alternative to other approaches.
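The core idea behind COD — measuring how often two classifiers' outputs differ, independent of accuracy — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the prediction vectors are invented.

```python
# Hypothetical sketch of the Classifier Output Difference (COD) distance:
# the fraction of test instances on which two classifiers disagree,
# regardless of whether either prediction is correct.

def cod_distance(preds_a, preds_b):
    """COD between two classifiers, given their predictions on the
    same test set: the proportion of instances where outputs differ."""
    if len(preds_a) != len(preds_b):
        raise ValueError("prediction lists must be the same length")
    disagreements = sum(1 for a, b in zip(preds_a, preds_b) if a != b)
    return disagreements / len(preds_a)

# Two classifiers can have identical accuracy yet nonzero COD,
# because they err on different instances.
truth   = [0, 0, 1, 1, 1, 0, 1, 0]
model_a = [0, 0, 1, 1, 0, 0, 1, 0]   # one error (index 4)
model_b = [0, 1, 1, 1, 1, 0, 1, 0]   # one error (index 1)

print(cod_distance(model_a, model_b))  # 0.25: they disagree on 2 of 8 instances
```

Both models are 87.5% accurate, yet their COD is 0.25 — exactly the separation of difference from accuracy that the abstract describes.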


computational intelligence | 2012

AUTOMATIC ALGORITHM DEVELOPMENT USING NEW REINFORCEMENT PROGRAMMING TECHNIQUES: Reinforcement Programming

Spencer K. White; Tony R. Martinez; George L. Rudolph

Reinforcement Programming (RP) is a new approach to automatically generating algorithms that uses reinforcement learning techniques. This paper introduces the RP approach and demonstrates its use to generate a generalized, in‐place, iterative sort algorithm. The RP approach improves on earlier results that use genetic programming (GP). The resulting algorithm is a novel algorithm that is more efficient than comparable sorting routines. RP learns the sort in fewer iterations than GP and with fewer resources. Experiments establish interesting empirical bounds on learning the sort algorithm: A list of size 4 is sufficient to learn the generalized sort algorithm. The training set only requires one element and learning took less than 200,000 iterations. Additionally RP was used to generate three binary addition algorithms: a full adder, a binary incrementer, and a binary adder.
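The reward-driven discovery of a sorting procedure can be illustrated in miniature with plain tabular Q-learning. This is a hypothetical sketch, not the paper's RP encoding: here states are permutations of a size-4 list and actions are adjacent swaps, with a small step penalty and a reward for reaching sorted order.

```python
import random
from itertools import permutations

# Hypothetical miniature of learning-to-sort via reinforcement learning.
# The paper's RP model uses a different, generalized state/action encoding;
# this sketch only shows reward-driven discovery of a sorting policy.

N = 4                          # list size (the paper reports size 4 suffices)
ACTIONS = list(range(N - 1))   # action i swaps positions i and i+1
SORTED = tuple(range(N))
Q = {}                         # Q[(state, action)] -> estimated value

def step(state, action):
    s = list(state)
    s[action], s[action + 1] = s[action + 1], s[action]
    s = tuple(s)
    done = s == SORTED
    return s, (1.0 if done else -0.01), done

random.seed(0)
alpha, gamma, eps = 0.5, 0.9, 0.2
starts = list(permutations(range(N)))
for episode in range(3000):              # cycle through every start state
    state = starts[episode % len(starts)]
    for _ in range(20):
        if state == SORTED:
            break
        if random.random() < eps:        # epsilon-greedy exploration
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
        nxt, reward, done = step(state, action)
        best_next = 0.0 if done else max(Q.get((nxt, a), 0.0) for a in ACTIONS)
        old = Q.get((state, action), 0.0)
        Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
        state = nxt

def greedy_sort(state, max_steps=20):
    """Apply the learned greedy policy until the list is sorted."""
    for _ in range(max_steps):
        if state == SORTED:
            return state
        action = max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
        state, _, _ = step(state, action)
    return state

print(greedy_sort((3, 1, 0, 2)))
```

After training, the greedy policy sorts every permutation of the size-4 list — a toy analogue of the empirical bound reported in the abstract.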




international symposium on neural networks | 2011

On the structure of algorithm spaces

Adam H. Peterson; Tony R. Martinez; George L. Rudolph

Many learning algorithms have been developed to solve various problems. Machine learning practitioners must use their knowledge of the merits of the algorithms they know to decide which to use for each task. This process often raises questions such as: (1) If performance is poor after trying certain algorithms, which should be tried next? (2) Are some learning algorithms the same in terms of actual task classification? (3) Which algorithms are most different from each other? (4) How different? (5) Which algorithms should be tried for a particular problem? This research uses the COD (Classifier Output Difference) distance metric for measuring how similar or different learning algorithms are. The COD quantifies the difference in output behavior between pairs of learning algorithms. We construct a distance matrix from the individual COD values, and use the matrix to show the spectrum of differences among families of learning algorithms. Results show that individual algorithms tend to cluster along family and functional lines. Our focus, however, is on the structure of relationships among algorithm families in the space of algorithms, rather than on individual algorithms. A number of visualizations illustrate these results. The uniform numerical representation of COD data lends itself to human visualization techniques.
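Constructing the COD distance matrix described above can be sketched with a few invented prediction vectors — real use would run many learning algorithms over shared test sets, but the matrix-building step is the same. All names and values here are illustrative.

```python
# Hypothetical sketch: building a pairwise COD distance matrix from the
# outputs of several classifiers on a shared test set. The prediction
# vectors are invented; in the paper the matrix is computed over real
# learning algorithms and then visualized to reveal family clusters.

outputs = {
    "tree_a": [0, 1, 1, 0, 1, 0, 1, 1],
    "tree_b": [0, 1, 1, 0, 1, 0, 0, 1],  # same family: small difference
    "knn":    [1, 1, 0, 0, 1, 1, 1, 0],  # different family: large difference
}

def cod(a, b):
    """Fraction of instances on which two output vectors disagree."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

names = sorted(outputs)
matrix = {(p, q): cod(outputs[p], outputs[q]) for p in names for q in names}

for p in names:
    print(p, [round(matrix[(p, q)], 3) for q in names])
```

Small within-family distances and large cross-family distances are what, at scale, make algorithms "cluster along family and functional lines" when the matrix is visualized.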


congress on evolutionary computation | 2010

Generating a novel sort algorithm using Reinforcement Programming

Spencer K. White; Tony R. Martinez; George L. Rudolph

Reinforcement Programming (RP) is a new approach to automatically generating algorithms that uses reinforcement learning techniques. This paper describes the RP approach and gives results of experiments using RP to generate a generalized, in-place, iterative sort algorithm. The RP approach improves on earlier results that use genetic programming (GP). The resulting algorithm is a novel sort that is more efficient than comparable sorting routines. RP learns the sort in fewer iterations than GP and with fewer resources. Results establish interesting empirical bounds on learning the sort algorithm: a list of size 4 is sufficient to learn the generalized sort algorithm, the training set requires only one element, and learning took fewer than 200,000 iterations. RP has also been used to generate three binary addition algorithms: a full adder, a binary incrementer, and a binary adder.


acm southeast regional conference | 2010

AD-NEMO: adaptive dynamic network expansion with mobile robots

George L. Rudolph; Shankar M. Banik; William B. Gilbert

Consider a situation in which a mobile wireless user needs connectivity to the Internet. One such situation arises on a battlefield or at a disaster recovery site, where it may not be feasible to set up a fixed network. An alternative solution is to send out a team of mobile robots to establish and maintain the connection between the user and an established network that is connected to the Internet. This includes the possibility of sending out new robots to join the team as needed, when the user moves beyond the range of the existing team or the signal becomes degraded for some reason. In this paper we propose a framework called AD-NEMO (Adaptive Dynamic Network Expansion with Mobile rObots), in which mobile robots create a dynamic and adaptive ad-hoc network to provide connectivity to the mobile user. We have developed a prototype with actual hardware to study the feasibility of our solution.
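The dispatch-on-demand idea can be sketched as a toy one-dimensional simulation: a new relay robot is sent out whenever the user moves beyond radio range of the last relay in the chain. The real framework handles 2-D movement, signal quality, and actual robot hardware; the names, range value, and positions below are invented for illustration.

```python
# Toy, hypothetical sketch of the AD-NEMO dispatch idea on a 1-D line:
# relay robots form a chain between a base station (at position 0) and a
# moving user, and a new robot is dispatched whenever the user moves
# beyond radio range of the last relay.

RANGE = 10.0   # assumed radio range of each node (invented value)

def relays_needed(user_pos, relays):
    """Return the relay chain, extended as needed to keep the user connected."""
    relays = list(relays)
    last = relays[-1] if relays else 0.0   # base station at position 0
    while user_pos - last > RANGE:
        last += RANGE                      # dispatch a robot at the edge of range
        relays.append(last)
    return relays

chain = []
for user_pos in [5.0, 12.0, 25.0, 31.0]:   # user walking away from the base
    chain = relays_needed(user_pos, chain)
    print(f"user at {user_pos}: relays at {chain}")
```

The chain grows incrementally — existing relays stay in place and only new robots are dispatched — which mirrors the "adaptive dynamic network expansion" in the framework's name.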


International Journal of Neural Systems | 1996

LIA: a location-independent transformation for ASOCS adaptive algorithm 2.

George L. Rudolph; Tony R. Martinez

Most Artificial Neural Networks (ANNs) have a fixed topology during learning, and often suffer from a number of shortcomings as a result. ANNs that use dynamic topologies have shown the ability to overcome many of these problems. Adaptive Self-Organizing Concurrent Systems (ASOCS) are a class of learning models with inherently dynamic topologies. This paper introduces Location-Independent Transformations (LITs) as a general strategy for implementing learning models that use dynamic topologies efficiently in parallel hardware. An LIT creates a set of location-independent nodes, where each node computes its part of the network output independently of other nodes, using local information. This type of transformation allows efficient support for adding and deleting nodes dynamically during learning. In particular, this paper presents the Location-Independent ASOCS (LIA) model as an LIT for ASOCS Adaptive Algorithm 2. The description of LIA gives formal definitions for LIA algorithms. Because LIA implements basic ASOCS mechanisms, these definitions provide a formal description of basic ASOCS mechanisms in general, in addition to LIA.
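The location-independence property — each node computing its contribution from purely local information, so nodes can be added or deleted without rewiring the rest — can be illustrated with a small invented sketch. This is not the actual LIA/ASOCS node semantics; the rule-matching scheme and tie-breaking below are assumptions made for illustration only.

```python
# Hypothetical miniature of location-independent nodes: each node holds only
# local state and decides on its own whether it contributes to the output,
# so the network can add or delete nodes during learning with no rewiring.

class Node:
    def __init__(self, pattern, output):
        self.pattern = pattern   # the input bits this node matches (local info)
        self.output = output

    def vote(self, inputs):
        """Each node fires independently, using only its own pattern."""
        matches = all(inputs.get(k) == v for k, v in self.pattern.items())
        return self.output if matches else None

class Network:
    def __init__(self):
        self.nodes = []

    def add(self, node):         # dynamic topology: add a node at any time...
        self.nodes.append(node)

    def remove(self, node):      # ...or delete one; other nodes are untouched
        self.nodes.remove(node)

    def compute(self, inputs, default=0):
        votes = [v for v in (n.vote(inputs) for n in self.nodes) if v is not None]
        return votes[-1] if votes else default  # most recently added match wins

net = Network()
net.add(Node({"a": 1}, 1))
net.add(Node({"a": 1, "b": 0}, 0))    # a more specific rule, added later
print(net.compute({"a": 1, "b": 0}))  # 0: the later, more specific node wins
print(net.compute({"a": 1, "b": 1}))  # 1: only the first node matches
```

Because no node references another node's position or state, deleting the second node restores the first rule's behavior immediately — the kind of efficient dynamic add/delete the abstract describes.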

Collaboration


Dive into George L. Rudolph's collaborations.

Top Co-Authors

Darren A. Narayan

Rochester Institute of Technology
