
Publication


Featured research published by Jordan B. Pollack.


IEEE Transactions on Neural Networks | 1994

An evolutionary algorithm that constructs recurrent neural networks

Peter J. Angeline; Gregory M. Saunders; Jordan B. Pollack

Standard methods for simultaneously inducing the structure and weights of recurrent neural networks limit every task to an assumed class of architectures. Such a simplification is necessary since the interactions between network structure and function are not well understood. Evolutionary computations, which include genetic algorithms and evolutionary programming, are population-based search methods that have shown promise in many similarly complex tasks. This paper argues that genetic algorithms are inappropriate for network acquisition and describes an evolutionary program, called GNARL, that simultaneously acquires both the structure and weights for recurrent networks. GNARL's empirical acquisition method allows for the emergence of complex behaviors and topologies that are potentially excluded by the artificial architectural constraints imposed in standard network induction methods.


Artificial Intelligence | 1990

Recursive distributed representations

Jordan B. Pollack

A longstanding difficulty for connectionist modeling has been how to represent variable-sized recursive data structures, such as trees and lists, in fixed-width patterns. This paper presents a connectionist architecture which automatically develops compact distributed representations for such compositional structures, as well as efficient accessing mechanisms for them. Patterns which stand for the internal nodes of fixed-valence trees are devised through the recursive use of backpropagation on three-layer auto-associative encoder networks. The resulting representations are novel, in that they combine apparently immiscible aspects of features, pointers, and symbol structures. They form a bridge between the data structures necessary for high-level cognitive tasks and the associative, pattern recognition machinery provided by neural networks.
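The recursive encoder idea above can be sketched directly: one autoencoder compresses two child patterns into a single fixed-width parent pattern, and is reused at every internal node. The dimensions, leaf patterns, and training pairs below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

# Minimal RAAM-style sketch for binary trees: a 2n -> n -> 2n autoencoder
# whose hidden layer becomes the fixed-width code for an internal node.
np.random.seed(0)
n = 8                                   # assumed representation width
We = 0.1 * np.random.randn(n, 2 * n)    # encoder weights
Wd = 0.1 * np.random.randn(2 * n, n)    # decoder weights

def encode(left, right):
    return np.tanh(We @ np.concatenate([left, right]))

# Random leaf patterns standing in for terminal symbols A, B, C.
A, B, C = (np.random.randn(n) for _ in range(3))

def loss_and_grads(pairs):
    dWe, dWd, loss = np.zeros_like(We), np.zeros_like(Wd), 0.0
    for l, r in pairs:
        x = np.concatenate([l, r])
        h = np.tanh(We @ x)
        y = Wd @ h
        e = y - x                       # auto-association error
        loss += 0.5 * e @ e
        dWd += np.outer(e, h)
        dh = (Wd.T @ e) * (1 - h * h)   # backprop through tanh
        dWe += np.outer(dh, x)
    return loss, dWe, dWd

pairs = [(A, B), (B, C)]                # assumed training pairs
first_loss = loss_and_grads(pairs)[0]
for _ in range(200):                    # plain gradient descent
    _, dWe, dWd = loss_and_grads(pairs)
    We -= 0.05 * dWe
    Wd -= 0.05 * dWd
final_loss = loss_and_grads(pairs)[0]

# Recursive use: the tree ((A B) C) is reduced to one fixed-width vector.
rep = encode(encode(A, B), C)
assert rep.shape == (n,)
```

The key property is in the last line: however deep the tree, its representation stays n-wide, so the same encoder serves every level.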


Nature | 2000

Automatic design and manufacture of robotic lifeforms

Hod Lipson; Jordan B. Pollack

Biological life is in control of its own means of reproduction, which generally involves complex, autocatalysing chemical reactions. But this autonomy of design and manufacture has not yet been realized artificially. Robots are still laboriously designed and constructed by teams of human engineers, usually at considerable expense. Few robots are available because these costs must be absorbed through mass production, which is justified only for toys, weapons and industrial systems such as automatic teller machines. Here we report the results of a combined computational and experimental approach in which simple electromechanical systems are evolved through simulations from basic building blocks (bars, actuators and artificial neurons); the ‘fittest’ machines (defined by their locomotive ability) are then fabricated robotically using rapid manufacturing technology. We thus achieve autonomy of design and construction using evolution in a ‘limited universe’ physical simulation coupled to automatic fabrication.


Cognitive Science | 1985

Massively Parallel Parsing: A Strongly Interactive Model of Natural Language Interpretation

David L. Waltz; Jordan B. Pollack

This is a description of research in developing a natural language processing system with modular knowledge sources but strongly interactive processing. The system offers insights into a variety of linguistic phenomena and allows easy testing of a variety of hypotheses. Language interpretation takes place on an activation network which is dynamically created from input, recent context, and long-term knowledge. Initially ambiguous and unstable, the network settles on a single interpretation, using a parallel, analog relaxation process. We also describe a parallel model for the representation of context and of the priming of concepts. Examples illustrating contextual influence on meaning interpretation and “semantic garden path” sentence processing, among other issues, are included.
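The relaxation process described above can be illustrated on a tiny assumed network (not the paper's actual lexicon): two senses of an ambiguous word inhibit each other, a clamped context node primes one of them, and iterating the analog update settles the network on a single interpretation.

```python
import numpy as np

# Toy spreading-activation network: "bank" is ambiguous between a money
# sense and a river sense; a finance context node primes the money sense.
nodes = ["context:finance", "sense:bank_money", "sense:bank_river"]
W = np.array([
    [0.0,  0.0,  0.0],   # context receives no input here
    [0.8,  0.0, -0.9],   # money sense: excited by context, inhibited by rival
    [0.0, -0.9,  0.0],   # river sense: inhibited by its rival only
])
a = np.array([1.0, 0.5, 0.5])           # both senses start equally active
for _ in range(50):                      # parallel, analog relaxation
    a = np.clip(a + 0.1 * (W @ a - 0.2 * a), 0.0, 1.0)
    a[0] = 1.0                           # context is clamped external input
winner = nodes[int(np.argmax(a[1:])) + 1]
print(winner)
```

With this wiring the money sense climbs toward full activation while the river sense decays to zero, which is the "settling on a single interpretation" the abstract describes; swapping the context link would flip the outcome.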


Machine Learning | 1991

The Induction of Dynamical Recognizers

Jordan B. Pollack

A higher order recurrent neural network architecture learns to recognize and generate languages after being “trained” on categorized exemplars. Studying these networks from the perspective of dynamical systems yields two interesting discoveries: First, a longitudinal examination of the learning process illustrates a new form of mechanical inference: Induction by phase transition. A small weight adjustment causes a “bifurcation” in the limit behavior of the network. This phase transition corresponds to the onset of the network's capacity for generalizing to arbitrary-length strings. Second, a study of the automata resulting from the acquisition of previously published training sets indicates that while the architecture is not guaranteed to find a minimal finite automaton consistent with the given exemplars, which is an NP-Hard problem, the architecture does appear capable of generating non-regular languages by exploiting fractal and chaotic dynamics. I end the paper with a hypothesis relating linguistic generative capacity to the behavioral regimes of non-linear dynamical systems.
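The forward dynamics of such a higher-order (second-order) recurrent network can be sketched as an iterated map over a string; the sizes, random weights, and accept criterion below are illustrative assumptions (an untrained recognizer), not the trained models studied in the paper.

```python
import numpy as np

np.random.seed(1)
n_state, n_input = 4, 2          # assumed sizes; inputs are one-hot symbols
W = 0.5 * np.random.randn(n_state, n_state, n_input)  # third-order weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def run(string, accept_unit=0):
    """Iterate the second-order map z' = g(sum_ij W[k,i,j] z[i] y[j])."""
    z = np.full(n_state, 0.5)                 # fixed initial state
    for ch in string:
        y = np.eye(n_input)[int(ch)]          # one-hot encode '0'/'1'
        # einsum contracts state index i and input index j for each unit k
        z = sigmoid(np.einsum('kij,i,j->k', W, z, y))
    return z[accept_unit] > 0.5               # threshold one unit: accept/reject

print(run("0110"))
```

Viewing `run` as a dynamical system, each input symbol selects one slice `W[:, :, j]` of the weight tensor, so a string composes a sequence of nonlinear maps; the paper's "induction by phase transition" concerns how training reshapes the limit behavior of exactly this iteration.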


Robotics and Autonomous Systems | 2002

Embodied Evolution: Distributing an evolutionary algorithm in a population of robots

Richard A. Watson; Sevan G. Ficici; Jordan B. Pollack

We introduce Embodied Evolution (EE) as a new methodology for evolutionary robotics (ER). EE uses a population of physical robots that autonomously reproduce with one another while situated in their task environment. This constitutes a fully distributed evolutionary algorithm embodied in physical robots. Several issues identified by researchers in the evolutionary robotics community as problematic for the development of ER are alleviated by the use of a large number of robots being evaluated in parallel. Particularly, EE avoids the pitfalls of the simulate-and-transfer method and allows the speed-up of evaluation time by utilizing parallelism. The more novel features of EE are that the evolutionary algorithm is entirely decentralized, which makes it inherently scalable to large numbers of robots, and that it uses many robots in a shared task environment, which makes it an interesting platform for future work in collective robotics and Artificial Life. We have built a population of eight robots and successfully implemented the first example of Embodied Evolution by designing a fully decentralized, asynchronous evolutionary algorithm. Controllers evolved by EE outperform a hand-designed controller in a simple application. We introduce our approach and its motivations, detail our implementation and initial results, and discuss the advantages and limitations of EE.


Artificial Life | 2002

Creating high-level components with a generative representation for body-brain evolution

Gregory S. Hornby; Jordan B. Pollack

One of the main limitations of scalability in body-brain evolution systems is the representation chosen for encoding creatures. This paper defines a class of representations called generative representations, which are identified by their ability to reuse elements of the genotype in the translation to the phenotype. This paper presents an example of a generative representation for the concurrent evolution of the morphology and neural controller of simulated robots, and also introduces GENRE, an evolutionary system for evolving designs using this representation. Applying GENRE to the task of evolving robots for locomotion and comparing it against a non-generative (direct) representation shows that the generative representation system rapidly produces robots with significantly greater fitness. Analyzing these results shows that the generative representation system achieves better performance by capturing useful bias from the design space and by allowing viable large scale mutations in the phenotype. Generative representations thereby enable the encapsulation, coordination, and reuse of assemblies of parts.
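The defining property of a generative representation, reuse of genotype elements during translation to the phenotype, can be shown with a toy grammar. The rules below are a hypothetical illustration, not GENRE's actual encoding.

```python
# Toy generative representation: a compact genotype of rewrite rules is
# expanded into a phenotype, reusing the SEG rule for every segment.
rules = {
    "BODY":   ["SEG", "SEG", "SEG"],      # one rule reused three times
    "SEG":    ["bar", "joint", "NEURON"],
    "NEURON": ["neuron"],
}

def develop(symbol, depth=5):
    """Recursively rewrite non-terminals; lowercase symbols are parts."""
    if depth == 0 or symbol not in rules:
        return [symbol]
    out = []
    for s in rules[symbol]:
        out.extend(develop(s, depth - 1))
    return out

phenotype = develop("BODY")
print(phenotype)   # nine parts from a three-rule genotype
```

A single mutation to the reused `SEG` rule changes every segment of the phenotype at once, which is the kind of coordinated, large-scale variation the abstract credits for the generative representation's advantage over a direct encoding.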


Machine Learning | 1998

Co-Evolution in the Successful Learning of Backgammon Strategy

Jordan B. Pollack; Alan D. Blair

Following Tesauro's work on TD-Gammon, we used a 4,000-parameter feedforward neural network to develop a competitive backgammon evaluation function. Play proceeds by a roll of the dice, application of the network to all legal moves, and selection of the position with the highest evaluation. However, no backpropagation, reinforcement or temporal difference learning methods were employed. Instead we apply simple hillclimbing in a relative fitness environment. We start with an initial champion of all zero weights and proceed simply by playing the current champion network against a slightly mutated challenger and changing weights if the challenger wins. Surprisingly, this worked rather well. We investigate how the peculiar dynamics of this domain enabled a previously discarded weak method to succeed, by preventing suboptimal equilibria in a “meta-game” of self-learning.
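The learning rule itself is simple enough to sketch. Backgammon is out of scope here, so this toy replaces it with a hypothetical noisy "game" in which the player whose weight vector scores higher on a hidden objective tends to win; the update rule, mutate the champion, play a match, keep the challenger only if it wins, mirrors the abstract.

```python
import random

random.seed(42)
N = 20
target = [random.uniform(-1, 1) for _ in range(N)]   # hidden, assumed objective

def skill(w):
    # Stand-in for playing strength: closeness to the hidden target.
    return -sum((wi - ti) ** 2 for wi, ti in zip(w, target))

def challenger_wins(champ, chall, games=7):
    wins = 0
    for _ in range(games):                # noisy match, like dice rolls
        noise = random.gauss(0, 0.5)
        wins += skill(chall) + noise > skill(champ)
    return wins > games // 2

champion = [0.0] * N                      # all-zero initial champion, as above
for _ in range(2000):
    challenger = [w + random.gauss(0, 0.05) for w in champion]
    if challenger_wins(champion, challenger):
        champion = challenger             # weights change only on a win
```

There is no gradient anywhere; selection pressure comes entirely from the relative comparison, and the match noise plays the role the dice play in backgammon, occasionally letting weaker challengers through.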


Mind as motion | 1996

The induction of dynamical recognizers

Jordan B. Pollack

A higher order recurrent neural network architecture learns to recognize and generate languages after being “trained” on categorized exemplars. Studying these networks from the perspective of dynamical systems yields two interesting discoveries: First, a longitudinal examination of the learning process illustrates a new form of mechanical inference: Induction by phase transition. A small weight adjustment causes a “bifurcation” in the limit behavior of the network. This phase transition corresponds to the onset of the network's capacity for generalizing to arbitrary-length strings. Second, a study of the automata resulting from the acquisition of previously published training sets indicates that while the architecture is not guaranteed to find a minimal finite automaton consistent with the given exemplars, which is an NP-Hard problem, the architecture does appear capable of generating non-regular languages by exploiting fractal and chaotic dynamics. I end the paper with a hypothesis relating linguistic generative capacity to the behavioral regimes of non-linear dynamical systems.


Parallel Problem Solving from Nature | 1998

Modeling Building-Block Interdependency

Richard A. Watson; Gregory S. Hornby; Jordan B. Pollack

The Building-Block Hypothesis appeals to the notion of problem decomposition and the assembly of solutions from sub-solutions. Accordingly, there have been many varieties of GA test problems with a structure based on building-blocks. Many of these problems use deceptive fitness functions to model interdependency between the bits within a block. However, very few have any model of interdependency between building-blocks; those that do are not consistent in the type of interaction used intra-block and inter-block. This paper discusses the inadequacies of the various test problems in the literature and clarifies the concept of building-block interdependency. We formulate a principled model of hierarchical interdependency that can be applied through many levels in a consistent manner and introduce Hierarchical If-and-only-if (H-IFF) as a canonical example. We present some empirical results of GAs on H-IFF showing that if population diversity is maintained and linkage is tight then the GA is able to identify and manipulate building-blocks over many levels of assembly, as the Building-Block Hypothesis suggests.
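The H-IFF function is compact enough to state directly: a block scores its length when its bits are all equal, and every block also inherits the scores of its two halves, so the same interdependency repeats at every level. This sketch assumes bit strings whose length is a power of two.

```python
def hiff(bits):
    """Hierarchical If-and-only-if fitness of a bit string like '1100'."""
    if len(bits) == 1:
        return 1
    half = len(bits) // 2
    bonus = len(bits) if len(set(bits)) == 1 else 0   # all-equal block bonus
    return bonus + hiff(bits[:half]) + hiff(bits[half:])

print(hiff("11111111"))   # 8 + 12 + 12 = 32: a global optimum (so is all-0s)
print(hiff("11110000"))   # two fully solved halves, but no top-level bonus
```

The second example shows the hierarchical trap: both halves are locally optimal building-blocks, yet combining them gains nothing at the top level unless one half is flipped wholesale, which is why diversity maintenance and tight linkage matter.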

Collaboration


Dive into Jordan B. Pollack's collaborations.

Top Co-Authors

Maja J. Matarić
University of Southern California

Pattie Maes
Massachusetts Institute of Technology