
Publication


Featured research published by Simon D. Levy.


International Symposium on Neural Networks | 2000

RAAM for infinite context-free languages

Ofer Melnik; Simon D. Levy; Jordan B. Pollack

With its ability to represent variable-sized trees in fixed-width patterns, recursive auto-associative memory (RAAM) is a bridge between connectionist and symbolic systems. In the past, due to limitations in our understanding, its development plateaued. By examining RAAM from a dynamical systems perspective we overcome most of the problems that previously plagued it. In fact, using a dynamical systems analysis we can now prove that RAAM is capable not only of generating parts of a context-free language (aⁿbⁿ) but of expressing the whole language.
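To make the mechanism concrete, here is a minimal sketch (our illustration, not the paper's implementation) of the RAAM encode/decode cycle in Python/NumPy: a compressor maps two fixed-width child vectors to a single parent vector of the same width, and a reconstructor approximately inverts it, so a tree of any shape collapses recursively into one fixed-width pattern. The dimensionality and the untrained random weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10  # fixed width of every pattern, chosen arbitrarily for illustration

# Randomly initialised weights; a real RAAM trains these by
# auto-association (encode then decode, minimising reconstruction error).
W_enc = rng.normal(scale=0.5, size=(D, 2 * D))   # compressor: 2D -> D
W_dec = rng.normal(scale=0.5, size=(2 * D, D))   # reconstructor: D -> 2D

def encode(left, right):
    """Compress two fixed-width child vectors into one parent vector."""
    return np.tanh(W_enc @ np.concatenate([left, right]))

def decode(parent):
    """Reconstruct approximations of the two children from a parent."""
    out = np.tanh(W_dec @ parent)
    return out[:D], out[D:]

# A variable-sized tree such as ((a b) c) becomes one D-dimensional vector:
a, b, c = (rng.normal(size=D) for _ in range(3))
tree = encode(encode(a, b), c)
print(tree.shape)  # (10,) -- same width no matter how deep the tree
```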


Artificial General Intelligence | 2014

Bracketing the Beetle: How Wittgenstein’s Understanding of Language Can Guide Our Practice in AGI and Cognitive Science

Simon D. Levy; Charles Lowney; William Meroney; Ross W. Gayler

We advocate for a novel connectionist modeling framework as an answer to a set of challenges to AGI and cognitive science put forth by classical formal systems approaches. We show how this framework, which we call Vector Symbolic Architectures, or VSAs, is also the kind of model of mental activity that we arrive at by taking Ludwig Wittgenstein’s critiques of the philosophy of mind and language seriously. We conclude by describing how VSA and related architectures provide a compelling solution to three central problems raised by Wittgenstein in the Philosophical Investigations regarding rule-following, aspect-seeing, and the development of a “private” language.
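For readers unfamiliar with VSAs, the sketch below illustrates the core operations the paper builds on, in the bipolar multiply-for-binding, add-for-superposition style associated with Gayler's work; the role and filler names are hypothetical and the dimensionality is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000  # high dimensionality makes random vectors nearly orthogonal

def vec():
    """Random bipolar (+1/-1) vector standing for a symbol or role."""
    return rng.choice([-1.0, 1.0], size=D)

def bind(x, y):      # elementwise multiply: binds a role to a filler
    return x * y     # self-inverse: bind(bind(x, y), y) == x

def bundle(*xs):     # elementwise sum + sign: superposes bindings
    return np.sign(np.sum(xs, axis=0))

def sim(x, y):       # normalised dot product for cleanup / comparison
    return float(x @ y) / D

# "The dog chased the cat" as role/filler bindings (names are illustrative):
AGENT, PATIENT, VERB = vec(), vec(), vec()
dog, cat, chase = vec(), vec(), vec()
sentence = bundle(bind(AGENT, dog), bind(VERB, chase), bind(PATIENT, cat))

# Unbinding the AGENT role recovers something close to `dog`:
probe = bind(sentence, AGENT)
print(sim(probe, dog), sim(probe, cat))  # roughly 0.5 vs roughly 0.0
```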


Biologically Inspired Cognitive Architectures | 2010

Explanatory Aspirations and the Scandal of Cognitive Neuroscience

Ross W. Gayler; Simon D. Levy; Rens Bod

In this position paper we argue that BICA must simultaneously be compatible with the explanation of human cognition and support the human design of artificial cognitive systems. Most cognitive neuroscience models fail to provide a basis for implementation because they neglect necessary levels of functional organisation, jumping directly from physical phenomena to cognitive behaviour. Of those models that do attempt to include the intervening levels, most either fail to implement the required cognitive functionality or do not scale adequately. We argue that these problems of functionality and scaling arise from identifying computational entities directly with physical resources such as neurons and synapses, and that they can be avoided by introducing appropriate virtual machines. We propose a tool stack that introduces such virtual machines and supports the design of cognitive architectures, simplifying the design task through vertical modularity.
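As a loose illustration of the vertical modularity being proposed (the interfaces below are our assumption, not the authors' actual tool stack), cognitive-level code can be written against an abstract virtual machine whose physical realisation is swappable:

```python
from abc import ABC, abstractmethod
import numpy as np

class VectorVM(ABC):
    """Virtual machine layer: the cognitive level programs against this
    interface and never touches neurons or synapses directly."""
    @abstractmethod
    def symbol(self, name: str) -> np.ndarray: ...
    @abstractmethod
    def bind(self, x: np.ndarray, y: np.ndarray) -> np.ndarray: ...
    @abstractmethod
    def bundle(self, xs: list) -> np.ndarray: ...

class NumpyVM(VectorVM):
    """One possible realisation; a spiking-neuron or hardware realisation
    could replace it without changing the cognitive layer above."""
    def __init__(self, dim: int = 10_000, seed: int = 0):
        self.rng, self.dim, self._table = np.random.default_rng(seed), dim, {}
    def symbol(self, name):
        if name not in self._table:
            self._table[name] = self.rng.choice([-1.0, 1.0], size=self.dim)
        return self._table[name]
    def bind(self, x, y):
        return x * y
    def bundle(self, xs):
        return np.sign(np.sum(xs, axis=0))

def encode_fact(vm: VectorVM, role_fillers: dict) -> np.ndarray:
    """Cognitive-level code: expressed purely in VM operations."""
    return vm.bundle([vm.bind(vm.symbol(r), vm.symbol(f))
                      for r, f in role_fillers.items()])

fact = encode_fact(NumpyVM(), {"agent": "dog", "verb": "chase", "patient": "cat"})
```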


EELC'06: Proceedings of the Third International Conference on Emergence and Evolution of Linguistic Communication: Symbol Grounding and Beyond | 2006

Evolving distributed representations for language with self-organizing maps

Simon D. Levy; Simon Kirby

We present a neural-competitive learning model of language evolution in which several symbol sequences compete to signify a given propositional meaning. Both symbol sequences and propositional meanings are represented by high-dimensional vectors of real numbers. A neural network learns to map between the distributed representations of the symbol sequences and the distributed representations of the propositions. Unlike previous neural network models of language evolution, our model uses a Kohonen Self-Organizing Map with unsupervised learning, thereby avoiding the computational slowdown and biological implausibility of back-propagation networks and the lack of scalability associated with Hebbian-learning networks. After several evolutionary generations, the network develops systematically regular mappings between meanings and sequences, of the sort traditionally associated with symbolic grammars. Because of the potential of neural-like representations for addressing the symbol-grounding problem, this sort of model holds a good deal of promise as a new explanatory mechanism for both language evolution and acquisition.
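As a hedged illustration of the unsupervised learning rule the model relies on, here is a minimal Kohonen SOM update in NumPy; the map size, input dimensionality, and training data are placeholders, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)
GRID, DIM = 8, 20            # 8x8 map; 20-dim input vectors (illustrative)
weights = rng.normal(size=(GRID, GRID, DIM))
coords = np.stack(np.meshgrid(np.arange(GRID), np.arange(GRID),
                              indexing="ij"), axis=-1)

def train_step(x, lr=0.1, radius=2.0):
    """One Kohonen update: find the best-matching unit, then pull its
    neighbourhood toward the input (no error back-propagation)."""
    global weights
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)   # grid distances
    h = np.exp(-d2 / (2 * radius ** 2))[..., None]        # Gaussian kernel
    weights += lr * h * (x - weights)

# e.g. distributed representations of symbol sequences as inputs:
for _ in range(1000):
    train_step(rng.normal(size=DIM))
```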


International Symposium on Neural Networks | 2001

Logical computation on a fractal neural substrate

Simon D. Levy; Jordan B. Pollack

Attempts to use neural networks to model recursive symbolic processes like logic have met with some success, but have faced serious hurdles caused by the limitations of standard connectionist coding schemes. As a contribution to this effort, this paper presents recent work in infinite recursive auto-associative memory, a new connectionist unification model based on a fusion of recurrent neural networks with fractal geometry. Using a logical programming language as our modeling domain, we show how this approach solves many of the problems faced by earlier connectionist models, supporting arbitrarily large sets of logical expressions.
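The role fractal geometry plays can be suggested, in a deliberately simplified form that is not the paper's infinite-RAAM construction, by encoding arbitrarily long symbol strings as single points of the unit interval using one contractive map per symbol:

```python
# Illustrative only: encode strings over {a, b} as points in [0, 1)
# using two contractive maps, one per symbol. Arbitrarily long strings
# map to distinct points of a fixed-width (here: one float) code, and
# decoding inverts the contractions -- a toy version of the fractal
# fixed-width coding idea.

def encode(s: str) -> float:
    x = 0.5                                        # arbitrary start point
    for sym in reversed(s):
        x = x / 2 if sym == "a" else x / 2 + 0.5   # one contraction per symbol
    return x

def decode(x: float, length: int) -> str:
    out = []
    for _ in range(length):
        if x < 0.5:
            out.append("a"); x = 2 * x             # invert the "a" map
        else:
            out.append("b"); x = 2 * x - 1         # invert the "b" map
    return "".join(out)

code = encode("abba")
print(code, decode(code, 4))  # one number encodes the whole string
```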


Connection Science | 2011

Compositional connectionism in cognitive science II: the localist/distributed dimension

Ross W. Gayler; Simon D. Levy

In October 2004, approximately 30 connectionist and nonconnectionist researchers gathered at an AAAI symposium to discuss and debate a topic of central concern in artificial intelligence and cognitive science: the nature of compositionality. The symposium offered participants an opportunity to confront the persistent belief among traditional cognitive scientists that connectionist models are, in principle, incapable of systematically composing and manipulating the elements of mental structure (words, concepts, semantic roles, etc.). Participants met this challenge, with several connectionist models serving as proofs of concept that connectionism is indeed capable of building and manipulating compositional cognitive representations (Levy and Gayler 2004).


AI Magazine | 2005

Reports on the 2004 AAAI fall symposia

Nicholas L. Cassimatis; Sean Luke; Simon D. Levy; Ross W. Gayler; Pentti Kanerva; Chris Eliasmith; Timothy W. Bickmore; Alan C. Schultz; Randall Davis; James A. Landay; Robert C. Miller; Eric Saund; Thomas F. Stahovich; Michael L. Littman; Satinder P. Singh; Shlomo Argamon; Shlomo Dubnov

The American Association for Artificial Intelligence was pleased to present the AAAI 2006 Fall Symposium Series, held Friday through Sunday, October 13-15, at the Hyatt Regency Crystal City in Washington, DC. Seven symposia were held. The titles were (1) Aurally Informed Performance: Integrating Machine Listening and Auditory Presentation in Robotic Systems; (2) Capturing and Using Patterns for Evidence Detection; (3) Developmental Systems; (4) Integrating Reasoning into Everyday Applications; (5) Interaction and Emergent Phenomena in Societies of Agents; (6) Semantic Web for Collaborative Knowledge Acquisition; and (7) Spacecraft Autonomy: Using AI to Expand Human Space Exploration.


Artificial General Intelligence | 2008

Vector Symbolic Architectures: A New Building Material for Artificial General Intelligence

Simon D. Levy; Ross W. Gayler


Archive | 2004

Compositional Connectionism in Cognitive Science

Simon D. Levy; Ross W. Gayler


National Conference on Artificial Intelligence | 2013

Learning behavior hierarchies via high-dimensional sensor projection

Simon D. Levy; Suraj Bajracharya; Ross W. Gayler

Collaboration


Dive into Simon D. Levy's collaborations.

Top Co-Authors

C. Adam Overholtzer, Washington and Lee University
Alan C. Schultz, United States Naval Research Laboratory
Charles Lowney, Washington and Lee University
Elizabeth E. Davis, Washington and Lee University