Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Paul Smolensky is active.

Publication


Featured research published by Paul Smolensky.


Behavioral and Brain Sciences | 1993

On the proper treatment of connectionism

Paul Smolensky

A set of hypotheses is formulated for a connectionist approach to cognitive modeling. These hypotheses are shown to be incompatible with the hypotheses underlying traditional cognitive models. The connectionist models considered are massively parallel numerical computational systems that are a kind of continuous dynamical system. The numerical variables in the system correspond semantically to fine-grained features below the level of the concepts consciously used to describe the task domain. The level of analysis is intermediate between those of symbolic cognitive models and neural models. The explanations of behavior provided are like those traditional in the physical sciences, unlike the explanations provided by symbolic models.
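
To make the "continuous dynamical system" notion concrete, here is a generic settling network in Python. The network size, weights, and tanh update rule are invented for this sketch and are not Smolensky's specific equations; the point is only that the state variables evolve continuously under numerical interactions.

```python
import numpy as np

# Generic continuous connectionist dynamics (illustrative only): unit
# activations evolve smoothly toward equilibrium under weighted numerical
# interactions. The size, weights, and tanh update are invented choices.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(8, 8))
W = (W + W.T) / 2                 # symmetric weights encourage settling

a = rng.normal(size=8)            # state: fine-grained subsymbolic features
dt = 0.05
for _ in range(200):
    a += dt * (-a + np.tanh(W @ a))   # leaky integration of net input

print(np.round(a, 3))             # the settled pattern of activity
```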


Artificial Intelligence | 1990

Tensor product variable binding and the representation of symbolic structures in connectionist systems

Paul Smolensky

A general method, the tensor product representation, is defined for the connectionist representation of value/variable bindings. The technique is a formalization of the idea that a set of value/variable pairs can be represented by accumulating activity in a collection of units each of which computes the product of a feature of a variable and a feature of its value. The method allows the fully distributed representation of bindings and symbolic structures. Fully and partially localized special cases of the tensor product representation reduce to existing cases of connectionist representations of structured data. The representation rests on a principled analysis of structure; it saturates gracefully as larger structures are represented; it permits recursive construction of complex representations from simpler ones; it respects the independence of the capacities to generate and maintain multiple bindings in parallel; it extends naturally to continuous structures and continuous representational patterns; it permits values to also serve as variables; and it enables analysis of the interference of symbolic structures stored in associative memories. It has also served as the basis for working connectionist models of high-level cognitive tasks.
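
A minimal NumPy sketch of the tensor product binding just described. The role (variable) and filler (value) vectors are invented examples; choosing orthonormal role vectors makes exact unbinding by inner product possible.

```python
import numpy as np

# Minimal sketch of tensor product variable binding. Role (variable) and
# filler (value) vectors are invented examples; orthonormal roles make
# exact unbinding by inner product possible.
r1 = np.array([1.0, 0.0])            # role vector for variable 1
r2 = np.array([0.0, 1.0])            # role vector for variable 2
f_cat = np.array([1.0, 0.0, 1.0])    # filler vector for value "cat"
f_dog = np.array([0.0, 1.0, 1.0])    # filler vector for value "dog"

# Bind each filler to its role with an outer product, then superpose:
# unit (i, j) accumulates filler_feature_i * role_feature_j, as in the
# abstract's description of accumulating activity over feature products.
T = np.outer(f_cat, r1) + np.outer(f_dog, r2)

# Unbind by projecting the superposition onto a role vector:
print(T @ r1)   # recovers f_cat: [1. 0. 1.]
print(T @ r2)   # recovers f_dog: [0. 1. 1.]
```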


Connection Science | 1989

Using Relevance to Reduce Network Size Automatically

Michael C. Mozer; Paul Smolensky

This paper proposes a means of using the knowledge in a network to determine the functionality or relevance of individual units, both for the purpose of understanding the network's behavior and imp...
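
The abstract is truncated above; to illustrate the general idea, here is a sketch that scores each hidden unit's relevance by ablation (the error increase when that unit is silenced). This is a simpler substitute for the derivative-based relevance estimator the paper develops; the data and weights below are random stand-ins.

```python
import numpy as np

# Sketch of relevance-based network reduction. The paper derives relevance
# from error derivatives; this ablation version (a substitution, not the
# paper's estimator) scores each hidden unit by how much the error grows
# when that unit is silenced.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

W1 = rng.normal(size=(4, 6))   # input-to-hidden weights (random stand-in)
W2 = rng.normal(size=6)        # hidden-to-output weights (random stand-in)

def error(mask):
    h = np.tanh(X @ W1) * mask           # mask zeroes out ablated units
    out = 1 / (1 + np.exp(-(h @ W2)))    # sigmoid output unit
    return np.mean((out - y) ** 2)

base = error(np.ones(6))
relevance = []
for i in range(6):
    mask = np.ones(6)
    mask[i] = 0.0
    relevance.append(error(mask) - base)   # error increase without unit i

least = int(np.argmin(relevance))
print(f"least relevant hidden unit: {least} ({relevance[least]:+.4f})")
```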


Artificial Intelligence Review | 1987

Connectionist AI, Symbolic AI, and the Brain

Paul Smolensky

Connectionist AI systems are large networks of extremely simple numerical processors, massively interconnected and running in parallel. There has been great progress in the connectionist approach, and while it is still unclear whether the approach will succeed, it is also unclear exactly what the implications for cognitive science would be if it did succeed. In this paper I present a view of the connectionist approach that implies that the level of analysis at which uniform formal principles of cognition can be found is the subsymbolic level, intermediate between the neural and symbolic levels. Notions such as logical inference, sequential firing of production rules, spreading activation between conceptual units, mental categories, and frames or schemata turn out to provide approximate descriptions of the coarse-grained behaviour of connectionist systems. The implication is that symbol-level structures provide only approximate accounts of cognition, useful for description but not necessarily for constructing detailed formal models.


Archive | 1998

When is less more? Faithfulness and minimal links in wh-chains

Géraldine Legendre; Paul Smolensky; Colin Wilson

… at Colorado; to Joseph Aoun, Luigi Burzio, Robert Frank, Jane Grimshaw, David Pesetsky and Alan Prince, for extremely helpful conversations; to audiences at Arizona, Brown, Cornell, Delaware, Georgetown, Hopkins, Maryland, UCLA, USC, and the MIT conference, for stimulating ideas and questions. For partial financial support, we gratefully acknowledge NSF grant BS-9209265, and then NSF grant IRI-9213894, both to Smolensky and Legendre; and the Center for Language and Speech Processing at Johns Hopkins.


Archive | 1995

The Learnability of Optimality Theory: An Algorithm and Some Basic Complexity Results

Bruce Tesar; Paul Smolensky

If Optimality Theory (Prince & Smolensky 1991, 1993) is correct, Universal Grammar provides a set of universal constraints which are highly general, inherently conflicting, and consequently rampantly violated in the surface forms of languages. A language’s grammar ranks the universal constraints in a dominance hierarchy, higher-ranked constraints taking absolute priority over lower-ranked constraints, so that violations of a constraint occur in well-formed structures when, and only when, they are necessary to prevent violation of higher-ranked constraints. Languages differ principally in how they rank the universal constraints in their language-specific dominance hierarchies.

The surface forms of a given language are structural descriptions of inputs which are optimal in the following sense: they satisfy the universal constraints, or, when these constraints are brought into conflict by an input, they satisfy the highest-ranked constraints possible. This notion of optimality is partly language-specific, since the ranking of constraints is language-particular, and partly universal, since the constraints which evaluate well-formedness are (at least to a considerable extent) universal. In many respects, ranking of universal constraints in Optimality Theory plays a role analogous to parameter-setting in principles-and-parameters theory.

Evidence in favor of this Optimality-Theoretic characterization of Universal Grammar is provided elsewhere; most work to date addresses phonology: see Prince & Smolensky 1993 (henceforth, ‘P&S’) and the several dozen works cited therein, notably McCarthy & Prince 1993; initial work addressing syntax includes Grimshaw 1993 and Legendre, Raymond & Smolensky 1993.

Here, we investigate the learnability of grammars in Optimality Theory. Under the assumption of innate knowledge of the universal constraints, the primary task of the learner is the determination of the dominance ranking of these constraints which is particular to the target language. We will present a simple and efficient algorithm for solving this problem, assuming a given set of hypothesized underlying forms. (Concerning the problem of acquiring underlying forms, see the discussion of ‘optimality in the lexicon’ in P&S 1993:§9.) The fact that surface forms are optimal means that every positive example entails a great number of implicit negative examples: for a given input, every candidate output other than the correct form is ill-formed. As a consequence, even a single positive example can greatly constrain the possible grammars for a target language, as we will see explicitly.

In §1 we present the relevant principles of Optimality Theory and discuss the special nature of the learning problem in that theory. Readers familiar with the theory may wish to proceed directly to §1.3. In §2 we present the first version of our learning algorithm, initially through a concrete example; we also consider its (low) computational complexity. Formal specification of the first version of the algorithm and proof of its correctness are taken up in the Appendix. In §3 we generalize the algorithm, identifying a more general core called Constraint Demotion (‘CD’) and then a family of CD algorithms which differ in how they apply this core to the acquisition data. We sketch a proof of the correctness and convergence of the CD algorithms, and of a bound on the number of examples needed to complete learning. In §4 we briefly consider the issue of ties in the ranking of constraints and the case of inconsistent data. Finally, we observe that the CD algorithm entails a Superset Principle for acquisition: as the learner refines the grammar, the set of well-formed structures shrinks.
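
A sketch of one Constraint Demotion step under the assumptions above, with the ranking represented as strata (highest-ranked first): every loser-preferring constraint ranked at or above the highest winner-preferring constraint is demoted to the stratum immediately below it. The constraint names and violation counts are invented, and this simplification omits the full algorithm's bookkeeping.

```python
# Sketch of one Constraint Demotion step. A ranking is a list of strata,
# highest-ranked first. Given one winner/loser pair's violation counts,
# every loser-preferring constraint ranked at or above the highest
# winner-preferring constraint is demoted just below it.

def constraint_demotion(strata, winner_viols, loser_viols):
    # After mark cancellation: a constraint prefers the winner (loser)
    # if the loser (winner) violates it strictly more.
    winner_pref = {c for c in winner_viols if loser_viols[c] > winner_viols[c]}
    loser_pref = {c for c in winner_viols if winner_viols[c] > loser_viols[c]}
    # Highest stratum containing a winner-preferring constraint:
    top = next(i for i, s in enumerate(strata) if set(s) & winner_pref)
    # Remove loser-preferring constraints ranked at or above that stratum...
    new = [[c for c in s if c not in loser_pref or i > top]
           for i, s in enumerate(strata)]
    demoted = [c for i, s in enumerate(strata)
               for c in s if c in loser_pref and i <= top]
    # ...and re-insert them in the stratum just below it.
    if demoted:
        if len(new) == top + 1:
            new.append([])
        new[top + 1].extend(demoted)
    return [s for s in new if s]

# Example: the winner violates *COMPLEX, the loser violates ONSET, so
# *COMPLEX must be demoted below ONSET.
strata = [["*COMPLEX"], ["ONSET"], ["FAITH"]]
print(constraint_demotion(
    strata,
    winner_viols={"*COMPLEX": 1, "ONSET": 0, "FAITH": 0},
    loser_viols={"*COMPLEX": 0, "ONSET": 1, "FAITH": 0},
))
# -> [['ONSET'], ['FAITH', '*COMPLEX']]
```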


Archive | 2002

Typological Consequences of Local Constraint Conjunction

Elliott Moreton; Paul Smolensky

Local conjunction is a mechanism in Optimality Theory for constructing complex constraints from simpler ones (Green 1993, Smolensky 1993). If C1 and C2 are constraints, and D is a representational domain type (e.g. segment, cluster, syllable, stem), then (C1 & C2)D, the local conjunction of C1 and C2 in D, is a constraint which is violated whenever there is a domain of type D in which both C1 and C2 are violated. It is used in situations where violations of C1 alone or of C2 alone do not eliminate a candidate, but violations of both constraints simultaneously do. A good illustration is the coda condition for German Final-Obstruent Devoicing, in which underlyingly voiced obstruents become voiceless in syllable codas (Ito & Mester in press):
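
A minimal sketch of the conjunction mechanism itself: if constraints are modeled as functions from a domain token to a violation count, then (C1 & C2)D is violated exactly when both conjuncts are violated within the same domain. The toy syllable coding and constraints below are invented and only loosely echo the German devoicing case.

```python
# Minimal sketch of local conjunction: constraints map a domain token to a
# violation count, and (C1 & C2)_D is violated exactly when both conjuncts
# are violated within the same domain. The syllable coding and constraints
# are toy inventions that only loosely echo the German devoicing case.

def conjoin(c1, c2):
    return lambda domain: int(c1(domain) > 0 and c2(domain) > 0)

# Toy constraints over a syllable coded as (onset, nucleus, coda):
no_coda = lambda syl: int(syl[2] != "")                       # *CODA
no_voiced_obstruent = lambda syl: sum(                        # *VOI-OBS
    seg in "bdgvz" for seg in "".join(syl))

conj = conjoin(no_coda, no_voiced_obstruent)   # (*CODA & *VOI-OBS)_syllable

print(conj(("b", "a", "d")))   # 1: a coda and a voiced obstruent co-occur
print(conj(("b", "a", "")))    # 0: no coda, so the conjunction is satisfied
```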


Cognitive Science | 2014

Optimization and Quantization in Gradient Symbol Systems: A Framework for Integrating the Continuous and the Discrete in Cognition

Paul Smolensky; Matthew Goldrick; Donald Mathis

Mental representations have continuous as well as discrete, combinatorial properties. For example, while predominantly discrete, phonological representations also vary continuously; this is reflected by gradient effects in instrumental studies of speech production. Can an integrated theoretical framework address both aspects of structure? The framework we introduce here, Gradient Symbol Processing, characterizes the emergence of grammatical macrostructure from the Parallel Distributed Processing microstructure (McClelland, Rumelhart, & The PDP Research Group, 1986) of language processing. The mental representations that emerge, Distributed Symbol Systems, have both combinatorial and gradient structure. They are processed through Subsymbolic Optimization-Quantization, in which an optimization process favoring representations that satisfy well-formedness constraints operates in parallel with a distributed quantization process favoring discrete symbolic structures. We apply a particular instantiation of this framework, λ-Diffusion Theory, to phonological production. Simulations of the resulting model suggest that Gradient Symbol Processing offers a way to unify accounts of grammatical competence with both discrete and continuous patterns in language performance.
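
A caricature of the optimization-quantization interplay, not the paper's λ-Diffusion equations: a continuous state climbs an invented harmony gradient while a quantization force, weighted by a gradually increasing λ, pulls every coordinate toward the discrete values 0 or 1.

```python
import numpy as np

# Caricature of Subsymbolic Optimization-Quantization (not the paper's
# lambda-Diffusion equations): the state climbs an invented harmony
# gradient while a quantization force, weighted by lam, pulls every
# coordinate toward the discrete values 0 or 1.
rng = np.random.default_rng(2)
W = np.array([[0.0, 1.0],
              [1.0, 0.0]])        # the two units support one another
b = np.array([0.2, -0.1])

x = rng.uniform(0.0, 1.0, size=2)
dt, lam = 0.05, 0.0
for _ in range(400):
    quant = x * (1 - x) * (2 * x - 1)   # negative below 0.5, positive above
    x = np.clip(x + dt * ((W @ x + b) + lam * quant), 0.0, 1.0)
    lam = min(4.0, lam + 0.02)          # gradually strengthen quantization

print(np.round(x, 3))   # a near-discrete, high-harmony state
```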


Archive | 1995

Optimality and Wh-Extraction

Géraldine Legendre; Colin Wilson; Paul Smolensky; Kristin Homer; William Raymond

The study of wh-question formation has historically served as the empirical basis for major constructs in Government-Binding (GB) such as the Empty Category Principle (ECP), the existence of Logical Form (LF) as a separate level of representation— motivated in part by the abstract wh-movement at LF analysis of wh-in-situ in languages like Chinese (Huang, 1982)—and the central but controversial issue of which principles apply at which levels of representation. For example, Huang (1982) argues, based on Chinese, that the ECP applies at S-structure and LF while subjacency and his Condition on Extraction Domain (CED) apply only at S-structure.


Cognitive Science | 2006

Harmony in Linguistic Cognition

Paul Smolensky

In this article, I survey the integrated connectionist/symbolic (ICS) cognitive architecture in which higher cognition must be formally characterized on two levels of description. At the microlevel, parallel distributed processing (PDP) characterizes mental processing; this PDP system has special organization in virtue of which it can be characterized at the macrolevel as a kind of symbolic computational system. The symbolic system inherits certain properties from its PDP substrate; the symbolic functions computed constitute optimization of a well-formedness measure called Harmony. The most important outgrowth of the ICS research program is optimality theory (Prince & Smolensky, 1993/2004), an optimization-based grammatical theory that provides a formal theory of cross-linguistic typology. Linguistically, Harmony maximization corresponds to minimization of markedness or structural ill-formedness. Cognitive explanation in ICS requires the collaboration of symbolic and connectionist principles. ICS is developed in detail in Smolensky and Legendre (2006a); this article is a précis of and guide to those volumes.
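
At its simplest, the Harmony of a network state a under symmetric weights W and biases b is the quadratic well-formedness measure H(a) = ½ aᵀWa + bᵀa. The sketch below, with an invented three-unit network, enumerates the binary states and reports the maximum-Harmony one.

```python
import numpy as np
from itertools import product

# Harmony of a state a under symmetric weights W and biases b:
# H(a) = 1/2 a^T W a + b^T a. The three-unit network below is invented: it
# rewards co-activating units 0 and 1 and penalizes co-activating 1 and 2.
W = np.array([[0.0,  1.0,  0.0],
              [1.0,  0.0, -1.5],
              [0.0, -1.5,  0.0]])
b = np.array([0.1, 0.1, 0.3])

def harmony(a):
    return 0.5 * a @ W @ a + b @ a

best = max((np.array(s) for s in product([0, 1], repeat=3)), key=harmony)
print(best, harmony(best))   # the maximally Harmonic (least marked) state
```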

Collaboration


Dive into Paul Smolensky's collaborations.

Top Co-Authors

Yoshiro Miyata
University of Colorado Boulder

Michael C. Mozer
University of Colorado Boulder

Colin Wilson
Johns Hopkins University

Clayton McMillan
University of Colorado Boulder