
Publication


Featured research published by Gary Lupyan.


PLOS ONE | 2010

Language structure is partly determined by social structure.

Gary Lupyan; Rick Dale

Background: Languages differ greatly both in their syntactic and morphological systems and in the social environments in which they exist. We challenge the view that language grammars are unrelated to the social environments in which they are learned and used.

Methodology/Principal Findings: We conducted a statistical analysis of >2,000 languages using a combination of demographic sources and the World Atlas of Language Structures, a database of structural language properties. We found strong relationships between linguistic factors related to morphological complexity and demographic/socio-historical factors such as the number of language users, geographic spread, and degree of language contact. The analyses suggest that languages spoken by large groups have simpler inflectional morphology than languages spoken by smaller groups, as measured on a variety of factors such as case systems and complexity of conjugations. Additionally, languages spoken by large groups are much more likely to use lexical strategies in place of inflectional morphology to encode evidentiality, negation, aspect, and possession. Our findings indicate that just as biological organisms are shaped by ecological niches, language structures appear to adapt to the environment (niche) in which they are learned and used. As adults learn a language, features that are difficult for them to acquire are less likely to be passed on to subsequent learners. Languages used for communication in large groups that include adult learners appear to have been subjected to such selection. Conversely, the morphological complexity common to languages used in small groups increases redundancy, which may facilitate language learning by infants.

Conclusions/Significance: We hypothesize that language structures are subjected to different evolutionary pressures in different social environments. Just as biological organisms are shaped by ecological niches, language structures appear to adapt to the environment (niche) in which they are learned and used. The proposed Linguistic Niche Hypothesis has implications for answering the broad question of why languages differ in the way they do and makes empirical predictions regarding the language acquisition capacities of children versus adults.


Psychological Science | 2007

Language Is Not Just for Talking: Redundant Labels Facilitate Learning of Novel Categories

Gary Lupyan; David H. Rakison; James L. McClelland

In addition to having communicative functions, verbal labels may play a role in shaping concepts. Two experiments assessed whether the presence of labels affected category formation. Subjects learned to categorize “aliens” as those to be approached or those to be avoided. After accuracy feedback on each response was provided, a nonsense label was either presented or not. Providing nonsense category labels facilitated category learning even though the labels were redundant and all subjects had equivalent experience with supervised categorization of the stimuli. A follow-up study investigated differences between learning verbal and nonverbal associations and showed that learning a nonverbal association did not facilitate categorization. The findings show that labels make category distinctions more concrete and bear directly on the language-and-thought debate.


Frontiers in Psychology | 2012

Linguistically modulated perception and cognition: the label-feedback hypothesis

Gary Lupyan

How does language impact cognition and perception? A growing number of studies show that language, and specifically the practice of labeling, can exert extremely rapid and pervasive effects on putatively non-verbal processes such as categorization, visual discrimination, and even simply detecting the presence of a stimulus. Progress on the empirical front, however, has not been accompanied by progress in understanding the mechanisms by which language affects these processes. One puzzle is how effects of language can be both deep, in the sense of affecting even basic visual processes, and yet vulnerable to manipulations such as verbal interference, which can sometimes nullify effects of language. In this paper, I review some of the evidence for effects of language on cognition and perception, showing that performance on tasks that have been presumed to be non-verbal is rapidly modulated by language. I argue that a clearer understanding of the relationship between language and cognition can be achieved by rejecting the distinction between verbal and non-verbal representations and by adopting a framework in which language modulates ongoing cognitive and perceptual processing in a flexible and task-dependent manner.


Proceedings of the National Academy of Sciences of the United States of America | 2013

Language can boost otherwise unseen objects into visual awareness

Gary Lupyan; Emily J. Ward

Linguistic labels (e.g., “chair”) seem to activate visual properties of the objects to which they refer. Here we investigated whether language-based activation of visual representations can affect the ability to simply detect the presence of an object. We used continuous flash suppression to suppress visual awareness of familiar objects while they were continuously presented to one eye. Participants made simple detection decisions, indicating whether they saw any image. Hearing a verbal label before the simple detection task changed performance relative to an uninformative cue baseline. Valid labels improved performance relative to no-label baseline trials. Invalid labels decreased performance. Labels affected both sensitivity (d′) and response times. In addition, we found that the effectiveness of labels varied predictably as a function of the match between the shape of the stimulus and the shape denoted by the label. Together, the findings suggest that facilitated detection of invisible objects due to language occurs at a perceptual rather than semantic locus. We hypothesize that when information associated with verbal labels matches stimulus-driven activity, language can provide a boost to perception, propelling an otherwise invisible image into awareness.
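The sensitivity measure d′ reported in this abstract is the standard signal-detection statistic: the difference between the z-transformed hit rate and false-alarm rate. As a minimal illustration of how it is computed (the rates below are made up for the example, not data from the study):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# A valid label that raises hits and lowers false alarms raises d'.
print(d_prime(0.80, 0.20))  # ≈ 1.68
```

Higher d′ means better discrimination of signal from noise independently of response bias, which is why the authors report it alongside response times.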


Trends in Cognitive Sciences | 2015

Arbitrariness, iconicity, and systematicity in language

Mark Dingemanse; Damián E. Blasi; Gary Lupyan; Morten H. Christiansen; Padraic Monaghan

The notion that the form of a word bears an arbitrary relation to its meaning accounts only partly for the attested relations between form and meaning in the languages of the world. Recent research suggests a more textured view of vocabulary structure, in which arbitrariness is complemented by iconicity (aspects of form resemble aspects of meaning) and systematicity (statistical regularities in forms predict function). Experimental evidence suggests these form-to-meaning correspondences serve different functions in language processing, development, and communication: systematicity facilitates category learning by means of phonological cues, iconicity facilitates word learning and communication by means of perceptuomotor analogies, and arbitrariness facilitates meaning individuation through distinctive forms. Processes of cultural evolution help to explain how these competing motivations shape vocabulary structure.


Cognition | 2008

The Conceptual Grouping Effect: Categories Matter (and Named Categories Matter More).

Gary Lupyan

Do conceptual categories affect basic visual processing? A conceptual grouping effect for familiar stimuli is reported using a visual search paradigm. Search through conceptually homogeneous non-targets was faster and more efficient than search through conceptually heterogeneous non-targets. This effect cannot be attributed to perceptual factors and is not explained by a long-term representational reorganization due to perceptual learning. Rather, conceptual categories seem to modulate visual representations dynamically and are sensitive to task demands. Verbally labeling a visual target further exaggerates the degree to which conceptual categories penetrate visual processing.


Journal of Experimental Psychology: General | 2008

From Chair to "Chair": A Representational Shift Account of Object Labeling Effects on Memory

Gary Lupyan

What are the consequences of calling things by their names? Six experiments investigated how classifying familiar objects with basic-level names (chairs, tables, and lamps) affected recognition memory. Memory was found to be worse for items that were overtly classified with the category name, as reflected by lower hit rates, compared with items that were not overtly classified. This effect of labeling on subsequent recognition is explained in terms of a representational shift account, with labeling causing a distortion in the dimensions most reliably associated with the category label. Consistent with this account, effects of labeling were strongly mediated by the typicality and ambiguity of the labeled items, with typical and unambiguous items most affected by labeling. Follow-up experiments showed that this effect cannot be explained solely by differences in initial encoding, further suggesting that labeling a familiar image distorts its encoded representation. This account suggests a possible mechanism for the verbal overshadowing effect (J. W. Schooler & T. Y. Engstler-Schooler, 1990).


Psychological Science | 2010

Conceptual Penetration of Visual Processing

Gary Lupyan; Sharon L. Thompson-Schill; Daniel Swingley

In traditional hierarchical models of information processing, visual representations feed into conceptual systems, but conceptual categories do not exert an influence on visual processing. We provide evidence, across four experiments, that conceptual information can in fact penetrate early visual processing, rather than merely biasing the output of perceptual systems. Participants performed physical-identity judgments on visually equidistant pairs of letter stimuli that were either in the same conceptual category (Bb) or in different categories (Bp). In the case of nonidentical letters, response times were longer when the stimuli were from the same conceptual category, but only when the letters were presented sequentially. The differences in effect size between simultaneous and sequential trials rule out a decision-level account. An additional experiment using animal silhouettes replicated the major effects found with letters. Thus, performance on an explicitly visual task was influenced by conceptual categories. This effect depended on processing time, immediately preceding experience, and stimulus typicality, which suggests that it was produced by the direct influence of category knowledge on perception, rather than by a postperceptual decision bias.


Current Directions in Psychological Science | 2015

Words and the World: Predictive Coding and the Language-Perception-Cognition Interface

Gary Lupyan; Andy Clark

Can what we know change what we see? Does language affect cognition and perception? The last few years have seen increased attention to these seemingly disparate questions, but with little theoretical advance. We argue that substantial clarity can be gained by considering these questions through the lens of predictive processing, a framework in which mental representations—from the perceptual to the cognitive—reflect an interplay between downward-flowing predictions and upward-flowing sensory signals. This framework provides a parsimonious account of how (and when) what we know ought to change what we see and helps us understand how a putatively high-level trait such as language can impact putatively low-level processes such as perception. Within this framework, language begins to take on a surprisingly central role in cognition by providing a uniquely focused and flexible means of constructing predictions against which sensory signals can be evaluated. Predictive processing thus provides a plausible mechanism for many of the reported effects of language on perception, thought, and action, and new insights on how and when speakers of different languages construct the same “reality” in alternate ways.


Attention Perception & Psychophysics | 2010

Redundant spoken labels facilitate perception of multiple items

Gary Lupyan; Michael J. Spivey

Because of the strong associations between verbal labels and the visual objects that they denote, hearing a word may quickly guide the deployment of visual attention to the named objects. We report six experiments in which we investigated the effect of hearing redundant (noninformative) object labels on the visual processing of multiple objects from the named category. Even though the word cues did not provide additional information to the participants, hearing a label resulted in faster detection of attention probes appearing near the objects denoted by the label. For example, hearing the word "chair" resulted in more effective visual processing of all of the chairs in a scene relative to trials in which the participants attended to the chairs without actually hearing the label. This facilitation was mediated by stimulus typicality. Transformations of the stimuli that disrupted their association with the label while preserving the low-level visual features eliminated the facilitative effect of the labels. In the final experiment, we show that hearing a label improves the accuracy of locating multiple items matching the label, even when eye movements are restricted. We posit that verbal labels dynamically modulate visual processing via top-down feedback, an instance of linguistic labels greasing the wheels of perception.

Collaboration


Dive into Gary Lupyan's collaborations.

Top Co-Authors

Rick Dale, University of California

Marcus Perlman, University of Wisconsin-Madison

Pierce Edmiston, University of Wisconsin-Madison

David H. Rakison, Carnegie Mellon University