Publications


Featured research published by Falk Huettig.


Acta Psychologica | 2011

Using the visual world paradigm to study language processing: A review and critical evaluation

Falk Huettig; Joost Rommers; Antje S. Meyer

We describe the key features of the visual world paradigm and review the main research areas where it has been used. In our discussion we highlight that the paradigm provides information about the way language users integrate linguistic information with information derived from the visual environment. Therefore the paradigm is well suited to study one of the key issues of current cognitive psychology, namely the interplay between linguistic and visual information processing. However, conclusions about linguistic processing (e.g., about activation, competition, and timing of access of linguistic representations) in the absence of relevant visual information must be drawn with caution.
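For readers unfamiliar with how visual world data are typically analysed, the sketch below computes the proportion of fixations to each object type in small time bins relative to spoken-word onset. It is a minimal illustration only; the column names, region-of-interest labels, and 50 ms bin size are assumptions for the example, not details taken from the review.

import pandas as pd

def fixation_proportions(samples: pd.DataFrame, bin_ms: int = 50) -> pd.DataFrame:
    # samples: one eye-tracking sample per row, with columns
    # 'time_ms' (time relative to spoken-word onset) and
    # 'roi' (fixated region of interest, e.g. 'target', 'competitor', 'distractor', 'none').
    binned = samples.copy()
    binned["bin"] = (binned["time_ms"] // bin_ms) * bin_ms
    # Count samples on each region of interest within each time bin ...
    counts = binned.groupby(["bin", "roi"]).size().unstack(fill_value=0)
    # ... and convert the counts to proportions per bin.
    return counts.div(counts.sum(axis=1), axis=0)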


Visual Cognition | 2007

Visual-shape competition during language-mediated attention is based on lexical input and not modulated by contextual appropriateness

Falk Huettig; Gerry T. M. Altmann

Visual attention can be directed immediately, as a spoken word unfolds, towards conceptually related but nonassociated objects, even if they mismatch on other dimensions that would normally determine which objects in the scene were appropriate referents for the unfolding word (Huettig & Altmann, 2005). Here we demonstrate that the mapping between language and concurrent visual objects can also be mediated by visual-shape relations. On hearing “snake”, participants directed overt attention immediately, within a visual display depicting four objects, to a picture of an electric cable, although participants had viewed the visual display with four objects for approximately 5 s before hearing the target word—sufficient time to recognize the objects for what they were. The time spent fixating the cable correlated significantly with ratings of the visual similarity between snakes in general and this particular cable. Importantly, with sentences contextually biased towards the concept snake, participants looked at the snake well before the onset of “snake”, but they did not look at the visually similar cable until hearing “snake”. Finally, we demonstrate that such activation can, under certain circumstances (e.g., during the processing of dominant meanings of homonyms), constrain the direction of visual attention even when it is clearly contextually inappropriate. We conclude that language-mediated attention can be guided by a visual match between spoken words and visual objects, but that such a match is based on lexical input and may not be modulated by contextual appropriateness.


Brain Research | 2015

Four central questions about prediction in language processing

Falk Huettig

The notion that prediction is a fundamental principle of human information processing has been in vogue over recent years. The investigation of language processing may be particularly illuminating for testing this claim. Linguists have traditionally argued that prediction plays only a minor role during language understanding because of the vast possibilities available to the language user as each word is encountered. In the present review I consider four central questions of anticipatory language processing: Why (i.e. what is the function of prediction in language processing)? What (i.e. what are the cues used to predict upcoming linguistic information and what types of representations are predicted)? How (i.e. what mechanisms are involved in predictive language processing and what is the role of possible mediating factors such as working memory)? When (i.e. do individuals always predict upcoming input during language processing)? I propose that prediction occurs via a set of diverse PACS (production-, association-, combinatorial-, and simulation-based prediction) mechanisms which are minimally required for a comprehensive account of predictive language processing. Models of anticipatory language processing must be revised to take multiple mechanisms, mediating factors, and situational context into account. Finally, I conjecture that the evidence considered here is consistent with the notion that prediction is an important aspect, but not a fundamental principle, of language processing. This article is part of a Special Issue entitled SI: Prediction and Attention.


Quarterly Journal of Experimental Psychology | 2011

Looking at anything that is green when hearing “frog”: How object surface colour and stored object colour knowledge influence language-mediated overt attention

Falk Huettig; Gerry T. M. Altmann

Three eye-tracking experiments investigated the influence of stored colour knowledge, perceived surface colour, and conceptual category of visual objects on language-mediated overt attention. Participants heard spoken target words whose concepts are associated with a diagnostic colour (e.g., “spinach”; spinach is typically green) while their eye movements were monitored as they viewed displays containing (a) objects associated with a diagnostic colour but presented in black and white (e.g., a black-and-white line drawing of a frog), (b) objects associated with a diagnostic colour but presented in an appropriate but atypical colour (e.g., a colour photograph of a yellow frog), and (c) objects not associated with a diagnostic colour but presented in the diagnostic colour of the target concept (e.g., a green blouse; blouses are not typically green). We observed that colour-mediated shifts in overt attention are primarily due to the perceived surface attributes of the visual objects rather than to stored knowledge about the typical colour of the object. In addition, our data reveal that conceptual category information is the primary determinant of overt attention if both conceptual category and surface colour competitors are co-present in the visual environment.


Journal of the Acoustical Society of America | 2012

Changing only the probability that spoken words will be distorted changes how they are recognized

James M. McQueen; Falk Huettig

An eye-tracking experiment examined contextual flexibility in speech processing in response to distortions in spoken input. Dutch participants heard Dutch sentences containing critical words and saw four-picture displays. The name of one picture either had the same onset phonemes as the critical word or had a different first phoneme and rhymed. Participants fixated on onset-overlap more than rhyme-overlap pictures, but this tendency varied with speech quality. Relative to a baseline with noise-free sentences, participants looked less at onset-overlap and more at rhyme-overlap pictures when phonemes in the sentences (but not in the critical words) were replaced by noises like those heard on a badly tuned AM radio. The position of the noises (word-initial or word-medial) had no effect. Noises elsewhere in the sentences apparently made evidence about the critical word less reliable: Listeners became less confident of having heard the onset-overlap name but also less sure of having not heard the rhyme-overlap name. The same acoustic information has different effects on spoken-word recognition as the probability of distortion changes.


Language and Cognitive Processes | 2012

Speech reductions change the dynamics of competition during spoken word recognition.

Susanne Brouwer; Holger Mitterer; Falk Huettig

Three eye-tracking experiments investigated how phonological reductions (e.g., “puter” for “computer”) modulate phonological competition. Participants listened to sentences extracted from a spontaneous speech corpus and saw four printed words: a target (e.g., “computer”), a competitor similar to the canonical form (e.g., “companion”), one similar to the reduced form (e.g., “pupil”), and an unrelated distractor. In Experiment 1, we presented canonical and reduced forms in a syllabic and in a sentence context. Listeners directed their attention to a similar degree to both competitors, independent of the target’s spoken form. In Experiment 2, we excluded reduced forms and presented canonical forms only. In such a listening situation, participants showed a clear preference for the “canonical form” competitor. In Experiment 3, we presented canonical forms intermixed with reduced forms in a sentence context and replicated the competition pattern of Experiment 1. These data suggest that listeners penalize acoustic mismatches less strongly when listening to reduced speech than when listening to fully articulated speech. We conclude that flexibility to adjust to speech-intrinsic factors is a key feature of the spoken-word recognition system.


Language, Cognition and Neuroscience | 2015

Individual differences in working memory and processing speed predict anticipatory spoken language processing in the visual world

Falk Huettig; Esther Janse

Several mechanisms of predictive language processing have been proposed. The possible influence of mediating factors such as working memory and processing speed, however, has largely been ignored. We sought to find evidence for such an influence using an individual differences approach. 105 participants aged 32–77 years received spoken instructions (e.g. “Kijk naar de[COM] afgebeelde piano[COM]”, look at the displayed piano) while viewing four objects. Articles (Dutch “het” or “de”) were gender-marked such that the article agreed in gender only with the target. Participants could thus use article gender information to predict the target. Multiple regression analyses showed that enhanced working memory abilities and faster processing speed predicted anticipatory eye movements. Models of predictive language processing therefore must take mediating factors into account. More generally, our results are consistent with the notion that working memory grounds language in space and time, linking linguistic and visual–spatial representations.
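For illustration, the sketch below shows what a multiple regression of this kind might look like in code: an anticipation measure regressed on working memory and processing speed, with age as a covariate. The data file, column names, and model terms are hypothetical and are not the authors' actual analysis.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per participant with an eye-movement-based
# anticipation score and individual-difference measures (all column names assumed).
data = pd.read_csv("participants.csv")

# Ordinary least squares regression of the anticipation score on
# working memory, processing speed, and age.
model = smf.ols("anticipation ~ working_memory + processing_speed + age", data=data).fit()
print(model.summary())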


Language, Cognition and Neuroscience | 2015

Is prediction necessary to understand language? Probably not.

Falk Huettig; Nivedita Mani

Some recent theoretical accounts in the cognitive sciences suggest that prediction is necessary to understand language. Here we evaluate this proposal. We consider arguments that prediction provides a unified theoretical principle of the human mind and that it pervades cortical function. We discuss whether evidence of human abilities to detect statistical regularities is necessarily evidence for predictive processing and evaluate suggestions that prediction is necessary for language learning. We point out that not all language users appear to predict language and that suboptimal input often makes prediction very challenging. Prediction, moreover, is strongly context-dependent and impeded by resource limitations. We also argue that it may be problematic that most experimental evidence for predictive language processing comes from “prediction-encouraging” experimental set-ups. We conclude that languages can be learned and understood in the absence of prediction. Claims that all language processing is predictive in nature are premature.


Frontiers in Psychology | 2011

Language-mediated visual orienting behavior in low and high literates.

Falk Huettig; Niharika Singh; Ramesh Kumar Mishra

The influence of formal literacy on spoken language-mediated visual orienting was investigated using a simple look-and-listen task that resembles everyday behavior. In Experiment 1, high and low literates listened to spoken sentences containing a target word (e.g., “magar,” crocodile) while at the same time looking at a visual display of four objects (a phonological competitor of the target word, e.g., “matar,” peas; a semantic competitor, e.g., “kachuwa,” turtle; and two unrelated distractors). In Experiment 2 the semantic competitor was replaced with another unrelated distractor. Both groups of participants shifted their eye gaze to the semantic competitors (Experiment 1). In both experiments high literates shifted their eye gaze toward phonological competitors as soon as phonological information became available and moved their eyes away as soon as the acoustic information mismatched. Low literates, in contrast, used phonological information only when semantic matches between spoken word and visual referent were not present (Experiment 2), and unlike high literates, their phonologically mediated shifts in eye gaze were not closely time-locked to the speech input. These data provide further evidence that in high literates language-mediated shifts in overt attention are co-determined by the type of information in the visual environment, the timing of cascaded processing in the word- and object-recognition systems, and the temporal unfolding of the spoken language. Our findings indicate that low literates exhibit similar cognitive behavior, but that instead of a tug-of-war among multiple types of cognitive representations, word–object mapping is achieved primarily at the semantic level. If forced, for instance by a situation in which semantic matches are not present (Experiment 2), low literates may on occasion have to rely on phonological information, but they do so in a much less proficient manner than their highly literate counterparts.


Quarterly Journal of Experimental Psychology | 2011

Toddlers' language-mediated visual search: They need not have the words for it.

Elizabeth K. Johnson; James M. McQueen; Falk Huettig

Eye movements made by listeners during language-mediated visual search reveal a strong link between visual processing and conceptual processing. For example, upon hearing the word for a missing referent with a characteristic colour (e.g., “strawberry”), listeners tend to fixate a colour-matched distractor (e.g., a red plane) more than a colour-mismatched distractor (e.g., a yellow plane). We ask whether these shifts in visual attention are mediated by the retrieval of lexically stored colour labels. Do children who do not yet possess verbal labels for the colour attribute that spoken and viewed objects have in common exhibit language-mediated eye movements like those made by older children and adults? That is, do toddlers look at a red plane when hearing “strawberry”? We observed that 24-month-olds lacking colour term knowledge nonetheless recognized the perceptual–conceptual commonality between named and seen objects. This indicates that language-mediated visual search need not depend on stored labels for concepts.

Collaboration


Dive into Falk Huettig's collaborations.

Top Co-Authors

Nivedita Mani

University of Göttingen
