Developmental science | 2021

Flexible fast-mapping: Deaf children dynamically allocate visual attention to learn novel words in American Sign Language.


Abstract


Word learning in young children requires coordinated attention between language input and the referent object. Current accounts of word learning are based on spoken language, where the association between language and objects occurs through simultaneous and multimodal perception. In contrast, deaf children acquiring American Sign Language (ASL) perceive both linguistic and non-linguistic information through the visual mode. In order to coordinate attention to language input and its referents, deaf children must allocate visual attention optimally between objects and signs. We conducted two eye-tracking experiments to investigate how young deaf children allocate attention and process referential cues in order to fast-map novel signs to novel objects. Participants were deaf children learning ASL, aged 17-71 months. In Experiment 1, participants (n = 30) were presented with a novel object and a novel sign, along with a referential cue that occurred either before or after the sign label. In Experiment 2, a new group of participants (n = 32) were presented with two novel objects and a novel sign, so that the referential cue was critical for identifying the target object. Across both experiments, participants showed evidence of fast-mapping the signs regardless of the timing of the referential cue. Individual differences in children's allocation of attention during exposure were correlated with their ability to fast-map the novel signs at test. This study provides the first evidence for fast-mapping in sign language, and contributes to theoretical accounts of how word learning develops when all input occurs in the visual modality.

DOI 10.1111/desc.13166
