Publications

Featured research published by Casey Lew-Williams.


Psychological Science | 2007

Young Children Learning Spanish Make Rapid Use of Grammatical Gender in Spoken Word Recognition

Casey Lew-Williams; Anne Fernald

All nouns in Spanish have grammatical gender, with obligatory gender marking on preceding articles (e.g., la and el, the feminine and masculine forms of “the,” respectively). Adult native speakers of languages with grammatical gender exploit this cue in on-line sentence interpretation. In a study investigating the early development of this ability, Spanish-learning children (34–42 months) were tested in an eye-tracking procedure. Presented with pairs of pictures with names of either the same grammatical gender (la pelota, “ball [feminine]”; la galleta, “cookie [feminine]”) or different grammatical gender (la pelota; el zapato, “shoe [masculine]”), they heard sentences referring to one picture (Encuentra la pelota, “Find the ball”). The children were faster to orient to the referent on different-gender trials, when the article was potentially informative, than on same-gender trials, when it was not, and this ability was correlated with productive measures of lexical and grammatical competence. Spanish-learning children who can speak only 500 words already use gender-marked articles in establishing reference, a processing advantage characteristic of native Spanish-speaking adults.


Second Language Research | 2012

Grammatical Gender in L2: A Production or a Real-Time Processing Problem?

Theres Grüter; Casey Lew-Williams; Anne Fernald

Mastery of grammatical gender is difficult to achieve in a second language (L2). This study investigates whether persistent difficulty with grammatical gender often observed in the speech of otherwise highly proficient L2 learners is best characterized as a production-specific performance problem, or as difficulty with the retrieval of gender information in real-time language use. In an experimental design that crossed production/comprehension and online/offline tasks, highly proficient L2 learners of Spanish performed at ceiling in offline comprehension, showed errors in elicited production, and exhibited weaker use of gender cues in online processing of familiar (though not novel) nouns than native speakers. These findings suggest that persistent difficulty with grammatical gender may not be limited to the realm of language production, but could affect both expressive and receptive use of language in real time. We propose that the observed differences in performance between native and non-native speakers lie at the level of lexical representation of grammatical gender and arise from fundamental differences in how infants and adults approach word learning.


Scientific Reports | 2016

Communicative signals support abstract rule learning by 7-month-old infants

Brock Ferguson; Casey Lew-Williams

The mechanisms underlying the discovery of abstract rules like those found in natural language may be evolutionarily tuned to speech, according to previous research. When infants hear speech sounds, they can learn rules that govern their combination, but when they hear non-speech sounds such as sine-wave tones, they fail to do so. Here we show that infants’ rule learning is not tied to speech per se, but is instead enhanced more broadly by communicative signals. In two experiments, infants succeeded in learning and generalizing rules from tones that were introduced as if they could be used to communicate. In two control experiments, infants failed to learn the very same rules when familiarized to tones outside of a communicative exchange. These results reveal that infants’ attention to social agents and communication catalyzes a fundamental achievement of human learning.


Developmental Psychology | 2016

Repetition across successive sentences facilitates young children’s word learning.

Jessica F. Schwab; Casey Lew-Williams

Young children who hear more child-directed speech (CDS) tend to have larger vocabularies later in childhood, but the specific characteristics of CDS underlying this link are currently underspecified. The present study sought to elucidate how the structure of language input boosts learning by investigating whether repetition of object labels in successive sentences, a common feature of natural CDS, promotes young children's efficiency in learning new words. Using a looking-while-listening paradigm, 2-year-old children were taught the names of novel objects, with exposures either repeated across successive sentences or distributed throughout labeling episodes. Results showed successful learning only when label-object pairs had been repeated in blocks of successive sentences, suggesting that immediate opportunities to detect recurring structure facilitate young children's learning. These findings offer insight into how the information flow within CDS might influence vocabulary development, and we consider the findings alongside research showing the benefits of distributing information across time.


Proceedings of the National Academy of Sciences of the United States of America | 2017

Bilingual infants control their languages as they listen

Krista Byers-Heinlein; Elizabeth Morin-Lessard; Casey Lew-Williams

Significance: Bilingual infants must manage two languages in a single developing mind. However, the mechanisms that enable young bilinguals to manage their languages over the course of learning remain unclear. Here, we demonstrate that bilingual infants monitor and control their languages during real-time language listening, and do so similarly to bilingual adults. This ability could help bilinguals' language learning to keep pace with that of their monolingual peers, and may underpin the cognitive advantages enjoyed by bilinguals in both infancy and adulthood.

Infants growing up in bilingual homes learn two languages simultaneously without apparent confusion or delay. However, the mechanisms that support this remarkable achievement remain unclear. Here, we demonstrate that infants use language-control mechanisms to preferentially activate the currently heard language during listening. In a naturalistic eye-tracking procedure, bilingual infants were more accurate at recognizing objects labeled in same-language sentences (“Find the dog!”) than in switched-language sentences (“Find the chien!”). Measurements of infants’ pupil size over time indicated that this resulted from increased cognitive load during language switches. However, language switches did not always engender processing difficulties: the switch cost was reduced or eliminated when the switch was from the nondominant to the dominant language, and when it crossed a sentence boundary. Adults showed the same patterns of performance as infants, even though target words were simple and highly familiar. Our results provide striking evidence from infancy to adulthood that bilinguals monitor their languages for efficient comprehension. Everyday practice controlling two languages during listening is likely to explain previously observed bilingual cognitive advantages across the lifespan.


Annual Review of Applied Linguistics | 2017

Specific Referential Contexts Shape Efficiency in Second Language Processing: Three Eye-Tracking Experiments With 6- and 10-Year-Old Children in Spanish Immersion Schools

Casey Lew-Williams

Efficiency in real-time language processing generally poses a greater challenge to adults learning a second language (L2) than to children learning a first language (L1). A notoriously difficult aspect of language for L2 learners to master is grammatical gender, and previous research has shown that L2 learners do not exploit cues to grammatical gender in ways that resemble L1 speakers. But it is not clear whether this problem is restricted to grammatical gender or whether it reflects a broader difficulty with processing local relations between words. Moreover, we do not know if immersive L2 environments, relative to typical L2 classrooms, confer advantages in learning regularities between words. In three eye-tracking experiments, 6- and 10-year-old children who were enrolled in Spanish immersion elementary schools listened to sentences with articles that conveyed information about the grammatical gender (Experiment 1), biological gender (Experiment 2), and number of referents in the visual field (Experiment 3). L1 children used articles to guide their attention to target referents in all three experiments. L2 children did not take advantage of articles as cues to grammatical gender, but succeeded in doing so for biological gender and number. Interpretations of these findings focus on how learning experiences interact with the nature of specific referential contexts to shape learners’ efficiency in language processing.


bioRxiv | 2018

Infant and adult brains are coupled to the dynamics of natural communication

Elise A. Piazza; Liat Hasenfratz; Uri Hasson; Casey Lew-Williams

Infancy is the foundational period for learning from adults, and the dynamics of the social environment have long been proposed as central to children’s development. Here we reveal a novel, highly naturalistic approach for studying live interactions between infants and adults. Using functional near-infrared spectroscopy (fNIRS), we simultaneously and continuously measured the brains of infants (9-15 months) and an adult while they communicated and played with each other in real time. We found that time-locked neural coupling within dyads was significantly greater when they interacted with each other than with control individuals. In addition, we found that both infant and adult brains continuously tracked the moment-to-moment fluctuations of mutual gaze, infant emotion, and adult speech prosody with high temporal precision. This investigation advances what is currently known about how the brains and behaviors of infants both shape and reflect those of adults during real-life communication.


Developmental Science | 2018

The profile of abstract rule learning in infancy: Meta-analytic and experimental evidence

Hugh Rabagliati; Brock Ferguson; Casey Lew-Williams

Everyone agrees that infants possess general mechanisms for learning about the world, but the existence and operation of more specialized mechanisms is controversial. One mechanism, rule learning, has been proposed as potentially specific to speech, based on findings that 7-month-olds can learn abstract repetition rules from spoken syllables (e.g., ABB patterns: wo-fe-fe, ga-tu-tu…) but not from closely matched stimuli, such as tones. Subsequent work has shown that learning of abstract patterns is not simply specific to speech. However, we still lack a parsimonious explanation to tie together the diverse, messy, and occasionally contradictory findings in that literature. We took two routes to creating a new profile of rule learning: meta-analysis of 20 prior reports on infants’ learning of abstract repetition rules (including 1,318 infants in 63 experiments total), and an experiment on learning of such rules from a natural, non-speech communicative signal. These complementary approaches revealed that infants were most likely to learn abstract patterns from meaningful stimuli. We argue that the ability to detect and generalize simple patterns supports learning across domains in infancy, but chiefly when the signal is meaningfully relevant to infants’ experience with sounds, objects, language, and people.


Developmental Psychology | 2018

Infants’ selective use of reliable cues in multidimensional language input.

Christine E. Potter; Casey Lew-Williams

Learning always happens from input that contains multiple structures and multiple sources of variability. Though infants possess learning mechanisms to locate structure in the world, lab-based experiments have rarely probed how infants contend with input that contains many different structures and cues. Two experiments explored infants’ use of two naturally occurring sources of variability—different sounds and different people—to detect regularities in language. Monolingual infants (9–10 months) heard a male and female talker produce two different speech streams, one of which followed a deterministic pattern (e.g., AAB, le-le-di) and one of which did not. For half of the infants, each speaker produced only one of the streams; for the other half of the infants, each speaker produced 50% of each stream. In Experiment 1, each stream consisted of distinct sounds, and infants successfully demonstrated learning regardless of the correspondence between speaker and stream. In Experiment 2, each stream consisted of the same sounds, and infants failed to show learning, even when speakers provided a perfect cue for separating each stream. Thus, monolingual infants can learn in the presence of multiple speech streams, but these experiments suggest that infants may rely more on sound-based rather than speaker-based distinctions when breaking into the structure of incoming information. This selective use of some cues over others highlights infants’ ability to adaptively focus on distinctions that are most likely to be useful as they sort through their inherently multidimensional surroundings.


Language, Cognition and Neuroscience | 2017

Word segmentation from noise-band vocoded speech

Tina M. Grieco-Calub; Katherine M. Simeon; Hillary E. Snyder; Casey Lew-Williams

Spectral degradation reduces access to the acoustics of spoken language and compromises how learners break into its structure. We hypothesised that spectral degradation disrupts word segmentation, but that listeners can exploit other cues to restore detection of words. Normal-hearing adults were familiarised to artificial speech that was unprocessed or spectrally degraded by noise-band vocoding into 16 or 8 spectral channels. The monotonic speech stream was pause-free (Experiment 1), interspersed with isolated words (Experiment 2), or slowed by 33% (Experiment 3). Participants were tested on segmentation of familiar vs. novel syllable sequences and on recognition of individual syllables. As expected, vocoding hindered both word segmentation and syllable recognition. The addition of isolated words, but not slowed speech, improved segmentation. We conclude that syllable recognition is necessary but not sufficient for successful word segmentation, and that isolated words can facilitate listeners’ access to the structure of acoustically degraded speech.

Collaboration

Casey Lew-Williams's frequent collaborators.

Top Co-Authors

Jenny R. Saffran (University of Wisconsin-Madison)

Melissa Kline (Massachusetts Institute of Technology)