
Publication


Featured research published by Elika Bergelson.


Proceedings of the National Academy of Sciences of the United States of America | 2012

At 6-9 months, human infants know the meanings of many common nouns.

Elika Bergelson; Daniel Swingley

It is widely accepted that infants begin learning their native language not by learning words, but by discovering features of the speech signal: consonants, vowels, and combinations of these sounds. Learning to understand words, as opposed to just perceiving their sounds, is said to come later, between 9 and 15 mo of age, when infants develop a capacity for interpreting others’ goals and intentions. Here, we demonstrate that this consensus about the developmental sequence of human language learning is flawed: in fact, infants already know the meanings of several common words from the age of 6 mo onward. We presented 6- to 9-mo-old infants with sets of pictures to view while their parent named a picture in each set. Over this entire age range, infants directed their gaze to the named pictures, indicating their understanding of spoken words. Because the words were not trained in the laboratory, the results show that even young infants learn ordinary words through daily experience with language. This surprising accomplishment indicates that, contrary to prevailing beliefs, either infants can already grasp the referential intentions of adults at 6 mo or infants can learn words before this ability emerges. The precocious discovery of word meanings suggests a perspective in which learning vocabulary and learning the sound structure of spoken language go hand in hand as language acquisition begins.


Language Learning and Development | 2015

Early Word Comprehension in Infants: Replication and Extension.

Elika Bergelson; Daniel Swingley

A handful of recent experimental reports have shown that infants of 6–9 months know the meanings of some common words. Here, we replicate and extend these findings. With a new set of items, we show that when young infants (age 6–16 months, n = 49) are presented with side-by-side video clips depicting various common early words, and one clip is named in a sentence, they look at the named video at above-chance rates. We demonstrate anew that infants understand common words by 6–9 months and that performance increases substantially around 14 months. The results imply that 6- to 9-month-olds’ failure to understand words not referring to objects (verbs, adjectives, performatives) in a similar prior study is not attributable to the use of dynamic video depictions. Thus, 6- to 9-month-olds’ experience of spoken language includes some understanding of common words for concrete objects, but relatively impoverished comprehension of other words.


Proceedings of the National Academy of Sciences of the United States of America | 2017

Nature and origins of the lexicon in 6-mo-olds

Elika Bergelson; Richard N. Aslin

Significance: Infants start understanding words at 6 mo, when they also excel at subtle speech–sound distinctions and simple multimodal associations, but don’t yet talk, walk, or point. However, true word learning requires integrating the speech stream with the world and learning how words interrelate. Using eye tracking, we show that neophyte word learners already represent the semantic relations between words. We further show that these same infants’ word learning has ties to their environment: The more they hear labels for what they’re looking at and attending to, the stronger their overall comprehension. These results provide an integrative approach for investigating home-environment effects on early language and suggest that language delays could be detected in early infancy for possible remediation.

Recent research reported the surprising finding that even 6-mo-olds understand common nouns [Bergelson E, Swingley D (2012) Proc Natl Acad Sci USA 109:3253–3258]. However, is their early lexicon structured and acquired like that of older learners? We test 6-mo-olds for a hallmark of the mature lexicon: cross-word relations. We also examine whether properties of the home environment that have been linked with lexical knowledge in older children are detectable in the initial stage of comprehension. We use a new dataset, which includes in-lab comprehension and home measures from the same infants. We find evidence for cross-word structure: On seeing two images of common nouns, infants looked significantly more at named target images when the competitor images were semantically unrelated (e.g., milk and foot) than when they were related (e.g., milk and juice), just as older learners do. We further find initial evidence for home-lab links: common noun “copresence” (i.e., whether words’ referents were present and attended to in home recordings) correlated with in-lab comprehension. These findings suggest that, even in neophyte word learners, cross-word relations are formed early and the home learning environment measurably helps shape the lexicon from the outset.


PLOS ONE | 2013

Young toddlers' word comprehension is flexible and efficient.

Elika Bergelson; Daniel Swingley

Much of what is known about word recognition in toddlers comes from eyetracking studies. Here we show that the speed and facility with which children recognize words, as revealed in such studies, cannot be attributed to a task-specific, closed-set strategy; rather, children’s gaze to referents of spoken nouns reflects successful search of the lexicon. Toddlers’ spoken word comprehension was examined in the context of pictures that had two possible names (such as a cup of juice which could be called “cup” or “juice”) and pictures that had only one likely name for toddlers (such as “apple”), using a visual world eye-tracking task and a picture-labeling task (n = 77, mean age, 21 months). Toddlers were just as fast and accurate in fixating named pictures with two likely names as pictures with one. If toddlers do name pictures to themselves, the name provides no apparent benefit in word recognition, because there is no cost to understanding an alternative lexical construal of the picture. In toddlers, as in adults, spoken words rapidly evoke their referents.


Language Learning and Development | 2017

Semantic Specificity in One-Year-Olds' Word Comprehension.

Elika Bergelson; Richard N. Aslin

The present study investigated infants’ knowledge about familiar nouns. Infants (n = 46, 12–20-month-olds) saw two-image displays of familiar objects, or one familiar and one novel object. Infants heard either a matching word (e.g., “foot” when seeing foot and juice), a related word (e.g., “sock” when seeing foot and juice) or a nonce word (e.g., “fep” when seeing a novel object and dog). Across the whole sample, infants reliably fixated the referent on matching and nonce trials. On the critical related trials we found increasingly less looking to the incorrect (but related) image with age. These results suggest that one-year-olds look at familiar objects both when they hear them labeled and when they hear related labels, to similar degrees, but over the second year increasingly rely on semantic fit. We suggest that infants’ initial semantic representations are imprecise, and continue to sharpen over the second postnatal year.


Cognitive Science | 2017

Semantic Networks Generated from Early Linguistic Input.

Andrei Amatuni; Elika Bergelson

Semantic networks generated from different word corpora show common structural characteristics, including high degrees of clustering, short average path lengths, and scale free degree distributions. Previous research has disagreed about whether these features emerge from internally or externally driven properties (i.e. words already in the lexicon vs. regularities in the external world), mapping onto preferential attachment and preferential acquisition accounts, respectively (Steyvers & Tenenbaum, 2005; Hills, Maouene, Maouene, Sheya, & Smith, 2009). Such accounts suggest that inherent semantic structure shapes new lexical growth. Here we extend previous work by creating semantic networks using the SEEDLingS corpus, a newly collected corpus of linguistic input to infants. Using a recently developed LSA-like approach (GLoVe vectors), we confirm the presence of previously reported structural characteristics, but only in certain ranges of semantic similarity space. Our results confirm the robustness of certain aspects of network organization, and provide novel evidence in support of preferential acquisition accounts.
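The network-construction step described above can be sketched in a few lines: words become nodes, and an edge is drawn between two words when their embedding vectors are sufficiently similar. This is a minimal illustration, not the paper's actual pipeline; the toy 3-dimensional vectors, the `semantic_network` helper, and the 0.9 threshold are all assumptions standing in for real pretrained GloVe embeddings and the similarity ranges the authors explored.

```python
from itertools import combinations
import math

# Toy stand-ins for GloVe vectors; real embeddings are 50-300 dimensional
# and loaded from a pretrained file. These values are purely illustrative.
vectors = {
    "milk":  [0.9, 0.1, 0.2],
    "juice": [0.8, 0.2, 0.3],
    "foot":  [0.1, 0.9, 0.1],
    "sock":  [0.2, 0.8, 0.2],
    "dog":   [0.3, 0.3, 0.9],
}

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def semantic_network(vecs, threshold):
    """Connect two words by an edge when cosine similarity exceeds `threshold`."""
    return {(w1, w2) for w1, w2 in combinations(sorted(vecs), 2)
            if cosine(vecs[w1], vecs[w2]) > threshold}

edges = semantic_network(vectors, threshold=0.9)
print(sorted(edges))  # semantically close pairs form the network's edges
```

Varying the threshold selects different bands of similarity space, which is the knob the abstract refers to when it reports network structure emerging only in certain similarity ranges.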


Developmental Science | 2018

What Do North American Babies Hear? A large-scale cross-corpus analysis

Elika Bergelson; Marisa Casillas; Melanie Soderstrom; Amanda Seidl; Anne S. Warlaumont; Andrei Amatuni

A range of demographic variables influences how much speech young children hear. However, because studies have used vastly different sampling methods, quantitative comparison of interlocking demographic effects has been nearly impossible, across or within studies. We harnessed a unique collection of existing naturalistic, day-long recordings from 61 homes across four North American cities to examine language input as a function of age, gender, and maternal education. We analyzed adult speech heard by 3- to 20-month-olds who wore audio recorders for an entire day. We annotated speaker gender and speech register (child-directed or adult-directed) for 10,861 utterances from female and male adults in these recordings. Examining age, gender, and maternal education collectively in this ecologically valid dataset, we find several key results. First, the speaker gender imbalance in the input is striking: children heard 2-3× more speech from females than males. Second, children in higher-maternal education homes heard more child-directed speech than those in lower-maternal education homes. Finally, our analyses revealed a previously unreported effect: the proportion of child-directed speech in the input increases with age, due to a decrease in adult-directed speech with age. This large-scale analysis is an important step forward in collectively examining demographic variables that influence early development, made possible by pooled, comparable, day-long recordings of children's language environments. The audio recordings, annotations, and annotation software are readily available for reuse and reanalysis by other researchers.


PLOS ONE | 2013

Differences in Mismatch Responses to Vowels and Musical Intervals: MEG Evidence

Elika Bergelson; Michael Shvartsman; William J. Idsardi

We investigated the electrophysiological response to matched two-formant vowels and two-note musical intervals, with the goal of examining whether music is processed differently from language in early cortical responses. Using magnetoencephalography (MEG), we compared the mismatch-response (MMN/MMF, an early, pre-attentive difference-detector occurring approximately 200 ms post-onset) to musical intervals and vowels composed of matched frequencies. Participants heard blocks of two stimuli in a passive oddball paradigm in one of three conditions: sine waves, piano tones and vowels. In each condition, participants heard two-formant vowels or musical intervals whose frequencies were 11, 12, or 24 semitones apart. In music, 12 semitones and 24 semitones are perceived as highly similar intervals (one and two octaves, respectively), while in speech 12-semitone and 11-semitone formant separations are perceived as highly similar (both variants of the vowel in ‘cut’). Our results indicate that the MMN response mirrors the perceptual one: larger MMNs were elicited for the 12–11 pairing in the music conditions than in the language condition; conversely, larger MMNs were elicited for the 12–24 pairing in the language condition than in the music conditions, suggesting that within 250 ms of hearing complex auditory stimuli, the neural computation of similarity, just as the behavioral one, differs significantly depending on whether the context is music or speech.
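The semitone spacings at the heart of this design map onto simple frequency ratios under equal temperament: an interval of n semitones corresponds to a ratio of 2^(n/12). A short sketch makes the 11/12/24 comparison concrete; the 440 Hz base frequency here is an illustrative assumption, not a stimulus value from the paper.

```python
# Frequency ratio for an equal-tempered interval of n semitones: 2 ** (n / 12).
# 12 semitones doubles the frequency (one octave); 24 quadruples it (two octaves);
# 11 semitones falls just short of doubling, which is why 11 and 12 sound
# similar as formant separations but dissimilar as musical intervals.
def interval_ratio(semitones):
    return 2 ** (semitones / 12)

base_hz = 440.0  # illustrative base tone, not from the paper
for n in (11, 12, 24):
    upper = base_hz * interval_ratio(n)
    print(f"{n:2d} semitones -> ratio {interval_ratio(n):.4f}, upper tone {upper:.1f} Hz")
```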


Conference of the International Speech Communication Association | 2016

Virtual Machines and Containers as a Platform for Experimentation.

Florian Metze; Eric Riebling; Anne S. Warlaumont; Elika Bergelson

Research on computational speech processing has traditionally relied on the availability of a relatively large and complex infrastructure, which encompasses data (text and audio), tools (feature extraction, model training, scoring, possibly on-line and off-line, etc.), glue code, and computing. Traditionally, it has been very hard to move experiments from one site to another, and to replicate experiments. With the increasing availability of shared platforms such as commercial cloud computing platforms or publicly funded super-computing centers, there is a need and an opportunity to abstract the experimental environment from the hardware, and distribute complete setups as a virtual machine, a container, or some other shareable resource, that can be deployed and worked with anywhere. In this paper, we discuss our experience with this concept and present some tools that the community might find useful. We outline, as a case study, how such tools can be applied to a naturalistic language acquisition audio corpus.


Nature | 2008

How music speaks to us

David Poeppel; Elika Bergelson

This book is an intellectual tour de force, raising many more issues than recent popular works by, for example, Oliver Sacks and Daniel Levitin. Not one for the bus, beach or bathtub, Music, Language, and the Brain requires focused engagement, but its rewards are rich. Aniruddh Patel offers a thorough analysis of music cognition and its relation to language, and outlines an ambitious and innovative research programme that deepens our understanding of cognition in general. Music and speech share basic sound elements, and Patel starts by highlighting the similarities and differences in how these auditory signals work. The book then delves

Collaboration


Dive into Elika Bergelson's collaborations.

Top Co-Authors

Christina Bergmann
Radboud University Nijmegen

Melissa Kline
Massachusetts Institute of Technology

Daniel Swingley
University of Pennsylvania