Keith S. Apfelbaum
University of Iowa
Publications
Featured research published by Keith S. Apfelbaum.
Cognitive Science | 2011
Keith S. Apfelbaum; Bob McMurray
At 14 months, children appear to struggle to apply their fairly well-developed speech perception abilities to learning similar-sounding words (e.g., bih/dih; Stager & Werker, 1997). However, variability in nonphonetic aspects of the training stimuli seems to aid word learning at this age. Extant theories of early word learning cannot account for this benefit of variability. We offer a simple explanation for this range of effects based on associative learning. Simulations suggest that if infants encode both noncontrastive information (e.g., cues to speaker voice) and meaningful linguistic cues (e.g., place of articulation or voicing), then associative learning mechanisms predict these variability effects in early word learning. Crucially, this means that despite the importance of task variables in predicting performance, this body of work shows that phonological categories are still developing at this age, and that the structure of noninformative cues critically influences word learning abilities.
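This associative account lends itself to a compact simulation. The sketch below is a hypothetical illustration, not the authors' published model: a delta-rule (Rescorla-Wagner) learner receives one contrastive consonant cue plus one noncontrastive speaker cue per trial, and the amount of speaker variability during training determines how much associative strength the contrastive cues retain.

```python
# Hypothetical delta-rule (Rescorla-Wagner) sketch of the variability
# effect; the cue coding and parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
LR, N_TRIALS, N_SPEAKERS = 0.1, 2000, 4

def train(n_speakers):
    # Cue units: [bih-consonant, dih-consonant, speaker 1..4].
    W = np.zeros((2 + N_SPEAKERS, 2))          # cue -> word associations
    for _ in range(N_TRIALS):
        word = rng.integers(2)                 # 0 = "bih", 1 = "dih"
        x = np.zeros(2 + N_SPEAKERS)
        x[word] = 1.0                          # contrastive cue
        x[2 + rng.integers(n_speakers)] = 1.0  # noncontrastive speaker cue
        W += LR * np.outer(x, np.eye(2)[word] - x @ W)  # delta-rule update
    return W

for n in (1, N_SPEAKERS):
    # Test on the contrastive cue alone (e.g., a novel speaker).
    print(f"{n} speaker(s): correct-word activation = {train(n)[0, 0]:.2f}")
```

With a single speaker, the constant voice cue soaks up a share of the associative strength; spreading that strength across four speakers leaves more of it on the phonologically contrastive cues, mirroring the reported benefit of variability.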
Developmental Psychology | 2013
Keith S. Apfelbaum; Eliot Hazeltine; Bob McMurray
Early reading abilities are widely considered to derive in part from statistical learning of regularities between letters and sounds. Although laboratory work provides substantial evidence for this, how it occurs in the classroom has not been extensively explored; few studies have investigated how statistics among letters and sounds influence how children actually learn to read, or which principles of statistical learning may improve learning. We examined two conflicting principles that may apply to learning grapheme-phoneme correspondence (GPC) regularities for vowels: (a) variability in irrelevant units may help children derive invariant relationships, and (b) similarity between words may force children to use a deeper analysis of lexical structure. We trained 224 first-grade students on a small set of GPC regularities for vowels, embedded in words with either high or low consonant similarity, and tested their generalization to novel tasks and words. Variability offered a consistent benefit over similarity for trained and new words, in both trained and new tasks.
Psychonomic Bulletin & Review | 2011
Keith S. Apfelbaum; Sheila E. Blumstein; Bob McMurray
Lexical-semantic access is affected by the phonological structure of the lexicon. What is less clear is whether such effects result from continuous activation between lexical form and semantic processing, or whether they arise from a more modular system in which the timing of access to lexical form determines the timing of semantic activation. This study examined the issue using the visual world paradigm, investigating the time course of semantic priming as a function of the number of phonological competitors. Critical trials consisted of high- or low-density auditory targets (e.g., horse) and a visual display containing the target, a semantically related object (e.g., saddle), and two phonologically and semantically unrelated objects (e.g., chimney, bikini). Results showed a greater magnitude of priming for objects semantically related to low-density words than to high-density words, and no difference between high- and low-density targets in the time course of looks to the semantically related object. This pattern of results is consistent with models of cascading activation, which predict that lexical activation has continuous effects on the level of semantic activation, with no delay in the onset of semantic activation for phonologically competing words.
Language, Cognition and Neuroscience | 2014
Keith S. Apfelbaum; Natasha E. Bullock-Rest; Ariane E. Rhone; Allard Jongman; Bob McMurray
The speech signal is notoriously variable: the same phoneme is realised differently depending on factors like talker and phonetic context. This variance has led to a proliferation of theories of how listeners recognise speech. A promising approach, supported by computational modelling studies, is contingent categorisation, wherein incoming acoustic cues are computed relative to expectations. We tested contingent encoding empirically. Listeners were asked to categorise fricatives in CV syllables constructed by splicing the fricative from one CV syllable onto the vowel from another. The two spliced syllables always contained the same fricative, providing consistent bottom-up cues; however, on some trials the vowel and/or talker mismatched between the syllables, giving conflicting contextual information. Listeners were less accurate and slower at identifying fricatives in mismatching splices. This suggests that listeners rely on contextual information beyond bottom-up acoustic cues during speech perception, providing support for contingent categorisation.
Language Learning and Development | 2015
Marcus E. Galle; Keith S. Apfelbaum; Bob McMurray
Recent work has demonstrated that the addition of multiple talkers during habituation improves 14-month-olds' performance in the switch task (Rost & McMurray, 2009). While the authors suggest that this boost in performance is due to the increase in acoustic variability (Rost & McMurray, 2010), it is also possible that the presence of multiple talkers per se drives the effect. To determine whether acoustic variability in and of itself is beneficial in early word learning tasks like the switch task, we tested 14-month-old infants in a version of the switch task using acoustically variable auditory stimuli produced by a single speaker. Results show that 14-month-olds are able to learn phonemically similar words in the switch task with increased acoustic variability and without the presence of multiple talkers.
Journal of the Acoustical Society of America | 2012
Keith S. Apfelbaum; Bob McMurray
Speech perception must cope with variability in the realization of phonemes arising from vocal tract differences, coarticulation, and speaking rate. While the mechanisms accommodating this variability have long been debated, recent computational work suggests that listeners succeed by processing input relative to expectations about how context affects acoustics. McMurray and Jongman (2011) measured 24 cues for fricatives in 2880 tokens. When the effects of talker voice and vowel context were removed with a relative encoding scheme, simple cue-integration/prototype models identified the fricatives at near-human levels of performance. These models were less successful with raw cue values, suggesting some form of compensation is required. However, contingent encoding may not be needed in all categorization architectures. Exemplar models account for context by holistically comparing incoming tokens to others in memory, and connectionist backpropagation models encode stimulus-specific information through their hidden units for bet...
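As a concrete picture of what "processing input relative to expectations" buys a simple prototype model, here is a toy sketch. The data and cue names are fabricated, not the McMurray and Jongman corpus: a single spectral cue is regressed on talker identity, and classification is run on the residuals rather than the raw values.

```python
# Toy sketch of relative cue encoding before prototype classification.
# Data, effect sizes, and column names are invented for illustration.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(1)
n = 600
df = pd.DataFrame({
    "fricative": rng.choice(["s", "sh"], n),
    "talker": rng.choice(["t1", "t2", "t3"], n),
})
# One cue (e.g., spectral mean) driven by category plus large talker effects.
talker_shift = df["talker"].map({"t1": -800, "t2": 0, "t3": 800})
cat_shift = df["fricative"].map({"s": 300, "sh": -300})
df["spectral_mean"] = 5000 + cat_shift + talker_shift + rng.normal(0, 200, n)

X_raw = df[["spectral_mean"]].to_numpy()
# Relative encoding: regress the cue on talker context, keep the residual.
ctx = pd.get_dummies(df["talker"]).to_numpy(dtype=float)
X_rel = X_raw - LinearRegression().fit(ctx, X_raw).predict(ctx)

for name, X in [("raw cue", X_raw), ("talker-relative cue", X_rel)]:
    acc = NearestCentroid().fit(X, df["fricative"]).score(X, df["fricative"])
    print(f"{name}: prototype-model accuracy = {acc:.2f}")
```

With talker effects larger than the category difference, the raw cue supports only weak classification, while the talker-relative residuals classify far better, which is the pattern the cue-integration results above describe.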
Journal of the Acoustical Society of America | 2012
Ariane E. Rhone; Keith S. Apfelbaum; Bob McMurray
Phonological features are indicated by many acoustic cues (Lisker, 1986). Listeners must therefore combine multiple cues to recognize speech (Nearey, 1990). A recent survey of cues to English fricatives identified 24 distinct, useful cues (McMurray & Jongman, 2011). This multidimensional nature of speech raises methodological challenges: cues do not contribute equally, and multiple cues contribute to the same phonetic features (e.g., voicing). We offer a solution, using McMurray and Jongman's (2011) fricative database as a test case. We used a logistic regression to predict the fricatives from measurements of the 24 cues. We then summed the product of each cue value and its weight from the regression to determine the strength of the evidence for a given feature/phoneme (e.g., degree of "f-ness" vs. "v-ness") for each token. By computing this for different subsets of cues, we can measure how classes of cues work in tandem. We illustrate this by examining the relative contribution of cues within the frication and those with...
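The weighting scheme described here is straightforward to reproduce in outline. Below is a hypothetical sketch with simulated tokens and invented cue names, not the actual database: fit a logistic regression, then sum weight times cue value over any subset of cues to get a graded evidence score per token.

```python
# Sketch of the cue-weighting analysis: logistic regression over cues,
# then summed weighted evidence over cue subsets. Data and the four cue
# names are made up for illustration; the real analysis used 24 cues.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n, cue_names = 500, ["duration", "spectral_mean", "f0_onset", "amplitude"]
y = rng.integers(2, size=n)                       # 0 = /f/, 1 = /v/
# Simulated cue measurements with category-dependent shifts.
X = rng.normal(size=(n, 4)) + np.outer(y - 0.5, [1.2, -0.8, 2.0, 0.3])

model = LogisticRegression().fit(X, y)
w = model.coef_[0]                                # one weight per cue

def evidence(token, subset):
    """Summed weighted evidence for /v/ from a chosen subset of cues."""
    idx = [cue_names.index(c) for c in subset]
    return float(token[idx] @ w[idx])

token = X[0]
print("all cues:      ", evidence(token, cue_names))
print("frication only:", evidence(token, ["duration", "spectral_mean"]))
```

Restricting the sum to frication-internal cues versus vowel-based cues, as the abstract describes, is then just a choice of subset.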
Journal of the Acoustical Society of America | 2012
Efthymia C. Kapnoula; Stephanie Packard; Keith S. Apfelbaum; Bob McMurray; Prahlad Gupta
Existing work suggests that sleep-based consolidation (Gaskell & Dumay, 2003) is required before newly learned words can interact with other words and with phonology. Some studies report that meaning may also be needed (Leach & Samuel, 2007), so it remains unclear whether meaningful representations are required for such interactions. We addressed these issues by examining lexical competition between novel and known words during online word recognition. After brief training on novel word forms (without referents), we evaluated whether the newly learned items could compete with known words. During testing, participants heard word stimuli made by cross-splicing novel and known word forms (NEP+NET=NEpT), and activation of the target word was quantified using the visual world paradigm. Results showed that the freshly learned word forms competed with known words after only 15 minutes of training. These results are important for two reasons. First, lexical integration begins very early in learning and requires neither associations with semantic representations nor sleep-based consolidation. Second, given studies showing that lexical competition plays a critical role in resolving acoustic ambiguity (McMurray, Tanenhaus & Aslin, 2008; McMurray et al., 2009), our results imply that this competition need not be between semantically integrated lexical units.
Brain and Language | 2007
Keith S. Apfelbaum; Sheila E. Blumstein; Audrey Kittredge
Cognitive Science | 2016
Keith S. Apfelbaum; Vladimir M. Sloutsky