Publications


Featured research published by Bob McMurray.


Cognition | 2002

Gradient effects of within-category phonetic variation on lexical access

Bob McMurray; Michael K. Tanenhaus; Richard N. Aslin

In order to determine whether small within-category differences in voice onset time (VOT) affect lexical access, eye movements were monitored as participants indicated which of four pictures was named by spoken stimuli that varied along a 0-40 ms VOT continuum. Within-category differences in VOT resulted in gradient increases in fixations to cross-boundary lexical competitors as VOT approached the category boundary. Thus, fine-grained acoustic/phonetic differences are preserved in patterns of lexical activation for competing lexical candidates and could be used to maximize the efficiency of on-line word recognition.


Developmental Science | 2009

Speaker variability augments phonological processing in early word learning

Gwyneth C. Rost; Bob McMurray

Infants in the early stages of word learning have difficulty learning lexical neighbors (i.e. word pairs that differ by a single phoneme), despite their ability to discriminate the same contrast in a purely auditory task. While prior work has focused on top-down explanations for this failure (e.g. task demands, lexical competition), none has examined if bottom-up acoustic-phonetic factors play a role. We hypothesized that lexical neighbor learning could be improved by incorporating greater acoustic variability in the words being learned, as this may buttress still-developing phonetic categories, and help infants identify the relevant contrastive dimension. Infants were exposed to pictures accompanied by labels spoken by either a single or multiple speakers. At test, infants in the single-speaker condition failed to recognize the difference between the two words, while infants who heard multiple speakers discriminated between them.


Psychological Review | 2012

Word Learning Emerges from the Interaction of Online Referent Selection and Slow Associative Learning.

Bob McMurray; Jessica S. Horst; Larissa K. Samuelson

Classic approaches to word learning emphasize referential ambiguity: In naming situations, a novel word could refer to many possible objects, properties, actions, and so forth. To solve this, researchers have posited constraints and inference strategies but assume that determining the referent of a novel word is isomorphic to learning. We present an alternative in which referent selection is an online process and independent of long-term learning. We illustrate this theoretical approach with a dynamic associative model in which referent selection emerges from real-time competition between referents and learning is associative (Hebbian). This model accounts for a range of findings including the differences in expressive and receptive vocabulary, cross-situational learning under high degrees of ambiguity, accelerating (vocabulary explosion) and decelerating (power law) learning, fast mapping by mutual exclusivity (and differences in bilinguals), improvements in familiar word recognition with development, and correlations between speed of processing and learning. Together, these results suggest that (a) association learning buttressed by dynamic competition can account for much of the literature; (b) familiar word recognition is subserved by the same processes that identify the referents of novel words (fast mapping); (c) online competition may allow children to leverage information available in the task to augment performance despite slow learning; (d) in complex systems, associative learning is highly multifaceted; and (e) learning and referent selection, though logically distinct, can be subtly related. It suggests more sophisticated ways of describing the interaction between situation- and developmental-time processes and points to the need for considering such interactions as a primary determinant of development.
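The separation this abstract draws between fast online referent selection and slow associative learning can be sketched in a few lines. This is an illustrative toy, not the published model: the function names, the winner-take-all selection rule, and the saturating Hebbian update are all assumptions made for the example.

```python
# Toy sketch: referent selection as fast competition over association
# strengths, learning as a slow Hebbian update of the co-active link.
def select_referent(assoc, word, candidates):
    # Fast, online: pick the candidate most associated with the word.
    return max(candidates, key=lambda obj: assoc.get((word, obj), 0.0))

def hebbian_update(assoc, word, referent, rate=0.05):
    # Slow, long-term: strengthen the word-referent link a little,
    # saturating toward 1.0.
    key = (word, referent)
    assoc[key] = assoc.get(key, 0.0) + rate * (1.0 - assoc.get(key, 0.0))

assoc = {}
hebbian_update(assoc, "dax", "dax_obj")  # a single labeled exposure
picked = select_referent(assoc, "dax", ["ball", "dax_obj"])
strength = assoc[("dax", "dax_obj")]
print(picked, strength)  # correct referent picked at once; link still weak
```

The point of the demo is the paper's dissociation: after one exposure the word already wins referent selection (fast mapping), while the underlying association is still far from asymptote (slow learning).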


Psychological Review | 2011

What information is necessary for speech categorization? Harnessing variability in the speech signal by integrating cues computed relative to expectations.

Bob McMurray; Allard Jongman

Most theories of categorization emphasize how continuous perceptual information is mapped to categories. However, equally important are the informational assumptions of a model, the type of information subserving this mapping. This is crucial in speech perception where the signal is variable and context dependent. This study assessed the informational assumptions of several models of speech categorization, in particular, the number of cues that are the basis of categorization and whether these cues represent the input veridically or have undergone compensation. We collected a corpus of 2,880 fricative productions (Jongman, Wayland, & Wong, 2000) spanning many talker and vowel contexts and measured 24 cues for each. A subset was also presented to listeners in an 8AFC phoneme categorization task. We then trained a common classification model based on logistic regression to categorize the fricative from the cue values and manipulated the information in the training set to contrast (a) models based on a small number of invariant cues, (b) models using all cues without compensation, and (c) models in which cues underwent compensation for contextual factors. Compensation was modeled by computing cues relative to expectations (C-CuRE), a new approach to compensation that preserves fine-grained detail in the signal. Only the compensation model achieved a similar accuracy to listeners and showed the same effects of context. Thus, even simple categorization metrics can overcome the variability in speech when sufficient information is available and compensation schemes like C-CuRE are employed.
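The core of the C-CuRE computation as described here can be sketched very simply: a cue is re-coded as its deviation from the value expected given context, so contextual variance is removed while fine-grained detail is preserved. The sketch below uses talker means as the expectation; the study itself conditions on several contextual factors, so treat the data and field names as illustrative only.

```python
# Minimal sketch of computing cues relative to expectations (C-CuRE):
# a cue value becomes its residual from the context-based expectation.
from collections import defaultdict

def c_cure(measurements):
    """measurements: list of (talker, cue_value) pairs.
    Returns cue values relative to each talker's mean."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for talker, value in measurements:
        sums[talker] += value
        counts[talker] += 1
    expected = {t: sums[t] / counts[t] for t in sums}
    # Residual = raw cue minus expectation; within-talker detail survives.
    return [value - expected[talker] for talker, value in measurements]

data = [("t1", 5.0), ("t1", 7.0), ("t2", 10.0), ("t2", 14.0)]
print(c_cure(data))  # [-1.0, 1.0, -2.0, 2.0]
```

Note how the two talkers' very different raw ranges collapse onto a comparable scale while the ordering within each talker is untouched, which is the property the abstract credits for matching listener accuracy.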


Cognitive Psychology | 2010

Individual differences in online spoken word recognition: Implications for SLI.

Bob McMurray; Vicki M. Samelson; Sung Hee Lee; J. Bruce Tomblin

Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have important implications for work on language impairment. The present study begins to fill this gap by relating individual differences in overall language ability to variation in online word recognition processes. Using the visual world paradigm, we evaluated online spoken word recognition in adolescents who varied in both basic language abilities and non-verbal cognitive abilities. Eye movements to target, cohort and rhyme objects were monitored during spoken word recognition, as an index of lexical activation. Adolescents with poor language skills showed fewer looks to the target and more fixations to the cohort and rhyme competitors. These results were compared to a number of variants of the TRACE model (McClelland & Elman, 1986) that were constructed to test a range of theoretical approaches to language impairment: impairments at sensory and phonological levels, reduced vocabulary size, and generalized slowing. None of the existing approaches were strongly supported, and variation in lexical decay offered the best fit. Thus, basic word recognition processes like lexical decay may offer a new way to characterize processing differences in language impairment.
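The lexical-decay idea singled out above can be illustrated with a leaky integrator. This is emphatically not the TRACE model, just a one-unit toy: activation accumulates bottom-up input and leaks at a decay rate, so a larger decay rate yields a lower, flatter target activation curve, qualitatively like the fixation pattern reported for adolescents with poor language skills.

```python
# Toy leaky-integrator illustration of lexical decay (not TRACE):
# activation(t+1) = activation(t) * (1 - decay) + input(t).
def run_word(inputs, decay):
    activation, trace = 0.0, []
    for inp in inputs:
        activation = activation * (1 - decay) + inp
        trace.append(activation)
    return trace

inputs = [0.1] * 20  # steady bottom-up support for the target word
low = run_word(inputs, decay=0.1)[-1]
high = run_word(inputs, decay=0.5)[-1]
print(low, high)  # higher decay -> lower asymptotic activation
```

With constant input, activation asymptotes at input/decay, so doubling the decay rate halves the ceiling the target can reach.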


Developmental Science | 2009

Statistical learning of phonetic categories: insights from a computational approach.

Bob McMurray; Richard N. Aslin; Joseph C. Toscano

Recent evidence (Maye, Werker & Gerken, 2002) suggests that statistical learning may be an important mechanism for the acquisition of phonetic categories in the infant's native language. We examined the sufficiency of this hypothesis and its implications for development by implementing a statistical learning mechanism in a computational model based on a mixture of Gaussians (MOG) architecture. Statistical learning alone was found to be insufficient for phonetic category learning--an additional competition mechanism was required in order for the categories in the input to be successfully learnt. When competition was added to the MOG architecture, this class of models successfully accounted for developmental enhancement and loss of sensitivity to phonetic contrasts. Moreover, the MOG with competition model was used to explore a potentially important distributional property of early speech categories--sparseness--in which portions of the space between phonetic categories are unmapped. Sparseness was found in all successful models and quickly emerged during development even when the initial parameters favoured continuous representations with no gaps. The implications of these models for phonetic category learning in infants are discussed.
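The MOG-with-competition idea can be sketched in a small online learner. This is an illustrative toy, not the paper's model: the one-dimensional input, fixed category widths, and the power-based sharpening used as the competition rule are all assumptions made for the example.

```python
# Toy mixture-of-Gaussians learner with a competition step:
# responsibilities are sharpened so the best-matching category
# absorbs most of each data point, and category means drift online.
import math, random

def gauss(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def train_mog(data, mus, sigma=5.0, lr=0.05, power=4, epochs=200):
    mus = list(mus)
    for _ in range(epochs):
        for x in data:
            resp = [gauss(x, mu, sigma) for mu in mus]
            # Competition: raising responsibilities to a power before
            # normalizing exaggerates the winner's advantage.
            resp = [r ** power for r in resp]
            total = sum(resp) or 1.0
            resp = [r / total for r in resp]
            for i, r in enumerate(resp):
                mus[i] += lr * r * (x - mus[i])  # winners move toward x
    return sorted(mus)

random.seed(0)
# Bimodal "VOT-like" input: two phonetic categories near 0 and 40 ms.
data = [random.gauss(0, 3) for _ in range(100)] + [random.gauss(40, 3) for _ in range(100)]
learned = train_mog(data, mus=[10.0, 30.0])
print(learned)  # means migrate toward the two modes of the input
```

Without the sharpening step (power=1), nearby Gaussians share credit for the same data points and tend to collapse onto overlapping means, which is one intuition for why plain statistical learning alone failed in the simulations described above.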


Cognition | 2011

What's New? Children Prefer Novelty in Referent Selection.

Jessica S. Horst; Larissa K. Samuelson; Sarah C. Kucker; Bob McMurray

Determining the referent of a novel name is a critical task for young language learners. The majority of studies on children's referent selection focus on manipulating the sources of information (linguistic, contextual and pragmatic) that children can use to solve the referent mapping problem. Here, we take a step back and explore how children's endogenous biases towards novelty and their own familiarity with novel objects influence their performance in such a task. We familiarized 2-year-old children with previously novel objects. Then, on novel name referent selection trials children were asked to select the referent from three novel objects: two previously seen and one completely novel object. Children demonstrated a clear bias to select the most novel object. A second experiment controls for pragmatic responding and replicates this finding. We conclude, therefore, that children's referent selection is biased by previous exposure and children's endogenous bias to novelty.


Ear and Hearing | 2014

Longitudinal speech perception and language performance in pediatric cochlear implant users: the effect of age at implantation.

Camille C. Dunn; Elizabeth A. Walker; Jacob Oleson; Maura Kenworthy; Tanya Van Voorst; J. Bruce Tomblin; Haihong Ji; Karen Iler Kirk; Bob McMurray; Marlan Hanson; Bruce J. Gantz

Objectives: Few studies have examined the long-term effect of age at implantation on outcomes using multiple data points in children with cochlear implants. The goal of this study was to determine whether age at implantation has a significant, lasting impact on speech perception, language, and reading performance for children with prelingual hearing loss. Design: A linear mixed-model framework was used to determine the effect of age at implantation on speech perception, language, and reading abilities in 83 children with prelingual hearing loss who received cochlear implants by the age of 4 years. The children were divided into two groups based on their age at implantation: (1) under 2 years of age and (2) between 2 and 3.9 years of age. Differences in model-specified mean scores between groups were compared at annual intervals from 5 to 13 years of age for speech perception, and 7 to 11 years of age for language and reading. Results: After controlling for communication mode, device configuration, and preoperative pure-tone average, there was no significant effect of age at implantation for receptive language by 8 years of age, expressive language by 10 years of age, or reading by 7 years of age. In terms of speech-perception outcomes, significance varied between 7 and 13 years of age, with no significant difference in speech-perception scores between groups at ages 7, 11, and 13 years. Children who used oral communication (OC) demonstrated significantly higher speech-perception scores than children who used total communication (TC). OC users tended to have higher expressive language scores than TC users, although this did not reach significance. There was no significant difference between OC and TC users for receptive language or reading scores. Conclusions: Speech perception, language, and reading performance continue to improve over time for children implanted before 4 years of age.
The present results indicate that the effect of age at implantation diminishes with time, particularly for higher-order skills such as language and reading. Some children who receive cochlear implants after the age of 2 years have the capacity to approximate the language and reading skills of their earlier-implanted peers, suggesting that additional factors may moderate the influence of age at implantation on outcomes over time.


Psychological Science | 2010

Continuous Perception and Graded Categorization: Electrophysiological Evidence for a Linear Relationship Between the Acoustic Signal and Perceptual Encoding of Speech

Joseph C. Toscano; Bob McMurray; Joel Dennhardt; Steven J. Luck

Speech sounds are highly variable, yet listeners readily extract information from them and transform continuous acoustic signals into meaningful categories during language comprehension. A central question is whether perceptual encoding captures acoustic detail in a one-to-one fashion or whether it is affected by phonological categories. We addressed this question in an event-related potential (ERP) experiment in which listeners categorized spoken words that varied along a continuous acoustic dimension (voice-onset time, or VOT) in an auditory oddball task. We found that VOT effects were present through a late stage of perceptual processing (N1 component, ~100 ms poststimulus) and were independent of categorization. In addition, effects of within-category differences in VOT were present at a postperceptual categorization stage (P3 component, ~450 ms poststimulus). Thus, at perceptual levels, acoustic information is encoded continuously, independently of phonological information. Further, at phonological levels, fine-grained acoustic differences are preserved along with category information.


Journal of Experimental Psychology: Human Perception and Performance | 2008

Gradient sensitivity to within-category variation in words and syllables

Bob McMurray; Richard N. Aslin; Michael K. Tanenhaus; Michael J. Spivey; Dana Subik

Five experiments monitored eye movements in phoneme and lexical identification tasks to examine the effect of within-category subphonetic variation on the perception of stop consonants. Experiment 1 demonstrated gradient effects along voice-onset time (VOT) continua made from natural speech, replicating results with synthetic speech (B. McMurray, M. K. Tanenhaus, & R. N. Aslin, 2002). Experiments 2-5 used synthetic VOT continua to examine effects of response alternatives (2 vs. 4), task (lexical vs. phoneme decision), and type of token (word vs. consonant-vowel). A gradient effect of VOT in at least one half of the continuum was observed in all conditions. These results suggest that during online spoken word recognition, lexical competitors are activated in proportion to their continuous distance from a category boundary. This gradient processing may allow listeners to anticipate upcoming acoustic-phonetic information in the speech signal and dynamically compensate for acoustic variability.
