Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Lori L. Holt is active.

Publication


Featured research published by Lori L. Holt.


Trends in Cognitive Sciences | 2006

Are there interactive processes in speech perception?

James L. McClelland; Lori L. Holt

Lexical information facilitates speech perception, especially when sounds are ambiguous or degraded. The interactive approach to understanding this effect posits that this facilitation is accomplished through bi-directional flow of information, allowing lexical knowledge to influence pre-lexical processes. Alternative autonomous theories posit feed-forward processing with lexical influence restricted to post-perceptual decision processes. We review evidence supporting the prediction of interactive models that lexical influences can affect pre-lexical mechanisms, triggering compensation, adaptation and retuning of phonological processes generally taken to be pre-lexical. We argue that these and other findings point to interactive processing as a fundamental principle for perception of speech and other modalities.


Trends in Cognitive Sciences | 2009

Reflections on mirror neurons and speech perception

Andrew J. Lotto; Gregory Hickok; Lori L. Holt

The discovery of mirror neurons, a class of neurons that respond when a monkey performs an action and also when the monkey observes others producing the same action, has promoted a renaissance for the Motor Theory (MT) of speech perception. This is because mirror neurons seem to accomplish the same kind of one-to-one mapping between perception and action that MT theorizes to be the basis of human speech communication. However, this seeming correspondence is superficial, and there are theoretical and empirical reasons to temper enthusiasm about the explanatory role mirror neurons might have for speech perception. In fact, rather than providing support for MT, mirror neurons are actually inconsistent with the central tenets of MT.


Journal of the Acoustical Society of America | 2000

Neighboring spectral content influences vowel identification

Lori L. Holt; Andrew J. Lotto; Keith R. Kluender

Four experiments explored the relative contributions of spectral content and phonetic labeling in effects of context on vowel perception. Two 10-step series of CVC syllables ([bVb] and [dVd]) varying acoustically in F2 midpoint frequency and varying perceptually in vowel height from [delta] to [epsilon] were synthesized. In a forced-choice identification task, listeners more often labeled vowels as [delta] in [dVd] context than in [bVb] context. To examine whether spectral content predicts this effect, nonspeech-speech hybrid series were created by appending 70-ms sine-wave glides following the trajectory of CVC F2s to 60-ms members of a steady-state vowel series varying in F2 frequency. In addition, a second hybrid series was created by appending constant-frequency sine-wave tones equivalent in frequency to CVC F2 onset/offset frequencies. Vowels flanked by frequency-modulated glides or steady-state tones modeling [dVd] were more often labeled as [delta] than were the same vowels surrounded by nonspeech modeling [bVb]. These results suggest that spectral content is important in understanding vowel context effects. A final experiment tested whether spectral content can modulate vowel perception when phonetic labeling remains intact. Voiceless consonants, with lower-amplitude more-diffuse spectra, were found to exert less of an influence on vowel perception than do their voiced counterparts. The data are discussed in terms of a general perceptual account of context effects in speech perception.


Attention Perception & Psychophysics | 2010

Speech perception as categorization

Lori L. Holt; Andrew J. Lotto

Speech perception (SP) most commonly refers to the perceptual mapping from the highly variable acoustic speech signal to a linguistic representation, whether it be phonemes, diphones, syllables, or words. This is an example of categorization, in that potentially discriminable speech sounds are assigned to functionally equivalent classes. In this tutorial, we present some of the main challenges to our understanding of the categorization of speech sounds and the conceptualization of SP that has resulted from these challenges. We focus here on issues and experiments that define open research questions relevant to phoneme categorization, arguing that SP is best understood as perceptual categorization, a position that places SP in direct contact with research from other areas of perception and cognition.


Journal of the Acoustical Society of America | 2006

The mean matters: Effects of statistically defined nonspeech spectral distributions on speech categorization

Lori L. Holt

Adjacent speech, and even nonspeech, contexts influence phonetic categorization. Four experiments investigated how preceding sequences of sine-wave tones influence phonetic categorization. This experimental paradigm provides a means of investigating the statistical regularities of acoustic events that influence online speech categorization and, reciprocally, reveals regularities of the sound environment tracked by auditory processing. The tones comprising the sequences were drawn from distributions sampling different acoustic frequencies. Results indicate that whereas the mean of the distributions predicts contrastive shifts in speech categorization, variability of the distributions has little effect. Moreover, speech categorization is influenced by the global mean of the tone sequence, without significant influence of local statistical regularities within the tone sequence. Further arguing that the effect is strongly related to the average spectrum of the sequence, notched noise spectral complements of the tone sequences produce a complementary effect on speech categorization. Lastly, these effects are modulated by the number of tones in the acoustic history and the overall duration of the sequence, but not by the density with which the distribution defining the sequence is sampled. Results are discussed in light of stimulus-specific adaptation to statistical regularity in the acoustic input and a speculative link to talker normalization is postulated.
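The core manipulation here is sampling precursor tone frequencies from distributions that are matched in mean but differ in variability. The snippet below is only an illustrative sketch of that idea; the specific frequencies, spreads, and sequence length are hypothetical and not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(1)

    # Two hypothetical tone-frequency distributions (Hz) with the same mean but
    # different variability; on the paper's account, only the distribution mean
    # should predict the contrastive shift in subsequent speech categorization.
    low_variance = rng.normal(loc=2300, scale=50, size=21)
    high_variance = rng.normal(loc=2300, scale=300, size=21)

    # Global means of the two tone sequences are approximately equal (~2300 Hz).
    print(low_variance.mean(), high_variance.mean())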


Hearing Research | 2002

Behavioral examinations of the level of auditory processing of speech context effects

Lori L. Holt; Andrew J. Lotto

One of the central findings of speech perception is that identical acoustic signals can be perceived as different speech sounds depending on adjacent speech context. Although these phonetic context effects are ubiquitous in speech perception, their neural mechanisms remain largely unknown. The present work presents a review of recent data suggesting that spectral content of speech mediates phonetic context effects and argues that these effects are likely to be governed by general auditory processes. A descriptive framework known as spectral contrast is presented as a means of interpreting these findings. Finally, and most centrally, four behavioral experiments that begin to delineate the level of the auditory system at which interactions among stimulus components occur are described. Two of these experiments investigate the influence of diotic versus dichotic presentation upon two phonetic context effects. Results indicate that context effects remain even when context is presented to the ear contralateral to that of the target syllable. The other two experiments examine the time course of phonetic context effects by manipulating the silent interval between context and target syllables. These studies reveal that phonetic context effects persist for hundreds of milliseconds. Results are interpreted in terms of auditory mechanism with particular attention to the putative link between auditory enhancement and phonetic context effects.


Current Directions in Psychological Science | 2008

Speech Perception Within an Auditory Cognitive Science Framework

Lori L. Holt; Andrew J. Lotto

The complexities of the acoustic speech signal pose many significant challenges for listeners. Although perceiving speech begins with auditory processing, investigation of speech perception has progressed mostly independently of study of the auditory system. Nevertheless, a growing body of evidence demonstrates that cross-fertilization between the two areas of research can be productive. We briefly describe research bridging the study of general auditory processing and speech perception, showing that the latter is constrained and influenced by operating characteristics of the auditory system and that our understanding of the processes involved in speech perception is enhanced by study within a more general framework. The disconnect between the two areas of research has stunted the development of a truly interdisciplinary science, but there is an opportunity for great strides in understanding with the development of an integrated field of auditory cognitive science.


Psychonomic Bulletin & Review | 2006

An interactive Hebbian account of lexically guided tuning of speech perception

James L. McClelland; Lori L. Holt

We describe an account of lexically guided tuning of speech perception based on interactive processing and Hebbian learning. Interactive feedback provides lexical information to prelexical levels, and Hebbian learning uses that information to retune the mapping from auditory input to prelexical representations of speech. Simulations of an extension of the TRACE model of speech perception are presented that demonstrate the efficacy of this mechanism. Further simulations show that acoustic similarity can account for the patterns of speaker generalization. This account addresses the role of lexical information in guiding both perception and learning with a single set of principles of information propagation.
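The proposed retuning mechanism can be summarized as an outer-product weight update applied after lexical feedback has biased prelexical activation. The sketch below is a minimal, generic Hebbian update for illustration only; the function and dimension names are hypothetical, and it is not the authors' TRACE extension.

    import numpy as np

    # Toy weight matrix mapping auditory input features to prelexical units
    # (dimensions are hypothetical).
    rng = np.random.default_rng(0)
    n_features, n_prelexical = 10, 4
    W = rng.normal(scale=0.1, size=(n_features, n_prelexical))

    def hebbian_update(W, auditory_input, prelexical_activation, lr=0.05):
        # Strengthen connections between co-active auditory features and
        # prelexical units; lexical feedback is assumed to have already pushed
        # prelexical activation toward the lexically consistent phoneme.
        return W + lr * np.outer(auditory_input, prelexical_activation)

    x = rng.random(n_features)                 # ambiguous auditory input
    fed_back = np.array([0.0, 1.0, 0.0, 0.0])  # prelexical activation after lexical feedback
    W = hebbian_update(W, x, fed_back)

Because the same feedback signal drives both perception and learning, a simple correlational rule of this kind is enough to shift future mappings of similar auditory input toward the lexically supported category.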


Journal of the Acoustical Society of America | 2005

Incidental categorization of spectrally complex non-invariant auditory stimuli in a computer game task

Travis Wade; Lori L. Holt

This study examined perceptual learning of spectrally complex nonspeech auditory categories in an interactive multi-modal training paradigm. Participants played a computer game in which they navigated through a three-dimensional space while responding to animated characters encountered along the way. Characters’ appearances in the game correlated with distinctive sound category distributions, exemplars of which repeated each time the characters were encountered. As the game progressed, the speed and difficulty of required tasks increased and characters became harder to identify visually, so quick identification of approaching characters by sound patterns was, although never required or encouraged, of gradually increasing benefit. After 30 min of play, participants performed a categorization task, matching sounds to characters. Despite not being informed of audio-visual correlations, participants exhibited reliable learning of these patterns at posttest. Categorization accuracy was related to several measu...


Attention Perception & Psychophysics | 2006

Putting phonetic context effects into context: a commentary on Fowler (2006)

Andrew J. Lotto; Lori L. Holt

On the basis of a review of the literature and three new experiments, Fowler (2006) concludes that a contrast account for phonetic context effects is not tenable and is inferior to a gestural account. We believe that this conclusion is premature and that it is based on a restricted set of assumptions about a general perceptual account. Here, we briefly address the criticisms of Fowler (2006), with the intent of clarifying what a general auditory and learning approach to speech perception entails.

Collaboration


An overview of Lori L. Holt's collaborations.

Top Co-Authors

Keith R. Kluender, University of Wisconsin-Madison
Sung-Joo Lim, Carnegie Mellon University
Travis Wade, Carnegie Mellon University
Julie A. Fiez, University of Pittsburgh
Ran Liu, Carnegie Mellon University
Jingyuan Huang, Carnegie Mellon University