Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Andrew J. Lotto is active.

Publication


Featured research published by Andrew J. Lotto.


Trends in Cognitive Sciences | 2009

Reflections on mirror neurons and speech perception

Andrew J. Lotto; Gregory Hickok; Lori L. Holt

The discovery of mirror neurons, a class of neurons that respond when a monkey performs an action and also when the monkey observes others producing the same action, has promoted a renaissance for the Motor Theory (MT) of speech perception. This is because mirror neurons seem to accomplish the same kind of one-to-one mapping between perception and action that MT theorizes to be the basis of human speech communication. However, this seeming correspondence is superficial, and there are theoretical and empirical reasons to temper enthusiasm about the explanatory role mirror neurons might have for speech perception. In fact, rather than providing support for MT, mirror neurons are actually inconsistent with the central tenets of MT.


Perception & Psychophysics | 1998

General contrast effects in speech perception: Effect of preceding liquid on stop consonant identification

Andrew J. Lotto; Keith R. Kluender

When members of a series of synthesized stop consonants varying acoustically in F3 characteristics and varying perceptually from /da/ to /ga/ are preceded by /al/, subjects report hearing more /ga/ syllables relative to when each member is preceded by /ar/ (Mann, 1980). It has been suggested that this result demonstrates the existence of a mechanism that compensates for coarticulation via tacit knowledge of articulatory dynamics and constraints, or through perceptual recovery of vocal-tract dynamics. The present study was designed to assess the degree to which these perceptual effects are specific to qualities of human articulatory sources. In three experiments, series of consonant-vowel (CV) stimuli varying in F3-onset frequency (/da/-/ga/) were preceded by speech versions or nonspeech analogues of /al/ and /ar/. The effect of liquid identity on stop consonant labeling remained when the preceding VC was produced by a female speaker and the CV syllable was modeled after a male speaker's productions. Labeling boundaries also shifted when the CV was preceded by a sine wave glide modeled after F3 characteristics of /al/ and /ar/. Identifications shifted even when the preceding sine wave was of constant frequency equal to the offset frequency of F3 from a natural production. These results suggest an explanation in terms of general auditory processes as opposed to recovery of or knowledge of specific articulatory dynamics.
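The sine-wave analogues described above track the frequency trajectory of a formant without carrying any other speech-like structure. A minimal sketch of how such a glide can be synthesized is shown below; the frequency values and duration are illustrative placeholders, not the actual stimulus parameters used in the study.

```python
import numpy as np

def synthesize_glide(f_start, f_end, duration, sample_rate=22050):
    """Synthesize a sine-wave glide whose frequency moves linearly from
    f_start to f_end over `duration` seconds. The phase is obtained by
    accumulating the instantaneous frequency sample by sample, which
    keeps the waveform continuous as the frequency changes."""
    n_samples = int(sample_rate * duration)
    inst_freq = np.linspace(f_start, f_end, n_samples)   # Hz at each sample
    phase = 2 * np.pi * np.cumsum(inst_freq) / sample_rate
    return np.sin(phase)

# Illustrative values only: a falling glide loosely evoking an /al/-like
# F3 transition (not the frequencies reported in the experiments).
glide = synthesize_glide(f_start=2700.0, f_end=2100.0, duration=0.05)
```

A constant-frequency analogue, as in the final experiment, is simply the special case `f_start == f_end`.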


Journal of the Acoustical Society of America | 2000

Neighboring spectral content influences vowel identification

Lori L. Holt; Andrew J. Lotto; Keith R. Kluender

Four experiments explored the relative contributions of spectral content and phonetic labeling in effects of context on vowel perception. Two 10-step series of CVC syllables ([bVb] and [dVd]) varying acoustically in F2 midpoint frequency and varying perceptually in vowel height from [delta] to [epsilon] were synthesized. In a forced-choice identification task, listeners more often labeled vowels as [delta] in [dVd] context than in [bVb] context. To examine whether spectral content predicts this effect, nonspeech-speech hybrid series were created by appending 70-ms sine-wave glides following the trajectory of CVC F2s to 60-ms members of a steady-state vowel series varying in F2 frequency. In addition, a second hybrid series was created by appending constant-frequency sine-wave tones equivalent in frequency to CVC F2 onset/offset frequencies. Vowels flanked by frequency-modulated glides or steady-state tones modeling [dVd] were more often labeled as [delta] than were the same vowels surrounded by nonspeech modeling [bVb]. These results suggest that spectral content is important in understanding vowel context effects. A final experiment tested whether spectral content can modulate vowel perception when phonetic labeling remains intact. Voiceless consonants, with lower-amplitude more-diffuse spectra, were found to exert less of an influence on vowel perception than do their voiced counterparts. The data are discussed in terms of a general perceptual account of context effects in speech perception.


Attention, Perception, & Psychophysics | 2010

Speech perception as categorization

Lori L. Holt; Andrew J. Lotto

Speech perception (SP) most commonly refers to the perceptual mapping from the highly variable acoustic speech signal to a linguistic representation, whether it be phonemes, diphones, syllables, or words. This is an example of categorization, in that potentially discriminable speech sounds are assigned to functionally equivalent classes. In this tutorial, we present some of the main challenges to our understanding of the categorization of speech sounds and the conceptualization of SP that has resulted from these challenges. We focus here on issues and experiments that define open research questions relevant to phoneme categorization, arguing that SP is best understood as perceptual categorization, a position that places SP in direct contact with research from other areas of perception and cognition.


Hearing Research | 2002

Behavioral examinations of the level of auditory processing of speech context effects.

Lori L. Holt; Andrew J. Lotto

One of the central findings of speech perception is that identical acoustic signals can be perceived as different speech sounds depending on adjacent speech context. Although these phonetic context effects are ubiquitous in speech perception, their neural mechanisms remain largely unknown. The present work presents a review of recent data suggesting that spectral content of speech mediates phonetic context effects and argues that these effects are likely to be governed by general auditory processes. A descriptive framework known as spectral contrast is presented as a means of interpreting these findings. Finally, and most centrally, four behavioral experiments that begin to delineate the level of the auditory system at which interactions among stimulus components occur are described. Two of these experiments investigate the influence of diotic versus dichotic presentation upon two phonetic context effects. Results indicate that context effects remain even when context is presented to the ear contralateral to that of the target syllable. The other two experiments examine the time course of phonetic context effects by manipulating the silent interval between context and target syllables. These studies reveal that phonetic context effects persist for hundreds of milliseconds. Results are interpreted in terms of auditory mechanisms, with particular attention to the putative link between auditory enhancement and phonetic context effects.


Current Directions in Psychological Science | 2008

Speech Perception Within an Auditory Cognitive Science Framework

Lori L. Holt; Andrew J. Lotto

The complexities of the acoustic speech signal pose many significant challenges for listeners. Although perceiving speech begins with auditory processing, investigation of speech perception has progressed mostly independently of study of the auditory system. Nevertheless, a growing body of evidence demonstrates that cross-fertilization between the two areas of research can be productive. We briefly describe research bridging the study of general auditory processing and speech perception, showing that the latter is constrained and influenced by operating characteristics of the auditory system and that our understanding of the processes involved in speech perception is enhanced by study within a more general framework. The disconnect between the two areas of research has stunted the development of a truly interdisciplinary science, but there is an opportunity for great strides in understanding with the development of an integrated field of auditory cognitive science.


Health Psychology | 2003

Individual Differences in Self-Assessed Health: An Information-Processing Investigation of Health and Illness Cognition

Paula G. Williams; Michelle S. Wasserman; Andrew J. Lotto

In 2 studies, the relation between measures of self-assessed health (SAH) and automatic processing of health-relevant information was investigated. In Study 1, 84 male and 86 female undergraduate students completed a modified Stroop task. Results indicated that participants with poorer SAH showed enhanced interference effects for illness versus non-illness words. In Study 2, 27 male and 30 female undergraduate students completed a self-referent encoding task. Results offered a conceptual replication and extension of Study 1 by confirming the specificity of the relation between SAH measures and automatic processing of health (vs. negative or positive general trait) information. These studies provide evidence that individual differences in SAH are reflected in schematic processing of health-relevant information.


Perception & Psychophysics | 2006

Putting phonetic context effects into context: a commentary on Fowler (2006).

Andrew J. Lotto; Lori L. Holt

On the basis of a review of the literature and three new experiments, Fowler (2006) concludes that a contrast account for phonetic context effects is not tenable and is inferior to a gestural account. We believe that this conclusion is premature and that it is based on a restricted set of assumptions about a general perceptual account. Here, we briefly address the criticisms of Fowler (2006), with the intent of clarifying what a general auditory and learning approach to speech perception entails.


Frontiers in Psychology | 2012

Tuned with a Tune: Talker Normalization via General Auditory Processes

Erika J. C. Laing; Ran Liu; Andrew J. Lotto; Lori L. Holt

Voices have unique acoustic signatures, contributing to the acoustic variability listeners must contend with in perceiving speech, and it has long been proposed that listeners normalize speech perception to information extracted from a talker’s speech. Initial attempts to explain talker normalization relied on extraction of articulatory referents, but recent studies of context-dependent auditory perception suggest that general auditory referents such as the long-term average spectrum (LTAS) of a talker’s speech similarly affect speech perception. The present study aimed to differentiate the contributions of articulatory/linguistic versus auditory referents for context-driven talker normalization effects and, more specifically, to identify the specific constraints under which such contexts impact speech perception. Synthesized sentences manipulated to sound like different talkers influenced categorization of a subsequent speech target only when differences in the sentences’ LTAS were in the frequency range of the acoustic cues relevant for the target phonemic contrast. This effect was true both for speech targets preceded by spoken sentence contexts and for targets preceded by non-speech tone sequences that were LTAS-matched to the spoken sentence contexts. Specific LTAS characteristics, rather than perceived talker, predicted the results suggesting that general auditory mechanisms play an important role in effects considered to be instances of perceptual talker normalization.
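The long-term average spectrum (LTAS) referenced above is a standard acoustic summary of a stretch of signal: the magnitude spectrum averaged over many short frames. A minimal sketch of one common (Welch-style) way to estimate it is given below; the frame length, hop size, and sample rate are illustrative choices, not parameters taken from the study.

```python
import numpy as np

def long_term_average_spectrum(signal, sample_rate, frame_len=512, hop=256):
    """Estimate the long-term average spectrum (LTAS) of a signal by
    averaging the magnitude spectra of overlapping Hann-windowed frames.
    Returns (frequencies in Hz, mean magnitude per frequency bin)."""
    window = np.hanning(frame_len)
    frames = np.array([
        signal[start:start + frame_len] * window
        for start in range(0, len(signal) - frame_len + 1, hop)
    ])
    spectra = np.abs(np.fft.rfft(frames, axis=1))   # magnitude per frame
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sample_rate)
    return freqs, spectra.mean(axis=0)

# Sanity check: the LTAS of a pure 1 kHz tone should peak near 1 kHz.
sr = 16000
t = np.arange(sr) / sr
freqs, ltas = long_term_average_spectrum(np.sin(2 * np.pi * 1000 * t), sr)
```

Matching the LTAS of a tone sequence to that of a spoken sentence, as the study describes, amounts to equating this averaged spectral profile across the two context types while discarding the phonetic content.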


Journal of the Acoustical Society of America | 2003

Central locus for nonspeech context effects on phonetic identification (L)

Andrew J. Lotto; Sarah C. Sullivan; Lori L. Holt

Recently, Holt and Lotto [Hear. Res. 167, 156–169 (2002)] reported that preceding speech sounds can influence phonetic identification of a target syllable even when the context sounds are presented to the opposite ear or when there is a long intervening silence. These results led them to conclude that phonetic context effects are mostly due to nonperipheral auditory interactions. In the present paper, similar presentation manipulations were made with nonspeech context sounds. The results agree qualitatively with the results for speech contexts. Taken together, these findings suggest that the same nonperipheral mechanisms may be responsible for effects of both speech and nonspeech context on phonetic identification.

Collaboration


Dive into Andrew J. Lotto's collaborations.

Top Co-Authors

Lori L. Holt
Carnegie Mellon University

Keith R. Kluender
University of Wisconsin-Madison

Randy L. Diehl
University of Texas at Austin

Julie M. Liss
Arizona State University