Publication


Featured research published by Keith R. Kluender.


Attention Perception & Psychophysics | 1998

General contrast effects in speech perception: Effect of preceding liquid on stop consonant identification

Andrew J. Lotto; Keith R. Kluender

When members of a series of synthesized stop consonants varying acoustically in F3 characteristics and varying perceptually from /da/ to /ga/ are preceded by /al/, subjects report hearing more /ga/ syllables relative to when each member is preceded by /ar/ (Mann, 1980). It has been suggested that this result demonstrates the existence of a mechanism that compensates for coarticulation via tacit knowledge of articulatory dynamics and constraints, or through perceptual recovery of vocal-tract dynamics. The present study was designed to assess the degree to which these perceptual effects are specific to qualities of human articulatory sources. In three experiments, series of consonant-vowel (CV) stimuli varying in F3-onset frequency (/da/-/ga/) were preceded by speech versions or nonspeech analogues of /al/ and /ar/. The effect of liquid identity on stop consonant labeling remained when the preceding VC was produced by a female speaker and the CV syllable was modeled after a male speaker's productions. Labeling boundaries also shifted when the CV was preceded by a sine wave glide modeled after F3 characteristics of /al/ and /ar/. Identifications shifted even when the preceding sine wave was of constant frequency equal to the offset frequency of F3 from a natural production. These results suggest an explanation in terms of general auditory processes as opposed to recovery of or knowledge of specific articulatory dynamics.
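For readers who want a feel for the nonspeech analogues used here, below is a minimal sketch of sine-wave glide synthesis. The sampling rate, durations, and endpoint frequencies are illustrative assumptions, not the stimulus parameters from the paper; only the direction of each glide follows the phonetics (English /l/ has a relatively high F3, /r/ a markedly low one).

```python
import numpy as np

def sine_glide(f_start, f_end, dur_s, sr=16000):
    """Sinusoid whose frequency moves linearly from f_start to f_end (Hz)."""
    t = np.arange(int(dur_s * sr)) / sr
    inst_freq = f_start + (f_end - f_start) * (t / dur_s)  # linear trajectory
    phase = 2 * np.pi * np.cumsum(inst_freq) / sr          # integrate frequency
    return np.sin(phase)

# Hypothetical F3-like trajectories (all values assumed for illustration):
al_analogue = sine_glide(2500, 2700, 0.050)  # rises toward a high /l/-like F3
ar_analogue = sine_glide(2500, 1700, 0.050)  # falls toward a low /r/-like F3
```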


Journal of the Acoustical Society of America | 2000

Neighboring spectral content influences vowel identification

Lori L. Holt; Andrew J. Lotto; Keith R. Kluender

Four experiments explored the relative contributions of spectral content and phonetic labeling in effects of context on vowel perception. Two 10-step series of CVC syllables ([bVb] and [dVd]) varying acoustically in F2 midpoint frequency and varying perceptually in vowel height from [delta] to [epsilon] were synthesized. In a forced-choice identification task, listeners more often labeled vowels as [delta] in [dVd] context than in [bVb] context. To examine whether spectral content predicts this effect, nonspeech-speech hybrid series were created by appending 70-ms sine-wave glides following the trajectory of CVC F2s to 60-ms members of a steady-state vowel series varying in F2 frequency. In addition, a second hybrid series was created by appending constant-frequency sine-wave tones equivalent in frequency to CVC F2 onset/offset frequencies. Vowels flanked by frequency-modulated glides or steady-state tones modeling [dVd] were more often labeled as [delta] than were the same vowels surrounded by nonspeech modeling [bVb]. These results suggest that spectral content is important in understanding vowel context effects. A final experiment tested whether spectral content can modulate vowel perception when phonetic labeling remains intact. Voiceless consonants, with lower-amplitude more-diffuse spectra, were found to exert less of an influence on vowel perception than do their voiced counterparts. The data are discussed in terms of a general perceptual account of context effects in speech perception.
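The hybrid construction described above can be sketched in the same spirit: a steady tone standing in for the vowel, flanked by sine-wave glides. The durations follow the abstract (70-ms glides, 60-ms steady-state portion); all frequencies are assumptions, and a pure tone is only a stand-in for the actual steady-state vowel tokens.

```python
import numpy as np
from scipy.signal import chirp

sr = 16000
t_glide = np.arange(int(0.070 * sr)) / sr  # 70-ms flanking glides
t_vowel = np.arange(int(0.060 * sr)) / sr  # 60-ms steady-state portion

# Glides move between an assumed consonant locus (2000 Hz) and an assumed
# vowel F2 (1800 Hz), so the hybrid is continuous in frequency.
onset  = chirp(t_glide, f0=2000, t1=t_glide[-1], f1=1800)
vowel  = np.sin(2 * np.pi * 1800 * t_vowel)  # tone standing in for the vowel
offset = chirp(t_glide, f0=1800, t1=t_glide[-1], f1=2000)

hybrid = np.concatenate([onset, vowel, offset])
```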


Speech Communication | 2003

Sensitivity to change in perception of speech

Keith R. Kluender; Jeffry A. Coady; Michael Kiefte

Perceptual systems in all modalities are predominantly sensitive to stimulus change, and many examples of perceptual systems responding to change can be portrayed as instances of enhancing contrast. Multiple findings from perception experiments serve as evidence for spectral contrast explaining fundamental aspects of perception of coarticulated speech, and these findings are consistent with a broad array of known psychoacoustic and neurophysiological phenomena. Beyond coarticulation, important characteristics of speech perception that extend across broader spectral and temporal ranges may best be accounted for by the constant calibration of perceptual systems to maximize sensitivity to change.
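As a toy illustration of contrast enhancement (not the authors' model), one can push a target spectrum away from the average spectrum of its preceding context, so that whatever the context emphasized is de-emphasized in the target. The weighting factor below is arbitrary.

```python
import numpy as np

def contrast_enhanced(target_spectrum, context_spectra, weight=0.5):
    """Subtract a weighted context average from a target magnitude spectrum,
    attenuating regions where the context had energy and emphasizing change.
    `weight` is an arbitrary assumption."""
    context_mean = np.mean(context_spectra, axis=0)
    return np.maximum(target_spectrum - weight * context_mean, 0.0)
```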


Proceedings of the National Academy of Sciences of the United States of America | 2010

Cochlea-scaled entropy, not consonants, vowels, or time, best predicts speech intelligibility.

Christian E. Stilp; Keith R. Kluender

Speech sounds are traditionally divided into consonants and vowels. When only vowels or only consonants are replaced by noise, listeners are more accurate at understanding sentences in which consonants are replaced but vowels remain. From such data, vowels have been suggested to be more important for understanding sentences; however, such conclusions are mitigated by the fact that replaced consonant segments were roughly one-third shorter than vowels. We report two experiments that demonstrate listener performance to be better predicted by simple psychoacoustic measures of cochlea-scaled spectral change across time. First, listeners identified sentences in which portions of consonants (C), vowels (V), CV transitions, or VC transitions were replaced by noise. Relative intelligibility was not well accounted for on the basis of Cs, Vs, or their transitions. In a second experiment, distinctions between Cs and Vs were abandoned. Instead, portions of sentences were replaced on the basis of cochlea-scaled spectral entropy (CSE). Sentence segments having relatively high, medium, or low entropy were replaced with noise. Intelligibility decreased linearly as the amount of replaced CSE increased. Duration of signal replaced and proportion of consonants/vowels replaced fail to account for listener data. CSE corresponds closely with the linguistic construct of sonority (or vowel-likeness) that is useful for describing phonological systematicity, especially syllable composition. Results challenge traditional distinctions between consonants and vowels. Speech intelligibility is better predicted by nonlinguistic sensory measures of uncertainty (potential information) than by orthodox physical acoustic measures or linguistic constructs.
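A rough sketch of the change measure is below. The published metric is computed on cochlea-scaled (ERB-spaced) filter outputs; to keep the sketch short, plain FFT magnitude spectra stand in for the filterbank, which is an explicit simplification.

```python
import numpy as np

def spectral_change(signal, sr=16000, slice_ms=16):
    """Euclidean distance between magnitude spectra of adjacent short slices.
    (Stand-in for cochlea-scaled spectra: plain FFT bins, no filterbank.)"""
    x = np.asarray(signal, dtype=float)
    hop = int(sr * slice_ms / 1000)
    n = (len(x) // hop) * hop
    frames = x[:n].reshape(-1, hop)                # consecutive slices
    spectra = np.abs(np.fft.rfft(frames, axis=1))  # magnitude spectrum per slice
    return np.linalg.norm(np.diff(spectra, axis=0), axis=1)
```

Slices with the largest successive distances would be the candidates for "high-entropy" replacement in the sense of the abstract.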


Journal of the Acoustical Society of America | 1983

A possible auditory basis for internal structure of phonetic categories

Joanne L. Miller; Cynthia M. Connine; Trude M. Schermer; Keith R. Kluender

We used a selective adaptation procedure to investigate the possibility that differences in the degree to which stimuli within a phonetic category are considered to be good exemplars of the category (that is, differences in perceived category goodness) have a basis at a prephonetic, auditory level of processing. For three different phonetic contrasts (/b-p/, /d-g/, /b-w/), we assessed the relative magnitude of adaptation along a stimulus continuum produced by a variety of stimuli from the continuum belonging to a given phonetic category. For all three phonetic contrasts, nonmonotonic adaptation functions were obtained: As the adaptor moved away from the category boundary, there was an initial increase in adaptation, followed by a subsequent decrease. On the assumption that selective adaptation taps a prephonetic, auditory level of processing, these findings permit the following conclusions. First, at an auditory level there is a limit on the range of stimuli along a continuum that is treated as relevant to a given contrast; that is, the stimuli along a continuum are effectively grouped into auditory categories. Second, stimuli within an auditory category vary in their effectiveness as category members, providing an internal structure to the categories. Finally, this internal category structure at the auditory level, revealed by the adaptation procedure, may provide a basis for differences in perceived category goodness at the phonetic level.


Attention Perception & Psychophysics | 1992

Effects of glide slope, noise intensity, and noise duration on the extrapolation of FM glides through noise

Keith R. Kluender

Listeners are quite adept at maintaining integrated perceptual events in environments that are frequently noisy. Three experiments were conducted to assess the mechanisms by which listeners maintain continuity for upward sinusoidal glides that are interrupted by a period of broadband noise. The first two experiments used stimulus complexes consisting of three parts: prenoise glide, broadband noise interval, and postnoise glide. For a given prenoise glide and noise interval, the subject’s task was to adjust the onset frequency of a same-slope postnoise glide so that, together with the prenoise glide and noise, the complex sounded as “smooth and continuous as possible.” The slope of the glides (1.67, 3.33, 5, and 6.67 Bark/sec) as well as the duration (50, 200, and 350 msec) and relative level of the interrupting noise (0, −6, and −12 dB S/N) were varied. For all but the shallowest glides, subjects consistently adjusted the postnoise portion of the glide to frequencies lower than predicted by accurate extrapolation of the prenoise portion. Curiously, for the shallowest glides, subjects consistently selected postnoise glide onset-frequency values higher than predicted by accurate extrapolation of the prenoise glide. There was no effect of noise level on subjects’ adjustments in the first two experiments. The third experiment used a signal detection task to measure the phenomenal experience of continuity through the noise. Frequency glides were either present or absent during the noise for stimuli like those used in the first two experiments as well as for stimuli that had no prenoise or postnoise glides. Subjects were more likely to report the presence of glides in the noise when none occurred (false positives) when noise was shorter or of greater relative level and when glides were present adjacent to the noise.
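The glide slopes above are expressed in Bark/sec. One common Hz-to-Bark conversion is Traunmüller's (1990) formula; using it here to express a slope is an assumption about bookkeeping, not a claim about the paper's exact procedure.

```python
def hz_to_bark(f_hz):
    """Traunmüller (1990) approximation of the Bark scale."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

def glide_slope_bark_per_sec(f_start_hz, f_end_hz, dur_s):
    return (hz_to_bark(f_end_hz) - hz_to_bark(f_start_hz)) / dur_s

# Example: a 1000-to-2000-Hz glide over 1 s spans about
# 13.01 - 8.53 ≈ 4.5 Bark, i.e. a slope of ~4.5 Bark/sec, within the
# 1.67-6.67 Bark/sec range used above.
```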


International Journal of Language & Communication Disorders | 2010

Role of phonotactic frequency in nonword repetition by children with specific language impairments

Jeffry A. Coady; Julia L. Evans; Keith R. Kluender

Background: Children with specific language impairments (SLI) repeat nonwords less accurately than typically developing children, suggesting a phonological deficit. Much work has attempted to explain these results in terms of a phonological memory deficit. However, subsequent work revealed that these results might be explained better as a deficit in phonological sensitivity.
Aims: This study used a nonword repetition task to examine how children with SLI extract phonological regularities from their language input.
Methods & Procedures: Eighteen English-speaking children with SLI (7;3-10;6) and 18 age-matched controls participated in two English nonword repetition tasks. Three- and four-syllable nonwords varied in a single phonotactic frequency manipulation, either consonant frequency or phoneme co-occurrence frequency, while all other factors were held constant. Repetitions were scored for accuracy as either the percentage of phonemes correctly produced or the percentage of phoneme co-occurrences (diphones) correctly produced. In addition, onset-to-onset reaction times and repetition durations were measured.
Outcomes & Results: Accuracy results revealed significant group, length, and phonotactic frequency effects. Children with SLI repeated nonwords less accurately than age-matched peers, and all children repeated three-syllable nonwords and those with higher-frequency phonotactic patterns more accurately. However, phonotactic frequency by group interactions were not significant. Timing results were mixed, with group reaction time differences for co-occurrence frequency, but not consonant frequency, and no group repetition duration differences.
Conclusions & Implications: While children with SLI were less accurate overall, non-significant interactions indicate that both groups of children were comparably affected by differences in consonant and diphone frequency.
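The two accuracy scores can be sketched as follows. Position-by-position comparison of transcribed phoneme sequences is a simplification (actual scoring may involve alignment), and the example strings are made up.

```python
def percent_phonemes_correct(target, response):
    """Percentage of phonemes matching position-by-position."""
    hits = sum(t == r for t, r in zip(target, response))
    return 100.0 * hits / len(target)

def percent_diphones_correct(target, response):
    """Percentage of adjacent phoneme pairs (diphones) matching."""
    t_pairs = list(zip(target, target[1:]))
    r_pairs = list(zip(response, response[1:]))
    hits = sum(t == r for t, r in zip(t_pairs, r_pairs))
    return 100.0 * hits / len(t_pairs)

# Hypothetical transcriptions, one phoneme per character:
# percent_phonemes_correct("batoki", "padoki") -> ~66.7
# percent_diphones_correct("batoki", "padoki") -> 40.0
```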


Psychological Science | 2007

Does Grammar Constrain Statistical Learning? Commentary on Bonatti, Peña, Nespor, and Mehler (2005)

James L. Keidel; Keith R. Kluender; Mark S. Seidenberg

Many studies indicate that statistical learning plays an important role in language acquisition (Saffran & Sahni, in press). Bonatti, Peña, Nespor, and Mehler (2005) presented evidence that such learning is guided by innate grammatical knowledge, a potentially important discovery. Here we provide data suggesting that their results may instead reflect facts about French that can be learned from experience. In the study by Bonatti et al. (2005), subjects learned an artificial language in which the words were strings of alternating consonants (C) and vowels (V): CVCVCV (e.g., puragi). In one condition, transition probabilities between consonants within a word were 1.0, but vowels varied: For example, the consonants p_r_g_ appeared only in that order, but each blank could be filled by multiple vowels. In a second condition, within-word transition probabilities between vowels were 1.0, and consonants varied (e.g., _u_e_ã). The key finding was that subjects picked up on the statistical regularity for consonants, but not vowels.
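The statistic at issue is easy to make concrete. The sketch below computes within-word transition probabilities between successive consonants; "puragi" is from the study's description, while the second toy word and the consonant inventory are hypothetical.

```python
from collections import Counter

def consonant_transition_probs(words, consonants=set("bdgklmnprst")):
    """P(c2 follows c1) over successive within-word consonants."""
    pair_counts, first_counts = Counter(), Counter()
    for w in words:
        cs = [ch for ch in w if ch in consonants]
        for c1, c2 in zip(cs, cs[1:]):
            pair_counts[(c1, c2)] += 1
            first_counts[c1] += 1
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

# In a toy language where p is always followed by r and r by g:
# consonant_transition_probs(["puragi", "piroge"])[("p", "r")] == 1.0
```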


Attention Perception & Psychophysics | 2010

Auditory color constancy: Calibration to reliable spectral properties across nonspeech context and targets

Christian E. Stilp; Joshua M. Alexander; Michael Kiefte; Keith R. Kluender

Brief experience with reliable spectral characteristics of a listening context can markedly alter perception of subsequent speech sounds, and parallels have been drawn between auditory compensation for listening context and visual color constancy. In order to better evaluate such an analogy, the generality of acoustic context effects for sounds with spectral-temporal compositions distinct from speech was investigated. Listeners identified nonspeech sounds—extensively edited samples produced by a French horn and a tenor saxophone—following either resynthesized speech or a short passage of music. Preceding contexts were “colored” by spectral envelope difference filters, which were created to emphasize differences between French horn and saxophone spectra. Listeners were more likely to report hearing a saxophone when the stimulus followed a context filtered to emphasize spectral characteristics of the French horn, and vice versa. Despite clear changes in apparent acoustic source, the auditory system calibrated to relatively predictable spectral characteristics of filtered context, differentially affecting perception of subsequent target nonspeech sounds. This calibration to listening context and relative indifference to acoustic sources operates much like visual color constancy, for which reliable properties of the spectrum of illumination are factored out of perception of color.
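A rough sketch of a spectral envelope difference filter is given below: the dB difference between the two instruments' long-term average spectra serves as a frequency response imposed on the context via FFT filtering. Every detail (the averaging, the gain of 0.5, the filtering method) is an assumption; only the overall idea comes from the abstract.

```python
import numpy as np

def color_context(context, horn_spec_db, sax_spec_db, gain=0.5):
    """Filter `context` to emphasize the horn's spectral envelope.
    horn_spec_db and sax_spec_db are long-term average spectra (dB)
    sampled at the rFFT bins of `context`."""
    diff_db = gain * (horn_spec_db - sax_spec_db)  # horn-minus-sax envelope
    H = 10 ** (diff_db / 20.0)                     # dB -> linear gain per bin
    return np.fft.irfft(np.fft.rfft(context) * H, n=len(context))
```

Swapping the two spectra in the call produces the saxophone-emphasizing context; by the result above, each coloring should bias identification of the following target toward the other instrument.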


Journal of the Acoustical Society of America | 2003

Effects of contrast between onsets of speech and other complex spectra

Jeffry A. Coady; Keith R. Kluender; William S. Rhode

Previous studies using speech and nonspeech analogs have shown that auditory mechanisms which serve to enhance spectral contrast contribute to perception of coarticulated speech for which spectral properties assimilate over time. In order to better understand the nature of contrastive auditory processes, a series of CV syllables varying acoustically in F2-onset frequency and perceptually from /ba/ to /da/ was identified following a variety of spectra including three-peak renditions of [e] and [o], one-peak simulations of only F2, and spectral complements of these spectra for which peaks are replaced with troughs. Results for three- versus one-peak (or trough) precursor spectra were practically indistinguishable, suggesting that effects were spectrally local and not dependent upon perception of precursors as speech. Complementary (trough) spectra had complementary effects on perception of following stops; however, effects for spectral complements were particularly dependent upon the interval between precursor and CV onsets. Results from these studies cannot be explained by simple masking or adaptation of suppression. Instead, they provide evidence for the existence of processes that selectively enhance contrast between onset spectra of neighboring sounds, and these processes are relevant for perception of connected speech.
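To make the peak/trough manipulation concrete, here is a toy precursor generator: noise shaped by a single Gaussian spectral peak at an F2-like frequency, or by the complementary trough. Noise is only a stand-in for the paper's precursor spectra, and the center frequency, bandwidth, and envelope depths are assumptions.

```python
import numpy as np

def shaped_noise(dur_s=0.3, sr=16000, f_peak=1800.0, bw=200.0, trough=False):
    """White noise with a single Gaussian spectral peak (or matching trough)."""
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(int(dur_s * sr))
    freqs = np.fft.rfftfreq(len(noise), 1 / sr)
    bump = np.exp(-0.5 * ((freqs - f_peak) / bw) ** 2)  # Gaussian envelope
    envelope = (1.0 - 0.9 * bump) if trough else (0.1 + 0.9 * bump)
    return np.fft.irfft(np.fft.rfft(noise) * envelope, n=len(noise))

peak_precursor   = shaped_noise()             # single peak near an F2 value
trough_precursor = shaped_noise(trough=True)  # its spectral complement
```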

Collaboration


Dive into Keith R. Kluender's collaboration.

Top Co-Authors

Lori L. Holt

Carnegie Mellon University

Randy L. Diehl

University of Texas at Austin

Julia L. Evans

University of California

Timothy T. Rogers

University of Wisconsin-Madison

Ellen M. Parker

University of Texas at Austin
