Michael Kiefte
Dalhousie University
Publications
Featured research published by Michael Kiefte.
Speech Communication | 2003
Keith R. Kluender; Jeffry A. Coady; Michael Kiefte
Perceptual systems in all modalities are predominantly sensitive to stimulus change, and many examples of perceptual systems responding to change can be portrayed as instances of enhancing contrast. Multiple findings from perception experiments serve as evidence for spectral contrast explaining fundamental aspects of perception of coarticulated speech, and these findings are consistent with a broad array of known psychoacoustic and neurophysiological phenomena. Beyond coarticulation, important characteristics of speech perception that extend across broader spectral and temporal ranges may best be accounted for by the constant calibration of perceptual systems to maximize sensitivity to change.
Otology & Neurotology | 2004
Manohar Bance; David P. Morris; Rene G. VanWijhe; Michael Kiefte; W. Robert J. Funnell
Hypothesis: Ossiculoplasty using prosthetic reconstruction with a malleus assembly to the stapes head will result in better transmission of vibrations from the eardrum to the stapes footplate than reconstruction with a tympanic membrane assembly to the stapes head. Both types of reconstruction will be affected by tension of the prosthesis.

Background: Theories (and some clinical studies) that the shape of the normal tympanic membrane is important suggest that prosthetic reconstruction to the malleus performs better than reconstruction to the tympanic membrane. This has not been previously tested by directly measuring vibration responses in the human ear. Our previous work suggests that tympanic membrane assembly to the stapes head type prostheses performed best under low tension. This had not been previously tested for malleus assembly to the stapes head type prostheses.

Methods: Hydroxyapatite prostheses were used to reconstruct a missing incus defect in a fresh cadaveric human ear model. Two types of prostheses were used, one from the stapes head to the malleus (malleus assembly to the stapes head), the other from the stapes head to the tympanic membrane (tympanic membrane assembly to the stapes head). Stapes footplate center responses were measured using a laser Doppler vibrometer in response to calibrated acoustic frequency sweeps.

Results: Tension had a very significant effect on both types of prostheses in the lower frequencies. Loose tension was best overall. The malleus assembly to the stapes head type prostheses consistently performed better than the tympanic membrane assembly to the stapes head type prostheses when stratified for tension.

Conclusion: Tension has a significant effect on prosthesis function. Malleus assembly to the stapes head type prostheses generally result in better transmission of vibrations to the stapes footplate than tympanic membrane assembly to the stapes head type prostheses.
Laryngoscope | 2004
David P. Morris; Manohar Bance; Rene G. Van Wijhe; Michael Kiefte; Rachael Smith
Objective: Hearing results from ossiculoplasty are unpredictable. There are many potentially modifiable parameters. One parameter that has not been adequately investigated in the past is the effect of tension on the mechanical functioning of the prosthesis. Our goal was to investigate this parameter further, with the hypothesis that the mechanical functioning of partial ossicular replacement prostheses (PORP) from the stapes head to the eardrum will be affected by the tension that they are placed under.
Journal of Fluency Disorders | 2008
Jennifer J. O'Donnell; Joy Armson; Michael Kiefte
A multiple single-subject design was used to examine the effects of SpeechEasy on stuttering frequency in the laboratory and in longitudinal samples of speech produced in situations of daily living (SDL). Seven adults who stutter participated, all of whom had exhibited at least 30% reduction in stuttering frequency while using SpeechEasy during previous laboratory assessments. For each participant, speech samples recorded in the laboratory and SDL during device use were compared to samples obtained in those settings without the device. In SDL, stuttering frequencies were recorded weekly for 9-16 weeks during face-to-face and phone conversations. Participants also provided data regarding device tolerance and perceived benefits. Laboratory assessments were conducted at the beginning and the end of the longitudinal data collection in SDL. All seven participants exhibited reduced stuttering in self-formulated speech in the Device compared to No-device condition during the first laboratory assessment. In the second laboratory assessment, four participants exhibited less stuttering and three exhibited more stuttering with the device than without. In SDL, five of seven participants exhibited some instances of reduced stuttering when wearing the device and three of these exhibited relatively stable amounts of stuttering reduction during long-term use. Five participants reported positive changes in speaking-related attitudes and perceptions of stuttering. Further investigation into the short- and long-term effectiveness of SpeechEasy in SDL is warranted.

EDUCATIONAL OBJECTIVES: The reader will be able to: (1) summarize issues pertinent to evaluating treatment benefits of wearable fluency aids; and (2) evaluate the effect of SpeechEasy on stuttering frequency and the perceived benefits of device use in situations of daily living, as assessed weekly over the course of 9-16 weeks of wear, for seven adults who stutter.
Attention Perception & Psychophysics | 2010
Christian E. Stilp; Joshua M. Alexander; Michael Kiefte; Keith R. Kluender
Brief experience with reliable spectral characteristics of a listening context can markedly alter perception of subsequent speech sounds, and parallels have been drawn between auditory compensation for listening context and visual color constancy. In order to better evaluate such an analogy, the generality of acoustic context effects for sounds with spectral-temporal compositions distinct from speech was investigated. Listeners identified nonspeech sounds—extensively edited samples produced by a French horn and a tenor saxophone—following either resynthesized speech or a short passage of music. Preceding contexts were “colored” by spectral envelope difference filters, which were created to emphasize differences between French horn and saxophone spectra. Listeners were more likely to report hearing a saxophone when the stimulus followed a context filtered to emphasize spectral characteristics of the French horn, and vice versa. Despite clear changes in apparent acoustic source, the auditory system calibrated to relatively predictable spectral characteristics of filtered context, differentially affecting perception of subsequent target nonspeech sounds. This calibration to listening context and relative indifference to acoustic sources operates much like visual color constancy, for which reliable properties of the spectrum of illumination are factored out of perception of color.
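The "spectral envelope difference filter" idea in this abstract can be illustrated with a minimal sketch (hypothetical code, not the authors' actual stimuli-generation procedure): estimate smoothed magnitude spectra of two sources, take their ratio as a gain curve that emphasizes one source's spectral character, and apply that gain to a context signal via the FFT. All function names and toy signals below are illustrative assumptions.

```python
import numpy as np

def envelope(x, n_fft=1024, smooth=9):
    """Crude spectral envelope: smoothed magnitude spectrum."""
    mag = np.abs(np.fft.rfft(x, n_fft))
    kernel = np.ones(smooth) / smooth
    return np.convolve(mag, kernel, mode="same")

def difference_filter(src_a, src_b, n_fft=1024, eps=1e-8):
    """Gain curve emphasizing frequencies where src_a has more energy than src_b."""
    return (envelope(src_a, n_fft) + eps) / (envelope(src_b, n_fft) + eps)

def apply_gain(context, gain, n_fft=1024):
    """Color a context signal with the gain curve (zero-phase, via FFT)."""
    spec = np.fft.rfft(context, n_fft)
    return np.fft.irfft(spec * gain, n_fft)

# Toy stand-ins for the two instruments: energy at different frequencies
fs = 8000
t = np.arange(1024) / fs
horn = np.sin(2 * np.pi * 500 * t)    # low-frequency emphasis
sax = np.sin(2 * np.pi * 2000 * t)    # high-frequency emphasis

gain = difference_filter(horn, sax)           # "horn-colored" filter
colored = apply_gain(np.random.randn(1024), gain)
```

On this construction, a context filtered with `gain` carries the horn's spectral emphasis; per the abstract's result, listeners calibrate to that coloration and become more likely to report the contrasting source (the saxophone) in a following target.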
Journal of the Acoustical Society of America | 2008
Michael Kiefte; Keith R. Kluender
Several experiments are described in which synthetic monophthongs from series varying between /i/ and /u/ are presented following filtered precursors. In addition to F(2), target stimuli vary in spectral tilt by applying a filter that either raises or lowers the amplitudes of higher formants. Previous studies have shown that both of these spectral properties contribute to identification of these stimuli in isolation. However, in the present experiments we show that when a precursor sentence is processed by the same filter used to adjust spectral tilt in the target stimulus, listeners identify synthetic vowels on the basis of F(2) alone. Conversely, when the precursor sentence is processed by a single-pole filter with center frequency and bandwidth identical to that of the F(2) peak of the following vowel, listeners identify synthetic vowels on the basis of spectral tilt alone. These results show that listeners ignore spectral details that are unchanged in the acoustic context. Instead of identifying vowels on the basis of incorrect acoustic information, however (e.g., all vowels are heard as /i/ when second formant is perceptually ignored), listeners discriminate the vowel stimuli on the basis of the more informative spectral property.
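The spectral-tilt manipulation described here can be sketched as a constant dB-per-octave slope imposed above a reference frequency, raising or lowering the amplitudes of higher formants relative to lower ones. This is an assumed simplification for illustration, not the filter actually used in the experiments.

```python
import numpy as np

def apply_tilt(x, fs, tilt_db_per_oct, f_ref=500.0, n_fft=2048):
    """Tilt the spectrum by a constant dB/octave slope above f_ref (Hz)."""
    spec = np.fft.rfft(x, n_fft)
    freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)
    octaves = np.log2(np.maximum(freqs, f_ref) / f_ref)   # 0 below f_ref
    gain = 10.0 ** (tilt_db_per_oct * octaves / 20.0)
    return np.fft.irfft(spec * gain, n_fft)

fs = 16000
t = np.arange(2048) / fs
# Two equal-amplitude partials standing in for a low and a high formant
x = np.sin(2 * np.pi * 500 * t) + np.sin(2 * np.pi * 4000 * t)
y = apply_tilt(x, fs, tilt_db_per_oct=-6.0)   # attenuate higher partials
```

A -6 dB/octave tilt leaves the 500 Hz component untouched and attenuates the 4000 Hz component (three octaves higher) by about 18 dB, mimicking a "lowered higher formants" target.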
Journal of the Acoustical Society of America | 2002
Alberto Recio; William S. Rhode; Michael Kiefte; Keith R. Kluender
Previous studies of auditory-nerve fiber (ANF) representation of vowels in cats and rodents (chinchillas and guinea pigs) have shown that, at amplitudes typical for conversational speech (60-70 dB), neuronal firing rate as a function of characteristic frequency alone provides a poor representation of spectral prominences (e.g., formants) of speech sounds. However, ANF rate representations may not be as inadequate as they appear. Here, it is investigated whether some of this apparent inadequacy owes to the mismatch between animal and human cochlear characteristics. For all animal models tested in earlier studies, the basilar membrane is shorter and encompasses a broader range of frequencies than that of humans. In this study, a customized speech synthesizer was used to create a rendition of the vowel [E] with formant spacing and bandwidths that fit the cat cochlea in proportion to the human cochlea. In these vowels, the spectral envelope is matched to cochlear distance rather than to frequency. Recordings of responses to this cochlear normalized [E] in auditory-nerve fibers of cats demonstrate that rate-based encoding of vowel sounds is capable of distinguishing spectral prominences even at 70-80-dB SPL. When cochlear dimensions are taken into account, rate encoding in ANF appears more informative than was previously believed.
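The species mismatch motivating this study can be made concrete with Greenwood's frequency-position function, F = A(10^(ax) - k), which maps relative position x along the basilar membrane to characteristic frequency. The constants below are commonly cited values for human and cat cochleae, stated here as assumptions for illustration rather than the exact parameters used by the authors.

```python
import numpy as np

def greenwood_freq(x, A, a=2.1, k=0.88):
    """Characteristic frequency (Hz) at relative cochlear position x in [0, 1]
    (0 = apex, 1 = base), per Greenwood's F = A * (10**(a*x) - k)."""
    return A * (10.0 ** (a * x) - k)

def greenwood_pos(f, A, a=2.1, k=0.88):
    """Inverse map: relative cochlear position of characteristic frequency f."""
    return np.log10(f / A + k) / a

# Commonly cited constants (assumed): human A=165.4, k=0.88; cat A=456, k=0.8
F1, F2 = 500.0, 1700.0   # rough first and second formants of a vowel like [E]

human_span = greenwood_pos(F2, 165.4) - greenwood_pos(F1, 165.4)
cat_span = greenwood_pos(F2, 456.0, k=0.8) - greenwood_pos(F1, 456.0, k=0.8)
# The same formant pair occupies a smaller fraction of the cat's cochlear map,
# so a "cochlear normalized" vowel respaces formants to restore the human span.
```

Under these constants the F1-F2 pair spans a noticeably smaller fraction of the cat map than of the human map, which is the mismatch the cochlear-normalized vowel stimuli were designed to remove.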
Handbook of Psycholinguistics (Second Edition) | 2006
Keith R. Kluender; Michael Kiefte
Conceptualizing speech perception as a process by which phonemes are retrieved from acoustic signals is traditional. Within this tradition, research in speech perception has often focused on problems concerning segmentation and lack of invariance. The problem of segmentation refers to the fact that if phonetic units exist, they are not like typed letters on a page. Instead, they overlap extensively in time, much like cursive handwriting. The problem of lack of invariance is related to the segmentation problem. Because speech sounds are produced such that articulations for one consonant or vowel overlap with the production of preceding ones and vice versa, every consonant and vowel produced in fluent connected speech is dramatically colored by its neighbors. Some of the most recalcitrant problems in the study of speech perception are the consequence of adopting discrete phonetic units as a level of analysis, a level that is not discrete and may not be real. In connected speech, the acoustic realization of the beginning and end of one word also overlaps with the sounds of preceding and following words; hence the problems of invariance and segmentation are not restricted to phonetic units. Speech perception follows a handful of general principles that are implemented in both sophisticated and not-so-sophisticated ways through the chain of processing from the periphery through the central nervous system.
Journal of the Acoustical Society of America | 2010
Michael Kiefte; Teresa Enright; Lacey Marshall
Although recent evidence reconfirmed the importance of spectral peak frequencies in vowel identification [Kiefte and Kluender (2005). J. Acoust. Soc. Am. 117, 1395-1404], the role of formant amplitude in perception remains somewhat controversial. Although several studies have demonstrated a relationship between vowel perception and formant amplitude, this effect may be a result of basic auditory phenomena such as decreased local spectral contrast and simultaneous masking. This study examines the roles that local spectral contrast and simultaneous masking play in the relationship between the amplitude of spectral peaks and the perception of vowel stimuli. Both full- and incomplete-spectrum stimuli were used in an attempt to separate the effects of local spectral contrast and simultaneous masking. A second experiment was conducted to measure the detectability of the presence/absence of a formant peak to determine to what extent identification data could be predicted from spectral peak audibility alone. Results from both experiments indicate that, while both masking and spectral contrast likely play important roles in vowel perception, additional factors must be considered in order to account for vowel identification data. Systematic differences between the audibility of spectral peaks and predictions of perceived vowel identity were observed.
Acoustics Research Letters Online-arlo | 2002
Michael Kiefte; Keith R. Kluender; William S. Rhode
Behavioral or neural measures of speech encoding are often taken from animals with auditory systems that differ substantially from those of humans. Absolute distance between spectral peaks of speech sounds along the basilar membrane is typically much greater in humans than in smaller animals. To address this problem, a synthesizer was developed for creating speech scaled for nonhuman cochleae in which spectra are warped to account for differences in cochlear physiology. Absolute cochlear distance between spectral peaks (formants) is held constant across species while formant bandwidths are also scaled to span equal numbers of hair cells as for humans. This was accomplished via a flexible reimplementation of the KLATT80 speech synthesizer in MATLAB.