Publications
Featured research published by Martin F. McKinney.
Journal of the Acoustical Society of America | 1999
Martin F. McKinney; Bertrand Delgutte
Although the physical octave is defined as a simple ratio of 2:1, listeners prefer slightly greater octave ratios. Ohgushi [J. Acoust. Soc. Am. 73, 1694-1700 (1983)] suggested that a temporal model for octave matching would predict this octave enlargement effect because, in response to pure tones, auditory-nerve interspike intervals are slightly larger than the stimulus period. In an effort to test Ohgushi's hypothesis, auditory-nerve single-unit responses to pure-tone stimuli were collected from Dial-anesthetized cats. It was found that although interspike interval distributions show clear phase-locking to the stimulus, intervals systematically deviate from integer multiples of the stimulus period. Due to refractory effects, intervals smaller than 5 msec are slightly larger than the stimulus period and deviate most for small intervals. On the other hand, first-order intervals are smaller than the stimulus period for stimulus frequencies less than 500 Hz. It is shown that this deviation is the combined effect of phase-locking and multiple spikes within one stimulus period. A model for octave matching was implemented which compares frequency estimates of two tones based on their interspike interval distributions. The model quantitatively predicts the octave enlargement effect. These results are consistent with the idea that musical pitch is derived from auditory-nerve interspike interval distributions.
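The temporal model can be illustrated with a short simulation. The sketch below is a loose reconstruction, not the authors' code: it draws phase-locked interspike intervals whose short intervals are inflated by an assumed refractory bias, estimates each tone's frequency from its interval distribution, and scans for the upper tone that best matches the octave of a 500-Hz tone. All parameter values (jitter, refractory constant, interval orders) are illustrative assumptions.

```python
# Hedged sketch of a temporal octave-matching model: simulate phase-locked
# auditory-nerve interspike intervals with a refractory bias, estimate each
# tone's frequency from its intervals, and find the best-matching "octave".
import numpy as np

rng = np.random.default_rng(0)

def simulate_intervals(freq_hz, n_spikes=20000, jitter_sd=0.1e-3,
                       refractory_s=0.5e-3):
    """Draw intervals locked to multiples of the period; refractoriness
    inflates short intervals, as reported in the cat data. Parameters
    (jitter, refractory constant, 1..4 cycle spans) are assumptions."""
    period = 1.0 / freq_hz
    cycles = rng.integers(1, 5, size=n_spikes)
    iv = cycles * period + rng.normal(0.0, jitter_sd, size=n_spikes)
    iv = iv + refractory_s * np.exp(-iv / refractory_s)  # refractory bias
    return iv

def estimate_freq(intervals, f_guess):
    """Frequency estimate: mean interval divided by its nearest cycle count."""
    n = np.maximum(1, np.round(intervals * f_guess))
    return 1.0 / np.mean(intervals / n)

f_low = 500.0
fe_low = estimate_freq(simulate_intervals(f_low), f_low)
# Scan candidate upper tones for the best interval-based octave match.
cands = np.linspace(1.95 * f_low, 2.10 * f_low, 61)
err = [abs(estimate_freq(simulate_intervals(fc), fc) - 2 * fe_low)
       for fc in cands]
best = cands[int(np.argmin(err))]
print(f"matched octave ratio: {best / f_low:.4f}")  # comes out above 2.0
```

With these assumed parameters the matched ratio comes out a few percent above 2:1, qualitatively reproducing the octave enlargement effect.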
international conference on acoustics, speech, and signal processing | 2010
JuanJuan Xiang; Martin F. McKinney; Kelly Fitz; Tao Zhang
Automatic program switching has been shown to be greatly beneficial for hearing aid users. This feature is mediated by a sound classification system, which is traditionally implemented using simple features and heuristic classification schemes, resulting in unsatisfactory performance in complex auditory scenarios. In this study, a number of experiments are conducted to systematically assess the impact of more sophisticated classifiers and features on automatic acoustic environment classification performance. The results show that advanced classifiers, such as the Hidden Markov Model (HMM) or Gaussian Mixture Model (GMM), greatly improve classification performance over simple classifiers. This improvement does not require a great increase in computational complexity, provided that a suitable number (5 to 7) of low-level features are carefully chosen. These findings indicate that advanced classifiers are feasible in hearing aid applications.
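A minimal sketch of the GMM approach described above, with assumed details: five common low-level features (RMS, zero-crossing rate, spectral centroid, rolloff, and flatness are illustrative choices, not necessarily the paper's feature set) feed one diagonal-covariance GMM per acoustic class, and a sound is assigned to the class whose model gives the highest average log-likelihood.

```python
# Hedged sketch: GMM-based acoustic environment classification with a small
# set of low-level frame features. Feature choices and component counts are
# assumptions, not the paper's exact configuration.
import numpy as np
from sklearn.mixture import GaussianMixture

def frame_features(x, sr, frame=1024, hop=512):
    """Per-frame features: RMS, zero-crossing rate, spectral centroid,
    85% rolloff, and spectral flatness (illustrative choices)."""
    rows = []
    freqs = np.fft.rfftfreq(frame, 1.0 / sr)
    for i in range(0, len(x) - frame, hop):
        w = x[i:i + frame] * np.hanning(frame)
        mag = np.abs(np.fft.rfft(w)) + 1e-12
        rms = np.sqrt(np.mean(w ** 2))
        zcr = np.mean(np.abs(np.diff(np.sign(w)))) / 2
        centroid = np.sum(freqs * mag) / np.sum(mag)
        rolloff = freqs[np.searchsorted(np.cumsum(mag), 0.85 * np.sum(mag))]
        flatness = np.exp(np.mean(np.log(mag))) / np.mean(mag)
        rows.append([rms, zcr, centroid, rolloff, flatness])
    return np.array(rows)

class GMMEnvironmentClassifier:
    """One GMM per acoustic class; pick the class with highest likelihood."""
    def __init__(self, n_components=4):
        self.n_components = n_components
        self.models = {}

    def fit(self, labelled_features):  # {class_name: (n_frames, 5) array}
        for name, feats in labelled_features.items():
            gmm = GaussianMixture(self.n_components, covariance_type="diag")
            self.models[name] = gmm.fit(feats)

    def predict(self, feats):
        scores = {n: m.score(feats) for n, m in self.models.items()}
        return max(scores, key=scores.get)
```

Training would call fit with a dict mapping class names (e.g., 'speech', 'music', 'noise') to feature arrays; predict then scores new frames against each trained model.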
international conference on acoustics, speech, and signal processing | 2012
Srikanth Vishnubhotla; Jinjun Xiao; Buye Xu; Martin F. McKinney; Tao Zhang
Perceptual annoyance of environmental sounds is measured for normal-hearing and hearing-impaired listeners under iso-level and iso-loudness conditions. Data from the hearing-impaired listeners show trends similar to those from normal-hearing listeners, but with greater variability across individuals. A regression model based on the statistics of specific loudness and other perceptual features is fit to the data from the normal-hearing listeners and is used to predict annoyance for the hearing-impaired listeners. Differences across the subject populations are discussed.
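The regression step might look like the following sketch; the feature set (here a specific-loudness percentile, a mean sharpness, and a fluctuation statistic) and the toy numbers are assumptions, not the paper's data or exact model form.

```python
# Hedged sketch: fit a linear regression from summary statistics of
# specific loudness and other perceptual features to mean annoyance
# ratings from normal-hearing (NH) listeners, then apply it to
# hearing-impaired (HI) conditions. All values below are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

# Rows: sounds. Columns (illustrative): a loudness percentile statistic,
# mean sharpness, and a roughness/fluctuation statistic.
X_nh = np.array([[12.3, 1.8, 0.31],
                 [25.1, 2.4, 0.55],
                 [ 8.7, 1.1, 0.12],
                 [30.4, 2.9, 0.48]])
y_nh = np.array([3.1, 6.8, 2.0, 7.4])  # mean NH annoyance ratings (toy)

model = LinearRegression().fit(X_nh, y_nh)

# Apply the NH-trained model to features of sounds as heard by HI
# listeners (e.g., computed through a loss-adjusted loudness model).
X_hi = np.array([[14.0, 1.9, 0.35]])
print(model.predict(X_hi))
```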
Journal of the Acoustical Society of America | 2012
Srikanth Vishnubhotla; Jinjun Xiao; Buye Xu; Martin F. McKinney; Tao Zhang
Annoyance perception of environmental noises is an important topic in many fields, including transportation, environmental studies, and hearing aid design. While annoyance perception of normal-hearing (NH) listeners has been studied extensively (e.g., Fastl & Zwicker, 2006; Versfeld & Vos, 1997; Alayrac et al., 2010; Palmer et al., 2006), data on annoyance perception of hearing-impaired (HI) listeners are scant. In this study, we investigate the annoyance perception of typical environmental noises by both NH and HI listeners, using listeners with unilateral hearing loss. We use the magnitude estimation procedure to obtain annoyance ratings and a paired-comparison method to obtain annoyance preferences for different environmental noises. The experimental data are analyzed to reveal the underlying dimensions of annoyance perception and differences between NH and HI listeners. A functional model for annoyance perception is developed for both HI and NH listeners. Finally, potential applications of our results are discussed in the context of hearing aid design.
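One standard way to turn paired-comparison judgments into a preference scale is a Bradley-Terry model; the abstract does not specify the analysis used, so the sketch below is only an assumed illustration. Here wins[i, j] counts how often sound i was judged more annoying than sound j.

```python
# Hedged sketch: Bradley-Terry scaling of paired-comparison annoyance
# judgments via the standard minorization-maximization update. This is an
# assumed analysis choice, not necessarily the paper's method.
import numpy as np

def bradley_terry(wins, n_iter=200):
    n = wins.shape[0]
    s = np.ones(n)  # latent annoyance strengths, one per sound
    for _ in range(n_iter):
        for i in range(n):
            num = wins[i].sum()  # total times sound i "won" (more annoying)
            den = sum((wins[i, j] + wins[j, i]) / (s[i] + s[j])
                      for j in range(n) if j != i)
            s[i] = num / den
        s /= s.sum()  # fix the arbitrary scale
    return s

wins = np.array([[0, 8, 9],
                 [2, 0, 7],
                 [1, 3, 0]])  # toy counts for three sounds
print(bradley_terry(wins))   # higher value = judged more annoying
```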
Journal of the Acoustical Society of America | 2010
Kelly Fitz; Martin F. McKinney
Historically, the primary focus of hearing aid development has been on improving speech perception for those with hearing loss. Modern‐day hearing‐aid wearers, however, face many different types of acoustic signals, such as music, that require different types of processing. Music signals differ from speech signals in a variety of fundamental ways, and relevant perceptual information is conveyed via different signal attributes in the two types of signals. The research described here is an effort to improve music perception in listeners with hearing impairment. First, methods have been developed to quantitatively measure deficits in music perception for impaired and aided listeners. Second, specific perceptual features have been evaluated as to their relative importance in the successful perception of music and that information has been used to guide signal processing development. Finally, the relevant perceptual features have been modeled, and the models have been used to evaluate and compare signal proces...
Journal of the Acoustical Society of America | 2009
Kelly Fitz; Matt Burk; Martin F. McKinney
We examine the impact of hearing loss and hearing aid processing on the perception of musical timbre. Our objective is to identify significant timbre cues for hearing‐impaired listeners, and to assess the impact of hearing aid signal processing on timbre perception. Hearing aids perform dynamic, level‐dependent spectrum shaping that may influence listeners’ perception of musical instrument timbres and their ability to discriminate among them. Grey [“Multidimensional perceptual scaling of musical timbres,” J. Acoust. Soc. Am. 61, 1270 (1977)] showed that sustaining instrument tones equalized for level, loudness, and duration are distinguished primarily along three perceptual dimensions that are strongly correlated with the acoustical dimensions of: (1) spectral energy distribution, (2) spectral fluctuation, and (3) precedent high‐frequency, low‐amplitude energy. Following the work of Grey, we ask listeners having mild to moderately severe sensorineural hearing loss to rate pairs of synthetic musical instru...
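Grey's three perceptual dimensions have well-known acoustic correlates, which a feature extractor might approximate as in the sketch below (a rough illustration with assumed frame sizes and cutoffs, not the study's analysis code): spectral centroid for spectral energy distribution, frame-to-frame spectral flux for spectral fluctuation, and the high-frequency energy fraction of the attack segment for the precedent high-frequency, low-amplitude energy.

```python
# Hedged sketch: acoustic correlates of Grey's three timbre dimensions for
# a monophonic instrument tone x sampled at rate sr. Frame lengths, the
# 4 kHz cutoff, and the 50 ms attack window are assumptions.
import numpy as np

def timbre_correlates(x, sr, frame=2048, hop=512, attack_s=0.05):
    spec = []
    for i in range(0, len(x) - frame, hop):
        spec.append(np.abs(np.fft.rfft(x[i:i + frame] * np.hanning(frame))))
    spec = np.array(spec)
    freqs = np.fft.rfftfreq(frame, 1.0 / sr)
    # (1) Spectral energy distribution: mean spectral centroid.
    centroid = np.mean(spec @ freqs / (spec.sum(axis=1) + 1e-12))
    # (2) Spectral fluctuation: mean frame-to-frame spectral flux.
    norm = spec / (np.linalg.norm(spec, axis=1, keepdims=True) + 1e-12)
    flux = np.mean(np.linalg.norm(np.diff(norm, axis=0), axis=1))
    # (3) Attack transient: high-frequency energy fraction of the onset.
    attack = x[:int(attack_s * sr)]
    hf = np.abs(np.fft.rfft(attack))
    f_a = np.fft.rfftfreq(len(attack), 1.0 / sr)
    hf_ratio = hf[f_a > 4000].sum() / (hf.sum() + 1e-12)
    return centroid, flux, hf_ratio
```

Features like these could then be correlated with listeners' pairwise ratings to see which dimensions survive hearing loss and hearing aid processing.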
Journal of the Acoustical Society of America | 2018
Hao Lu; Martin F. McKinney; Tao Zhang; Andrew J. Oxenham
Although beam-forming algorithms for hearing aids can produce gains in target-to-masker ratio, the wearer’s head will not always be facing the target talker, potentially limiting the value of beam-forming in real-world environments, unless eye movements are also accounted for. The aim of this study was to determine the extent to which head direction and eye gaze track the position of the talker in natural conversational settings. Three groups of participants were recruited: younger listeners, older listeners with clinically normal hearing, and older listeners with mild-to-moderate hearing loss. The experimental set-up included one participant at a time in conversation with two confederates approximately equally spaced around a small round table. Different levels of background noise were introduced by playing background sounds via loudspeakers that surrounded the participants in the conversation. In general, head movements tended to undershoot the position of the current talker, but head and eye movements together generally predicted the current talker position well. Preliminary data revealed no strong effects of hearing loss or background noise level on the amount of time spent looking at the talker, although younger listeners tended to use their eyes, as opposed to head movements, more than the older listeners. [Work supported by Starkey Laboratories.]
Journal of the Acoustical Society of America | 2013
Martin F. McKinney; Kelly Fitz; Sridhar Kalluri; Brent Edwards
We employ computational models of loudness and pitch perception to better understand the impact of sensorineural hearing loss on music perception, with the aim of guiding technology development for hearing-impaired listeners. Traditionally, hearing aid development has been geared toward improving speech intelligibility and has largely failed to provide adequate restoration of music to those with hearing loss. One difficulty in trying to improve music perception in impaired listeners is the absence of a good quantitative measure of music reception, analogous to speech reception measures like word-recognition rate. Psychoacoustic models for loudness and pitch allow us to gauge quantitative parameters relevant to music perception and make predictions about the type of deficits listeners face. We examine the impact of hearing loss on predicted measures of loudness, specific loudness, pitch, and consonance, and make suggestions on possible methods for restoration.
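As a toy illustration of the modeling approach (not the full loudness model the authors likely used, such as Moore and Glasberg's), the sketch below converts band levels to specific loudness with a compressive power law, raises band thresholds to simulate a sloping sensorineural loss, and sums specific loudness across bands; all numbers are assumptions.

```python
# Hedged toy sketch: compare total loudness of a band-level spectrum under
# normal and elevated (hearing-loss) thresholds. The power-law form and
# every constant below are illustrative assumptions.
import numpy as np

def specific_loudness(band_db, threshold_db, alpha=0.2, c=0.05):
    """Compressive specific loudness per band; zero at or below threshold."""
    above = np.maximum(band_db - threshold_db, 0.0)
    return c * ((10 ** (above / 10)) ** alpha - 1.0)

bands_hz = np.array([250, 500, 1000, 2000, 4000, 8000])
band_db = np.array([60, 62, 65, 58, 50, 40], dtype=float)  # a music excerpt

normal_thr = np.zeros_like(band_db)                   # ~0 dB HL
loss_thr = np.array([10, 15, 25, 40, 55, 60], float)  # sloping loss (assumed)

n_normal = specific_loudness(band_db, normal_thr)
n_impaired = specific_loudness(band_db, loss_thr)
print("total loudness (arbitrary sone-like units):",
      n_normal.sum(), "vs", n_impaired.sum())
```

A parallel computation for pitch and consonance models would give the predicted perceptual deficits discussed in the abstract.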
Journal of the Acoustical Society of America | 2013
Susie Valentine; Martin F. McKinney; Tao Zhang
For hearing-impaired (HI) listeners, it is well known that certain sounds are much more annoying than others even though they may have similar spectral shape and level. For example, HI listeners often report paper-rustling noise as highly annoying. A common approach to dealing with this complaint is to reduce high-frequency gain. While this approach may mitigate the complaint, it can create audibility issues for speech. A more effective approach is to determine the underlying cause of annoyance and then design an algorithm to selectively reduce it. While the existing literature on annoyance perception for HI listeners is scant, a previous attempt was made to investigate this perception using real-world recordings (Vishnubhotla et al., 2012). That study showed large variability in annoyance ratings across listeners, which may have been due to subjective associations with the sound sources. In this study, we use abstract psychoacoustic stimuli designed carefully to avoid possible confounding subjective associations...
Archive | 2010
Juan Juan Xiang; Martin F. McKinney; Kelly Fitz; Tao Zhang