Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics in which Brian C. J. Moore is active.

Publication


Featured research published by Brian C. J. Moore.


Hearing Research | 1990

Derivation of auditory filter shapes from notched-noise data

Brian R. Glasberg; Brian C. J. Moore

A well established method for estimating the shape of the auditory filter is based on the measurement of the threshold of a sinusoidal signal in a notched-noise masker, as a function of notch width. To measure the asymmetry of the filter, the notch has to be placed both symmetrically and asymmetrically about the signal frequency. In previous work several simplifying assumptions and approximations were made in deriving auditory filter shapes from the data. In this paper we describe modifications to the fitting procedure which allow more accurate derivations. These include: 1) taking into account changes in filter bandwidth with centre frequency when allowing for the effects of off-frequency listening; 2) correcting for the non-flat frequency response of the earphone; 3) correcting for the transmission characteristics of the outer and middle ear; 4) limiting the amount by which the centre frequency of the filter can shift in order to maximise the signal-to-masker ratio. In many cases, these modifications result in only small changes to the derived filter shape. However, at very high and very low centre frequencies and for hearing-impaired subjects the differences can be substantial. It is also shown that filter shapes derived from data where the notch is always placed symmetrically about the signal frequency can be seriously in error when the underlying filter is markedly asymmetric. New formulae are suggested describing the variation of the auditory filter with frequency and level. The implications of the results for the calculation of excitation patterns are discussed and a modified procedure is proposed. The appendix lists FORTRAN computer programs for deriving auditory filter shapes from notched-noise data and for calculating excitation patterns. The first program can readily be modified so as to derive auditory filter shapes from data obtained with other types of maskers, such as rippled noise.
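As a rough illustration of the underlying fitting procedure (not the authors' full method), the sketch below fits a symmetric roex(p, r) filter to hypothetical symmetric-notch thresholds using the power-spectrum model. The refinements listed in the abstract (off-frequency listening, earphone and outer/middle-ear corrections, asymmetric notch placement) are deliberately omitted, and all data values are made up.

```python
import numpy as np
from scipy.optimize import curve_fit

def roex_band_integral(g_lo, g_hi, p, r):
    """Integral of the roex(p, r) weighting function
    W(g) = (1 - r)(1 + p*g)exp(-p*g) + r
    over normalised frequency offsets g_lo..g_hi (closed form)."""
    antideriv = lambda g: -(2.0 + p * g) * np.exp(-p * g) / p
    return (1.0 - r) * (antideriv(g_hi) - antideriv(g_lo)) + r * (g_hi - g_lo)

def predicted_threshold(notch_g, p, r, k_db):
    """Power-spectrum-model threshold (dB re noise spectrum level) for a
    notched noise placed symmetrically about the signal, with noise bands
    0.4*fc wide whose inner edges lie notch_g (normalised) from the signal.
    k_db is the signal-to-masker ratio at the filter output at threshold."""
    noise_out = 2.0 * roex_band_integral(notch_g, notch_g + 0.4, p, r)
    return k_db + 10.0 * np.log10(noise_out)

# Hypothetical thresholds (dB re noise spectrum level) vs. notch width.
notch_g = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
thr_db = np.array([3.0, -4.0, -10.0, -14.0, -17.0])

(p_fit, r_fit, k_fit), _ = curve_fit(
    predicted_threshold, notch_g, thr_db,
    p0=[25.0, 1e-3, 10.0],
    bounds=([5.0, 1e-6, -10.0], [60.0, 0.1, 30.0]))
print(f"p = {p_fit:.1f}  r = {r_fit:.1e}  K = {k_fit:.1f} dB")
```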


Journal of the Acoustical Society of America | 1983

Suggested formulae for calculating auditory-filter bandwidths and excitation patterns

Brian C. J. Moore; Brian R. Glasberg

Recent estimates of auditory-filter shape are used to derive a simple formula relating the equivalent rectangular bandwidth (ERB) of the auditory filter to center frequency. The value of the auditory-filter bandwidth continues to decrease as center frequency decreases below 500 Hz. A formula is also given relating ERB-rate to frequency. Finally, a method is described for calculating excitation patterns from filter shapes.
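To show how such formulae are applied in practice, here is a minimal sketch. It uses the authors' later (1990) expressions, ERB = 24.7(4.37F + 1) Hz and ERB-number = 21.4 log10(4.37F + 1) with F in kHz, rather than the 1983 coefficients, so treat the numbers as illustrating the approach rather than this specific paper.

```python
import math

def erb_hz(f_hz):
    """Equivalent rectangular bandwidth (Hz) of the auditory filter at
    centre frequency f_hz, using the later Glasberg & Moore (1990)
    formula ERB = 24.7 (4.37 F + 1), F in kHz (shown for illustration)."""
    f_khz = f_hz / 1000.0
    return 24.7 * (4.37 * f_khz + 1.0)

def erb_number(f_hz):
    """ERB-rate (number of ERBs below f_hz): 21.4 log10(4.37 F + 1), F in kHz."""
    f_khz = f_hz / 1000.0
    return 21.4 * math.log10(4.37 * f_khz + 1.0)

for f in (250, 500, 1000, 2000, 4000):
    print(f"{f:>5} Hz: ERB = {erb_hz(f):6.1f} Hz, ERB-number = {erb_number(f):5.1f}")
```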


Proceedings of the National Academy of Sciences of the United States of America | 2006

Speech perception problems of the hearing impaired reflect inability to use temporal fine structure

Christian Lorenzi; Gaëtan Gilbert; Héloïse Carn; Stéphane Garnier; Brian C. J. Moore

People with sensorineural hearing loss have difficulty understanding speech, especially when background sounds are present. A reduction in the ability to resolve the frequency components of complex sounds is one factor contributing to this difficulty. Here, we show that a reduced ability to process the temporal fine structure of sounds plays an important role. Speech sounds were processed by filtering them into 16 adjacent frequency bands. The signal in each band was processed by using the Hilbert transform so as to preserve either the envelope (E, the relatively slow variations in amplitude over time) or the temporal fine structure (TFS, the rapid oscillations with rate close to the center frequency of the band). The band signals were then recombined and the stimuli were presented to subjects for identification. After training, normal-hearing subjects scored perfectly with unprocessed speech, and were ≈90% correct with E and TFS speech. Both young and elderly subjects with moderate flat hearing loss performed almost as well as normal with unprocessed and E speech but performed very poorly with TFS speech, indicating a greatly reduced ability to use TFS. For the younger hearing-impaired group, TFS scores were highly correlated with the ability to take advantage of temporal dips in a background noise when identifying unprocessed speech. The results suggest that the ability to use TFS may be critical for “listening in the background dips.” TFS stimuli may be useful in evaluating impaired hearing and in guiding the design of hearing aids and cochlear implants.
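A minimal sketch of the E/TFS decomposition for a single frequency band is given below. The 16-band filter bank, the flattening of the envelope for TFS speech, and the recombination used in the study are omitted, and the sampling rate and band edges are arbitrary choices rather than the study's.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 16000  # sampling rate (Hz); illustrative, not the study's value

def envelope_and_tfs(band_signal):
    """Split one band-limited signal into its Hilbert envelope (E) and
    temporal fine structure (TFS), following the general filtering +
    Hilbert-transform approach described in the abstract."""
    analytic = hilbert(band_signal)
    env = np.abs(analytic)            # slow amplitude variations (E)
    tfs = np.cos(np.angle(analytic))  # unit-amplitude carrier (TFS)
    return env, tfs

# Example: one band of a synthetic signal (band-pass filtered noise).
rng = np.random.default_rng(0)
x = rng.standard_normal(fs)                                   # 1 s of white noise
sos = butter(4, [1000, 1400], btype="bandpass", fs=fs, output="sos")
band = sosfiltfilt(sos, x)

env, tfs = envelope_and_tfs(band)
# Multiplying the envelope back onto the fine structure reconstructs the band.
recombined = env * tfs
print(np.allclose(recombined, band, atol=1e-6))
```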


British Journal of Audiology | 2000

A test for the diagnosis of dead regions in the cochlea.

Brian C. J. Moore; Martina Huss; Deborah A. Vickers; Brian R. Glasberg; Joseph I. Alcantara

Hearing impairment may sometimes be associated with complete loss of inner hair cells (IHCs) over a certain region of the basilar membrane. We call this a ‘dead region’. Amplification (using a hearing aid) over a frequency range corresponding to a dead region may not be beneficial and may even impair speech intelligibility. However, diagnosis of dead regions is not easily done from the audiogram. This paper reports the design and evaluation of a method for detecting and delimiting dead regions. A noise, called ‘threshold equalizing noise’ (TEN), was spectrally shaped so that, for normally hearing subjects, it would give equal masked thresholds for pure tone signals at all frequencies within the range 250–10 000 Hz. Its level is specified as the level in a one-ERB (132 Hz) wide band centred at 1000 Hz. Measurements obtained from 22 normal-hearing subjects and TEN levels of 30, 50 and 70 dB/ERB confirmed that the signal level at masked threshold was approximately equal to the noise level/ERB and was almost independent of signal frequency. Masked thresholds were measured for 20 ears of 14 subjects with sensorineural hearing loss, using TEN levels of 30, 50 and 70 dB/ERB. Psychophysical tuning curves (PTCs) were measured for the same subjects. When there are surviving IHCs corresponding to a frequency region with elevated absolute thresholds, a signal in that frequency region is detected via IHCs with characteristic frequencies (CFs) close to that region. In such a case, threshold in the TEN is close to that for normal-hearing listeners, provided that the noise intensity is sufficient to produce significant masking. Also, the tip of the PTC lies close to the signal frequency. When a dead region is present, the signal is detected via IHCs with CFs different from that of the signal frequency. In such a case, threshold in the TEN is markedly higher than normal, and the tip of the PTC is shifted away from the signal frequency. Generally, there was a very good correspondence between the results obtained using the TEN and the PTCs. We conclude that the measurement of masked thresholds in TEN provides a quick and simple method for the diagnosis of dead regions.
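As an illustration of how TEN results are usually interpreted, the sketch below flags a possible dead region at a single test frequency. The 10-dB criteria it applies are the ones commonly quoted for this test rather than figures stated in the abstract, and the threshold values in the example are hypothetical.

```python
def suggests_dead_region(abs_threshold_db, masked_threshold_db, ten_level_db_per_erb):
    """Flag a possible dead region at one test frequency.

    Assumed criteria (commonly quoted for the TEN test; the abstract itself
    says only that the masked threshold is 'markedly higher than normal'):
      1. the masked threshold is at least 10 dB above the TEN level/ERB;
      2. the masked threshold is at least 10 dB above the absolute threshold,
         i.e. the TEN produced substantial masking.
    """
    return (masked_threshold_db >= ten_level_db_per_erb + 10.0 and
            masked_threshold_db >= abs_threshold_db + 10.0)

# Hypothetical measurements at one frequency with the TEN at 70 dB/ERB:
print(suggests_dead_region(abs_threshold_db=65, masked_threshold_db=85,
                           ten_level_db_per_erb=70))  # True: consistent with a dead region
print(suggests_dead_region(abs_threshold_db=60, masked_threshold_db=74,
                           ten_level_db_per_erb=70))  # False
```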


Journal of the Acoustical Society of America | 1986

Auditory filter shapes in subjects with unilateral and bilateral cochlear impairments

Brian R. Glasberg; Brian C. J. Moore

The shape of the auditory filter was estimated at three center frequencies, 0.5, 1.0, and 2.0 kHz, for five subjects with unilateral cochlear impairments. Additional measurements were made at 1.0 kHz using one subject with a unilateral impairment and six subjects with bilateral impairments. Subjects were chosen who had thresholds in the impaired ears which were relatively flat as a function of frequency and ranged from 15 to 70 dB HL. The filter shapes were estimated by measuring thresholds for sinusoidal signals (frequency f) in the presence of two bands of noise, 0.4 f wide, one above and one below f. The spectrum level of the noise was 50 dB (re: 20 μPa) and the noise bands were placed both symmetrically and asymmetrically about the signal frequency. The deviation of the nearer edge of each noise band from f varied from 0.0 to 0.8 f. For the normal ears, the filters were markedly asymmetric for center frequencies of 1.0 and 2.0 kHz, the high-frequency branch being steeper. At 0.5 kHz, the filters were more symmetric. For the impaired ears, the filter shapes varied considerably from one subject to another. For most subjects, the lower branch of the filter was much less steep than normal. The upper branch was often less steep than normal, but a few subjects showed a near normal upper branch. For the subjects with unilateral impairments, the equivalent rectangular bandwidth of the filter was always greater for the impaired ear than for the normal ear at each center frequency. For three subjects at 0.5 kHz and one subject at 1.0 kHz, the filter had too little selectivity for its shape to be determined.
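For reference, the equivalent rectangular bandwidth reported in studies of this kind follows directly from the fitted filter slopes. A minimal sketch, assuming a roex(p) filter with separate lower and upper slope parameters and no dynamic-range floor; the numerical values are hypothetical:

```python
def roex_erb(fc_hz, p_lower, p_upper):
    """Equivalent rectangular bandwidth (Hz) of a roex(p) auditory filter
    with centre frequency fc_hz and slope parameters p_lower / p_upper for
    the lower and upper branches. Each branch (1 + p*g)exp(-p*g) integrates
    to 2/p in normalised frequency, so ERB = 2*fc/p_lower + 2*fc/p_upper.
    A shallower (smaller) p on either side, as often found in impaired
    ears, broadens the filter."""
    return 2.0 * fc_hz / p_lower + 2.0 * fc_hz / p_upper

# Hypothetical values: a normal ear versus a broadened, asymmetric filter.
print(roex_erb(1000.0, p_lower=30.0, p_upper=35.0))  # ~124 Hz
print(roex_erb(1000.0, p_lower=10.0, p_upper=25.0))  # ~280 Hz
```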


Journal of the Acoustical Society of America | 1973

Frequency difference limens for short‐duration tones

Brian C. J. Moore

Models which attempt to account for our ability to discriminate the pitch of pure tones are discussed. It is concluded that models based on a place (spectral) analysis should be subject to a limitation of the type Δf⋅d ⩾ constant, where Δf is the frequency difference limen (DL) for a tone pulse of duration d. The value of this constant will depend on the ability of the system to resolve small intensity differences. If a resolution of 1 dB is assumed, the value of the constant is about 0.24. In principle, a mechanism based on the measurement of time intervals could do considerably better than this. Frequency DLs were measured over a wide range of frequencies and durations. It was found that at short durations the product of Δf and d was about one order of magnitude smaller than the minimum predicted from the place model, except for frequencies above 5 kHz. A “kink” in the obtained functions was also observed at about 5 kHz. It is concluded that the evidence is consistent with the operation of a time‐measuring mechanism for frequencies below 5 kHz, and with a spectral or place mechanism for frequencies above this.
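To make the place-model bound concrete: with the constant taken as 0.24, the smallest DL the place model permits is Δf ≥ 0.24/d, as in this small sketch (the durations are chosen only for illustration):

```python
def place_model_min_dl_hz(duration_s, constant=0.24):
    """Smallest frequency DL (Hz) permitted by the place-model limit
    Δf * d >= constant (constant ≈ 0.24 for the 1-dB intensity
    resolution assumed in the paper)."""
    return constant / duration_s

for d_ms in (6.25, 12.5, 25, 50, 100):
    print(f"d = {d_ms:6.2f} ms -> place-model minimum Δf ≈ "
          f"{place_model_min_dl_hz(d_ms / 1000.0):6.1f} Hz")
```

The measured products of Δf and d at short durations were about an order of magnitude below these bounds for frequencies under 5 kHz, which is the basis for the paper's conclusion in favour of a time-measuring mechanism in that range.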


Hearing Research | 1987

Formulae describing frequency selectivity as a function of frequency and level, and their use in calculating excitation patterns

Brian C. J. Moore; Brian R. Glasberg

The auditory filter may be considered as a weighting function representing frequency selectivity at a particular centre frequency. Its shape can be derived using the power-spectrum model of masking which assumes: (1) in detecting a signal in a masker the observer uses the single auditory filter giving the highest signal-to-masker ratio; (2) threshold corresponds to a fixed signal-to-masker ratio at the output of that filter. Factors influencing the choice of a masker to measure the auditory filter shape are discussed. Narrow-band maskers are unsuitable for this purpose, since they violate the assumptions of the power-spectrum model. A method using a notched-noise masker is recommended, and typical results using that method are presented. The variation of the auditory filter shape with centre frequency and with level, and the relationship of the auditory filter shape and the excitation pattern are described. A method of calculating the excitation pattern of any sound as a function of level is presented, and examples and applications are given. The appendix gives a Fortran program for calculating excitation patterns.
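A simplified sketch of the excitation-pattern calculation outlined here (a Fortran version is given in the paper's appendix) follows. It assumes a symmetric, level-independent roex(p) filter whose ERB follows the 24.7(4.37F + 1) formula and omits the outer/middle-ear and level corrections, so it illustrates the structure of the calculation rather than reproducing the published procedure.

```python
import numpy as np

def erb_hz(fc_hz):
    """ERB of the auditory filter (later Glasberg & Moore formula), in Hz."""
    return 24.7 * (4.37 * fc_hz / 1000.0 + 1.0)

def excitation_pattern(component_freqs_hz, component_levels_db, centre_freqs_hz):
    """Simplified excitation pattern: for each centre frequency, sum the
    stimulus component intensities weighted by a symmetric roex(p) filter
    with p = 4*fc / ERB(fc). Level dependence of the filter and the
    transfer corrections described in the paper are omitted."""
    excitation_db = []
    for fc in centre_freqs_hz:
        p = 4.0 * fc / erb_hz(fc)
        g = np.abs(np.asarray(component_freqs_hz) - fc) / fc
        w = (1.0 + p * g) * np.exp(-p * g)
        intensity = np.sum(w * 10.0 ** (np.asarray(component_levels_db) / 10.0))
        excitation_db.append(10.0 * np.log10(intensity))
    return np.array(excitation_db)

# Example: a 1-kHz, 60-dB tone evaluated on a set of centre frequencies.
centres = np.linspace(500.0, 2000.0, 31)
print(excitation_pattern([1000.0], [60.0], centres).round(1))
```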


Journal of the Acoustical Society of America | 1997

Speech reception thresholds in noise with and without spectral and temporal dips for hearing-impaired and normally hearing people

Robert W. Peters; Brian C. J. Moore; Thomas Baer

People with cochlear hearing loss often have considerable difficulty in understanding speech in the presence of background sounds. In this paper the relative importance of spectral and temporal dips in the background sounds is quantified by varying the degree to which they contain such dips. Speech reception thresholds in a 65-dB SPL noise were measured for four groups of subjects: (a) young with normal hearing; (b) elderly with near-normal hearing; (c) young with moderate to severe cochlear hearing loss; and (d) elderly with moderate to severe cochlear hearing loss. The results indicate that both spectral and temporal dips are important. In a background that contained both spectral and temporal dips, groups (c) and (d) performed much more poorly than group (a). The signal-to-background ratio required for 50% intelligibility was about 19 dB higher for group (d) than for group (a). Young hearing-impaired subjects showed a slightly smaller deficit, but still a substantial one. Linear amplification combined ...


Journal of the Acoustical Society of America | 1988

The shape of the ear’s temporal window

Brian C. J. Moore; Brian R. Glasberg; Christopher J. Plack; A. K. Biswas

This article examines the idea that the temporal resolution of the auditory system can be modeled using a temporal window (an intensity weighting function) analogous to the auditory filter measured in the frequency domain. To estimate the shape of the hypothetical temporal window, threshold was measured for a brief sinusoidal signal presented in a temporal gap between two bursts of noise. The duration of the gap was systematically varied and the signal was placed both symmetrically and asymmetrically within the gap. The data were analyzed by assuming that the temporal window had the form of a simple mathematical expression with a small number of free parameters. The values of the parameters were adjusted to give the best fit to the data. The analysis assumed that, for each condition, the temporal window was centered at the time giving the highest signal-to-masker ratio, and that threshold corresponded to a fixed ratio of signal energy to masker energy at the output of the window. The data were fitted well by modeling each side of the window as the sum of two rounded-exponential functions. The window was highly asymmetric, having a shallower slope for times before the center than for times after. The equivalent rectangular duration (ERD) of the window was typically about 8 ms. The ERD increased slightly when the masker level was decreased, but did not differ significantly for signal frequencies of 500 and 2000 Hz. The temporal-window model successfully accounts for the data from a variety of experiments measuring temporal resolution. However, it fails to predict certain aspects of forward masking and of the detection of amplitude modulation at high rates.
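Purely as an illustration of the window concept, the sketch below uses a single rounded-exponential function per side with different time constants before and after the centre; the paper fits a sum of two such functions per side, and the time constants used here are invented, chosen only to give an ERD of the same order as the reported ~8 ms.

```python
import numpy as np

def temporal_window(t_ms, t_before_ms=6.0, t_after_ms=3.0):
    """Asymmetric rounded-exponential intensity-weighting window.
    Each side has the form (1 + 2|t|/T) exp(-2|t|/T); the time constant
    before the centre is larger than after, giving the shallower leading
    slope described in the paper. Values are illustrative, not fitted."""
    t = np.asarray(t_ms, dtype=float)
    T = np.where(t < 0.0, t_before_ms, t_after_ms)
    x = 2.0 * np.abs(t) / T
    return (1.0 + x) * np.exp(-x)

# Equivalent rectangular duration = area / peak (peak = 1 at t = 0).
t = np.linspace(-50.0, 50.0, 20001)
w = temporal_window(t)
erd_ms = np.sum(w) * (t[1] - t[0])
print(f"ERD ≈ {erd_ms:.1f} ms")
```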


Jaro-journal of The Association for Research in Otolaryngology | 2008

The Role of Temporal Fine Structure Processing in Pitch Perception, Masking, and Speech Perception for Normal-Hearing and Hearing-Impaired People

Brian C. J. Moore

Complex broadband sounds are decomposed by the auditory filters into a series of relatively narrowband signals, each of which can be considered as a slowly varying envelope (E) superimposed on a more rapid temporal fine structure (TFS). Both E and TFS information are represented in the timing of neural discharges, although TFS information as defined here depends on phase locking to individual cycles of the stimulus waveform. This paper reviews the role played by TFS in masking, pitch perception, and speech perception and concludes that cues derived from TFS play an important role for all three. TFS may be especially important for the ability to “listen in the dips” of fluctuating background sounds when detecting nonspeech and speech signals. Evidence is reviewed suggesting that cochlear hearing loss reduces the ability to use TFS cues. The perceptual consequences of this, and reasons why it may happen, are discussed.

Collaboration


Dive into Brian C. J. Moore's collaborations.

Top Co-Authors

Aleksander Sek
Adam Mickiewicz University in Poznań

Thomas Baer
University of Cambridge

Robert W. Peters
University of North Carolina at Chapel Hill

Hashir Aazh
Royal Surrey County Hospital

Robert P. Carlyon
Cognition and Brain Sciences Unit