
Publication


Featured research published by Michael J. Hewitt.


Journal of the Acoustical Society of America | 1991

Virtual pitch and phase sensitivity of a computer model of the auditory periphery. I: Pitch identification

Ray Meddis; Michael J. Hewitt

Licklider made his original suggestion in an attempt to explain the human ability to perceive the pitch of a complex tone even though that tone contained no spectral component corresponding to that pitch. He rejected the prevailing theory (Fletcher, 1924) that distortion products of nonlinear cochlear responses could wholly explain the phenomenon. He pointed to the fact that the waveform envelope of unresolved harmonic components could be used to extract pitch information if an autocorrelation analysis could be performed. He thought that this might be achieved by a delay line mechanism at a low level in the auditory nervous system. His theory depended on the idea that the harmonic com…
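Licklider's core observation can be illustrated in a few lines: a complex tone built from harmonics 3–6 of 200 Hz contains no energy at 200 Hz, yet its autocorrelation function peaks at the 5-ms period of the missing fundamental. This is a minimal sketch of the principle, not the published model (which applies the analysis to simulated auditory-nerve activity channel by channel):

```python
import numpy as np

fs = 16000          # sample rate (Hz)
f0 = 200.0          # missing fundamental (Hz)
t = np.arange(int(0.1 * fs)) / fs

# Complex tone from harmonics 3..6 only: no spectral energy at f0 itself.
x = sum(np.sin(2 * np.pi * h * f0 * t) for h in range(3, 7))

# Autocorrelation; search lags from 2 ms upward to skip the lag-0 peak.
acf = np.correlate(x, x, mode="full")[len(x) - 1:]
lo = int(0.002 * fs)
period = (lo + np.argmax(acf[lo:])) / fs
pitch = 1.0 / period   # close to 200 Hz, the missing fundamental
```

The ACF peaks wherever all components realign, which happens at the period of the fundamental even when the fundamental itself is absent.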


Journal of the Acoustical Society of America | 1992

Modeling the identification of concurrent vowels with different fundamental frequencies.

Ray Meddis; Michael J. Hewitt

Human listeners are better able to identify two simultaneous vowels if the fundamental frequencies of the vowels are different. A computational model is presented which, for the first time, is able to simulate this phenomenon at least qualitatively. The first stage of the model is based upon a bank of bandpass filters and inner hair-cell simulators that simulate approximately the most relevant characteristics of the human auditory periphery. The output of each filter/hair-cell channel is then autocorrelated to extract pitch and timbre information. The pooled autocorrelation function (ACF) based on all channels is used to derive a pitch estimate for one of the component vowels from a signal composed of two vowels. Individual channel ACFs showing a pitch peak at this value are combined and used to identify the first vowel using a template matching procedure. The ACFs in the remaining channels are then combined and used to identify the second vowel. Model recognition performance shows a rapid improvement in correct vowel identification as the difference between the fundamental frequencies of two simultaneous vowels increases from zero to one semitone in a manner closely resembling human performance. As this difference increases up to four semitones, performance improves further only slowly, if at all.


Journal of the Acoustical Society of America | 1990

Implementation details of a computation model of the inner hair‐cell auditory‐nerve synapse

Ray Meddis; Michael J. Hewitt; Trevor M. Shackleton

A simple and computationally efficient model of auditory-neural transduction at the inner hair cell has recently been described (Meddis, 1986 and 1988). This paper briefly presents a short computer program to implement the model, an exploration of the effects of modifying the parameters of the model, a new set of parameters for simulating an auditory nerve fiber showing a medium rate of spontaneous activity with extended dynamic range, and some methods of quickly estimating some of the characteristics. It is intended as advice for researchers who wish to implement the model as part of a speech recognition device or as input to another model of more centrally located neurophysiological functions.
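The model's structure, three transmitter reservoirs coupled by first-order flows with a stimulus-dependent release permeability, can be sketched as below. The update equations follow the reservoir form described by Meddis (1986, 1988); the parameter values and step size here are illustrative rather than the published sets:

```python
import numpy as np

# Illustrative parameters (not necessarily a published parameter set).
A, B, g = 5.0, 300.0, 2000.0              # permeability constants
y, l, r, x = 5.05, 2500.0, 6580.0, 66.3   # reservoir rate constants (/s)
M, h = 1.0, 50000.0                       # max free transmitter, rate scalar
dt = 1e-5                                 # 10-microsecond Euler step

def run(stim):
    """stim: instantaneous stimulus amplitudes (arbitrary units).
    Returns the per-step probability of an auditory-nerve spike."""
    q, c, w = M, 0.0, 0.0   # free, cleft, and reprocessing transmitter
    prob = np.empty(len(stim))
    for i, s in enumerate(stim):
        k = g * (s + A) / (s + A + B) if s + A > 0 else 0.0  # permeability
        dq = (y * (M - q) + x * w - k * q) * dt   # release vs. replenishment
        dc = (k * q - (l + r) * c) * dt           # cleft: loss and reuptake
        dw = (r * c - x * w) * dt                 # reprocessing store
        q += dq; c += dc; w += dw
        prob[i] = h * c * dt
    return prob

silence = np.zeros(int(0.1 / dt))        # approach spontaneous equilibrium
tone = np.full(int(0.05 / dt), 30.0)     # 50-ms constant tone "envelope"
p = run(np.concatenate([silence, tone]))

onset = p[len(silence):len(silence) + 200].mean()   # first 2 ms of the tone
steady = p[-2000:].mean()                           # last 20 ms of the tone
```

Run this way, the model shows the two behaviors the paper's parameter explorations are concerned with: a nonzero spontaneous firing probability and rapid adaptation from an onset peak to a lower steady-state rate as the free-transmitter reservoir depletes.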


Journal of the Acoustical Society of America | 1994

A computer model of amplitude‐modulation sensitivity of single units in the inferior colliculus

Michael J. Hewitt; Ray Meddis

A computer model is presented of a neural circuit that replicates amplitude-modulation (AM) sensitivity of cells in the central nucleus of the inferior colliculus (ICC). The ICC cell is modeled as a point neuron whose input consists of spike trains from a number of simulated ventral cochlear nucleus (VCN) chopper cells. Input to the VCN chopper cells is provided by simulated spike trains from a model of the auditory periphery [Hewitt et al., J. Acoust. Soc. Am. 91, 2096-2109 (1992)]. The performance of the model at the output of the auditory nerve, the cochlear nucleus and ICC simulations in response to amplitude-modulated stimuli is described. The results are presented in terms of both temporal and rate modulation transfer functions (MTFs) and compared with data from physiological studies in the literature. Qualitative matches were obtained to the following main empirical findings: (a) Auditory nerve temporal-MTFs are low pass, (b) VCN chopper temporal-MTFs are low pass at low signal levels and bandpass at moderate and high signal levels, (c) ICC unit temporal-MTFs are low pass at low signal levels and broadly tuned bandpass at moderate and high signal levels, and (d) ICC unit rate-MTFs are sharply tuned bandpass at low and moderate signal levels and flat at high levels. VCN and ICC units preferentially sensitive to different rates of modulation are presented. The model supports the hypothesis that cells in the ICC decode temporal information into a rate code [Langner and Schreiner, J. Neurophysiol. 60, 1799-1822 (1988)], and provides a candidate wiring diagram of how this may be achieved.
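Temporal MTFs of the kind compared here are conventionally quantified by vector strength: each spike time is mapped to a phase of the modulation cycle, and the length of the mean resultant vector measures how tightly spikes lock to the modulator. This sketches the analysis metric only (the paper's exact analysis pipeline may differ), not the neural model itself:

```python
import numpy as np

def vector_strength(spike_times, fm):
    """Synchrony of spikes to modulation frequency fm (Hz):
    1 = perfect phase locking, near 0 = no locking."""
    phases = 2 * np.pi * fm * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

fm = 100.0
rng = np.random.default_rng(0)

# Spikes at the same phase of every modulation cycle -> VS of 1.
locked = np.arange(50) / fm + 0.002
# Spikes at random times over 50 whole cycles -> VS near 0.
unlocked = rng.uniform(0.0, 0.5, 500)

vs_locked = vector_strength(locked, fm)
vs_unlocked = vector_strength(unlocked, fm)
```

A temporal MTF is then just vector strength plotted against modulation frequency; the rate MTF plots mean spike count instead, which is how the low-pass/bandpass contrasts in (a)-(d) are expressed.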


Journal of the Acoustical Society of America | 1991

Virtual pitch and phase sensitivity of a computer model of the auditory periphery. II: Phase sensitivity

Ray Meddis; Michael J. Hewitt

In a companion article [Meddis and Hewitt, J. Acoust. Soc. Am. 89, 2866–2882 (1991)] it was shown that a computational model of the auditory periphery followed by a system of autocorrelation analyses was able to account for a wide range of human virtual pitch perception phenomena. In this article it is shown that the same model, with no substantial modification, can predict a number of results concerning human sensitivity to phase relationships among harmonic components of tone complexes. The model is successfully evaluated using (a) amplitude-modulated and quasifrequency-modulated stimuli, (b) harmonic complexes with alternating phase change and monotonic phase change across harmonic components, and (c) mistuned harmonics. The model is contrasted with phase-insensitive theories of low-level auditory processing and offered as further evidence in favor of the value of analysing time intervals among spikes in the auditory nerve when explaining psychophysical phenomena.
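The AM/QFM comparison works because the two stimuli have identical amplitude spectra but different component phases, so a phase-insensitive model cannot distinguish them: AM has its sidebands in cosine phase and a deeply modulated envelope, while shifting both sidebands by 90 degrees (QFM) nearly flattens the envelope. A sketch of the stimulus construction from the standard definitions (not taken from the paper), with the envelope computed via the analytic signal:

```python
import numpy as np

fs = 16000
t = np.arange(int(fs * 0.5)) / fs
fc, fm, m = 1000.0, 100.0, 0.5   # carrier, modulator, modulation depth

carrier = np.cos(2 * np.pi * fc * t)

def sidebands(ph):
    return ((m / 2) * np.cos(2 * np.pi * (fc - fm) * t + ph)
            + (m / 2) * np.cos(2 * np.pi * (fc + fm) * t + ph))

am = carrier + sidebands(0.0)           # sidebands in cosine phase
qfm = carrier + sidebands(np.pi / 2)    # sidebands shifted 90 degrees

def envelope(x):
    # Hilbert envelope via the analytic signal (FFT construction).
    X = np.fft.fft(x)
    h = np.zeros(len(x)); h[0] = h[len(x) // 2] = 1; h[1:len(x) // 2] = 2
    return np.abs(np.fft.ifft(X * h))

def depth(x):
    e = envelope(x)
    return e.std() / e.mean()   # relative envelope fluctuation
```

Here depth(am) is several times depth(qfm) despite identical power spectra, which is why an envelope- or interval-based model responds differently to the two stimuli while a purely spectral one cannot.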


Journal of the Acoustical Society of America | 1992

Across frequency integration in a model of lateralization

Trevor M. Shackleton; Ray Meddis; Michael J. Hewitt

A computational model of binaural lateralization is described. An accurate model of the auditory periphery feeds a tonotopically organized multichannel cross‐correlation mechanism. Lateralization predictions are made on the basis of the integrated activity across frequency channels. The model explicitly weights cross‐correlation peaks closer to the center preferentially, and effectively weights information that is consistent across frequencies more heavily, because such information has a greater impact in the across-frequency integration. This model is complementary to the weighted‐image model of Stern et al. [J. Acoust. Soc. Am. 84, 156–165 (1988)], although the model described in this paper is physiologically more plausible, simpler, and more versatile in the range of input stimuli that are possible.
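The centrality weighting can be illustrated with a plain interaural cross-correlation: compute the CCF over physiological lags, then multiply by a window that favors peaks near zero lag. A single-channel sketch, where the Gaussian window and its width are illustrative stand-ins for the model's actual weighting (the full model does this per frequency channel and then integrates):

```python
import numpy as np

fs = 48000
rng = np.random.default_rng(1)
n = int(0.1 * fs)
noise = rng.standard_normal(n + 100)

itd_samples = 24                           # 0.5-ms interaural time difference
left = noise[:n]
right = noise[itd_samples:itd_samples + n]  # right-ear copy leads the left

max_lag = int(0.001 * fs)                  # examine lags within +/- 1 ms
lags = np.arange(-max_lag, max_lag + 1)
# ccf[k] = sum over i of left[i] * right[i + k]
ccf = np.array([np.dot(left[max(0, -k):n - max(0, k)],
                       right[max(0, k):n - max(0, -k)]) for k in lags])

# Centrality weighting: peaks near zero lag count more.
sigma = 0.0006 * fs                        # illustrative width
weighted = ccf * np.exp(-(lags / sigma) ** 2)
# Negative lag under this convention means the right ear leads.
itd_est = lags[np.argmax(weighted)] / fs   # estimated ITD in seconds
```

With a single strong peak the weighting leaves the estimate unchanged; its effect shows up for periodic stimuli, where the CCF has many peaks and the central, cross-frequency-consistent one is the one that survives the weighting and integration.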


Journal of the Acoustical Society of America | 1991

An evaluation of eight computer models of mammalian inner hair‐cell function

Michael J. Hewitt; Ray Meddis

Eight computer models of auditory inner hair cells have been evaluated. From an extensive literature on mammalian species, a subset of well-reported auditory-nerve properties in response to tone-burst stimuli were selected and tested for in the models. This subset included tests for: (a) rate-level functions for onset and steady-state responses; (b) two-component adaptation; (c) recovery of spontaneous activity; (d) physiological forward masking; (e) additivity; and (f) frequency-limited phase locking. As models of hair-cell functioning are increasingly used as the front end of speech-recognition devices, the computational efficiency of each model was also considered. The evaluation shows that no single model completely replicates the subset of tests. Reasons are given for our favoring the Meddis model [R. Meddis, J. Acoust. Soc. Am. 83, 1056-1063 (1988)] both in terms of its good agreement with physiological data and its computational efficiency. It is concluded that this model is well suited to provide the primary input to speech recognition devices and models of central auditory processing.


Journal of the Acoustical Society of America | 1993

Regularity of cochlear nucleus stellate cells: A computational modeling study

Michael J. Hewitt; Ray Meddis

This article reports on a computational modeling study designed to investigate the generation of the transient chopper response of cochlear nucleus stellate cells. The model is based on a simulation of the auditory periphery which feeds a generic stellate-cell model. Physiological recordings of transient chopper units in response to short, best frequency, tone bursts show a brief initial period (typically < 10 ms) of rapid rate adaptation as evidenced by a rapid rise in mean interspike interval. Associated with this rate adaptation is a significant increase in firing irregularity. The changes in rate and irregularity have recently been attributed to the activation of noisy inhibitory inputs on the cell [e.g., Banks and Sachs, J. Neurophysiol. 65, 606-629 (1991)]. However, the results show that the transient chopper response pattern can be generated without the need for inhibitory inputs. The transience of the initial chopping pattern is sensitive to the following model parameters: (a) the firing threshold of the cell, (b) the number of excitatory inputs that converge on the cell, and (c) the magnitude of the current delivered to the cell for each active input. The response was also found to be relatively insensitive to changes in the degree of dendritic filtering imposed on the auditory-nerve input. The results of each simulation can be explained by considering the pattern of depolarization the cell receives during the course of a tone burst.
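The arrangement above can be caricatured as a leaky integrate-and-fire point neuron driven by many independent excitatory spike trains: with enough converging inputs the summed drive is smooth, so the cell fires at regular ("chopping") intervals, and the three listed parameters, threshold, input count, and current per input, trade off against one another. This is a toy sketch with Poisson trains standing in for the auditory-nerve model, and all values illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

dt = 1e-4                 # 0.1-ms time step
tau = 5e-3                # membrane time constant (s)
v_thresh, v_reset = 1.0, 0.0
n_inputs = 120            # number of converging excitatory inputs
epsp = 0.015              # depolarization per input spike
rate = 200.0              # mean firing rate of each input (spikes/s)

v = 0.0
spike_times = []
for step in range(int(0.2 / dt)):
    # total input spikes arriving this step, summed over all fibers
    n_in = rng.poisson(n_inputs * rate * dt)
    v += -(v / tau) * dt + epsp * n_in    # leaky integration of the drive
    if v >= v_thresh:                     # threshold crossing -> output spike
        spike_times.append(step * dt)
        v = v_reset

isis = np.diff(spike_times)
cv = isis.std() / isis.mean()   # coefficient of variation: choppers have low CV
```

Fewer, larger inputs make each interspike interval depend on a handful of random events and raise the CV; many small inputs average the noise away and produce the regular chopping pattern, which is the trade-off the simulations in the article explore.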


Quarterly Journal of Experimental Psychology | 1994

The role of binaural and fundamental frequency difference cues in the identification of concurrently presented vowels

Trevor M. Shackleton; Ray Meddis; Michael J. Hewitt

The relative importance of voice pitch and interaural difference cues in facilitating the recognition of both of two concurrently presented synthetic vowels was measured. The interaural difference cues used were an interaural time difference (400 μsec ITD), two magnitudes of interaural level difference (15 dB and infinite ILD), and a combination of ITD and ILD (400 μsec plus 15 dB). The results are analysed separately for those cases where both vowels are identical and those where they are different. When the two vowels are different, a voice pitch difference of one semitone is found to improve the percentage of correct reports of both vowels by 35.8% on average. However, the use of interaural difference cues results in an improvement of 11.5% on average when there is a voice pitch difference of one semitone, but only a non-significant 0.1% when there is no voice pitch difference. When the two vowels are identical, imposition of either a voice pitch difference or binaural difference reduces performance, in a subtractive manner. It is argued that the smaller size of the interaural difference effect is not due to a “ceiling effect” but is characteristic of the relative importance of the two kinds of cues in this type of experiment. The possibility that the improvement due to interaural difference cues may in fact be due to monaural processing is discussed. A control experiment is reported for the ITD condition, which suggests binaural processing does occur for this condition. However, it is not certain whether the improvement in the ILD condition is due to binaural processing or use of the improvement in signal-to-noise ratio for a single vowel at each ear.


Archive | 1990

Non-Linearity in a Computational Model of the Response of the Basilar Membrane

Ray Meddis; Michael J. Hewitt; Trevor M. Shackleton

We are interested in modelling human response to speech signals with particular reference to selective attention. We intend that our models should use principles derived from anatomy and physiology as far as possible. Accordingly, a balance has to be struck between available computational power and the need to model all the important subsidiary processes such as middle- and outer-ear effects, basilar membrane response, inner hair-cell response, interactions among neurons in the brainstem nuclei, etc. Despite the computational overhead of taking it into account, the nonlinearity of the response of the inner ear to sound is important to an understanding of human speech processing for at least two reasons. Firstly, we know that distortion products influence pitch perception - the so-called second effect of pitch shift (Schouten et al., 1962) - and pitch is an important element in the segregation of sounds. Secondly, the nonlinear response of the cochlea has important implications for the representation of speech sounds (Miller and Sachs, 1983).
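The first point can be demonstrated directly: passing a two-tone signal through any compressive, odd-symmetric nonlinearity introduces a combination tone at 2f1 − f2, the distortion product implicated in the second effect of pitch shift. A sketch using an arbitrary compressive function (a generic memoryless nonlinearity, not a basilar-membrane model):

```python
import numpy as np

fs = 16000
t = np.arange(fs) / fs               # 1 second -> 1-Hz FFT bins
f1, f2 = 1000.0, 1200.0
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Memoryless compressive nonlinearity. Being odd-symmetric, it generates
# odd-order distortion products such as 2*f1 - f2 = 800 Hz.
y = np.tanh(2.0 * x)

spec = np.abs(np.fft.rfft(y)) / len(y)

def level(f):
    # spectral magnitude at frequency f (1-s window gives 1-Hz bins)
    return spec[int(round(f))]
```

The output spectrum contains a clear line at 800 Hz even though the input has energy only at 1000 and 1200 Hz; a listener's pitch can follow such combination tones, which is why the nonlinearity cannot simply be omitted from a model of speech processing.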
