
Publication


Featured research published by Joseph E. Hind.


Journal of the Acoustical Society of America | 1971

Temporal Position of Discharges in Single Auditory Nerve Fibers within the Cycle of a Sine‐Wave Stimulus: Frequency and Intensity Effects

David J. Anderson; Jerzy E. Rose; Joseph E. Hind; John F. Brugge

Period histograms, which display the distribution of spikes throughout the period of a periodic stimulus, were computed for discharges recorded from single auditory nerve fibers of the squirrel monkey when low‐frequency tones were employed. For frequencies up to about 4 kHz, where phase locking is observed, the average phase angle of the discharges was tracked as frequency was varied in small steps from low to high. To a first approximation, the cumulative shift in average phase angle is a linear function of frequency as would be observed for an ideal delay line. The slopes of these phase‐versus‐frequency lines were found to be related through a power law to the best frequencies of the fibers. Thus, it seems possible to estimate the travel time of a mechanical disturbance between the oval window and any point on the cochlear partition. A more detailed examination revealed that average phase angle is also sensitive to intensity, depending upon the relation of the stimulating frequency to the best frequency...
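The period-histogram and average-phase-angle computations described above can be sketched as follows (a minimal illustration, not the authors' original analysis code; the spike times and stimulus frequency are hypothetical):

```python
import numpy as np

def period_histogram(spike_times, freq, n_bins=64):
    """Bin spike times by their phase within one stimulus cycle.

    spike_times : spike arrival times in seconds
    freq        : stimulus frequency in Hz
    """
    phases = (np.asarray(spike_times) * freq) % 1.0   # cycle fraction in [0, 1)
    counts, edges = np.histogram(phases, bins=n_bins, range=(0.0, 1.0))
    return counts, edges

def mean_phase_angle(spike_times, freq):
    """Circular mean of spike phases, in radians (vector-strength method)."""
    phases = 2 * np.pi * ((np.asarray(spike_times) * freq) % 1.0)
    return np.angle(np.mean(np.exp(1j * phases)))

# Toy data: spikes locked to one quarter of each 1-kHz cycle (phase = pi/2).
spikes = 0.25e-3 + np.arange(100) * 1e-3
print(mean_phase_angle(spikes, 1000.0))   # ~ pi/2 (about 1.5708)
```

Tracking how this mean phase angle shifts as frequency is stepped upward would reproduce the phase-versus-frequency lines whose slopes the paper relates to fiber best frequency.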


Journal of the Acoustical Society of America | 1990

Direction‐dependent spectral properties of cat external ear: New data and cross‐species comparisons

Alan D. Musicant; Joseph C. K. Chan; Joseph E. Hind

Free-field to eardrum transfer functions were measured in anesthetized cats inside an anechoic chamber. Direction-dependent transformations were determined by measurement of sound-pressure levels using a small probe tube microphone surgically implanted in a ventral position near the tympanic membrane. Loudspeaker and probe microphone characteristics were eliminated by subtraction of the signal recorded in the free field with no animal present. Complexities of the transfer function, which include the presence of prominent spectral notches in the 8- to 18-kHz frequency region, are due primarily to the acoustical properties of the pinna. Differential amplification of frequency components within the broadband stimulus occurs as a function of source direction. Spectral features vary systematically with changes in both elevation (EL) and azimuth (AZ). The contrast between a notch and its shoulders is enhanced in the interaural spectral records. Spectral data from single source locations and spatial data for single frequencies at many locations are presented and comparisons with other species are drawn. It is suggested that spectral features in the 8- to 18-kHz region provide some of the necessary spectral information for sound localization and that the contrast in spectral energy between the frequencies at the notch and its shoulders is a potential directional cue.


The Journal of Neuroscience | 1996

The Structure of Spatial Receptive Fields of Neurons in Primary Auditory Cortex of the Cat

John F. Brugge; Richard A. Reale; Joseph E. Hind

Transient broad-band stimuli that mimic in their spectrum and time waveform sounds arriving from a speaker in free space were delivered to the tympanic membranes of barbiturized cats via sealed and calibrated earphones. The full array of such signals constitutes a virtual acoustic space (VAS). The extracellular response to a single stimulus at each VAS direction, consisting of one or a few precisely time-locked spikes, was recorded from neurons in primary auditory cortex. Effective sound directions form a virtual space receptive field (VSRF). Near threshold, most VSRFs were confined to one quadrant of acoustic space and were located on or near the acoustic axis. Generally, VSRFs expanded monotonically with increases in stimulus intensity, with some occupying essentially all of the acoustic space. The VSRF was not homogeneous with respect to spike timing or firing strength. Typically, onset latency varied by as much as 4–5 msec across the VSRF. A substantial proportion of recorded cells exhibited a gradient of first-spike latency within the VSRF. Shortest latencies occupied a core of the VSRF, on or near the acoustic axis, with longer latency being represented progressively at directions more distant from the core. Remaining cells had VSRFs that exhibited no such gradient. The distribution of firing probability was mapped in those experiments in which multiple trials were carried out at each direction. For some cells there was a positive correlation between latency and firing probability.


Journal of the Acoustical Society of America | 1980

Interaural time differences: Implications regarding the neurophysiology of sound localization

G. Linn Roth; Ravindra K. Kochhar; Joseph E. Hind

Interaural time differences (ITDs) were measured from 400 to 7000 Hz in cats in order to provide quantitative data for use in physiological/behavioral studies on sound localization. ITDs derived from clicks and the initial portion of tone bursts showed a pronounced roughness and frequency dependence. This frequency dependence is most evident at higher angles of incidence and indicates that a single ITD will not always represent a single position on the azimuth. Controls demonstrate that most of the roughness in these functions was due to reflections off the surface supporting the animal and that the measured ITDs corresponded to predictions made by steady-state theory. Measurements made with and without the pinnae in position indicate that they have relatively little effect on these ITD functions, particularly for frequencies below 2500 Hz and for small angles of incidence. In spite of acoustic limitations exemplified by the roughness and frequency dependence of these functions, ITDs generated by sound sources situated close to the midline provide reliable localization cues that are much better than those derived from sources well out on the azimuth. Finally, it is noted that another ITD, the group ITD, can be ascribed to an acoustic signal. Calculations based on the measured steady-state ITDs show differences between the group and steady-state ITDs over a given range of frequencies. Differences between the group and steady-state ITD can be significant, and it is argued that: (1) The group ITD can provide a localization cue to the auditory system that is distinct from the steady-state ITD; and (2) it is possible these group ITDs are used by the nervous system to localize sound sources in realistic situations.
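The distinction between the steady-state (phase) ITD and the group ITD can be illustrated numerically. The sketch below assumes a hypothetical interaural phase spectrum that is linear in frequency plus a constant offset; the values are illustrative, not measured data:

```python
import numpy as np

freqs = np.linspace(400.0, 7000.0, 100)      # Hz, the range measured in the paper
tau_g = 300e-6                               # assumed group delay: 300 microseconds
phi0 = 0.5                                   # assumed constant phase offset (radians)
phase = 2 * np.pi * tau_g * freqs + phi0     # hypothetical interaural phase difference

# Steady-state (phase) ITD: total phase divided by angular frequency.
itd_steady = phase / (2 * np.pi * freqs)

# Group ITD: derivative of phase with respect to angular frequency.
itd_group = np.gradient(phase, freqs) / (2 * np.pi)
```

For this linear-plus-offset phase spectrum the group ITD is a constant 300 microseconds, while the steady-state ITD exceeds it by phi0/(2*pi*f) at each frequency, so the two cues diverge most at low frequencies.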


Journal of the Acoustical Society of America | 1976

Correlates of combination tones observed in the response of neurons in the anteroventral cochlear nucleus of the cat.

Guido F. Smoorenburg; Mary Morton Gibson; Leonard M. Kitzes; Jerzy E. Rose; Joseph E. Hind

Neurons in the anteroventral cochlear nucleus of the cat respond to combination tones of the forms f2−f1 and f1−n(f2−f1), where n is a small positive integer (1, 2, 3, …). The most easily observed combination tones are f2−f1 and 2f1−f2. In general, a combination tone is effective if three conditions are fulfilled: (1) the combination‐tone frequency must fall within the pure‐tone response area of the neuron; (2) the intensity levels of the primaries must be appropriate; and (3) the separation of the primary frequencies cannot be unduly large. For any form of combination tone, a combination‐tone response area could be plotted by fixing f1 at some level and varying f2 in small steps. The actual frequency of the combination tone could be determined from the timing of the discharges for all neurons whose discharges are phase locked. The combination‐tone response areas indicate that the response to a given form of combination tone is optimal when the combination frequency is at or near the best frequency of the...
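The combination-tone frequencies named above follow directly from the two formulas. A small helper (hypothetical, for illustration only) enumerates them for a given primary pair:

```python
def combination_tones(f1, f2, n_max=3):
    """Combination-tone frequencies f2 - f1 and f1 - n*(f2 - f1).

    For n = 1 the second form gives 2*f1 - f2, the cubic difference tone.
    """
    tones = {"f2-f1": f2 - f1}
    for n in range(1, n_max + 1):
        tones[f"f1-{n}(f2-f1)"] = f1 - n * (f2 - f1)
    return tones

# Example primaries (hypothetical): f1 = 1000 Hz, f2 = 1200 Hz.
# f2-f1 = 200 Hz; 2f1-f2 = 800 Hz; then 600 Hz and 400 Hz for n = 2, 3.
print(combination_tones(1000.0, 1200.0))
```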


Journal of the Acoustical Society of America | 1993

An insert earphone system for delivery of spectrally shaped signals for physiological studies

Joseph C. K. Chan; Alan D. Musicant; Joseph E. Hind

Acoustic signals arriving at the eardrum in free-space carry directionally dependent temporal and spectral information resulting from the acoustical effects of the body, head, and external ear as well as from differences in the length of the sound path to each ear. Through analysis of the responses of single auditory neurons, the acoustical and neural mechanisms by which sounds in free-space are localized are being studied. The approach involves simulation of free-field signals at the two eardrums of a cat via earphones and a study of the neuronal responses to such a virtual acoustic space. This approach makes it possible to manipulate different stimulus parameters independently in order to examine their role in determining the spatial characteristics of neuronal response. This report describes an insert earphone system designed for the delivery of such simulated signals which are broadband transients having complex spectra that mimic the acoustic transfer function of the external ear for frequency components up to 30 kHz or more.


Archive | 1996

An Implementation of Virtual Acoustic Space for Neurophysiological Studies of Directional Hearing

Richard A. Reale; Jiashu Chen; Joseph E. Hind; John F. Brugge

Sound produced by a free-field source and recorded near the cat’s eardrum has been transformed by a direction-dependent ‘Free-field-to-Eardrum Transfer Function’ (FETF) or, in the parlance of human psychophysics, a ‘Head-Related-Transfer-Function’ (HRTF). We prefer to use here the term FETF since, for the cat at least, the function includes significant filtering by structures in addition to the head. This function preserves direction-dependent spectral features of the incident sound that, together with interaural time and interaural level differences, are believed to provide the important cues used by a listener in localizing the source of a sound in space. The set of FETFs representing acoustic space for one subject is referred to as a ‘Virtual Acoustic Space’ (VAS). This term applies because these functions can be used to synthesize accurate replications of the signals near the eardrums for any sound-source direction contained in the set. The combination of VAS and earphone delivery of synthesized signals is proving to be a powerful tool to study parametrically the mechanisms of directional hearing. This approach enables the experimenter to control dichotically with earphones each of the important acoustic cues resulting from a free-field sound source while obviating the physical problems associated with a moveable loudspeaker or an array of speakers.
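Synthesizing a VAS signal amounts to filtering the free-field transient through the per-ear FETF for the chosen direction. A minimal sketch, assuming the FETFs are available as impulse responses (the data here are toy placeholders, not measured transfer functions):

```python
import numpy as np

def synthesize_vas(stimulus, fetf_left, fetf_right):
    """Filter one free-field transient through per-ear FETF impulse responses.

    stimulus    : broadband transient (1-D array at the earphone sample rate)
    fetf_left,
    fetf_right  : impulse responses of the Free-field-to-Eardrum Transfer
                  Function for one source direction (hypothetical data here)
    Returns the pair of signals to deliver at the two eardrums.
    """
    left = np.convolve(stimulus, fetf_left)
    right = np.convolve(stimulus, fetf_right)
    return left, right

# Toy example: a click through a pure-delay "FETF" is simply a shifted click.
click = np.zeros(64)
click[0] = 1.0
delay = np.zeros(8)
delay[5] = 1.0                                 # 5-sample lead at the left ear
l, r = synthesize_vas(click, delay, np.r_[1.0, np.zeros(7)])
```

Sweeping the FETF pair over all measured directions while recording from a neuron yields the virtual space receptive fields described in the cortical studies above.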


Archive | 1997

Spatial Receptive Fields of Single Neurons of Primary Auditory Cortex of the Cat

John F. Brugge; Richard A. Reale; Joseph E. Hind

Neurons in the primary auditory cortical field (AI) have been shown to be sensitive to the direction of a sound when the source is either in an anechoic free field (Middlebrooks et al., 1980; Rajan et al., 1990; Imig et al., 1990) or in anechoic virtual acoustic space (Brugge et al., 1994, 1996a,b). The spatial receptive fields obtained under these stimulus conditions are typically large in size at suprathreshold levels, often exceeding an acoustic hemifield; close to threshold their centers tend to lie on or near the acoustic axis. How large receptive fields centered around the acoustic axis enable AI neurons to encode information about sound direction is not well understood, although it would appear that the time structure of the neuronal discharge within the receptive field plays a role (Middlebrooks et al., 1994; Brugge et al., 1996). In this paper we review and extend our findings on directional sensitivity of isolated AI neurons to transient sound, employing conventional extracellular recording methods (Brugge et al., 1994, 1996a) and a technique by which synthesized signals that mimic sounds coming from particular directions in space are delivered at the eardrums of Nembutal-anesthetized cats through a sealed and calibrated sound delivery system (Chan et al., 1993; Reale et al., 1996).


Archive | 1998

Spatial Receptive Field Properties of Primary Auditory Cortical Neurons

Richard A. Reale; John F. Brugge; Joseph E. Hind

The brain of a listener must compute the direction of a sound source that originates in free space by using the acoustic features present in the pressure waves that reach each eardrum. Because the two ears are physically separated, sound arriving from a source in space may exhibit interaural time (ITD) and intensity (IID) differences. The magnitude of these localization cues depends on the incident angle of the sound and its frequency composition.
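The azimuth dependence of the ITD cue mentioned above is often approximated with the classic Woodworth spherical-head formula, ITD = (a/c)(theta + sin theta). This textbook approximation is not from the chapter itself, and the head radius below is an illustrative value:

```python
import math

def woodworth_itd(azimuth_deg, head_radius=0.04, c=343.0):
    """Spherical-head (Woodworth) ITD approximation: (a/c) * (theta + sin theta).

    head_radius : assumed head radius in metres (illustrative, not measured)
    c           : speed of sound in m/s
    """
    theta = math.radians(azimuth_deg)
    return head_radius / c * (theta + math.sin(theta))

print(woodworth_itd(90.0))   # roughly 300 microseconds for a 4-cm radius
```

The formula is frequency-independent; as the ITD paper above shows, measured ITDs in fact vary with frequency, which is one reason the group ITD is treated as a distinct cue.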


Journal of the Acoustical Society of America | 1967

Responses of Single Fibers in the Auditory Nerve of the Squirrel Monkey to Paired Phase‐Locked Low‐Frequency Tones

John F. Brugge; David J. Anderson; Joseph E. Hind; Jerzy E. Rose

Data were analyzed on‐line and from analog tape using a LINC computer. Stimuli were 10‐sec complex sounds generated by summing two phase‐locked low‐frequency sinusoids. Both frequencies were within the response area of the unit and were related in ratios of small integers. Periodograms, i.e., distributions of unit responses over one period of the complex waveform summed for all repetitions of the waveform stimulus, were studied. The findings suggest that units discharge preferentially at times when the displacements in one direction of the cochlear partition are at or near maximal values. When intensity or phase of either stimulus component is varied with a resulting change in the complex waveform, there is a corresponding change in the periodogram. Concurrently, interspike intervals are grouped around values which are integral multiples of the time between the peaks of the periodogram, and the frequency of occurrence of such intervals is a function of the amplitude of these peaks (NB‐06225).

Collaboration


Dive into Joseph E. Hind's collaborations.

Top Co-Authors

John F. Brugge (University of Wisconsin-Madison)
Richard A. Reale (University of Wisconsin-Madison)
Joseph C. K. Chan (University of Wisconsin-Madison)
Jerzy E. Rose (University of Wisconsin-Madison)
Colin J. Akerman (Cold Spring Harbor Laboratory)
D. J. Anderson (University of Wisconsin-Madison)