Janine M. Wotton
Brown University
Publications
Featured research published by Janine M. Wotton.
Journal of the Acoustical Society of America | 1995
Janine M. Wotton; Tim Haresign; James A. Simmons
To measure the directionality of the external ear of the echolocating bat, Eptesicus fuscus, the left or right eardrum of a dead bat was replaced by a microphone which recorded signals received from a sound source that was moved around the stationary head. The test signal was a 0.5-ms FM sweep from 100 kHz to 10 kHz (covering all frequencies in the bat's biosonar sounds). Notches and peaks in transfer functions for 7 tested ears varied systematically with changes in elevation. For the most prominent notch, center frequency decreased from about 50 kHz for elevations at or near the horizontal to 30-40 kHz for elevations 30-40 degrees below the horizontal. A second notch shifted from about 85 kHz to 70 kHz over these same elevations. Above the horizontal, a peak that flanks these notches changed in amplitude by 15 dB with changes in elevation. Removal of the tragus from the external ear disrupted the systematic movement of notch frequencies with elevation but did not disrupt changes in the peak's amplitude. Smaller changes in notch frequency also occurred with changes in azimuth, so monaural notch information alone cannot determine the position of sound sources away from the median plane. However, because bats routinely keep the head pointed at the target's azimuth, median-plane localization occurs with monaural cues delivered to the two ears. Corresponding changes with elevation occurred in the impulse response, which consists of a series of 3-6 peaks spaced 10-20 microseconds apart. The time separation of two prominent impulse peaks systematically increased from 22-26 microseconds above the horizontal to about 36-40 microseconds below the horizontal, and removal of the tragus disrupted this time shift below the horizontal.
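The elevation dependence of the primary notch could in principle be inverted to estimate elevation. A toy sketch, interpolating between anchor values taken loosely from the ranges in the abstract (about 50 kHz at the horizontal, 30-40 kHz at 30-40 degrees below it); the anchor numbers and function name are illustrative, not measured data:

```python
import numpy as np

# Hypothetical anchor points drawn from the ranges reported above:
# the primary notch sits near 50 kHz at the horizontal and near 35 kHz
# about 35 degrees below it. Values in between are linearly interpolated.
NOTCH_KHZ = np.array([35.0, 50.0])   # notch center frequency (kHz), increasing
ELEV_DEG = np.array([-35.0, 0.0])    # corresponding elevation (degrees)

def elevation_from_notch(notch_khz):
    """Estimate elevation (deg) from the primary notch frequency (kHz)."""
    return float(np.interp(notch_khz, NOTCH_KHZ, ELEV_DEG))
```

A real decoder would also need the second notch and the flanking peak amplitude to disambiguate elevations above the horizontal.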
Archive | 1995
James A. Simmons; Michael J. Ferragamo; Prestor A. Saillant; Tim Haresign; Janine M. Wotton; Steven P. Dear; David N. Lee
Echolocation in bats is one of the most demanding adaptations of hearing to be found in any animal. Transforming the information carried by sounds into perceptual images depicting the location and identity of objects rapidly enough to control the decisions and reactions of a swiftly flying bat is a prodigious task for the auditory system to accomplish. The exaggeration of aspects of auditory function to achieve spatial imaging reflects the vital role of hearing in the lives of bats — for finding prey and perceiving obstacles to flight (Neuweiler 1990). It also highlights the mechanisms behind these functions to make echolocation a useful model for studying how the auditory system processes information and creates auditory perceptions in the most extreme circumstances.
Neural Networks | 1995
James A. Simmons; Prestor A. Saillant; Janine M. Wotton; Tim Haresign; Michael J. Ferragamo; Cynthia F. Moss
Echolocating bats can recognize flying insects as sonar targets in a variety of different acoustic situations ranging from open spaces to dense clutter. Target classification must depend on perceiving images whose dimensions can tolerate intrusion of additional echoes from other objects, even echoes arriving at about the same time as those from the insect, without disrupting image organization. The big brown bat, Eptesicus fuscus, broadcasts FM sonar sounds in the 15–100 kHz band and perceives the arrival time of echoes with an accuracy of 10–15 ns and a two-point resolution of 2 μs, which suggests that perception of fine detail on the dimension of echo delay or target range is the basis for reconstructing complex acoustic scenes and recognizing targets that are embedded in these scenes. The directionality of the bat's sonar sound is very broad, making it impossible to isolate echoes from individual targets merely by aiming the head and ears at one object instead of another. Consequently, segregation of targets must depend on isolating their echoes as discrete events along the axis of delay. That is, the bat's images must correspond to impulse responses of target scenes. However, the bat's sonar broadcasts are several milliseconds long, and the integration time of echo reception is about 350 μs, so perception of separate delays for multiple echoes only a few microseconds apart requires deconvolution of spectrally complex echoes that overlap and interfere with each other within the 350-μs integration time. The bat's auditory system encodes the FM sweeps of transmissions and echoes as half-wave-rectified, magnitude-unsquared spectrograms, and then registers the time that elapses between each frequency in the broadcast and the echo, effectively correlating the spectrograms. The interference patterns generated by overlap of multiple echoes are then used to modify these delay estimates by adding fine details of the delay structure of echoes.
This is equivalent to transformation of the spectrograms into the time domain, or deconvolution of echo spectra by spectrogram correlation and transformation (SCAT). However, while deconvolution overcomes integration time, the bat's receiving antennas reverberate for about 100 μs, smearing the echoes upon arrival. The bat overcomes this problem by receiving echoes from different directions than the transmitted sound, which radiates from the mouth. The antenna reverberations common to the emission and the echoes thus cancel out, leaving only narrow elevation-dependent differences, which in fact appear in the bat's images. The SCAT algorithms successfully recreate images comparable to those perceived by the bat and provide for classification of targets from their glint structure in different situations.
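The spectrogram-correlation step can be sketched minimally in numpy, under strong simplifying assumptions (a clean linear FM sweep, a single echo, and a hypothetical sample rate and window size; the full SCAT model is far richer than this):

```python
import numpy as np

FS = 500_000  # sample rate (Hz); hypothetical, chosen to cover the 100 kHz band

def fm_sweep(dur=0.002, f0=100_000.0, f1=20_000.0):
    """Linear FM sweep, loosely modeled on the bat's downward sweep."""
    t = np.arange(int(dur * FS)) / FS
    phase = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / dur * t**2)
    return np.sin(phase)

def spectrogram(x, win=64, hop=16):
    """Half-wave magnitude spectrogram (no squaring), as the text describes."""
    frames = [np.abs(np.fft.rfft(x[i:i + win] * np.hanning(win)))
              for i in range(0, len(x) - win, hop)]
    return np.array(frames).T  # rows = frequency bins, cols = time frames

def scat_delay(broadcast, echo, win=64, hop=16):
    """Estimate echo delay by correlating spectrogram rows frequency by
    frequency and pooling the per-channel lags (one vote per channel)."""
    S_b = spectrogram(broadcast, win, hop)
    S_e = spectrogram(echo, win, hop)
    lags = []
    for fb, fe in zip(S_b, S_e):
        if fb.max() < 1e-6:               # skip silent channels
            continue
        xc = np.correlate(fe, fb, mode="full")
        lags.append(np.argmax(xc) - (len(fb) - 1))
    return np.median(lags) * hop / FS     # frame lag -> seconds
```

For an echo delayed by 0.8 ms the pooled lags recover the delay to within a frame hop; the interference-based fine-delay refinement and the antenna-deconvolution stage are omitted.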
Journal of the Acoustical Society of America | 1997
Janine M. Wotton; David J. Hartley
The acoustic information used by bats is produced by a combination of the properties of the sound emission and the reception at the eardrum. The potential localization cues used by bats can only be fully revealed when the magnitude spectra of the emission and the external ear are convolved to produce the echolocation combination magnitude spectra. The spatially dependent changes in the magnitude spectra of the echolocation combination of Eptesicus fuscus are described. The emission and external ear magnitude spectra act together to enhance the potential localization cues. In the echolocation combination, the spectral peaks are sharpened and there is greater contrast in intensity between peaks and notches when compared to the spectra of the ear alone. The spectral localization cues in the echolocation combination appear to be restricted to a cone of space of approximately +/-30 degrees.
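The combination step admits a toy illustration: convolving transfer functions in the time domain is multiplication in the frequency domain, which is simple addition of dB magnitude spectra. All numbers below are invented for illustration; the actual inputs are the measured Eptesicus fuscus emission and ear spectra:

```python
import numpy as np

# Hypothetical magnitude spectra (dB) on a shared frequency grid.
freqs = np.linspace(10e3, 100e3, 9)  # 10-100 kHz, matching the biosonar band
ear_db = np.array([0., -3., -10., -25., -8., -2., -15., -30., -12.])
emission_db = np.array([-6., -2., 0., -4., -1., -3., -8., -12., -20.])

# Time-domain convolution = frequency-domain multiplication = dB addition.
combo_db = ear_db + emission_db

def notch_contrast(spec_db):
    """Peak-to-notch contrast (dB): a crude stand-in for the 'greater
    contrast in intensity between peaks and notches' reported above."""
    return float(spec_db.max() - spec_db.min())
```

With these made-up spectra the combination shows a larger peak-to-notch contrast than either component alone, mirroring the enhancement the abstract describes.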
Journal of the Acoustical Society of America | 1997
Janine M. Wotton
The information echolocating bats receive is a combination of the properties of the sound they emit and the sound they receive at the eardrum. Convolving the emission and the external ear transfer functions produces the full spectral information contained in the echolocation combination. Spatially dependent changes in the magnitude spectra of the emission, external ear transfer functions, and the echolocation combination of Eptesicus fuscus could provide localization information to the bat. Principal component analysis was used to reduce the dimensionality of these complex spectral data sets. The first eight principal component weights were normalized, rotated, and used as the input to a backpropagation network model which examined the relative directionality of the emission, ear, and the echolocation combination. The model was able to localize more accurately when provided with the directional information of the echolocation combination compared to either the emission or ear information alone.
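The dimensionality-reduction stage can be sketched with SVD-based PCA on stand-in random "spectra" (the real inputs are the measured transfer-function spectra, and the normalized weights then feed a backpropagation network, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: 200 "directions" x 64 spectral magnitude bins (hypothetical).
spectra = rng.normal(size=(200, 64))

def pca_weights(X, n_components=8):
    """Project spectra onto the first n principal components and normalize
    the component weights, as described before the network stage."""
    Xc = X - X.mean(axis=0)                       # center each spectral bin
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Xc @ Vt[:n_components].T                  # PC weights, shape (n, 8)
    return W / np.abs(W).max(axis=0)              # normalize each component
```

The rotation step mentioned in the abstract is not reproduced; the point of the sketch is only that eight weights per direction summarize a 64-bin spectrum.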
Journal of the Acoustical Society of America | 2006
Janine M. Wotton; Kristin Welsh; Crystal Smith; Rachel Elvebak; Samantha Haseltine; Barbara G. Shinn-Cunningham
Sentences recorded with a Midwestern accent were convolved with head‐related impulse responses that included different room reverberation conditions. The stimuli were presented binaurally through headphones in an echo‐attenuated chamber and subjects (n=23) typed the sentences they heard. The target word was one of a vowel pair (cattle/kettle, jam/gem, gas/guess, past/pest) embedded as the second word in one of three sentence types. The neutral sentence provided little context for the word. Target words in sentences that provided strong contextual cues could be congruent or incongruent with the expectations of the subject, for example, ‘‘The cattle/kettle grazed in the meadow.’’ Subjects made significantly more errors in the incongruent sentences compared to the neutral (Wilcoxon = 3.572, p < 0.05) or congruent sentences (Wilcoxon = 3.56, p < 0.05). When the target word was in a congruent sentence subjects performed equally well in reverberant or pseudo‐anechoic conditions (Wilcoxon = 1.298) but they made more errors...
Journal of the Acoustical Society of America | 2005
Janine M. Wotton; Kimberly McArthur; Amit Bohara; Michael J. Ferragamo; Andrea Megela Simmons
Extracellular recordings from the auditory midbrain, Torus semicircularis, of the leopard frog reveal a wide diversity of tuning patterns. Some cells seem to be well suited for time‐based coding of signal envelope, and others for rate‐based coding of signal frequency. Adaptation for ongoing stimuli plays a significant role in shaping the frequency‐dependent response rate at different levels of the frog auditory system. Anuran auditory‐nerve fibers are unusual in that they reveal frequency‐dependent adaptation [A. L. Megela, J. Acoust. Soc. Am. 75, 1155–1162 (1984)], and therefore provide rate‐based input. In order to examine the influence of these peripheral inputs on central responses, three layers of auditory neurons were modeled to examine short‐term neural adaptation to pure tones and complex signals. The response of each neuron was simulated with a leaky integrate and fire model, and adaptation was implemented by means of an increasing threshold. Auditory‐nerve fibers, dorsal medullary nucleus neuron...
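A minimal sketch of the adaptation mechanism this abstract describes: a leaky integrate-and-fire neuron whose threshold jumps after each spike and decays back toward rest. All parameter values are hypothetical placeholders, not those of the fitted three-layer model:

```python
import numpy as np

def lif_adapting(current, dt=1e-4, tau_m=0.01, v_thresh=1.0,
                 thresh_jump=0.3, tau_thresh=0.05):
    """Leaky integrate-and-fire neuron with an increasing threshold,
    the adaptation scheme described in the abstract. Returns spike indices."""
    v, theta = 0.0, v_thresh
    spikes = []
    for i, I in enumerate(current):
        v += dt / tau_m * (-v + I)                     # leaky integration
        theta += dt / tau_thresh * (v_thresh - theta)  # threshold relaxes back
        if v >= theta:
            spikes.append(i)
            v = 0.0                # reset membrane potential
            theta += thresh_jump   # adaptation: raise the threshold
    return spikes
```

With a constant input current the inter-spike intervals lengthen over time, which is the signature of short-term adaptation the model is meant to capture.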
Journal of the Acoustical Society of America | 2001
Janine M. Wotton; Michael J. Ferragamo; Timothy M. Sonbuchner; Mark I. Sanderson
The big brown bat, Eptesicus fuscus, uses echolocation to locate prey and displays extraordinary acuity in the perception of temporal cues in acoustic signals. Behaviorally, the bat can detect changes at submicrosecond levels, but individual neurons in the inferior colliculus (IC) and cortex operate with much less precision. Most of these cells are poor temporal markers, with response variation on the order of a few milliseconds and in some cases tens of milliseconds. A temporal estimator was created incorporating the response properties of recorded neurons and behaviorally appropriate limitations on the number of echolocation emissions. The response of the neurons can be characterized as probability density functions in time and frequency. The characteristics of these neurons were used to create large simulated populations of IC and cortical neurons that show the full range of recorded variation. The connections between these two populations were simulated using a self‐organizing neural network. If more than one ...
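The pooling intuition behind such an estimator can be illustrated with a toy calculation: single neurons mark echo time with millisecond-scale jitter, but averaging across a simulated population over a limited number of emissions yields a far more precise estimate (the standard error shrinks roughly as 1/sqrt(neurons x emissions)). Population size, jitter, and emission count below are invented, and the self-organizing network stage is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

def population_estimate(true_delay_ms, n_neurons=500, n_emissions=10,
                        jitter_ms=2.0):
    """Average millisecond-jittered single-neuron latencies across a
    population and a behaviorally limited number of emissions."""
    responses = true_delay_ms + rng.normal(
        0.0, jitter_ms, size=(n_emissions, n_neurons))
    return responses.mean()
```

Here 500 neurons with 2-ms jitter, pooled over 10 emissions, give a standard error near 0.03 ms, illustrating how imprecise units can support fine behavioral acuity.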
Journal of the Acoustical Society of America | 2000
Janine M. Wotton; James A. Simmons
Journal of the Acoustical Society of America | 1996
Janine M. Wotton; Tim Haresign; Michael J. Ferragamo; James A. Simmons