John M. Deeks
Cognition and Brain Sciences Unit
Publications
Featured research published by John M. Deeks.
Journal of Experimental Psychology: Human Perception and Performance | 2004
Rhodri Cusack; John M. Deeks; Genevieve Aikman; Robert P. Carlyon
Often, the sound arriving at the ears is a mixture from many different sources, but only one is of interest. To assist with selection, the auditory system structures the incoming input into streams, each of which ideally corresponds to a single source. Some authors have argued that this process of streaming is automatic and invariant, but recent evidence suggests it is affected by attention. Experiments 1 and 2 show that the effect of attention is not a general suppression of streaming on an unattended side of the ascending auditory pathway or in unattended frequency regions. Experiments 3 and 4 investigate the effect on streaming of physical gaps in the sequence and of brief switches in attention away from the sequence. The results demonstrate that streaming is reset even after short gaps or brief switches in attention. The implications are discussed, and a hierarchical decomposition model is proposed.
JARO: Journal of the Association for Research in Otolaryngology | 2010
Robert P. Carlyon; Olivier Macherey; Johan H. M. Frijns; Patrick Axon; Randy K. Kalkman; Patrick Boyle; David M. Baguley; John A. G. Briggs; John M. Deeks; Jeroen J. Briaire; Xavier Barreau; René Dauman
Four cochlear implant users with normal hearing in the unimplanted ear compared the pitches of electrical and acoustic stimuli presented to the two ears. Comparisons were between 1,031-pps pulse trains and pure tones, or between 12- and 25-pps electric pulse trains and bandpass-filtered acoustic pulse trains of the same rate. Three methods (pitch adjustment, constant stimuli, and interleaved adaptive procedures) were used. For all methods, we showed that the results can be strongly influenced by non-sensory biases arising from the range of acoustic stimuli presented, and we proposed a series of checks that should be made to alert the experimenter to those biases. We then showed that the results of comparisons that survived these checks do not deviate consistently from the predictions of a widely used cochlear frequency-to-place formula or of a computational cochlear model. We also demonstrate that substantial range effects occur with other widely used experimental methods, even for normal-hearing listeners.
JARO: Journal of the Association for Research in Otolaryngology | 2008
Olivier Macherey; Robert P. Carlyon; Astrid Van Wieringen; John M. Deeks; Jan Wouters
Most contemporary cochlear implants (CIs) stimulate the auditory nerve with trains of amplitude-modulated, symmetric biphasic pulses. Although both polarities of a pulse can depolarize the nerve fibers and generate action potentials, it remains unknown which of the two (positive or negative) phases has the stronger effect. Understanding the effects of pulse polarity will help to optimize the stimulation protocols and to deliver the most relevant information to the implant listeners. Animal experiments have shown that cathodic (negative) current flows are more effective than anodic (positive) ones in eliciting neural responses, and this finding has motivated the development of novel speech-processing algorithms. In this study, we show electrophysiologically and psychophysically that the human auditory system exhibits the opposite pattern, being more sensitive to anodic stimulation. We measured electrically evoked compound action potentials in CI listeners for phase-separated pulses, allowing us to tease out the responses to each of the two opposite-polarity phases. At an equal stimulus level, the anodic phase yielded the larger response. Furthermore, a measure of psychophysical masking patterns revealed that this polarity difference was still present at higher levels of the auditory system and was therefore not solely due to antidromic propagation of the neural response. This finding may relate to a particular orientation of the nerve fibers relative to the electrode or to a substantial degeneration and demyelination of the peripheral processes. Potential applications to improve CI speech-processing strategies are discussed.
Journal of the Acoustical Society of America | 2002
Robert P. Carlyon; John M. Deeks
We investigated the limits of temporal pitch processing under conditions where the place and rate of stimulation on the basilar membrane were independent. Stimuli were harmonic complexes passed through a fixed bandpass filter and resembled filtered pulse trains. The task was to detect a difference in F0. When the harmonics were filtered between 3900 and 5400 Hz, presented monaurally, and summed in sine phase, subjects could perform the task at all F0s studied. However, when the pulse rate was doubled by summing components in alternating phase, thresholds increased with increasing F0 until the task was impossible at F0 = 300 Hz (pulse rate = 600 pps). Thresholds improved again at higher F0s, presumably because some harmonics became resolved. The F0 at which this breakdown occurred decreased when the complexes were filtered into a lower frequency region, and increased when they were filtered into a higher region. In the highest region tested (7800-10800 Hz), all listeners could detect an increase of less than about 20% re a pulse rate of 600 pps for alternating-phase complexes. Presenting a copy of the standard (lower-F0) stimulus to the contralateral ear during all intervals of a forced-choice trial improved performance markedly under conditions where monaural rate discrimination was very poor. This showed that temporal information is present in the auditory nerve that is unavailable to the temporal pitch mechanism, but which is accessible when a binaural cue is available. The results are compared to the inability of most cochlear implantees to detect increases in the rate of electrical pulse trains above about 300 pps. It is concluded that this inability is unlikely to result entirely from a central pitch limitation, because, with analogous acoustic stimulation, normal listeners can perform the task at substantially higher rates.
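The central manipulation in this abstract, summing harmonics in sine versus alternating phase to double the envelope (pulse) rate without changing the spectrum, can be sketched numerically. The following is a minimal illustration, not the study's stimulus-generation code; the F0, passband, and sampling rate are illustrative values.

```python
import numpy as np

def harmonic_complex(f0, lo, hi, fs=48000, dur=0.5, alternating=False):
    """Sum the harmonics of f0 whose frequencies fall in [lo, hi] Hz.
    Sine phase: every component starts in sine phase.  Alternating
    phase: even-numbered harmonics start in cosine phase instead, which
    doubles the envelope (pulse) rate of an unresolved complex while
    leaving the power spectrum unchanged."""
    t = np.arange(int(dur * fs)) / fs
    x = np.zeros_like(t)
    for n in range(int(np.ceil(lo / f0)), int(hi // f0) + 1):
        phase = np.pi / 2 if (alternating and n % 2 == 0) else 0.0
        x += np.sin(2 * np.pi * n * f0 * t + phase)
    return x

def envelope(x):
    """Magnitude of the analytic signal (FFT-based Hilbert transform)."""
    X = np.fft.fft(x)
    h = np.zeros(len(x))
    h[0] = 1.0
    h[1:len(x) // 2] = 2.0
    if len(x) % 2 == 0:
        h[len(x) // 2] = 1.0
    return np.abs(np.fft.ifft(X * h))

def envelope_rate(x, fs):
    """Estimate the pulse rate by counting envelope peaks above half max."""
    env = envelope(x)
    thr = 0.5 * env.max()
    mid = env[1:-1]
    peaks = np.sum((mid > env[:-2]) & (mid >= env[2:]) & (mid > thr))
    return peaks / (len(x) / fs)
```

With f0 = 100 Hz and a 3900-5400 Hz passband, the sine-phase complex yields an envelope rate near 100 pps and the alternating-phase version near 200 pps, mirroring the rate doubling exploited in the study.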
Journal of the Acoustical Society of America | 2009
Ying-Yee Kong; John M. Deeks; Patrick Axon; Robert P. Carlyon
A common finding in the cochlear implant literature is that the upper limit of rate discrimination on a single channel is about 300 pps. The present study investigated rate discrimination using a procedure in which, in each block of two-interval trials, the standard could have one of five baseline rates (100, 200, 300, 400, and 500 pps) and the signal rate was a given percentage higher than the standard. Eight Med-El C40+ subjects took part. The pattern of results was different from that reported previously: six Med-El subjects performed better at medium rates (200-300 pps) than at both lower (100 pps) and higher (400-500 pps) rates. A similar pattern of results was obtained both with the method of constant stimuli and for 5000-pps pulse trains amplitude modulated at rates between 100 and 500 Hz. Compared to an unmatched group of eight Nucleus CI24 listeners tested using a similar paradigm and stimuli, Med-El subjects performed significantly better at 300 pps and above but slightly worse at 100 pps. These results are discussed in relation to evidence on the limits of temporal pitch at low and high rates in normal-hearing listeners.
Hearing Research | 2005
Robert P. Carlyon; Astrid Van Wieringen; John M. Deeks; Christopher J. Long; Johannes Lyzenga; Jan Wouters
Human behavioral thresholds for trains of biphasic pulses applied to a single channel of Nucleus CI24 and LAURA cochlear implants were measured as a function of inter-phase gap (IPG). Experiment 1 used bipolar stimulation, a 100-pps pulse rate, and a 400-ms stimulus duration. In one condition, the two phases of each pulse had opposite polarity. Thresholds continued to drop by 9-10 dB as IPG was increased from near zero to the longest value tested (2900 μs for CI24, 4900 μs for LAURA). This time course is much longer than reported for single-cell recordings from animals. In a second condition, the two phases of each pulse had the same polarity, which alternated from pulse to pulse. Thresholds were independent of IPG, and similar to those in condition 1 at IPG = 4900 μs. Experiment 2 used monopolar stimulation. One condition was similar to condition 1 of experiment 1, and thresholds also dropped up to the longest IPG studied (2900 μs). This also happened when the pulse rate was reduced to 20 pps, and when only a single pulse was presented on each trial. Keeping IPG constant at 8 μs and adding an extra biphasic pulse x ms into each period produced thresholds that were roughly independent of x, indicating that the effect of IPG in the other conditions was not due to a release from refractoriness at sites central to the auditory nerve. Experiment 3 measured thresholds at three IPGs, which were less than, equal to, and more than one half of the interval between successive pulses. Thresholds were lowest at the intermediate IPG. The results of all experiments could be fit by a linear model consisting of a lowpass filter based on the function relating threshold to the frequency of sinusoidal electrical stimulation. The data and model have implications for reducing the power consumption of cochlear implants.
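The two pulse shapes compared in the abstract can be sketched directly. The function below builds a charge-balanced train of biphasic pulses with an adjustable inter-phase gap; with same_polarity=True it instead produces the second condition, in which the two phases of each pulse share a polarity that alternates from pulse to pulse. The sampling rate, phase duration, and amplitude are illustrative values, not those of the study.

```python
import numpy as np

def pulse_train(rate=100, phase_dur=100e-6, ipg=2900e-6, amp=1.0,
                fs=1_000_000, dur=0.1, same_polarity=False):
    """Train of two-phase pulses at `rate` pps.

    Each pulse is two rectangular phases of `phase_dur` seconds
    separated by an inter-phase gap of `ipg` seconds.  By default the
    two phases have opposite polarity (standard biphasic pulses).  With
    `same_polarity=True`, both phases of a pulse share one polarity,
    which alternates from pulse to pulse."""
    n = int(round(dur * fs))
    n_phase = int(round(phase_dur * fs))
    n_gap = int(round(ipg * fs))
    period = int(round(fs / rate))
    x = np.zeros(n)
    for k, start in enumerate(range(0, n, period)):
        first = amp * ((-1) ** k if same_polarity else 1.0)
        second = first if same_polarity else -first
        x[start:start + n_phase] = first           # first phase
        s2 = start + n_phase + n_gap               # skip the IPG
        x[s2:s2 + n_phase] = second                # second phase
    return x
```

Either variant is charge balanced over the whole stimulus (the alternating-polarity variant over an even number of pulses), which is the safety constraint that makes the IPG manipulation deliverable through an implant.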
Journal of the Acoustical Society of America | 2004
John M. Deeks; Robert P. Carlyon
Two experiments used simulations of cochlear implant hearing to investigate the use of temporal codes in speech segregation. Sentences were filtered into six bands, and their envelopes were used to modulate filtered alternating-phase harmonic complexes with rates of 80 or 140 pps. Experiment 1 showed that identification of single sentences was better for the higher rate. In experiment 2, maskers (time-reversed concatenated sentences) were scaled by -9 dB relative to a target sentence, which was added with an offset of 1.2 s. When the target and masker were each processed on all six channels and then summed, processing the masker at a different rate from the target improved performance only when the target rate was 140 pps. When the target sentence was processed on the odd-numbered channels and the masker on the even-numbered channels, or vice versa, performance was worse overall but showed similar effects of pulse rate. The results, combined with recent psychophysical evidence, suggest that differences in pulse rate are unlikely to prove useful for concurrent sound segregation.
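The processing described here, filtering speech into bands and using each band's envelope to modulate a fixed-rate pulse carrier, can be sketched as a minimal NumPy-only vocoder. This is a simplified stand-in for the study's processing: brick-wall FFT filters replace a proper filterbank, sine-phase complexes replace the alternating-phase carriers, and the band edges, envelope cutoff, and rate are illustrative.

```python
import numpy as np

def fft_bandpass(x, lo, hi, fs):
    """Brick-wall bandpass: zero all FFT bins outside [lo, hi) Hz."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    X[(f < lo) | (f >= hi)] = 0.0
    return np.fft.irfft(X, len(x))

def band_envelope(x, cutoff, fs):
    """Rectify, then brick-wall lowpass, to get a slow envelope."""
    return np.maximum(fft_bandpass(np.abs(x), 0.0, cutoff, fs), 0.0)

def pulse_carrier(rate, lo, hi, fs, n):
    """Band-limited pulse-train carrier: the harmonics of `rate` that
    fall inside the band, summed in sine phase, normalized to unit RMS."""
    t = np.arange(n) / fs
    ns = np.arange(int(np.ceil(lo / rate)), int(hi // rate) + 1)
    if ns.size == 0:
        return np.zeros(n)
    c = np.sum([np.sin(2 * np.pi * k * rate * t) for k in ns], axis=0)
    return c / np.sqrt(np.mean(c ** 2))

def vocode(x, fs, edges, rate=140, env_cutoff=30.0):
    """Pulse-train vocoder with one channel per pair of band edges."""
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        env = band_envelope(fft_bandpass(x, lo, hi, fs), env_cutoff, fs)
        out += env * pulse_carrier(rate, lo, hi, fs, len(x))
    return out
```

Feeding the vocoder a narrowband input excites only the channel containing that input, since the other channels' envelopes are near zero; this is the sense in which the envelope carries the speech information while the carrier fixes the temporal fine structure.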
Journal of the Acoustical Society of America | 2004
Robert P. Carlyon; Christophe Micheyl; John M. Deeks; Brian C. J. Moore
Auditory processing of frequency modulation (FM) was explored. In experiment 1, detection of a π-radian modulator phase shift deteriorated as modulation rate increased from 2.5 to 20 Hz, for 1- and 6-kHz carriers. In experiment 2, listeners discriminated between two 1-kHz carriers, where, midway through, the 10-Hz frequency modulator either underwent a phase shift or increased in depth by ΔD% for half a modulator period. Discrimination was poorer for ΔD = 4% than for smaller or larger increases. These results are consistent with instantaneous frequency being smoothed by a time window with a total duration of about 110 ms. In experiment 3, the central 200 ms of a 1-s, 1-kHz carrier modulated at 5 Hz was replaced by noise, or by a faster FM applied to a more intense 1-kHz carrier. Listeners heard the 5-Hz FM continue at the same depth throughout the stimulus. Experiments 4 and 5 showed that, after an FM tone had been interrupted by a 200-ms noise, listeners were insensitive to the phase at which the FM resumed. It is argued that the auditory system explicitly encodes the presence, and possibly the rate and depth, of FM in a way that does not preserve information on FM phase.
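The experiment-1 stimulus, a tone whose frequency modulator undergoes a phase shift midway through, can be sketched by integrating the instantaneous frequency, which keeps the carrier waveform itself continuous even though the modulator jumps. The carrier frequency, modulation rate, and depth below are illustrative, not the study's values.

```python
import numpy as np

def fm_with_phase_shift(fc=1000.0, fm=10.0, depth=50.0, fs=48000,
                        dur=1.0, shift=np.pi):
    """FM tone whose modulator phase jumps by `shift` radians at the
    temporal midpoint.  Returns (waveform, instantaneous frequency in Hz)."""
    n = int(dur * fs)
    t = np.arange(n) / fs
    mod_phase = np.where(t < dur / 2, 0.0, shift)      # modulator jumps here
    f_inst = fc + depth * np.sin(2 * np.pi * fm * t + mod_phase)
    # Integrate frequency to phase so the carrier has no discontinuity.
    phase = 2 * np.pi * np.cumsum(f_inst) / fs
    return np.sin(phase), f_inst
```

Because only the modulator jumps while the carrier stays continuous, detecting the shift requires tracking instantaneous frequency over time, which is why the deterioration at faster modulation rates is informative about the smoothing window proposed in the abstract.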
Current Biology | 2013
Alexander J. Billig; Matthew H. Davis; John M. Deeks; Jolijn Monstrey; Robert P. Carlyon
Biologically salient sounds, including speech, are rarely heard in isolation. Our brains must therefore organize the input arising from multiple sources into separate “streams” and, in the case of speech, map the acoustic components of the target signal onto meaning. These auditory and linguistic processes have traditionally been considered to occur sequentially and are typically studied independently [1, 2]. However, evidence that streaming is modified or reset by attention [3], and that lexical knowledge can affect reports of speech sound identity [4, 5], suggests that higher-level factors may influence perceptual organization. In two experiments, listeners heard sequences of repeated words or acoustically matched nonwords. After several presentations, they reported that the initial /s/ sound in each syllable formed a separate stream; the percept then fluctuated between the streamed and fused states in a bistable manner. In addition to measuring these verbal transformations, we assessed streaming objectively by requiring listeners to detect occasional targets—syllables containing a gap after the initial /s/. Performance was better when streaming caused the syllables preceding the target to transform from words into nonwords, rather than from nonwords into words. Our results show that auditory stream formation is influenced not only by the acoustic properties of speech sounds, but also by higher-level processes involved in recognizing familiar words.
Journal of the Acoustical Society of America | 2008
Robert P. Carlyon; Christopher J. Long; John M. Deeks
Experiment 1 measured rate discrimination of electric pulse trains by bilateral cochlear implant (CI) users, for standard rates of 100, 200, and 300 pps. In the diotic condition, the pulses were presented simultaneously to the two ears. Consistent with previous results with unilateral stimulation, performance deteriorated at higher standard rates. In the signal interval of each trial in the dichotic condition, the standard rate was presented to the left ear and the (higher) signal rate was presented to the right ear; the non-signal intervals were the same as in the diotic condition. Performance in the dichotic condition was better for some listeners than in the diotic condition for standard rates of 100 and 200 pps, but not at 300 pps. It is concluded that the deterioration in rate discrimination observed for CI users at high rates cannot be alleviated by the introduction of a binaural cue, and is unlikely to be limited solely by central pitch processes. Experiment 2 was analogous, except that 300-pps acoustic pulse trains were bandpass filtered (3900-5400 Hz) and presented in a noise background to normal-hearing listeners. Unlike in experiment 1, performance was better in the dichotic condition than in the diotic condition.