Robert P. Carlyon
Cognition and Brain Sciences Unit
Publications
Featured research published by Robert P. Carlyon.
Journal of Experimental Psychology: Human Perception and Performance | 2001
Robert P. Carlyon; Rhodri Cusack; Jessica M. Foxton; Ian H. Robertson
Two pairs of experiments studied the effects of attention and of unilateral neglect on auditory streaming. The first pair showed that the build-up of auditory streaming in normal participants is greatly reduced or absent when they attend to a competing task in the contralateral ear. It was concluded that the effective build-up of streaming depends on attention. The second pair showed that patients with an attentional deficit toward the left side of space (unilateral neglect) show less stream segregation of tone sequences presented to their left than to their right ears. Streaming in their right ears was similar to that for stimuli presented to either ear of healthy and of brain-damaged controls, who showed no across-ear asymmetry. This result is consistent with an effect of attention on streaming, constrains the neural sites involved, and reveals a qualitative difference between the perception of left- and right-sided sounds by neglect patients.
Journal of the Acoustical Society of America | 1994
Trevor M. Shackleton; Robert P. Carlyon
A series of experiments investigated the influence of harmonic resolvability on the pitch of, and the discriminability of differences in fundamental frequency (F0) between, frequency-modulated (FM) harmonic complexes. Both F0 (62.5 to 250 Hz) and spectral region (LOW: 125-625 Hz, MID: 1375-1875 Hz, and HIGH: 3900-5400 Hz) were varied orthogonally. The harmonics that comprised each complex could be summed in either sine (0 degree) phase (SINE) or alternating sine-cosine (0 degree-90 degrees) phase (ALT). Stimuli were presented in a continuous pink-noise background. Pitch-matching experiments revealed that the pitch of ALT-phase stimuli, relative to SINE-phase stimuli, was increased by an octave in the HIGH region, for all F0s, but was the same as that of SINE-phase stimuli when presented in the LOW region. In the MID region, the pitch of ALT-phase relative to SINE-phase stimuli depended on F0, being an octave higher at low F0s, equal at high F0s, and unclear at intermediate F0s. The same stimuli were then used in three measures of discriminability: FM detection thresholds (FMTs), frequency difference limens (FDLs), and FM direction discrimination thresholds (FMDDTs, defined as the minimum FM depth necessary for listeners to discriminate between two complexes modulated 180 degrees out of phase with each other). For all three measures, at all F0s, thresholds were low (< 4% for FMTs, < 5% for FMDDTs, and < 1.5% for FDLs) when stimuli were presented in the LOW region, and high (> 10% for FMTs, > 7% for FMDDTs, and > 2.5% for FDLs) when presented in the HIGH region. When stimuli were presented in the MID region, thresholds were low for low F0s, and high for high F0s. Performance was not markedly affected by the phase relationship between the components of a complex, except for stimuli with intermediate F0s in the MID spectral region, where FDLs and FMDDTs were much higher for ALT-phase stimuli than for SINE-phase stimuli, consistent with their unclear pitch. This difference was much smaller when FMTs were measured. The interaction between F0 and spectral region for both sets of experiments can be accounted for by a single definition of resolvability.
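The SINE- and ALT-phase complexes described in this abstract can be sketched in a few lines. This is a minimal illustration only, assuming the bandpass edges and F0s given above; the FM, pink-noise background, and presentation details are omitted, and the function name is my own:

```python
import numpy as np

def harmonic_complex(f0, lo, hi, fs=44100, dur=0.5, phase="sine"):
    """Sum of the harmonics of f0 that fall inside [lo, hi] Hz.

    phase="sine": all components in 0-degree (sine) phase.
    phase="alt":  alternating sine/cosine (0/90-degree) phase.
    """
    t = np.arange(int(fs * dur)) / fs
    ns = range(int(np.ceil(lo / f0)), int(hi // f0) + 1)
    x = np.zeros_like(t)
    for i, n in enumerate(ns):
        # every second component shifted by 90 degrees in ALT phase
        phi = np.pi / 2 if (phase == "alt" and i % 2 == 1) else 0.0
        x += np.sin(2 * np.pi * n * f0 * t + phi)
    return x

# MID-region (1375-1875 Hz) complexes at F0 = 125 Hz, as in the study
mid_sine = harmonic_complex(125, 1375, 1875, phase="sine")
mid_alt = harmonic_complex(125, 1375, 1875, phase="alt")
```

At this F0 the MID passband admits harmonics 11 through 15; the two phase conditions share an identical power spectrum and differ only in component starting phase, which is what makes them useful for probing resolvability.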
Journal of Experimental Psychology: Human Perception and Performance | 2004
Rhodri Cusack; John M. Deeks; Genevieve Aikman; Robert P. Carlyon
Often, the sound arriving at the ears is a mixture from many different sources, but only 1 is of interest. To assist with selection, the auditory system structures the incoming input into streams, each of which ideally corresponds to a single source. Some authors have argued that this process of streaming is automatic and invariant, but recent evidence suggests it is affected by attention. In Experiments 1 and 2, it is shown that the effect of attention is not a general suppression of streaming on an unattended side of the ascending auditory pathway or in unattended frequency regions. Experiments 3 and 4 investigate the effect on streaming of physical gaps in the sequence and of brief switches in attention away from a sequence. The results demonstrate that after even short gaps or brief switches in attention, streaming is reset. The implications are discussed, and a hierarchical decomposition model is proposed.
Journal of the Acoustical Society of America | 1994
Robert P. Carlyon; Trevor M. Shackleton
Four experiments measured sensitivity (d′) to differences in fundamental frequency (F0) between two simultaneously presented groups of frequency‐modulated harmonics. Each group was passed through a bandpass filter in either a LOW (125–625 Hz), MID (1375–1875 Hz), or HIGH (3900–5400 Hz) frequency region. In the first two experiments, a dynamic F0 difference (ΔF0) was created by introducing a 180° disparity between the frequency modulations imposed on the two groups. Experiment 1 measured sensitivity to such ΔF0’s between a MID group with a baseline F0 of 125 Hz and all components summed in sine phase, and a HIGH group, in four conditions. When the baseline F0 of the HIGH group was also 125 Hz, performance was good when its components were summed in sine phase and bad when they were in alternating phase. Conversely, when the HIGH F0 was 62.5 Hz, performance was better for alternating phase than for sine phase, consistent with alternating phase doubling the internal representation of HIGH group’s F0. Similar...
Journal of the Acoustical Society of America | 1984
Robert P. Carlyon; Brian C. J. Moore
These experiments were designed to assess the importance of different types of information which might be used in detecting intensity changes for pure tones. Thresholds for detecting an intensity change, expressed as 10 log(ΔI/I), were measured over a wide range of frequencies and levels under conditions where one or more sources of information was either present or removed. Spread of excitation was restricted by using bandstop noise centered at the signal frequency. Information conveyed by dynamic responses to signal onsets and offsets was eliminated by masking onsets and offsets with bursts of bandpass noise. Phase-locking information was eliminated by using high-frequency signals (above 5 kHz). Dynamic responses to signal onsets and offsets appear to play little role in intensity discrimination. Phase locking does appear to be important, since Weber's law or a near-miss to it was observed at low frequencies, whereas at high frequencies performance deteriorated at moderate sound levels and improved again at high levels. A preliminary experiment, using 225-ms stimuli, revealed only a small midlevel deterioration at high frequencies. However, when 30-ms stimuli were used a large deterioration was observed, performance being worse when bandstop noise was presented with the tone. Hence, at short durations and high frequencies, spread of excitation seems to be important: when it is restricted by bandstop noise, values of 10 log(ΔI/I) observed at moderate levels can be as large as 14 dB. The results of the experiments are consistent with a bimodal distribution of thresholds in primary auditory neurons; at intermediate levels neither population will operate effectively. (ABSTRACT TRUNCATED AT 250 WORDS)
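The threshold measure 10 log(ΔI/I) used in this abstract converts a just-detectable level change into a Weber fraction expressed in dB. A small helper (hypothetical, for illustration only) makes the relation concrete:

```python
import math

def weber_fraction_db(pedestal_db, combined_db):
    """10*log10(delta I / I): the intensity-discrimination measure above.

    pedestal_db: level of the pedestal alone (dB SPL).
    combined_db: level of pedestal plus just-detectable increment (dB SPL).
    """
    i_ped = 10 ** (pedestal_db / 10)    # intensities in arbitrary units
    i_comb = 10 ** (combined_db / 10)
    return 10 * math.log10((i_comb - i_ped) / i_ped)

# A just-noticeable 1-dB level change corresponds to
# 10*log10(delta I / I) of about -5.9 dB
print(round(weber_fraction_db(60.0, 61.0), 1))
```

On this scale, larger (less negative) values mean poorer sensitivity, so the 14-dB values reported above for short, high-frequency tones in bandstop noise reflect a very large just-detectable intensity change.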
Journal of the Acoustical Society of America | 1986
Robert P. Carlyon; Brian C. J. Moore
Thresholds were compared for the detection of 20-ms sinusoidal signals presented with either continuous or gated sinusoidal pedestals of the same frequency (500 or 6500 Hz). Pedestal levels ranged from 35 to 80 dB SPL. For 500-Hz signals, thresholds were lower in the continuous-pedestal condition than in the gated-pedestal condition, for all pedestal levels above 35 dB SPL. When the pedestal level was 35 dB, thresholds were higher in the continuous-pedestal condition than in the gated-pedestal condition. This was also true at all pedestal levels when bandstop noise centered around the pedestal frequency was added to the pedestal. For 6500-Hz signals, a deterioration in performance at intermediate levels, similar to that reported by Carlyon and Moore [J. Acoust. Soc. Am. 76, 1369-1376 (1984)], was found in the gated-pedestal condition. No such deterioration occurred in the continuous-pedestal condition. However, masking signal onsets and offsets by bursts of bandpass noise produced a midlevel deterioration in the continuous-pedestal condition. This was true when bandstop noise was absent, and when it was gated on and off in each observation interval. When continuous bandstop noise was present, no midlevel deterioration was observed, even when onsets and offsets were masked. The results suggest that in the continuous-pedestal condition subjects may normally maintain performance across level at 6500 Hz by attending to a transient response to signal onsets. Presenting bursts of bandpass noise disrupts the detection of such a response. The absence of a midlevel deterioration when continuous bandstop noise was present may be related to the adaptation to the sinusoidal pedestal that was caused by the bandstop noise.
Journal of the Acoustical Society of America | 2007
Ying-Yee Kong; Robert P. Carlyon
Speech recognition in noise improves with combined acoustic and electric stimulation compared to electric stimulation alone [Kong et al., J. Acoust. Soc. Am. 117, 1351-1361 (2005)]. Here the contribution of fundamental frequency (F0) and low-frequency phonetic cues to speech recognition in combined hearing was investigated. Normal-hearing listeners heard vocoded speech in one ear and low-pass (LP) filtered speech in the other. Three listening conditions (vocode-alone, LP-alone, combined) were investigated. Target speech (average F0=120 Hz) was mixed with a time-reversed masker (average F0=172 Hz) at three signal-to-noise ratios (SNRs). LP speech aided performance at all SNRs. Low-frequency phonetic cues were then removed by replacing the LP speech with an LP equal-amplitude harmonic complex, frequency and amplitude modulated by the F0 and temporal envelope of voiced segments of the target. The combined hearing advantage disappeared at 10 and 15 dB SNR, but persisted at 5 dB SNR. A similar finding occurred when, additionally, F0 contour cues were removed. These results are consistent with a role for low-frequency phonetic cues, but not with a combination of F0 information between the two ears. The enhanced performance at 5 dB SNR with F0 contour cues absent suggests that voicing or glimpsing cues may be responsible for the combined hearing benefit.
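The LP-speech replacement described above, an equal-amplitude harmonic complex frequency- and amplitude-modulated by the target's F0 and envelope, can be sketched as follows. This is a rough illustration under my own assumptions (sample rate, cutoff, and all names are illustrative, not from the paper):

```python
import numpy as np

def f0_modulated_complex(f0_track, env, fs=16000, cutoff=500.0):
    """Equal-amplitude harmonic complex below `cutoff` Hz, frequency-
    modulated by an F0 track and amplitude-modulated by an envelope.

    f0_track: instantaneous F0 in Hz, one value per sample (0 = unvoiced).
    env:      temporal envelope, one value per sample.
    """
    phase = 2 * np.pi * np.cumsum(f0_track) / fs   # running F0 phase
    voiced = f0_track > 0
    k = int(cutoff // f0_track[voiced].min()) if voiced.any() else 1
    x = np.zeros(len(f0_track))
    for h in range(1, k + 1):                      # harmonics 1..k
        x += np.sin(h * phase)
    return env * x * voiced                        # voiced segments only

# Synthetic voiced segment: F0 glide from 100 to 140 Hz over 0.1 s
f0 = np.linspace(100.0, 140.0, 1600)
carrier = f0_modulated_complex(f0, np.ones(1600))
```

Because the complex carries the target's F0 contour and envelope but no spectral shape, listening to it alongside the vocoded ear isolates the contribution of F0 information from that of low-frequency phonetic cues.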
Brain and Language | 2008
Friedemann Pulvermüller; Yury Shtyrov; Anna S. Hasting; Robert P. Carlyon
It has been a matter of debate whether the specifically human capacity to process syntactic information draws on attentional resources or is automatic. To address this issue, we recorded neurophysiological indicators of syntactic processing to spoken sentences while subjects were distracted to different degrees from language processing. Subjects were either passively distracted, by watching a silent video film, or their attention was actively drawn away from the language input by performing a demanding acoustic signal detection task. An early index of syntactic violations, the syntactic Mismatch Negativity (sMMN), distinguished between grammatical and ungrammatical speech even under the strongest distraction. The magnitude of the early sMMN (at <150 ms) was unaffected by the attentional load of the distraction task. The independence of the early syntactic brain response from attentional distraction provides neurophysiological evidence for the automaticity of syntax and for its autonomy from other attention-demanding processes, including acoustic stimulus discrimination. The first attentional modulation of syntactic brain responses became manifest at a later stage, at approximately 200 ms, demonstrating the narrowness of the early time window of syntactic autonomy. We discuss these results in the light of modular and interactive theories of cognitive processing and draw inferences on the automaticity of both the cognitive MMN response and certain grammar processes in general.
Acoustics Research Letters Online (ARLO) | 2000
Colette M. McKay; Hugh J. McDermott; Robert P. Carlyon
Four adult users of the Mini System 22 cochlear implant participated in an experiment to investigate the perceptual independence of place-of-stimulation and temporal cues for pulsatile electrical stimulation. The motivation was the relatively poor rate discrimination ability of cochlear implantees compared to the higher accuracy of temporal coding revealed by electrophysiological measurements and the performance of normal hearing listeners. The hypothesis tested was that the central auditory system can combine consistent rate and place cues in a way that is more effective than using each cue independently. Difference limens for rate change, place change, and combined rate and place change (with consistent and inconsistent cues) were compared for stimulation at low and high rates. The results were compatible with place and rate cues being used independently in the combined rate- and place-change conditions, with no advantage found for the consistent-cue conditions.
JARO: Journal of the Association for Research in Otolaryngology | 2006
Christopher J. Long; Robert P. Carlyon; Ruth Y. Litovsky; Daniel H. Downs
Nearly 100,000 deaf patients worldwide have had their hearing restored by a cochlear implant (CI) fitted to one ear. However, although many patients understand speech well in quiet, even the most successful experience difficulty in noisy situations. In contrast, normal-hearing (NH) listeners achieve improved speech understanding in noise by processing the differences between the waveforms reaching the two ears. Here we show that a form of binaural processing can be achieved by patients fitted with an implant in each ear, leading to substantial improvements in signal detection in the presence of competing sounds. The stimulus in each ear consisted of a narrowband noise masker, to which a tonal signal was sometimes added; this mixture was half-wave rectified, lowpass-filtered, and then used to modulate a 1000-pps biphasic pulse train. All four CI users tested showed significantly better signal detection when the signal was presented out of phase at the two ears than when it was in phase. This advantage occurred even though subjects only received information about the slowly varying sound envelope to be presented, contrary to previous reports that waveform fine structure dominates binaural processing. If this advantage generalizes to multichannel situations, it would demonstrate that envelope-based CI speech-processing strategies may allow patients to exploit binaural unmasking in order to improve speech understanding in noise. Furthermore, because the tested patients had been deprived of binaural hearing for eight or more years, our results show that some sensitivity to time-varying interaural cues can persist over extended periods of binaural deprivation.
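The stimulus chain described above (half-wave rectify the masker-plus-signal mixture, lowpass-filter it, then modulate a 1000-pps biphasic pulse train) can be sketched directly. A minimal sketch under stated assumptions: the one-pole filter, its cutoff, and all names are illustrative stand-ins, since the paper's exact filter is not given here:

```python
import numpy as np

def envelope_modulated_pulse_train(x, fs=44100, pps=1000, cutoff=50.0):
    """Half-wave rectify x, lowpass-filter it, and use the result to
    amplitude-modulate a biphasic pulse train at `pps` pulses/s."""
    rect = np.maximum(x, 0.0)                 # half-wave rectification
    a = np.exp(-2 * np.pi * cutoff / fs)      # one-pole lowpass coefficient
    env = np.zeros_like(rect)
    for i in range(1, len(rect)):             # smooth envelope extraction
        env[i] = (1 - a) * rect[i] + a * env[i - 1]
    train = np.zeros_like(x)                  # biphasic pulses: +1 then -1
    period = fs // pps                        # samples between pulses
    train[0::period] = 1.0
    train[1::period] = -1.0
    return env * train

# Illustrative masker: a 500-Hz tone with a little added noise
rng = np.random.default_rng(0)
t = np.arange(4410) / 44100
masker = np.sin(2 * np.pi * 500 * t) + 0.1 * rng.standard_normal(4410)
stim = envelope_modulated_pulse_train(masker)
```

The key property for the binaural result is that only the slowly varying envelope survives this processing, so any detection advantage for out-of-phase signals must come from interaural envelope cues rather than waveform fine structure.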