Christopher W. Bishop
University of California, Davis
Publications
Featured research published by Christopher W. Bishop.
NeuroImage | 2009
Antoine J. Shahin; Christopher W. Bishop; Lee M. Miller
The brain uses context and prior knowledge to repair degraded sensory inputs and improve perception. For example, listeners hear speech continuing uninterrupted through brief noises, even if the speech signal is artificially removed from the noisy epochs. In a functional MRI study, we show that this temporal filling-in process is based on two dissociable neural mechanisms: the subjective experience of illusory continuity, and the sensory repair mechanisms that support it. Areas mediating illusory continuity include the left posterior angular gyrus (AG) and superior temporal sulcus (STS) and the right STS. Unconscious sensory repair occurs in Broca's area, bilateral anterior insula, and pre-supplementary motor area. The left AG/STS and all the repair regions show evidence for word-level template matching and communicate more when fewer acoustic cues are available. These results support a two-path process where the brain creates coherent perceptual objects by applying prior knowledge and filling in corrupted sensory information.
Journal of Cognitive Neuroscience | 2009
Christopher W. Bishop; Lee M. Miller
In noisy environments, listeners tend to hear a speaker's voice yet struggle to understand what is said. The most effective way to improve intelligibility in such conditions is to watch the speaker's mouth movements. Here we identify the neural networks that distinguish understanding from merely hearing speech, and determine how the brain applies visual information to improve intelligibility. Using functional magnetic resonance imaging, we show that understanding speech-in-noise is supported by a network of brain areas including the left superior parietal lobule, the motor/premotor cortex, and the left anterior superior temporal sulcus (STS), a likely apex of the acoustic processing hierarchy. Multisensory integration likely improves comprehension through improved communication between the left temporal–occipital boundary, the left medial-temporal lobe, and the left STS. This demonstrates how the brain uses information from multiple modalities to improve speech comprehension in naturalistic, acoustically adverse conditions.
Ear and Hearing | 2012
Tom Campbell; Jess R. Kerlin; Christopher W. Bishop; Lee M. Miller
Objective: To reduce stimulus transduction artifacts in EEG while using insert earphones. Design: Reference Equivalent Threshold SPLs were assessed for Etymotic ER-4B earphones in 15 volunteers. Auditory brainstem responses (ABRs), middle latency responses (MLRs), and long-duration complex ABRs (cABRs) to click and /dɑ/ speech stimuli were recorded in a single-case design. Results: Transduction artifacts occurred in raw EEG responses, but they were eliminated by shielding, counter-phasing (averaging across stimuli 180° out of phase), or re-referencing. Conclusions: Clinical-grade ABRs, MLRs, and cABRs can be recorded with a standard digital EEG system and high-fidelity insert earphones, provided one or more techniques are used to remove the stimulus transduction artifact.
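The counter-phasing technique mentioned above rests on a simple idea: the earphone transduction artifact follows stimulus polarity, whereas the neural response does not, so averaging epochs recorded to original and polarity-inverted stimuli cancels the artifact while preserving the response. Below is a minimal sketch in Python (NumPy) with synthetic signals standing in for real EEG; all waveform parameters are illustrative assumptions, not values from the study.

```python
import numpy as np

fs = 16000                        # sampling rate (Hz), illustrative
t = np.arange(0, 0.05, 1 / fs)    # 50 ms epoch

# Synthetic neural response: does NOT flip when the stimulus is inverted
neural = 2e-6 * np.sin(2 * np.pi * 500 * t) * np.exp(-t / 0.01)

# Synthetic transduction artifact: follows stimulus polarity exactly
artifact = 5e-6 * np.sin(2 * np.pi * 1000 * t) * np.exp(-t / 0.002)

# Epochs recorded to the original and to the 180-degree phase-inverted stimulus
epoch_original = neural + artifact
epoch_inverted = neural - artifact    # artifact flips sign, neural response does not

# Counter-phased average: the artifact cancels, the neural response survives
counter_phased = 0.5 * (epoch_original + epoch_inverted)

print("residual artifact (V):", np.max(np.abs(counter_phased - neural)))
```

In practice the same logic is applied to averaged EEG epochs from interleaved normal-polarity and inverted-polarity stimulus presentations.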
BMC Neuroscience | 2011
Kevin T. Hill; Christopher W. Bishop; Deepak Yadav; Lee M. Miller
Background: Segregating auditory scenes into distinct objects or streams is one of our brain's greatest perceptual challenges. Streaming has classically been studied with bistable sound stimuli, perceived alternately as a single group or two separate groups. Throughout the last decade, different methodologies have yielded inconsistent evidence about the role of auditory cortex in the maintenance of streams. In particular, studies using functional magnetic resonance imaging (fMRI) have been unable to show persistent activity within auditory cortex (AC) that distinguishes between perceptual states. Results: We use bistable stimuli, an explicit perceptual categorization task, and a focused region of interest (ROI) analysis to demonstrate an effect of perceptual state within AC. We find that AC has more activity when listeners perceive the split percept rather than the grouped percept. In addition, within this ROI the pattern of acoustic response across voxels is significantly correlated with the pattern of perceptual modulation. In a whole-brain exploratory test, we corroborate previous work showing an effect of perceptual state in the intraparietal sulcus. Conclusions: Our results show that the maintenance of auditory streams is reflected in AC activity, directly relating sound responses to perception, and that perceptual state is further represented in multiple, higher-level cortical regions.
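The ROI analysis above compares two voxel-wise spatial patterns: the response to sound per se and the modulation by perceptual state. A minimal sketch of that kind of comparison in Python (NumPy/SciPy) follows; the array shape, the random data, and the choice of Pearson correlation are assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_voxels = 200   # voxels in a hypothetical auditory-cortex ROI

# Hypothetical per-voxel effect estimates (e.g., GLM betas), synthesized here
acoustic_response = rng.normal(1.0, 0.5, n_voxels)           # sound vs. silence
perceptual_modulation = (0.3 * acoustic_response              # built to correlate, for demo
                         + rng.normal(0.0, 0.4, n_voxels))    # split vs. grouped percept

# Correlate the two spatial patterns across voxels within the ROI
r, p = stats.pearsonr(acoustic_response, perceptual_modulation)
print(f"pattern correlation: r = {r:.2f}, p = {p:.3g}")
```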
Frontiers in Human Neuroscience | 2012
Kevin T. Hill; Christopher W. Bishop; Lee M. Miller
The human brain uses acoustic cues to decompose complex auditory scenes into their components. For instance, to improve communication, a listener can select an individual "stream," such as a talker in a crowded room, based on cues such as pitch or location. Despite numerous investigations into auditory streaming, few have demonstrated clear correlates of perception; instead, in many studies perception covaries with changes in physical stimulus properties (e.g., frequency separation). In the current report, we employ a classic ABA streaming paradigm and human electroencephalography (EEG) to disentangle the individual contributions of stimulus properties from changes in auditory perception. We find that changes in perceptual state—that is, the perception of one versus two auditory streams with physically identical stimuli—and changes in physical stimulus properties are reflected independently in the event-related potential (ERP) during overlapping time windows. These findings emphasize the necessity of controlling for stimulus properties when studying perceptual effects of streaming. Furthermore, the independence of the perceptual effect from stimulus properties suggests the neural correlates of streaming reflect a tone's relative position within a larger sequence (1st, 2nd, 3rd) rather than its acoustics. By clarifying the role of stimulus attributes along with perceptual changes, this study helps explain precisely how the brain is able to distinguish a sound source of interest in an auditory scene.
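The classic ABA paradigm is simple to sketch: repeating A-B-A tone triplets whose frequency separation influences whether listeners hear one stream or two. Below is a minimal Python (NumPy) generator; the tone durations, gaps, frequencies, and semitone separation are illustrative assumptions rather than the parameters used in the study.

```python
import numpy as np

def aba_sequence(f_a=500.0, semitone_sep=6, tone_dur=0.05,
                 gap=0.05, n_triplets=10, fs=44100):
    """Generate a classic ABA- streaming sequence (A, B, A, silence, repeated)."""
    f_b = f_a * 2 ** (semitone_sep / 12)       # B tone shifted by the semitone separation
    t = np.arange(0, tone_dur, 1 / fs)
    ramp = np.hanning(int(0.01 * fs))          # 10 ms on/off ramps to avoid clicks
    env = np.ones_like(t)
    env[:len(ramp) // 2] = ramp[:len(ramp) // 2]
    env[-(len(ramp) // 2):] = ramp[-(len(ramp) // 2):]

    def tone(f):
        return env * np.sin(2 * np.pi * f * t)

    silence = np.zeros(int(gap * fs))
    triplet = np.concatenate([tone(f_a), tone(f_b), tone(f_a), silence])
    return np.tile(triplet, n_triplets)

# Larger frequency separations tend to promote the "two streams" (split) percept
seq = aba_sequence(semitone_sep=6)
print(f"{len(seq)} samples, {len(seq) / 44100:.2f} s")
```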
Current Biology | 2011
Christopher W. Bishop; Sam London; Lee M. Miller
Locating sounds in realistic scenes is challenging because of distracting echoes and coarse spatial acoustic estimates. Fortunately, listeners can improve performance through several compensatory mechanisms. For instance, their brains perceptually suppress short-latency (1-10 ms) echoes by constructing a representation of the acoustic environment in a process called the precedence effect. This remarkable ability depends on the spatial and spectral relationship between the first, or precedent, sound wave and subsequent echoes. In addition to using acoustics alone, the brain also improves sound localization by incorporating spatially precise visual information. Specifically, vision refines auditory spatial receptive fields and can capture auditory perception such that sound is localized toward a coincident visual stimulus. Although visual cues and the precedence effect are each known to improve performance independently, it is not clear whether these mechanisms can cooperate or interfere with each other. Here we demonstrate that echo suppression is enhanced when visual information spatially and temporally coincides with the precedent wave. Conversely, echo suppression is inhibited when vision coincides with the echo. These data show that echo suppression is a fundamentally multisensory process in everyday environments, where vision modulates even this largely automatic auditory mechanism to organize a coherent spatial experience.
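A precedence-effect stimulus is essentially a lead sound followed shortly by a delayed, often attenuated copy that simulates an echo. Here is a minimal Python (NumPy) sketch of such a lead/lag pair; the noise-burst stimulus, the 4 ms delay, and the attenuation are illustrative assumptions, and real experiments would present lead and lag from different locations (e.g., separate loudspeakers or distinct binaural cues).

```python
import numpy as np

fs = 44100
rng = np.random.default_rng(1)

# Lead (precedent) sound: a brief 5 ms noise burst
lead = rng.normal(0.0, 0.2, int(0.005 * fs))

def lead_lag_pair(lead, delay_ms=4.0, lag_gain=0.8, fs=44100):
    """Return a signal containing the lead sound plus a delayed, attenuated echo."""
    delay_samples = int(round(delay_ms * 1e-3 * fs))
    out = np.zeros(len(lead) + delay_samples)
    out[:len(lead)] += lead                                             # precedent wave
    out[delay_samples:delay_samples + len(lead)] += lag_gain * lead     # simulated echo
    return out

# Echo delays in roughly the 1-10 ms range are typically perceptually suppressed
stimulus = lead_lag_pair(lead, delay_ms=4.0)
print(f"{len(stimulus)} samples, echo delayed by 4 ms")
```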
PLOS ONE | 2011
Christopher W. Bishop; Lee M. Miller
Speech is the most important form of human communication, but ambient sounds and competing talkers often degrade its acoustics. Fortunately, the brain can use visual information, especially its highly precise spatial information, to improve speech comprehension in noisy environments. Previous studies have demonstrated that audiovisual integration depends strongly on spatiotemporal factors. However, some integrative phenomena such as McGurk interference persist even with gross spatial disparities, suggesting that spatial alignment is not necessary for robust integration of audiovisual place-of-articulation cues. It is therefore unclear how speech cues interact with audiovisual spatial integration mechanisms. Here, we combine two well-established psychophysical phenomena, the McGurk effect and the ventriloquist's illusion, to explore this dependency. Our results demonstrate that conflicting spatial cues may not interfere with audiovisual integration of speech, but conflicting speech cues can impede integration in space. This suggests a direct but asymmetrical influence between ventral 'what' and dorsal 'where' pathways.
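The design described above crosses speech-cue conflict (incongruent audio and visual syllables, as in the McGurk effect) with spatial conflict (audio displaced from the video, as in ventriloquism). A small Python sketch of such a factorial condition matrix follows; the specific syllables and spatial offsets are hypothetical placeholders, not the study's stimuli.

```python
from itertools import product

# Hypothetical factor levels; the actual stimuli in the study may have differed
audio_syllables = ["ba", "ga"]
visual_syllables = ["ba", "ga"]     # incongruent pairings can yield McGurk percepts
spatial_offsets_deg = [0, 20]       # audio displaced from the video (ventriloquist setup)

conditions = [
    {"audio": a, "visual": v, "offset_deg": d,
     "speech_conflict": a != v, "spatial_conflict": d != 0}
    for a, v, d in product(audio_syllables, visual_syllables, spatial_offsets_deg)
]

for c in conditions:
    print(c)
```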
Journal of Experimental Psychology: Human Perception and Performance | 2012
Sam London; Christopher W. Bishop; Lee M. Miller
Communication and navigation in real environments rely heavily on the ability to distinguish objects in acoustic space. However, auditory spatial information is often corrupted by conflicting cues and noise such as acoustic reflections. Fortunately the brain can apply mechanisms at multiple levels to emphasize target information and mitigate such interference. In a rapid phenomenon known as the precedence effect, reflections are perceptually fused with the veridical primary sound. The brain can also use spatial attention to highlight a target sound at the expense of distracters. Although attention has been shown to modulate many auditory perceptual phenomena, rarely does it alter how acoustic energy is first parsed into objects, as with the precedence effect. This brief report suggests that both endogenous (voluntary) and exogenous (stimulus-driven) spatial attention have a profound influence on the precedence effect depending on where they are oriented. Moreover, we observed that both types of attention could enhance perceptual fusion while only exogenous attention could hinder it. These results demonstrate that attention, by altering how auditory objects are formed, guides the basic perceptual organization of our acoustic environment.
Journal of Neurophysiology | 2012
Christopher W. Bishop; Sam London; Lee M. Miller
Journal of the Acoustical Society of America | 2014
Christopher W. Bishop; Deepak Yadav; Sam London; Lee M. Miller