Kiley Seymour
Macquarie University
Publications
Featured research published by Kiley Seymour.
Psychological Science | 2014
Timo Stein; Kiley Seymour; Martin N. Hebart; Philipp Sterzer
Signals of threat—such as fearful faces—are processed with priority and have privileged access to awareness. This fear advantage is commonly believed to engage a specialized subcortical pathway to the amygdala that bypasses visual cortex and processes predominantly low-spatial-frequency information but is largely insensitive to high spatial frequencies. We tested visual detection of low- and high-pass-filtered fearful and neutral faces under continuous flash suppression and sandwich masking, and we found consistently that the fear advantage was specific to high spatial frequencies. This demonstrates that rapid fear detection relies not on low- but on high-spatial-frequency information—indicative of an involvement of cortical visual areas. These findings challenge the traditional notion that a subcortical pathway to the amygdala is essential for the initial processing of fear signals and support the emerging view that the cerebral cortex is crucial for the processing of ecologically relevant signals.
NeuroImage | 2014
Bianca van Kemenade; Kiley Seymour; Evelin Wacker; Bernhard Spitzer; Felix Blankenburg; Philipp Sterzer
The human motion complex hMT+/V5 is activated not only by visual motion, but also by tactile and auditory motion. Whilst direction-selectivity has been found within this complex for visual and auditory stimuli, it is unknown whether hMT+/V5 also contains direction-specific information from the tactile modality. In the current study, we sought to investigate whether hMT+/V5 contains direction-specific information about visual/tactile moving stimuli. Leftward and rightward moving stimuli were presented in the visual and tactile modalities in an event-related fMRI design. Using region-of-interest-based multivariate pattern analysis we could decode the two motion directions for both tactile and visual stimuli in hMT+/V5. The activity patterns of the two modalities differed significantly, indicating that motion direction information from different modalities may be carried by distinct sets of neuronal populations. Our findings show that hMT+/V5 contains specific information about the direction of a moving stimulus in both the tactile and visual modalities, supporting the theory of hMT+/V5 being a multimodal motion area.
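The region-of-interest-based multivariate pattern analysis described above can be illustrated with a minimal sketch: a linear classifier is trained on trial-wise voxel patterns from an ROI and evaluated with cross-validation. The data here are entirely synthetic, and the ROI size, trial counts, and noise level are illustrative assumptions, not values from the study.

```python
# Minimal MVPA sketch: decode motion direction (left vs right) from
# synthetic voxel activity patterns in a hypothetical ROI such as hMT+/V5.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 50            # assumed ROI of 50 voxels
y = np.repeat([0, 1], n_trials // 2)   # 0 = leftward, 1 = rightward motion

# Each direction evokes a distinct spatial activity pattern, plus noise.
pattern = {0: rng.normal(0, 1, n_voxels), 1: rng.normal(0, 1, n_voxels)}
X = np.stack([pattern[label] + rng.normal(0, 2, n_voxels) for label in y])

# Cross-validated decoding with a linear support vector machine.
scores = cross_val_score(LinearSVC(dual=False), X, y, cv=5)
mean_accuracy = scores.mean()
print(f"decoding accuracy: {mean_accuracy:.2f}")  # above 0.5 chance
```

Above-chance cross-validated accuracy is the criterion for saying the ROI "contains information" about the stimulus dimension being decoded.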
Neuropsychopharmacology | 2013
Kiley Seymour; Timo Stein; Lia Lira Olivier Sanders; Matthias Guggenmos; Ines Theophil; Philipp Sterzer
Schizophrenia is typically associated with higher-level cognitive symptoms, such as disorganized thoughts, delusions, and hallucinations. However, deficits in visual processing have also been consistently reported in the illness. Here, we provide strong neurophysiological evidence for a marked perturbation at the earliest level of cortical visual processing in patients with paranoid schizophrenia. Using functional magnetic resonance imaging (fMRI) and adapting a well-established approach from electrophysiology, we found that orientation-specific contextual modulation of cortical responses in human primary visual cortex (V1)—a hallmark of early neural encoding of visual stimuli—is dramatically reduced in patients with schizophrenia. This indicates that contextual processing in schizophrenia is altered at the earliest stages of visual cortical processing and supports current theories that emphasize the role of abnormalities in perceptual synthesis (eg, false inference) in schizophrenia.
Vision Research | 2009
J. Scott McDonald; Kiley Seymour; Mark M. Schira; Branka Spehar; Colin W. G. Clifford
The responses of orientation-selective neurons in primate visual cortex can be profoundly affected by the presence and orientation of stimuli falling outside the classical receptive field. Our perception of the orientation of a line or grating also depends upon the context in which it is presented. For example, the perceived orientation of a grating embedded in a surround tends to be repelled from the predominant orientation of the surround. Here, we used fMRI to investigate the basis of orientation-specific surround effects in five functionally-defined regions of visual cortex: V1, V2, V3, V3A/LO1 and hV4. Test stimuli were luminance-modulated and isoluminant gratings that produced responses similar in magnitude. Less BOLD activation was evident in response to gratings with parallel versus orthogonal surrounds across all the regions of visual cortex investigated. When an isoluminant test grating was surrounded by a luminance-modulated inducer, the degree of orientation-specific contextual modulation was no larger for extrastriate areas than for V1, suggesting that the observed effects might originate entirely in V1. However, more orientation-specific modulation was evident in extrastriate cortex when both test and inducer were luminance-modulated gratings than when the test was isoluminant; this difference was significant in area V3. We suggest that the pattern of results in extrastriate cortex may reflect a refinement of the orientation-selectivity of surround suppression specific to the colour of the surround or, alternatively, processes underlying the segmentation of test and inducer by spatial phase or orientation when no colour cue is available.
Vision Research | 2009
Kiley Seymour; J. Scott McDonald; Colin W. G. Clifford
We used identification at threshold to systematically measure binding costs in two visual modalities. We presented a conjunction of two features as a signal stimulus and concurrently measured detection and identification performance as a function of three threshold variables: duration, contrast and coherence. Discrepancies between detection and identification sensitivity functions demonstrated a consistent processing cost to visual feature binding. Our findings suggest that feature binding is indeed a genuine problem for the brain to solve. This simple paradigm can transfer across arbitrary feature combinations and is therefore suitable to use in experiments addressing mechanisms of sensory integration.
Neuroreport | 2008
Kiley Seymour; Hans-Otto Karnath; Marc Himmelbach
The perception of global scenes and objects consisting of multiple constituents is based on the integration of local elements or features. Gestalt grouping cues, such as proximity or similarity, can aid this process. Using functional MRI, we investigated whether grouping guided by different gestalt cues relies on distinct networks in the brain or shares a common network. Our study revealed that gestalt grouping involved the inferior parietal cortex, middle temporal gyrus, and prefrontal cortex irrespective of the specific cue used. These findings agree with observations in neurological patients, which suggest that inferior parietal regions may aid the integration of local features into a global gestalt. Damage to this region results in simultanagnosia, a deficit in perceiving multiple objects and global scenes.
The Journal of Neuroscience | 2017
Susan G. Wardle; J. Brendan Ritchie; Kiley Seymour; Thomas A. Carlson
Multivariate pattern analysis is a powerful technique; however, a significant theoretical limitation in neuroscience is the ambiguity in interpreting the source of decodable information used by classifiers. This is exemplified by the continued controversy over the source of orientation decoding from fMRI responses in human V1. Recently, Carlson (2014) identified a potential source of decodable information by modeling voxel responses based on the Hubel and Wiesel (1972) ice-cube model of visual cortex. The model revealed that activity associated with the edges of gratings covaries with orientation and could potentially be used to discriminate orientation. Here we empirically evaluate whether "edge-related activity" underlies orientation decoding from patterns of BOLD response in human V1. First, we systematically mapped classifier performance as a function of stimulus location, using population receptive field modeling to isolate each voxel's overlap with a large annular grating stimulus. Orientation was decodable across the stimulus; however, peak decoding performance occurred for voxels with receptive fields closer to the fovea and overlapping with the inner edge. Critically, we did not observe the expected second peak in decoding performance at the outer stimulus edge as predicted by the edge account. Second, we evaluated whether voxels that contribute most to classifier performance have receptive fields that cluster in cortical regions corresponding to the retinotopic location of the stimulus edge. Instead, we find the distribution of highly weighted voxels to be approximately random, with a modest bias toward more foveal voxels. Our results demonstrate that edge-related activity is likely not necessary for orientation decoding. SIGNIFICANCE STATEMENT A significant theoretical limitation of multivariate pattern analysis in neuroscience is the ambiguity in interpreting the source of decodable information used by classifiers.
For example, orientation can be decoded from BOLD activation patterns in human V1, even though orientation columns are at a finer spatial scale than 3T fMRI. Consequently, the source of decodable information remains controversial. Here we test the proposal that information related to the stimulus edges underlies orientation decoding. We map voxel population receptive fields in V1 and evaluate orientation decoding performance as a function of stimulus location in retinotopic cortex. We find orientation is decodable from voxels whose receptive fields do not overlap with the stimulus edges, suggesting edge-related activity does not substantially drive orientation decoding.
Journal of Neurophysiology | 2012
Kiley Seymour; Colin W. G. Clifford
Motion and binocular disparity are two features in our environment that share a common correspondence problem. Decades of psychophysical research dedicated to understanding stereopsis suggest that these features interact early in human visual processing to disambiguate depth. Single-unit recordings in the monkey also provide evidence for the joint encoding of motion and disparity across much of the dorsal visual stream. Here, we used functional MRI and multivariate pattern analysis to examine where in the human brain conjunctions of motion and disparity are encoded. Subjects sequentially viewed two stimuli that could be distinguished only by their conjunctions of motion and disparity. Specifically, each stimulus contained the same feature information (leftward and rightward motion and crossed and uncrossed disparity) but differed exclusively in the way these features were paired. Our results revealed that a linear classifier could accurately decode which stimulus a subject was viewing based on voxel activation patterns throughout the dorsal visual areas and as early as V2. This decoding success was conditional on some voxels being individually sensitive to the unique conjunctions comprising each stimulus, since a classifier relying only on independent information about motion and binocular disparity could not have distinguished these stimuli. This study expands on evidence that disparity and motion interact at many levels of human visual processing, particularly within the dorsal stream. It also lends support to the idea that stereopsis is subserved by early mechanisms also tuned to direction of motion.
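The logic of the conjunction argument above can be sketched with simulated data: because both stimuli contain identical feature content and differ only in how features are paired, voxels tuned to single features respond identically to both, and only conjunction-tuned voxels allow a linear classifier to separate them. All voxel counts and response values below are illustrative assumptions.

```python
# Sketch: decoding feature pairings succeeds only when some voxels are
# sensitive to the conjunction of motion direction and disparity.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials = 80
y = np.repeat([0, 1], n_trials // 2)   # stimulus A vs stimulus B

def simulate(n_feature_vox, n_conj_vox):
    # Feature-tuned voxels: same mean response to A and B, since both
    # stimuli contain the same motion and disparity features.
    feat = rng.normal(1.0, 0.5, (n_trials, n_feature_vox))
    if n_conj_vox == 0:
        return feat
    # Conjunction-tuned voxels: prefer one feature pairing over the other.
    pref = np.where(y[:, None] == 0, 1.0, -1.0)
    conj = pref + rng.normal(0, 0.5, (n_trials, n_conj_vox))
    return np.hstack([feat, conj])

acc_with_conj = cross_val_score(LinearSVC(dual=False),
                                simulate(50, 10), y, cv=5).mean()
acc_without = cross_val_score(LinearSVC(dual=False),
                              simulate(50, 0), y, cv=5).mean()
print(acc_with_conj, acc_without)  # high vs near-chance
```

Only the first classifier decodes the pairing reliably, mirroring the claim that decoding success was conditional on conjunction-sensitive voxels.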
Cortex | 2014
Bianca van Kemenade; Kiley Seymour; Thomas B. Christophel; Marcus Rothkirch; Philipp Sterzer
When two gratings drifting in different directions are superimposed, the resulting stimulus can be perceived as two overlapping component gratings moving in different directions or as a single pattern moving in one direction. Whilst the motion direction of component gratings is already represented in visual area V1, the majority of previous studies have found processing of pattern motion direction only from visual area V2 onwards. Here, we question these findings using multi-voxel pattern analysis (MVPA). In Experiment 1, we presented superimposed sinusoidal gratings with varying angles between the two component motions. These stimuli were perceived as patterns moving in one of two possible directions. We found that linear support vector machines (SVMs) could generalise across stimuli composed of different component motions to successfully discriminate pattern motion direction from brain activity in V1, V3A and hMT+/V5. This demonstrates the representation of pattern motion information present in these visual areas. This conclusion was verified in Experiment 2, where we manipulated similar plaid stimuli to induce the perception of either a single moving pattern or two separate component gratings. While a classifier could again generalise across stimuli composed of different component motions when they were perceived as a single moving pattern, its performance dropped substantially in the case where components were perceived. Our results indicate that pattern motion direction information is present in V1.
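The cross-stimulus generalisation test used above can be sketched as follows: a linear SVM is trained on responses to plaids built from one pair of component motions and tested on plaids built from a different pair moving in the same pattern directions. Above-chance transfer implies the voxel patterns carry pattern-motion direction rather than only component-motion information. The data and response model here are synthetic assumptions.

```python
# Sketch of cross-generalisation decoding of pattern motion direction.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
n_vox = 60

# Assumed voxel tuning: a signal tied to pattern direction, plus a
# component-specific signal that differs between the two stimulus sets.
pattern_signal = {d: rng.normal(0, 1, n_vox) for d in ("up", "down")}
component_signal = {s: rng.normal(0, 0.5, n_vox) for s in (1, 2)}

def trials(pattern_dir, component_set, n=40):
    base = pattern_signal[pattern_dir] + component_signal[component_set]
    return base + rng.normal(0, 1.0, (n, n_vox))

# Train on plaids from component set 1, test on plaids from set 2.
X_train = np.vstack([trials("up", 1), trials("down", 1)])
X_test = np.vstack([trials("up", 2), trials("down", 2)])
y = np.array(["up"] * 40 + ["down"] * 40)

clf = LinearSVC(dual=False).fit(X_train, y)
transfer_accuracy = clf.score(X_test, y)
print(f"cross-set decoding accuracy: {transfer_accuracy:.2f}")
```

Because the classifier never sees the test-set component motions during training, its transfer accuracy reflects the shared pattern-direction signal, which is the quantity the generalisation analysis targets.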
Quarterly Journal of Experimental Psychology | 2017
Robyn Langdon; Kiley Seymour; Tracey A. Williams; Philip B. Ward
Explicit tests of social cognition have revealed pervasive deficits in schizophrenia. Less is known of automatic social cognition in schizophrenia. We used a spatial orienting task to investigate automatic shifts of attention cued by another person's eye gaze in 29 patients and 28 controls. Central photographic images of a face with eyes shifted left or right, or looking straight ahead, preceded targets that appeared left or right of the cue. To examine automatic effects, cue direction was non-predictive of target location. Cue–target intervals were 100, 300, and 800 ms. In non-social control trials, arrows replaced eye-gaze cues. Both groups showed automatic attentional orienting indexed by faster reaction times (RTs) when arrows were congruent with target location across all cue–target intervals. Similar congruency effects were seen for eye-shift cues at 300 and 800 ms intervals, but patients showed significantly larger congruency effects at 800 ms, which were driven by delayed responses to incongruent target locations. At short 100-ms cue–target intervals, neither group showed faster RTs for congruent than for incongruent eye-shift cues, but patients were significantly slower to detect targets after direct-gaze cues. These findings conflict with previous studies using schematic line drawings of eye-shifts that have found automatic attentional orienting to be reduced in schizophrenia. Instead, our data indicate that patients display abnormalities in responding to gaze direction at various stages of gaze processing—reflected by a stronger preferential capture of attention by another person's direct eye contact at initial stages of gaze processing and difficulties disengaging from a gazed-at location once shared attention is established.
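The congruency effect reported above is computed as mean RT on incongruent trials minus mean RT on congruent trials, separately for each cue–target interval. A minimal sketch, with fabricated RT values and assumed column names:

```python
# Sketch: congruency effect per cue-target interval (SOA) in a
# spatial-cueing task. All reaction times below are fabricated.
import pandas as pd

rt = pd.DataFrame({
    "soa_ms":     [100, 100, 300, 300, 800, 800],
    "congruency": ["congruent", "incongruent"] * 3,
    "mean_rt_ms": [352.0, 354.0, 340.0, 361.0, 335.0, 372.0],
})

# Congruency effect = incongruent RT minus congruent RT, per SOA.
wide = rt.pivot(index="soa_ms", columns="congruency", values="mean_rt_ms")
effect = wide["incongruent"] - wide["congruent"]
print(effect)
```

A larger effect at long intervals driven by slow incongruent responses, as in the patient group, would appear here as a growing difference across the SOA rows.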