Zoe Kourtzi
University of Cambridge
Publications
Featured research published by Zoe Kourtzi.
Vision Research | 2001
Kalanit Grill-Spector; Zoe Kourtzi; Nancy Kanwisher
Here we review recent findings that reveal the functional properties of extrastriate regions in the human visual cortex that are involved in the representation and perception of objects. We characterize both the invariant and non-invariant properties of these regions, and we discuss the correlation between their activation and recognition. Overall, these results indicate that the lateral occipital complex plays an important role in human object recognition.
Journal of Cognitive Neuroscience | 2000
Zoe Kourtzi; Nancy Kanwisher
A still photograph of an object in motion may convey dynamic information about the position of the object immediately before and after the photograph was taken (implied motion). Middle temporal/medial superior temporal cortex (MT/MST) is one of the main brain regions engaged in the perceptual analysis of visual motion. In two experiments we examined whether MT/MST is also involved in representing implied motion from static images. We found stronger functional magnetic resonance imaging (fMRI) activation within MT/MST during viewing of static photographs with implied motion compared to viewing of photographs without implied motion. These results suggest that brain regions involved in the visual analysis of motion are also engaged in processing implied dynamic information from static images.
Neuron | 2003
Zoe Kourtzi; Andreas S. Tolias; Christian F. Altmann; Mark Augath; Nikos K. Logothetis
The integration of local image features into global shapes was investigated in monkeys and humans using fMRI. An adaptation paradigm was used, in which stimulus selectivity was inferred from changes in the time course of adaptation to a pattern of randomly oriented elements. Accordingly, we observed stronger activity when orientation changes in the adapting stimulus resulted in a collinear contour rather than in a different random pattern. This selectivity for collinear contours was observed not only in higher visual areas that are implicated in shape processing, but also in early visual areas, where selectivity depended on the receptive field size. These findings suggest that unified shape perception in both monkeys and humans involves multiple visual areas that may integrate local elements into global shapes at different spatial scales.
Nature Neuroscience | 2002
Zoe Kourtzi; Heinrich H. Bülthoff; Michael Erb; Wolfgang Grodd
The perception of moving objects and our successful interaction with them entail that the visual system integrates shape and motion information about objects. However, neuroimaging studies have implicated different human brain regions in the analysis of visual motion (middle temporal cortex; MT/MST) and shape (lateral occipital complex; LOC), consistent with traditional approaches in visual processing that attribute shape and motion processing to anatomically and functionally separable neural mechanisms. Here we demonstrate object-selective fMRI responses (higher responses for intact than for scrambled images of objects) in MT/MST, and especially in a ventral subregion of MT/MST, suggesting that human brain regions involved mainly in the processing of visual motion are also engaged in the analysis of object shape.
The Journal of Neuroscience | 2007
Shengqiao Li; Dirk Ostwald; Martin A. Giese; Zoe Kourtzi
Despite the importance of visual categorization for interpreting sensory experiences, little is known about the neural representations that mediate categorical decisions in the human brain. Here, we used psychophysics and pattern classification for the analysis of functional magnetic resonance imaging data to predict the features critical for categorical decisions from brain activity when observers categorized the same stimuli using different rules. Although a large network of cortical and subcortical areas contains information about visual categories, we show that only a subset of these areas shapes its selectivity to reflect the behaviorally relevant features rather than simply the physical similarity between stimuli. Specifically, temporal and parietal areas show selectivity for the perceived form and motion similarity, respectively. In contrast, frontal areas and the striatum represent the conjunction of spatiotemporal features critical for complex and adaptive categorization tasks and potentially modulate selectivity in temporal and parietal areas. These findings provide novel evidence for flexible neural coding in the human brain that translates sensory experiences into categorical decisions by shaping neural representations across a network of areas with dissociable functional roles in visual categorization.
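The pattern-classification (multi-voxel decoding) approach described above can be illustrated with a minimal sketch on synthetic data: a simple nearest-centroid decoder trained on noisy "voxel patterns" from two stimulus categories, with accuracy assessed on held-out trials. This is an illustrative toy, not the authors' actual analysis pipeline; the voxel counts, noise levels, and classifier choice here are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for multi-voxel fMRI patterns: two stimulus
# categories, each drawn as noisy samples around a distinct mean pattern.
n_voxels, n_trials = 50, 40
mean_a = rng.normal(0.0, 1.0, n_voxels)
mean_b = rng.normal(0.0, 1.0, n_voxels)
trials_a = mean_a + rng.normal(0.0, 1.0, (n_trials, n_voxels))
trials_b = mean_b + rng.normal(0.0, 1.0, (n_trials, n_voxels))

# Split trials into train/test halves and fit a nearest-centroid decoder.
train_a, test_a = trials_a[:20], trials_a[20:]
train_b, test_b = trials_b[:20], trials_b[20:]
centroid_a = train_a.mean(axis=0)
centroid_b = train_b.mean(axis=0)

def decode(pattern):
    """Assign a voxel pattern to the category with the nearer centroid."""
    dist_a = np.linalg.norm(pattern - centroid_a)
    dist_b = np.linalg.norm(pattern - centroid_b)
    return "A" if dist_a < dist_b else "B"

preds = [decode(p) for p in test_a] + [decode(p) for p in test_b]
truth = ["A"] * len(test_a) + ["B"] * len(test_b)
accuracy = float(np.mean([p == t for p, t in zip(preds, truth)]))
print(f"decoding accuracy on held-out trials: {accuracy:.2f}")
```

Above-chance accuracy on held-out trials is the basic evidence that the activation patterns carry category information; real analyses use cross-validation and more powerful linear classifiers, but the logic is the same.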
Annual Review of Neuroscience | 2011
Zoe Kourtzi; Charles E. Connor
Object perception is one of the most remarkable capacities of the primate brain. Owing to the large and indeterminate dimensionality of object space, the neural basis of object perception has been difficult to study and remains controversial. Recent work has provided a more precise picture of how 2D and 3D object structure is encoded in intermediate and higher-level visual cortices. Yet, other studies suggest that higher-level visual cortex represents categorical identity rather than structure. Furthermore, object responses are surprisingly adaptive to changes in environmental statistics, implying that learning through evolution, development, and also shorter-term experience during adulthood may optimize the object code. Future progress in reconciling these findings will depend on more effective sampling of the object domain and direct comparison of these competing hypotheses.
The Journal of Neuroscience | 2008
Tim Preston; Shengqiao Li; Zoe Kourtzi; Andrew E. Welchman
Processing of binocular disparity is thought to be widespread throughout cortex, highlighting its importance for perception and action. Yet the computations and functional roles underlying this activity across areas remain largely unknown. Here, we trace the neural representations mediating depth perception across human brain areas using multivariate analysis methods and high-resolution imaging. Presenting disparity-defined planes, we determine functional magnetic resonance imaging (fMRI) selectivity to near versus far depth positions. First, we test the perceptual relevance of this selectivity, comparing the pattern-based decoding of fMRI responses evoked by random dot stereograms that support depth perception (correlated RDS) with the decoding of stimuli containing disparities to which the perceptual system is blind (anticorrelated RDS). Preferential disparity selectivity for correlated stimuli in dorsal (visual and parietal) areas and higher ventral area LO (lateral occipital area) suggests encoding of perceptually relevant information, in contrast to early (V1, V2) and intermediate ventral (V3v, V4) visual cortical areas that show similar selectivity for both correlated and anticorrelated stimuli. Second, manipulating disparity parametrically, we show that dorsal areas encode the metric disparity structure of the viewed stimuli (i.e., disparity magnitude), whereas ventral area LO appears to represent depth position in a categorical manner (i.e., disparity sign). Our findings suggest that activity in both visual streams is commensurate with the use of disparity for depth perception but the neural computations may differ. Intriguingly, perceptually relevant responses in the dorsal stream are tuned to disparity content and emerge at a comparatively earlier stage than categorical representations for depth position in the ventral stream.
Perception | 2002
Ian M. Thornton; Zoe Kourtzi
In a series of three experiments, we used a sequential matching task to explore the impact of non-rigid facial motion on the perception of human faces. Dynamic prime images, in the form of short video sequences, facilitated matching responses relative to a single static prime image. This advantage was observed whenever the prime and target showed the same face but an identity match was required across expression (experiment 1) or view (experiment 2). No facilitation was observed for identical dynamic prime sequences when the matching dimension was shifted from identity to expression (experiment 3). We suggest that the observed dynamic advantage, the first reported for non-degraded facial images, arises because the matching task places more emphasis on visual working memory than typical face recognition tasks. More specifically, we believe that representational mechanisms optimised for the processing of motion and/or change-over-time are established and maintained in working memory and that such ‘dynamic representations’ (Freyd, 1987 Psychological Review 94 427–438) capitalise on the increased information content of the dynamic primes to enhance performance.
Current Opinion in Neurobiology | 2006
Zoe Kourtzi; James J. DiCarlo
The capability of the adult primate visual system for rapid and accurate recognition of targets in cluttered, natural scenes far surpasses the abilities of state-of-the-art artificial vision systems. Understanding this capability remains a fundamental challenge in visual neuroscience. Recent experimental evidence suggests that adaptive coding strategies facilitated by underlying neural plasticity enable the adult brain to learn from visual experience and shape its ability to integrate and recognize coherent visual objects.
Journal of Neurophysiology | 2008
Dirk Ostwald; Judith Mi Lin Lam; Shengqiao Li; Zoe Kourtzi
Extensive psychophysical and computational work proposes that the perception of coherent and meaningful structures in natural images relies on neural processes that convert information about local edges in primary visual cortex to complex object features represented in the temporal cortex. However, the neural basis of these mid-level vision mechanisms in the human brain remains largely unknown. Here, we examine functional MRI (fMRI) selectivity for global forms in the human visual pathways using sensitive multivariate analysis methods that take advantage of information across brain activation patterns. We use Glass patterns, parametrically varying the perceived global form (concentric, radial, translational) while ensuring that the local statistics remain similar. Our findings show a continuum of integration processes that convert selectivity for local signals (orientation, position) in early visual areas to selectivity for global form structure in higher occipitotemporal areas. Interestingly, higher occipitotemporal areas discern differences in global form structure rather than low-level stimulus properties with higher accuracy than early visual areas while relying on information from smaller but more selective neural populations (smaller voxel pattern size), consistent with global pooling mechanisms of local orientation signals. These findings suggest that the human visual system uses a code of increasing efficiency across stages of analysis that is critical for the successful detection and recognition of objects in complex environments.
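The Glass-pattern manipulation described above hinges on varying the global rule that orients each dot pair (dipole) while keeping local statistics comparable. A minimal sketch of such stimulus generation, under assumed parameter choices (dipole count, separation) that are illustrative rather than taken from the study:

```python
import numpy as np

def glass_pattern(n_dipoles=200, form="concentric", sep=0.02, seed=0):
    """Generate dot-pair (dipole) coordinates for a simple Glass pattern.

    Anchor dots are placed at random; each partner dot is offset by a
    fixed separation along an orientation set by a global rule, so the
    three forms differ in global structure but share local statistics.
    """
    rng = np.random.default_rng(seed)
    # Random anchor dots in a square centred on the origin.
    anchors = rng.uniform(-0.5, 0.5, (n_dipoles, 2))
    angle_to_centre = np.arctan2(anchors[:, 1], anchors[:, 0])
    if form == "concentric":
        theta = angle_to_centre + np.pi / 2   # tangent to circles about the centre
    elif form == "radial":
        theta = angle_to_centre               # aligned with the radius
    elif form == "translational":
        theta = np.zeros(n_dipoles)           # one global orientation everywhere
    else:
        raise ValueError(f"unknown form: {form!r}")
    offsets = sep * np.column_stack([np.cos(theta), np.sin(theta)])
    partners = anchors + offsets
    return anchors, partners

anchors, partners = glass_pattern(form="concentric")
```

Because only the orientation rule changes across forms, dot density and dipole separation are matched, which is what lets selectivity for the global structure be dissociated from low-level stimulus differences.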