Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Simon Lacey is active.

Publication


Featured research published by Simon Lacey.


NeuroImage | 2011

Art for Reward’s Sake: Visual Art Recruits the Ventral Striatum

Simon Lacey; Henrik Hagtvedt; Vanessa M. Patrick; Amy Anderson; Randall Stilla; Gopikrishna Deshpande; Xiaoping Hu; João Ricardo Sato; Srinivas K. Reddy; K. Sathian

A recent study showed that people evaluate products more positively when they are physically associated with art images than with similar non-art images. Neuroimaging studies of visual art have investigated artistic style and esthetic preference but not brain responses attributable specifically to the artistic status of images. Here we tested the hypothesis that the artistic status of images engages reward circuitry, using event-related functional magnetic resonance imaging (fMRI) during viewing of art and non-art images matched for content. Subjects made animacy judgments in response to each image. Relative to non-art images, art images activated, on both subject- and item-wise analyses, reward-related regions: the ventral striatum, hypothalamus and orbitofrontal cortex. Neither response times nor ratings of familiarity or esthetic preference for art images correlated significantly with activity that was selective for art images, suggesting that these variables were not responsible for the art-selective activations. Investigation of effective connectivity, using time-varying, wavelet-based, correlation-purged Granger causality analyses, further showed that the ventral striatum was driven by visual cortical regions when viewing art images but not non-art images, and was not driven by regions that correlated with esthetic preference for either art or non-art images. These findings are consistent with our hypothesis, leading us to propose that the appeal of visual art involves activation of reward circuitry based on artistic status alone and independently of its hedonic value.
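
The effective connectivity analysis above rests on Granger causality: region A is said to drive region B if A's past improves prediction of B's present beyond what B's own past provides. As a minimal sketch of that core idea only, here is a pairwise F-test in Python with NumPy; the study's actual analysis was time-varying, wavelet-based, and purged of zero-lag correlations, none of which is reproduced here, and the function name and toy data are illustrative assumptions.

```python
import numpy as np

def granger_f_stat(x, y, order=2):
    """F statistic for 'x Granger-causes y' (illustrative, not the paper's method).

    Compares a restricted AR model (y predicted from its own past) with an
    unrestricted model (y's past plus x's past); a large F means the past
    of x adds predictive power for y.
    """
    n = len(y)
    Y = y[order:]
    own = np.column_stack([y[order - k:n - k] for k in range(1, order + 1)])
    other = np.column_stack([x[order - k:n - k] for k in range(1, order + 1)])
    ones = np.ones((n - order, 1))

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ beta
        return resid @ resid

    rss_r = rss(np.hstack([ones, own]))          # restricted model
    rss_f = rss(np.hstack([ones, own, other]))   # full model
    df1 = order                                  # extra parameters in full model
    df2 = (n - order) - (2 * order + 1)
    return ((rss_r - rss_f) / df1) / (rss_f / df2)

# Toy time series: y is driven by the previous sample of x, so the
# x -> y statistic should come out much larger than y -> x.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = 0.8 * np.concatenate(([0.0], x[:-1])) + 0.5 * rng.standard_normal(500)
print(granger_f_stat(x, y), granger_f_stat(y, x))
```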


Brain Topography | 2009

A Putative Model of Multisensory Object Representation

Simon Lacey; Noa Tal; Amir Amedi; K. Sathian

This review surveys the recent literature on visuo-haptic convergence in the perception of object form, with particular reference to the lateral occipital complex (LOC) and the intraparietal sulcus (IPS) and discusses how visual imagery or multisensory representations might underlie this convergence. Drawing on a recent distinction between object- and spatially-based visual imagery, we propose a putative model in which LOtv, a subregion of LOC, contains a modality-independent representation of geometric shape that can be accessed either bottom-up from direct sensory inputs or top-down from frontoparietal regions. We suggest that such access is modulated by object familiarity: spatial imagery may be more important for unfamiliar objects and involve IPS foci in facilitating somatosensory inputs to the LOC; by contrast, object imagery may be more critical for familiar objects, being reflected in prefrontal drive to the LOC.


NeuroImage | 2010

Object Familiarity Modulates Effective Connectivity During Haptic Shape Perception

Gopikrishna Deshpande; Xiaoping Hu; Simon Lacey; Randall Stilla; K. Sathian

In the preceding paper (Lacey, S., Flueckiger, P., Stilla, R., Lava, M., Sathian, K., 2009a. Object familiarity modulates involvement of visual imagery in haptic shape perception), we showed that the activations evoked by visual imagery overlapped more extensively, and their magnitudes were more correlated, with those evoked during haptic shape perception of familiar, compared to unfamiliar, objects. Here we used task-specific analyses of functional and effective connectivity to provide convergent evidence. These analyses showed that the visual imagery and familiar haptic shape tasks activated similar networks, whereas the unfamiliar haptic shape task activated a different network. Multivariate Granger causality analyses of effective connectivity, in both a conventional form and one purged of zero-lag correlations, showed that the visual imagery and familiar haptic shape networks involved top-down paths from prefrontal cortex into the lateral occipital complex (LOC), whereas the unfamiliar haptic shape network was characterized by bottom-up, somatosensory inputs into the LOC. We conclude that shape representations in the LOC are flexibly accessible, either top-down or bottom-up, according to task demands, and that visual imagery is more involved in LOC activation during haptic shape perception when objects are familiar, compared to unfamiliar.


NeuroImage | 2011

Dual Pathways for Haptic and Visual Perception of Spatial and Texture Information

K. Sathian; Simon Lacey; Randall Stilla; Gregory Gibson; Gopikrishna Deshpande; Xiaoping Hu; Stephen M. LaConte; Christopher Glielmi

Segregation of information flow along a dorsally directed pathway for processing object location and a ventrally directed pathway for processing object identity is well established in the visual and auditory systems, but is less clear in the somatosensory system. We hypothesized that segregation of location vs. identity information in touch would be evident if texture is the relevant property for stimulus identity, given the salience of texture for touch. Here, we used functional magnetic resonance imaging (fMRI) to investigate whether the pathways for haptic and visual processing of location and texture are segregated, and the extent of bisensory convergence. Haptic texture-selectivity was found in the parietal operculum and posterior visual cortex bilaterally, and in parts of left inferior frontal cortex. There was bisensory texture-selectivity at some of these sites in posterior visual and left inferior frontal cortex. Connectivity analyses demonstrated, in each modality, flow of information from unisensory non-selective areas to modality-specific texture-selective areas and further to bisensory texture-selective areas. Location-selectivity was mostly bisensory, occurring in dorsal areas, including the frontal eye fields and multiple regions around the intraparietal sulcus bilaterally. Many of these regions received input from unisensory areas in both modalities. Together with earlier studies, the activation and connectivity analyses of the present study establish that somatosensory processing flows into segregated pathways for location and object identity information. The location-selective somatosensory pathway converges with its visual counterpart in dorsal frontoparietal cortex, while the texture-selective somatosensory pathway runs through the parietal operculum before converging with its visual counterpart in visual and frontal cortex. Both segregation of sensory processing according to object property and multisensory convergence appear to be universal organizing principles.


Brain and Language | 2012

Metaphorically Feeling: Comprehending Textural Metaphors Activates Somatosensory Cortex

Simon Lacey; Randall Stilla; K. Sathian

Conceptual metaphor theory suggests that knowledge is structured around metaphorical mappings derived from physical experience. Segregated processing of object properties in sensory cortex allows testing of the hypothesis that metaphor processing recruits activity in domain-specific sensory cortex. Using functional magnetic resonance imaging (fMRI) we show that texture-selective somatosensory cortex in the parietal operculum is activated when processing sentences containing textural metaphors, compared to literal sentences matched for meaning. This finding supports the idea that comprehension of metaphors is perceptually grounded.
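
The key result here is a voxelwise contrast between metaphorical and literal sentences. As a toy illustration of how such a contrast is computed for a single voxel's time series, the Python sketch below fits a two-condition general linear model and tests metaphor > literal; the regressor construction, the canonical HRF, and all names are assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np
from scipy.stats import gamma

def glm_contrast_t(bold, onsets_metaphor, onsets_literal, tr=2.0):
    """t statistic for metaphor > literal in a simple two-condition GLM.

    Each condition is modeled as a stick function at its sentence onsets
    (in scans), convolved with a canonical double-gamma HRF; the contrast
    asks whether the metaphor regressor explains more signal.
    """
    n = len(bold)
    t_axis = np.arange(0.0, 32.0, tr)
    hrf = gamma.pdf(t_axis, 6) - gamma.pdf(t_axis, 16) / 6.0  # peak minus undershoot

    def regressor(onsets):
        sticks = np.zeros(n)
        sticks[np.asarray(onsets)] = 1.0
        return np.convolve(sticks, hrf)[:n]

    X = np.column_stack([np.ones(n),
                         regressor(onsets_metaphor),
                         regressor(onsets_literal)])
    beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
    resid = bold - X @ beta
    sigma2 = resid @ resid / (n - X.shape[1])
    c = np.array([0.0, 1.0, -1.0])                # metaphor > literal
    var_c = sigma2 * c @ np.linalg.inv(X.T @ X) @ c
    return (c @ beta) / np.sqrt(var_c)

# Hypothetical usage with a 200-scan voxel time series `voxel_ts`:
# t = glm_contrast_t(voxel_ts, onsets_metaphor=[5, 40, 90],
#                    onsets_literal=[20, 60, 120])
```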


NeuroImage | 2010

Object Familiarity Modulates the Relationship Between Visual Object Imagery and Haptic Shape Perception

Simon Lacey; Peter Flueckiger; Randall Stilla; Michael Lava; K. Sathian

Although visual cortical engagement in haptic shape perception is well established, its relationship with visual imagery remains controversial. We addressed this using functional magnetic resonance imaging during separate visual object imagery and haptic shape perception tasks. Two experiments were conducted. In the first experiment, the haptic shape task employed unfamiliar, meaningless objects, whereas familiar objects were used in the second experiment. The activations evoked by visual object imagery overlapped more extensively, and their magnitudes were more correlated, with those evoked during haptic shape perception of familiar, compared to unfamiliar, objects. In the companion paper (Deshpande et al., this issue), we used task-specific functional and effective connectivity analyses to provide convergent evidence: these analyses showed that the neural networks underlying visual imagery were similar to those underlying haptic shape perception of familiar, but not unfamiliar, objects. We conclude that visual object imagery is more closely linked to haptic shape perception when objects are familiar, compared to when they are unfamiliar.


PLOS ONE | 2007

Cross-Modal Object Recognition Is Viewpoint-Independent

Simon Lacey; Andrew Peters; K. Sathian

Background: Previous research suggests that visual and haptic object recognition are viewpoint-dependent both within- and cross-modally. However, this conclusion may not be generally valid as it was reached using objects oriented along their extended y-axis, resulting in differential surface processing in vision and touch. In the present study, we removed this differential by presenting objects along the z-axis, thus making all object surfaces more equally available to vision and touch.

Methodology/Principal Findings: Participants studied previously unfamiliar objects, in groups of four, using either vision or touch. Subsequently, they performed a four-alternative forced-choice object identification task with the studied objects presented in both unrotated and rotated (180° about the x-, y-, and z-axes) orientations. Rotation impaired within-modal recognition accuracy in both vision and touch, but not cross-modal recognition accuracy. Within-modally, visual recognition accuracy was reduced by rotation about the x- and y-axes more than the z-axis, whilst haptic recognition was equally affected by rotation about all three axes. Cross-modal (but not within-modal) accuracy correlated with spatial (but not object) imagery scores.

Conclusions/Significance: The viewpoint-independence of cross-modal object identification points to its mediation by a high-level abstract representation. The correlation between spatial imagery scores and cross-modal performance suggests that construction of this high-level representation is linked to the ability to perform spatial transformations. Within-modal viewpoint-dependence appears to have a different basis in vision than in touch, possibly due to surface occlusion being important in vision but not touch.


Neuropsychologia | 2014

Spatial Imagery in Haptic Shape Perception

Simon Lacey; Randall Stilla; Karthik Sreenivasan; Gopikrishna Deshpande; K. Sathian

We have proposed that haptic activation of the shape-selective lateral occipital complex (LOC) reflects a model of multisensory object representation in which the role of visual imagery is modulated by object familiarity. Supporting this, a previous functional magnetic resonance imaging (fMRI) study from our laboratory used inter-task correlations of blood oxygenation level-dependent (BOLD) signal magnitude and effective connectivity (EC) patterns based on the BOLD signals to show that the neural processes underlying visual object imagery (objIMG) are more similar to those mediating haptic perception of familiar (fHS) than unfamiliar (uHS) shapes. Here we employed fMRI to test a further hypothesis derived from our model, that spatial imagery (spIMG) would evoke activation and effective connectivity patterns more related to uHS than fHS. We found that few of the regions conjointly activated by spIMG and either fHS or uHS showed inter-task correlations of BOLD signal magnitudes, with parietal foci featuring in both sets of correlations. This may indicate some involvement of spIMG in HS regardless of object familiarity, contrary to our hypothesis, although we cannot rule out alternative explanations for the commonalities between the networks, such as generic imagery or spatial processes. EC analyses, based on inferred neuronal time series obtained by deconvolution of the hemodynamic response function from the measured BOLD time series, showed that spIMG shared more common paths with uHS than fHS. Re-analysis of our previous data, using the same EC methods as those used here, showed that, by contrast, objIMG shared more common paths with fHS than uHS. Thus, although our model requires some refinement, its basic architecture is supported: a stronger relationship between spIMG and uHS compared to fHS, and a stronger relationship between objIMG and fHS compared to uHS.
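
The effective-connectivity step in this study first deconvolves the hemodynamic response function (HRF) from the measured BOLD signal to approximate the underlying neuronal time series. As a rough illustration of what such a deconvolution involves, the Python sketch below applies a frequency-domain Wiener filter using a canonical double-gamma HRF; the abstract does not specify the study's actual deconvolution procedure, so the HRF shape, noise parameter, and function names here are assumptions.

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(tr=2.0, duration=32.0):
    """Double-gamma approximation to a canonical HRF, sampled at the TR."""
    t = np.arange(0.0, duration, tr)
    h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0  # positive peak minus undershoot
    return h / h.max()

def wiener_deconvolve(bold, hrf, noise_level=0.05):
    """Approximate a neuronal time series by Wiener deconvolution of the HRF.

    Divides out the HRF in the frequency domain while damping frequencies
    where the HRF carries little power, to keep noise from blowing up.
    Illustrative only, not the procedure used in the paper.
    """
    n = len(bold)
    H = np.fft.rfft(hrf, n)
    B = np.fft.rfft(bold)
    return np.fft.irfft(B * np.conj(H) / (np.abs(H) ** 2 + noise_level), n)

# Toy check: convolve a sparse "neuronal" event train with the HRF,
# add noise, and recover an estimate of the events.
rng = np.random.default_rng(0)
neural = np.zeros(200)
neural[rng.choice(200, size=10, replace=False)] = 1.0
hrf = canonical_hrf()
bold = np.convolve(neural, hrf)[:200] + 0.02 * rng.standard_normal(200)
estimate = wiener_deconvolve(bold, hrf)
```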


Experimental Brain Research | 2009

Perceptual Learning of View-Independence in Visuo-Haptic Object Representations

Simon Lacey; Marisa Pappas; Alexandra Kreps; Kevin Lee; K. Sathian

We previously showed that cross-modal recognition of unfamiliar objects is view-independent, in contrast to view-dependence within-modally, in both vision and haptics. Does the view-independent, bisensory representation underlying cross-modal recognition arise from integration of unisensory, view-dependent representations or intermediate, unisensory but view-independent representations? Two psychophysical experiments sought to distinguish between these alternative models. In both experiments, participants began from baseline, within-modal, view-dependence for object recognition in both vision and haptics. The first experiment induced within-modal view-independence by perceptual learning, which was completely and symmetrically transferred cross-modally: visual view-independence acquired through visual learning also resulted in haptic view-independence and vice versa. In the second experiment, both visual and haptic view-dependence were transformed to view-independence by either haptic-visual or visual-haptic cross-modal learning. We conclude that cross-modal view-independence fits with a model in which unisensory view-dependent representations are directly integrated into a bisensory, view-independent representation, rather than via intermediate, unisensory, view-independent representations.


Frontiers in Psychology | 2014

Visuo-Haptic Multisensory Object Recognition, Categorization, and Representation

Simon Lacey; Krishnankutty Sathian

Visual and haptic unisensory object processing show many similarities in terms of categorization, recognition, and representation. In this review, we discuss how these similarities contribute to multisensory object processing. In particular, we show that similar unisensory visual and haptic representations lead to a shared multisensory representation underlying both cross-modal object recognition and view-independence. This shared representation suggests a common neural substrate and we review several candidate brain regions, previously thought to be specialized for aspects of visual processing, that are now known also to be involved in analogous haptic tasks. Finally, we lay out the evidence for a model of multisensory object recognition in which top-down and bottom-up pathways to the object-selective lateral occipital complex are modulated by object familiarity and individual differences in object and spatial imagery.

Collaboration


Dive into Simon Lacey's collaborations.

Top Co-Authors

Xiaoping Hu

University of California
