Publication


Featured research published by Amir Amedi.


Nature Neuroscience | 2001

Visuo-haptic object-related activation in the ventral visual pathway

Amir Amedi; Rafael Malach; Talma Hendler; Sharon Peled; Ehud Zohary

The ventral pathway is involved in primate visual object recognition. In humans, a central stage in this pathway is an occipito–temporal region termed the lateral occipital complex (LOC), which is preferentially activated by visual objects compared to scrambled images or textures. However, objects have characteristic attributes (such as three-dimensional shape) that can be perceived both visually and haptically. Therefore, object-related brain areas may hold a representation of objects in both modalities. Using fMRI to map object-related brain regions, we found robust and consistent somatosensory activation in the occipito–temporal cortex. This region showed clear preference for objects compared to textures in both modalities. Most somatosensory object-selective voxels overlapped a part of the visual object-related region LOC. Thus, we suggest that neuronal populations in the occipito–temporal cortex may constitute a multimodal object-related network.


Nature Neuroscience | 2003

Early 'visual' cortex activation correlates with superior verbal memory performance in the blind

Amir Amedi; Noa Raz; Pazit Pianka; Rafael Malach; Ehud Zohary

The visual cortex may be more modifiable than previously considered. Using functional magnetic resonance imaging (fMRI) in ten congenitally blind human participants, we found robust occipital activation during a verbal-memory task (in the absence of any sensory input), as well as during verb generation and Braille reading. We also found evidence for reorganization and specialization of the occipital cortex, along the anterior–posterior axis. Whereas anterior regions showed preference for Braille, posterior regions (including V1) showed preference for verbal memory and verb generation (which both require memory of verbal material). No such occipital activation was found in sighted subjects. This difference between the groups was mirrored by superior performance of the blind in various verbal-memory tasks. Moreover, the magnitude of V1 activation during the verbal-memory condition was highly correlated with the blind individuals' abilities in a variety of verbal-memory tests, suggesting that the additional occipital activation may have a functional role.


Experimental Brain Research | 2005

Functional imaging of human crossmodal identification and object recognition

Amir Amedi; K. von Kriegstein; N.M. van Atteveldt; Michael S. Beauchamp; M.J. Naumer

The perception of objects is a cognitive function of prime importance. In everyday life, object perception benefits from the coordinated interplay of vision, audition, and touch. The different sensory modalities provide both complementary and redundant information about objects, which may improve recognition speed and accuracy in many circumstances. We review crossmodal studies of object recognition in humans that mainly employed functional magnetic resonance imaging (fMRI). These studies show that visual, tactile, and auditory information about objects can activate cortical association areas that were once believed to be modality-specific. Processing converges either in multisensory zones or via direct crossmodal interaction of modality-specific cortices without relay through multisensory regions. We integrate these findings with existing theories about semantic processing and propose a general mechanism for crossmodal object recognition: The recruitment and location of multisensory convergence zones varies depending on the information content and the dominant modality.


Nature Neuroscience | 2007

Shape conveyed by visual-to-auditory sensory substitution activates the lateral occipital complex

Amir Amedi; William M. Stern; Joan A. Camprodon; Felix Bermpohl; Lotfi B. Merabet; Stephen R. Rotman; Christopher Hemond; Peter B. L. Meijer; Alvaro Pascual-Leone

The lateral-occipital tactile-visual area (LOtv) is activated when objects are recognized by vision or touch. We report here that the LOtv is also activated in sighted and blind humans who recognize objects by extracting shape information from visual-to-auditory sensory substitution soundscapes. Neither recognizing objects by their typical sounds nor learning to associate specific soundscapes with specific objects activates this region. This suggests that LOtv is driven by the presence of shape information.


Current Biology | 2011

A ventral visual stream reading center independent of visual experience

Lior Reich; Marcin Szwed; Laurent Cohen; Amir Amedi

The visual word form area (VWFA) is a ventral stream visual area that develops expertise for visual reading. It is activated across writing systems and scripts and encodes letter strings irrespective of case, font, or location in the visual field, with striking anatomical reproducibility across individuals. In the blind, comparable reading expertise can be achieved using Braille. This study investigated which area plays the role of the VWFA in the blind. One would expect this area to be in either parietal or bilateral occipital cortex, reflecting the tactile nature of the task and crossmodal plasticity, respectively. However, according to the metamodal theory, which suggests that brain areas are responsive to a specific representation or computation regardless of their input sensory modality, we predicted recruitment of the left-hemispheric VWFA, as in the sighted. Using functional magnetic resonance imaging, we show that activation during Braille reading in blind individuals peaks in the VWFA, with striking anatomical consistency within and between the blind and the sighted. Furthermore, the VWFA is reading selective when contrasted to high-level language and low-level sensory controls. Thus, we propose that the VWFA is a metamodal reading area that develops specialization for reading regardless of visual experience.


Nature Reviews Neuroscience | 2005

What blindness can tell us about seeing again: merging neuroplasticity and neuroprostheses

Lotfi B. Merabet; Joseph F. Rizzo; Amir Amedi; David C. Somers; Alvaro Pascual-Leone

Significant progress has been made in the development of visual neuroprostheses to restore vision in blind individuals. Appropriate delivery of electrical stimulation to intact visual structures can evoke patterned sensations of light in those who have been blind for many years. However, success in developing functional visual prostheses requires an understanding of how to communicate effectively with the visually deprived brain in order to merge what is perceived visually with what is generated electrically.


Brain Topography | 2009

A Putative Model of Multisensory Object Representation

Simon Lacey; Noa Tal; Amir Amedi; K. Sathian

This review surveys the recent literature on visuo-haptic convergence in the perception of object form, with particular reference to the lateral occipital complex (LOC) and the intraparietal sulcus (IPS) and discusses how visual imagery or multisensory representations might underlie this convergence. Drawing on a recent distinction between object- and spatially-based visual imagery, we propose a putative model in which LOtv, a subregion of LOC, contains a modality-independent representation of geometric shape that can be accessed either bottom-up from direct sensory inputs or top-down from frontoparietal regions. We suggest that such access is modulated by object familiarity: spatial imagery may be more important for unfamiliar objects and involve IPS foci in facilitating somatosensory inputs to the LOC; by contrast, object imagery may be more critical for familiar objects, being reflected in prefrontal drive to the LOC.


NeuroImage | 2006

Dissociable networks for the expectancy and perception of emotional stimuli in the human brain

Felix Bermpohl; Alvaro Pascual-Leone; Amir Amedi; Lotfi B. Merabet; Felipe Fregni; Nadine Gaab; David C. Alsop; Gottfried Schlaug; Georg Northoff

William James posited that comparable brain regions were implicated in the anticipation and perception of a stimulus; however, dissociable networks (at least in part) may also underlie these processes. Recent functional neuroimaging studies have addressed this issue by comparing brain systems associated with the expectancy and perception of visual, tactile, nociceptive, and reward stimuli. In the present fMRI study, we addressed this issue in the domain of pictorial emotional stimuli (IAPS). Our paradigm involved the experimental conditions emotional expectancy, neutral expectancy, emotional picture perception, and neutral picture perception. Specifically, the emotional expectancy cue was uncertain in that it did not provide additional information regarding the positive or negative valence of the subsequent picture. Neutral expectancy and neutral picture perception served as control conditions, allowing the identification of expectancy and perception effects specific for emotion processing. To avoid contamination of the perception conditions by the preceding expectancy periods, 50% of the pictorial stimuli were presented without preceding expectancy cues. We found that the emotional expectancy cue specifically produced activation in the supracallosal anterior cingulate, cingulate motor area, and parieto-occipital sulcus. These regions were not significantly activated by emotional picture perception which recruited a different neuronal network, including the amygdala, insula, medial and lateral prefrontal cortex, cerebellum, and occipitotemporal areas. This dissociation may reflect a distinction between anticipatory and perceptive components of emotional stimulus processing.


Restorative Neurology and Neuroscience | 2010

Cortical activity during tactile exploration of objects in blind and sighted humans

Amir Amedi; Noa Raz; Haim Azulay; Rafael Malach; Ehud Zohary

PURPOSE: Recent studies show evidence of multisensory representation in the functionally normal visual cortex, but this idea remains controversial. Occipital cortex activation is often claimed to be a reflection of mental visual imagery processes triggered by other modalities. However, if the occipital cortex is genuinely active during touch, this might be the basis for the massive cross-modal plasticity observed in the congenitally blind.

METHODS: To address these issues, we used fMRI to compare patterns of activation evoked by a tactile object recognition (TOR) task (right or left hand) in 8 sighted and 8 congenitally blind subjects, with several other control tasks.

RESULTS: TOR robustly activated object-selective regions in the lateral occipital complex (LOC/LOtv) in the blind (similar to the patterns of activation found in the sighted), indicating that object identification per se (i.e., in the absence of visual imagery) is sufficient to evoke responses in the LOC/LOtv. Importantly, there was negligible occipital activation for hand movements (imitating object palpations) in both groups. Moreover, in both groups, TOR activation in the LOC/LOtv was bilateral, regardless of the palpating hand (similar to the lack of strong visual field preference in the LOC/LOtv for viewed objects). Finally, the most prominent enhancement in TOR activation in the congenitally blind (compared to their sighted peers) was found in the posterior occipital cortex.

CONCLUSIONS: These findings suggest that visual imagery is not an obligatory condition for object activation in visual cortex. They also demonstrate the massive plasticity in the visual cortex of the blind for tactile object recognition, involving both ventral and dorsal occipital areas, probably to support the high demand for this function in the blind.


Experimental Brain Research | 2009

Multisensory visual-tactile object related network in humans: insights gained using a novel crossmodal adaptation approach

Noa Tal; Amir Amedi

Neuroimaging techniques have provided ample evidence for multisensory integration in humans. However, it is not clear whether this integration occurs at the neuronal level or whether it reflects areal convergence without such integration. To examine this issue as regards visuo-tactile object integration, we used the repetition suppression effect, also known as the fMRI-based adaptation paradigm (fMR-A). Under some assumptions, fMR-A can tag specific neuronal populations within an area and investigate their characteristics. This technique has been used extensively in unisensory studies. Here we applied it for the first time to study multisensory integration and identified a network of occipital (LOtv and calcarine sulcus), parietal (aIPS), and prefrontal (precentral sulcus and the insula) areas, all showing a clear crossmodal repetition suppression effect. These results provide a crucial first insight into the neuronal basis of visuo-haptic integration of objects in humans and highlight the power of using fMR-A to study multisensory integration with non-invasive neuroimaging techniques.

Collaboration


Dive into Amir Amedi's collaborations.

Top Co-Authors

Shachar Maidenbaum (Hebrew University of Jerusalem)

Shelly Levy-Tzedek (Ben-Gurion University of the Negev)

Alvaro Pascual-Leone (Beth Israel Deaconess Medical Center)

Lotfi B. Merabet (Massachusetts Eye and Ear Infirmary)

Uri Hertz (Hebrew University of Jerusalem)

Galit Buchs (Hebrew University of Jerusalem)

Sami Abboud (Hebrew University of Jerusalem)

Lior Reich (Hebrew University of Jerusalem)