Publications


Featured research published by Seyed-Mahdi Khaligh-Razavi.


NeuroImage | 2015

Visual representations are dominated by intrinsic fluctuations correlated between areas

Linda Henriksson; Seyed-Mahdi Khaligh-Razavi; Kendrick Kay; Nikolaus Kriegeskorte

Intrinsic cortical dynamics are thought to underlie trial-to-trial variability of visually evoked responses in animal models. Understanding their function in the context of sensory processing and representation is a major current challenge. Here we report that intrinsic cortical dynamics strongly affect the representational geometry of a brain region, as reflected in response-pattern dissimilarities, and exaggerate the similarity of representations between brain regions. We characterized the representations in several human visual areas by representational dissimilarity matrices (RDMs) constructed from fMRI response-patterns for natural image stimuli. The RDMs of different visual areas were highly similar when the response-patterns were estimated on the basis of the same trials (sharing intrinsic cortical dynamics), and quite distinct when patterns were estimated on the basis of separate trials (sharing only the stimulus-driven component). We show that the greater similarity of the representational geometries can be explained by coherent fluctuations of regional-mean activation within visual cortex, reflecting intrinsic dynamics. Using separate trials to study stimulus-driven representations revealed clearer distinctions between the representational geometries: a Gabor wavelet pyramid model explained representational geometry in visual areas V1–3 and a categorical animate–inanimate model in the object-responsive lateral occipital cortex.


Progress in Neurobiology | 2017

Towards building a more complex view of the lateral geniculate nucleus: recent advances in understanding its role

Masoud Ghodrati; Seyed-Mahdi Khaligh-Razavi; Sidney R. Lehky

The lateral geniculate nucleus (LGN) has often been treated in the past as a linear filter that adds little to retinal processing of visual inputs. Here we review anatomical, neurophysiological, brain imaging, and modeling studies that have in recent years built up a much more complex view of the LGN. These include effects related to nonlinear dendritic processing, cortical feedback, synchrony and oscillations across LGN populations, as well as the involvement of the LGN in higher-level cognitive processing. Although recent studies have provided valuable insights into early visual processing, including the role of the LGN, a unified model of LGN responses to real-world objects has not yet been developed. In the light of recent data, we suggest that the role of the LGN deserves more careful consideration in developing models of high-level visual processing.

Highlights:
- Recent advances in understanding the role of the LGN emphasize that it can no longer be considered a simple linear spatiotemporal filter.
- The LGN has a variety of complex nonlinear behaviors. Perhaps the most interesting nonlinearity involves feedback from cortex.
- Many current models of high-level vision have ignored the role of the LGN.
- We suggest further development of feedback models of high-level vision that consider LGN function.
- Predictive coding and expectation coding models are steps in this direction.


bioRxiv | 2018

Beyond Core Object Recognition: Recurrent processes account for object recognition under occlusion

Karim Rajaei; Yalda Mohsenzadeh; Reza Ebrahimpour; Seyed-Mahdi Khaligh-Razavi

Core object recognition, the ability to rapidly recognize objects despite variations in their appearance, is largely solved through the feedforward processing of visual information. Deep neural networks have been shown to achieve human-level performance in these tasks and to explain primate brain representations. On the other hand, object recognition under more challenging conditions (i.e. beyond the core recognition problem) is less well characterized. One such example is object recognition under occlusion. It is unclear to what extent feedforward and recurrent processes contribute to object recognition under occlusion. Furthermore, we do not know whether conventional deep neural networks, such as AlexNet, which were shown to be successful in solving core object recognition, can perform similarly well on problems that go beyond core recognition. Here, we characterize the neural dynamics of object recognition under occlusion, using magnetoencephalography (MEG), while participants were presented with images of objects at various levels of occlusion. We provide evidence from multivariate analysis of MEG data, behavioral data, and computational modelling, demonstrating an essential role for recurrent processes in object recognition under occlusion. Furthermore, the computational model with local recurrent connections used here suggests a mechanistic explanation of how the human brain might be solving this problem.

Author Summary: In recent years, deep-learning-based computer vision algorithms have been able to achieve human-level performance in several object recognition tasks. This has also contributed to our understanding of how the brain may be solving these recognition tasks. However, object recognition under more challenging conditions, such as occlusion, is less well characterized, and the temporal dynamics of object recognition under occlusion are largely unknown in the human brain. Furthermore, we do not know whether previously successful deep-learning algorithms can achieve human-level performance on these more challenging object recognition tasks. By linking brain data with behavior and computational modeling, we characterized the temporal dynamics of object recognition under occlusion and proposed a computational mechanism that explains both the behavioral and the neural data in humans. This provides a plausible mechanistic explanation for how the brain might solve object recognition under more challenging conditions.
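As a rough illustration of the kind of time-resolved multivariate (decoding) analysis of MEG data mentioned above: at each time point, a classifier is trained on sensor patterns and tested on held-out trials, and above-chance accuracy indicates when category information is present in the signal. The sketch below uses hypothetical Gaussian data and a nearest-class-mean classifier at a single time point; the study's actual pipeline (classifier choice, cross-validation scheme) is not reproduced here.

```python
import random
from statistics import mean

def nearest_mean_accuracy(train, test):
    """Decoding at one time point: classify each held-out trial by the
    closest class-mean sensor pattern (squared Euclidean distance).
    train/test map a label to a list of sensor-pattern vectors."""
    means = {lab: [mean(ch) for ch in zip(*trials)]  # per-sensor mean over trials
             for lab, trials in train.items()}
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    correct = total = 0
    for lab, trials in test.items():
        for t in trials:
            pred = min(means, key=lambda m: dist(t, means[m]))
            correct += pred == lab
            total += 1
    return correct / total

# Hypothetical data: 2 conditions, 10 "sensors", class means separated by 1.5.
random.seed(0)
def make_trials(shift, n):
    return [[random.gauss(shift, 1.0) for _ in range(10)] for _ in range(n)]

train = {"occluded": make_trials(0.0, 20), "unoccluded": make_trials(1.5, 20)}
test = {"occluded": make_trials(0.0, 5), "unoccluded": make_trials(1.5, 5)}
acc = nearest_mean_accuracy(train, test)
```

Repeating this at every sample of the MEG epoch yields a decoding timecourse; delayed or sustained above-chance segments under occlusion are the kind of signature attributed to recurrent processing.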


bioRxiv | 2016

From what we perceive to what we remember: Characterizing representational dynamics of visual memorability

Seyed-Mahdi Khaligh-Razavi; Wilma A. Bainbridge; Dimitrios Pantazis; Aude Oliva

Not all visual memories are equal—some endure in our minds, while others quickly disappear. Recent behavioral work shows we can reliably predict which images will be remembered. This image property is called memorability. Memorability is intrinsic to an image, robust across observers, and unexplainable by low-level visual features. However, its neural bases and relation with perception and memory remain unknown. Here we characterize the representational dynamics of memorability using magnetoencephalography (MEG). We find memorability is indexed by brain responses starting at 218ms for faces and 371ms for scenes—later than classical early face/scene discrimination perceptual signals, yet earlier than the late memory encoding signal observed at ~700ms. The results show memorability is a high-level image property whose spatio-temporal neural dynamics are different from those of memory encoding. Together, this work brings new insights into the underlying neural processes of the transformation from what we perceive to what we remember.


Journal of Cognitive Neuroscience | 2018

Tracking the Spatiotemporal Neural Dynamics of Real-world Object Size and Animacy in the Human Brain

Seyed-Mahdi Khaligh-Razavi; Radoslaw Martin Cichy; Dimitrios Pantazis; Aude Oliva

Animacy and real-world size are properties that describe any object and thus bring basic order into our perception of the visual world. Here, we investigated how the human brain processes real-world size and animacy. For this, we applied representational similarity analysis to fMRI and MEG data to yield a view of brain activity with high spatial and temporal resolutions, respectively. Analysis of fMRI data revealed that a distributed and partly overlapping set of cortical regions extending from occipital to ventral and medial temporal cortex represented animacy and real-world size. Within this set, parahippocampal cortex stood out as the region representing animacy and size more strongly than most other regions. Further analysis of the detailed representational format revealed differences among regions involved in processing animacy. Analysis of MEG data revealed overlapping temporal dynamics of animacy and real-world size processing starting at around 150 msec and provided the first neuromagnetic signature of real-world object size processing. Finally, to investigate the neural dynamics of size and animacy processing simultaneously in space and time, we combined MEG and fMRI with a novel extension of MEG–fMRI fusion by representational similarity. This analysis revealed partly overlapping and distributed spatiotemporal dynamics, with parahippocampal cortex singled out as a region that represented size and animacy persistently when other regions did not. Furthermore, the analysis highlighted the role of early visual cortex in representing real-world size. A control analysis revealed that the neural dynamics of processing animacy and size were distinct from the neural dynamics of processing low-level visual features. Together, our results provide a detailed spatiotemporal view of animacy and size processing in the human brain.


Journal of Vision | 2016

Temporal dynamics of memorability: an intrinsic brain signal distinct from memory

Seyed-Mahdi Khaligh-Razavi; Wilma A. Bainbridge; Dimitrios Pantazis; Aude Oliva

Can we predict what people will remember, as they are perceiving an image? Recent work has identified that images carry the attribute of memorability, a predictive value of whether a novel image will be later remembered or forgotten (Isola et al. 2011, 2014; Bainbridge et al. 2013). Despite the separate subjective experiences people have, certain faces and scenes are consistently remembered and others forgotten, independent of observer. Whereas many studies have concentrated on an observer-centric predictor of memory (e.g. Kuhl et al. 2012), memorability is a complementary, stimulus-centric predictor, generalizable across observers and context. How is memorability manifested in the brain, and how does it differ from pure memory encoding? In this study we characterized the temporal dynamics of memorability, and showed that magnetoencephalography (MEG) brain signals are predictive of memorability. We further showed that the neural signature of memorability exists for both faces and scenes; however, each has its own specific temporal dynamics. Faces showed a persistent memorability signal whereas scenes had more transient characteristics. We also found that neural signatures of memorability across time are different from those of memory encoding, as measured by a post-MEG memory recognition task. This work is the first to measure memorability, as an innate property of images, from electrophysiological brain signals and characterize its temporal dynamics.


Journal of Vision | 2015

The effects of recurrent dynamics on ventral-stream representational geometry

Seyed-Mahdi Khaligh-Razavi; Johan D. Carlin; Radoslaw Martin Cichy; Nikolaus Kriegeskorte

Visual processing involves feedforward and recurrent signals. Understanding which computations are performed in the feedforward sweep and which require recurrent processing has been challenging. We used fMRI and MEG to characterize the spatial and temporal components of human visual object representations. In the fMRI experiment, we used brief stimulus presentation (16.7ms) and a backward masking paradigm with short and long interstimulus intervals (ISI) to distinguish the contributions of feedforward and recurrent processing. In the short-ISI trials, the mask was presented 37ms after stimulus onset (ISI=20ms), interfering with recurrent processing. In the long-ISI trials, the mask appeared only 1017ms after stimulus onset (ISI=1000ms), leaving time for recurrent processing. Representations of a set of animate/inanimate object photos were characterized by their representational dissimilarity matrices (RDMs). We observed no change of the representational geometry with recurrent processing in early visual cortex (EVC). In human inferior temporal (hIT) cortex, however, the representation was transformed as a result of recurrent processing. Long-ISI trials (enabling more extended recurrent processing) were associated with stronger clustering of artificial inanimate objects and more prominent human-body clusters. By contrast, human faces were more clustered in the short-ISI trials. We also compared the fMRI RDMs with RDM movies computed from MEG sensor patterns. The MEG-to-fMRI RDM correlations for the long-ISI fMRI data peaked later (126ms) than for the short-ISI fMRI data (75ms), suggesting that computations occurring at longer latencies after stimulus onset actually contribute to the representational geometry observed with fMRI in long-ISI trials. The MEG results further suggested that the categorical divisions observed in hIT (e.g. animate vs. inanimate) emerge dynamically, with the latency of categoricality peaks suggesting a role for recurrent processing.
Our study demonstrates that object representations in hIT evolve with recurrent processing in a way that strengthens categorical divisions in the representational geometry. Meeting abstract presented at VSS 2015.


Journal of Vision | 2018

Integrated Cognitive Assessment: Speed and Accuracy of Visual Processing as a Proxy to Cognitive Performance

Seyed-Mahdi Khaligh-Razavi; Sina Habibi; Elham Sadeghi; Chris Kalafatis


Journal of Vision | 2017

Combining human MEG and fMRI data reveals the spatio-temporal dynamics of animacy and real-world object size

Seyed-Mahdi Khaligh-Razavi; Radoslaw Martin Cichy; Dimitrios Pantazis; Aude Oliva


Archive | 2016

SYSTEM FOR ASSESSING A MENTAL HEALTH DISORDER

Seyed-Mahdi Khaligh-Razavi; Sina Habibi

Collaboration


Dive into Seyed-Mahdi Khaligh-Razavi's collaborations.

Top Co-Authors

Aude Oliva
Massachusetts Institute of Technology

Dimitrios Pantazis
McGovern Institute for Brain Research

Nikolaus Kriegeskorte
Cognition and Brain Sciences Unit

Kendrick Kay
University of Minnesota

Wilma A. Bainbridge
Massachusetts Institute of Technology

Sidney R. Lehky
Salk Institute for Biological Studies

Yalda Mohsenzadeh
Massachusetts Institute of Technology

Alexander Walther
Cognition and Brain Sciences Unit