
Publication


Featured research published by Masoud Ghodrati.


Scientific Reports | 2016

Deep Networks Can Resemble Human Feed-forward Vision in Invariant Object Recognition.

Saeed Reza Kheradpisheh; Masoud Ghodrati; Mohammad Ganjtabesh; Timothée Masquelier

Deep convolutional neural networks (DCNNs) have attracted much attention recently, and have been shown to recognize thousands of object categories in natural image databases. Their architecture is somewhat similar to that of the human visual system: both use restricted receptive fields and a hierarchy of layers that progressively extract more and more abstract features. Yet it is unknown whether DCNNs match human performance at the task of view-invariant object recognition, whether they make similar errors and use similar representations for this task, and whether the answers depend on the magnitude of the viewpoint variations. To investigate these issues, we benchmarked eight state-of-the-art DCNNs, the HMAX model, and a baseline shallow model, and compared their results to those of humans with backward masking. Unlike all previous DCNN studies, we carefully controlled the magnitude of the viewpoint variations to demonstrate that shallow nets can outperform deep nets and humans when variations are weak. When facing larger variations, however, more layers were needed to match human performance and error distributions, and to produce representations consistent with human behavior. A very deep net with 18 layers even outperformed humans at the highest variation level, using the most human-like representations.


PLOS ONE | 2012

How can selection of biologically inspired features improve the performance of a robust object recognition model?

Masoud Ghodrati; Seyed-Mahdi Khaligh-Razavi; Reza Ebrahimpour; Karim Rajaei; Mohammad Pooyan

Humans can effectively and swiftly recognize objects in complex natural scenes. This outstanding ability has motivated many computational object recognition models, most of which try to emulate the behavior of this remarkable system. The human visual system recognizes objects hierarchically, in several processing stages; along these stages, features of increasing complexity are extracted by different parts of the visual system. Elementary features such as bars and edges are processed in the early levels of the visual pathway, and progressively more complex features are detected at higher levels. An important question in the field of visual processing is which features of an object are selected and represented by the visual cortex. To address this issue, we extended a biologically motivated hierarchical model to different object recognition tasks. In this model, a set of object parts, called patches, is extracted in the intermediate stages. These object parts are used during training and play an important role in object recognition. Because the patches are selected indiscriminately from different positions in an image, non-discriminative patches may be extracted, which can ultimately reduce performance. In the proposed model, we therefore used an evolutionary algorithm to select a set of informative patches. Our results indicate that these patches are more informative than the usual random patches. We demonstrate the strength of the proposed model on a range of object recognition tasks, where it outperforms the original model. The experiments show that the selected features are generally particular parts of the target images, suggesting that features drawn from the target objects themselves provide an efficient set for robust object recognition.
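The core idea of the abstract above, evolving a population of image patches so that only informative (class-discriminative) ones survive, can be illustrated with a toy sketch. This is not the authors' implementation: the normalized-correlation response, the fitness function, the patch size, and the selection-by-resampling scheme are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def patch_response(patch, img, step=4):
    """Best normalized correlation of the patch over a coarse grid of
    image positions (a crude stand-in for an S2/C2-style response)."""
    ph, pw = patch.shape
    pnorm = np.linalg.norm(patch)
    best = -1.0
    for y in range(0, img.shape[0] - ph + 1, step):
        for x in range(0, img.shape[1] - pw + 1, step):
            win = img[y:y + ph, x:x + pw]
            denom = np.linalg.norm(win) * pnorm
            if denom > 0:
                best = max(best, float(np.sum(win * patch)) / denom)
    return best

def patch_fitness(patch, targets, distractors):
    """Informative patches respond more to target images than distractors."""
    return (np.mean([patch_response(patch, t) for t in targets])
            - np.mean([patch_response(patch, d) for d in distractors]))

def evolve_patches(targets, distractors, n_pop=20, n_gen=5, size=8):
    """Evolve randomly cropped patches, keeping the most informative
    half each generation and refilling with fresh random crops."""
    def random_patch():
        img = targets[rng.integers(len(targets))]
        y = rng.integers(img.shape[0] - size + 1)
        x = rng.integers(img.shape[1] - size + 1)
        return img[y:y + size, x:x + size].copy()

    pop = [random_patch() for _ in range(n_pop)]
    for _ in range(n_gen):
        scores = [patch_fitness(p, targets, distractors) for p in pop]
        order = np.argsort(scores)[::-1]
        pop = [pop[i] for i in order[:n_pop // 2]]
        pop += [random_patch() for _ in range(n_pop - len(pop))]
    return pop[0]  # highest-scoring patch from the last generation
```

Selection pressure here comes only from discarding low-fitness patches; a fuller genetic algorithm would also add crossover and mutation operators over patch coordinates.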


Frontiers in Psychology | 2015

The (un)suitability of modern liquid crystal displays (LCDs) for vision research

Masoud Ghodrati; Adam P. Morris; Nicholas Seow Chiang Price

Psychophysical and physiological studies of vision have traditionally used cathode ray tube (CRT) monitors to present stimuli. These monitors are no longer easily available, and liquid crystal display (LCD) technology is continually improving; therefore, we characterized a number of LCD monitors to determine if newer models are suitable replacements for CRTs in the laboratory. We compared the spatial and temporal characteristics of a CRT with five LCDs, including monitors designed with vision science in mind (ViewPixx and Display++), “prosumer” gaming monitors, and a consumer-grade LCD. All monitors had sufficient contrast, luminance range and reliability to support basic vision experiments with static images. However, the luminance of all LCDs depended strongly on viewing angle, which in combination with the poor spatial uniformity of all monitors except the VPixx, caused up to 80% drops in effective luminance in the periphery during central fixation. Further, all monitors showed significant spatial dependence, as the luminance of one area was modulated by the luminance of other areas. These spatial imperfections are most pronounced for experiments that use large or peripheral visual stimuli. In the temporal domain, the gaming LCDs were unable to generate reliable luminance patterns; one was unable to reach the requested luminance within a single frame whereas in the other the luminance of one frame affected the luminance of the next frame. The VPixx and Display++ were less affected by these problems, and had good temporal properties provided stimuli were presented for 2 or more frames. Of the consumer-grade and gaming displays tested, and if problems with spatial uniformity are taken into account, the Eizo FG2421 is the most suitable alternative to CRTs. The specialized ViewPixx performed best among all the tested LCDs, followed closely by the Display++; both are good replacements for a CRT, provided their spatial imperfections are considered.


PLOS ONE | 2012

A Stable Biologically Motivated Learning Mechanism for Visual Feature Extraction to Handle Facial Categorization

Karim Rajaei; Seyed-Mahdi Khaligh-Razavi; Masoud Ghodrati; Reza Ebrahimpour; Mohammad Ebrahim Shiri Ahmad Abadi

The brain mechanism of extracting visual features for recognizing various objects has consistently been a controversial issue in computational models of object recognition. To extract visual features, we introduce a new, biologically motivated model for facial categorization, which is an extension of the Hubel and Wiesel simple-to-complex cell hierarchy. To address the synaptic stability versus plasticity dilemma, we apply the Adaptive Resonance Theory (ART) for extracting informative intermediate level visual features during the learning process, which also makes this model stable against the destruction of previously learned information while learning new information. Such a mechanism has been suggested to be embedded within known laminar microcircuits of the cerebral cortex. To reveal the strength of the proposed visual feature learning mechanism, we show that when we use this mechanism in the training process of a well-known biologically motivated object recognition model (the HMAX model), it performs better than the HMAX model in face/non-face classification tasks. Furthermore, we demonstrate that our proposed mechanism is capable of following similar trends in performance as humans in a psychophysical experiment using a face versus non-face rapid categorization task.


Frontiers in Computational Neuroscience | 2014

The importance of visual features in generic vs. specialized object recognition: a computational study

Masoud Ghodrati; Karim Rajaei; Reza Ebrahimpour

It is debated whether the representation of objects in inferior temporal (IT) cortex is distributed over the activities of many neurons, or whether there are restricted islands of neurons responsive to specific sets of objects. Several lines of evidence demonstrate that the fusiform face area (FFA, in humans) processes information related to specialized object recognition, i.e., within-category recognition such as face identification. Physiological studies have also discovered several patches in the monkey ventral temporal lobe that are responsible for face processing. Neuronal recordings from these patches show that the neurons are highly selective for face images, whereas such selectivity is not seen in IT for other objects. However, it is also well supported that objects are encoded through distributed patterns of neural activity that are distinctive for each object category. It therefore appears that visual cortex uses different mechanisms for between-category object recognition (e.g., face vs. non-face objects) and within-category object recognition (e.g., two different faces). In this study, we address this question with computational simulations. We use two biologically inspired object recognition models and define two experiments that address these issues. The models have a hierarchical structure of several processing layers that simulate visual processing from V1 to aIT. We show, through computational modeling, that the difference between these two recognition mechanisms can lie in the visual feature extraction mechanism. We argue that in order to perform both generic and specialized object recognition, visual cortex must separate the mechanisms involved in within-category recognition from those involved in between-category recognition. High performance in within-category recognition can be guaranteed when class-specific features of intermediate size and complexity are extracted, whereas generic object recognition requires a distributed, universal dictionary of visual features in which feature size does not differ significantly.


Scientific Reports | 2016

A specialized face-processing model inspired by the organization of monkey face patches explains several face-specific phenomena observed in humans

Amirhossein Farzmahdi; Karim Rajaei; Masoud Ghodrati; Reza Ebrahimpour; Seyed-Mahdi Khaligh-Razavi

Converging reports indicate that face images are processed through specialized neural networks in the brain, i.e., face patches in monkeys and the fusiform face area (FFA) in humans. These studies were designed to find out how faces are processed in the visual system compared to other objects, yet the underlying mechanism of face processing is not completely understood. Here, we show that a hierarchical computational model, inspired by electrophysiological evidence on face processing in primates, is able to generate representational properties similar to those observed in monkey face patches (posterior, middle, and anterior patches). Since a central goal of sensory neuroscience is linking neural responses with behavioral outputs, we test whether the proposed model, which is designed to account for neural responses in monkey face patches, can also predict well-documented behavioral face phenomena observed in humans. We show that the proposed model reproduces several cognitive face effects, such as the composite face effect and the idea of canonical face views. Our model provides insights into the underlying computations that transform visual information from posterior to anterior face patches.


Vision Research | 2013

Predicting the human reaction time based on natural image statistics in a rapid categorization task

Amin Mirzaei; Seyed-Mahdi Khaligh-Razavi; Masoud Ghodrati; Sajjad Zabbah; Reza Ebrahimpour

The human visual system is shaped by exposure to natural scenes, so in controlled experiments natural stimuli provide a realistic framework with which to study the underlying information processing steps involved in human vision. Studying the properties of natural images and their effects on visual processing can help us understand the underlying mechanisms of the visual system. In this study, we used a rapid animal vs. non-animal categorization task to assess the relationship between the reaction times of human subjects and the statistical properties of images. We demonstrate that statistical measures, such as the beta and gamma parameters of a Weibull distribution fitted to the edge histogram of an image, together with the image entropy, are effective predictors of subject reaction times. Using these three parameters, we propose a computational model capable of predicting the reaction times of human subjects.
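The three image statistics named in this abstract (Weibull beta and gamma of the edge histogram, plus image entropy) are straightforward to compute. The sketch below is only an illustration of the kind of predictors involved: the gradient operator, the linearized-CDF Weibull fit, and the 64-bin intensity entropy are my assumptions, not the paper's exact procedure, and the actual model would regress reaction times onto these values with fitted weights.

```python
import numpy as np

def edge_magnitudes(img):
    """'Edge' responses as gradient magnitudes via finite differences."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy).ravel()

def fit_weibull(x, eps=1e-9):
    """Estimate Weibull scale (beta) and shape (gamma) from data x by a
    least-squares fit on the linearized empirical CDF:
    ln(-ln(1 - F(x))) = gamma * ln(x) - gamma * ln(beta)."""
    x = np.sort(x[x > eps])
    n = len(x)
    F = (np.arange(1, n + 1) - 0.5) / n          # empirical CDF
    ly = np.log(-np.log(1.0 - F))
    lx = np.log(x)
    gamma, intercept = np.polyfit(lx, ly, 1)     # slope is the shape
    beta = np.exp(-intercept / gamma)            # recover the scale
    return beta, gamma

def image_entropy(img, bins=64):
    """Shannon entropy of the pixel-intensity histogram, in bits."""
    counts, _ = np.histogram(img, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log2(p)))
```

A hypothetical reaction-time predictor would then be a simple function of `(beta, gamma, entropy)`, e.g. a linear regression fitted on measured subject data.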


European Journal of Neuroscience | 2016

Orientation selectivity in rat primary visual cortex emerges earlier with low-contrast and high-luminance stimuli.

Masoud Ghodrati; Dasuni S. Alwis; Nicholas S. C. Price

In natural vision, rapid and sustained variations in luminance and contrast change the reliability of information available about a visual scene, and markedly affect both neuronal and behavioural responses. The hallmark property of neurons in primary visual cortex (V1), orientation selectivity, is unaffected by changes in stimulus contrast, but it remains unclear how sustained differences in mean luminance and contrast affect the time-course of orientation selectivity and the amount of information that neurons carry about orientation. We used reverse correlation to characterize the temporal dynamics of orientation selectivity in rat V1 neurons under four luminance-contrast conditions. We show that orientation selectivity and the mutual information between neuronal responses and stimulus orientation are invariant to contrast and mean luminance. Critically, however, the time-course of the emergence of orientation selectivity was affected by both factors: response latencies were longer for low- than high-luminance gratings and, surprisingly, were also longer for high- than low-contrast gratings. Modelling suggests that luminance-modulated changes in feedforward gain, combined with hyperpolarization at high contrasts, can account for our physiological data. The hyperpolarization at high contrasts may increase signal-to-noise ratios, whereas a more depolarized membrane may lead to greater sensitivity to weak stimuli.
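The reverse-correlation method used in this study can be illustrated with a toy simulation: a rapid random sequence of grating orientations drives a model neuron, and the spike-triggered orientation distribution at each candidate lag reveals both the neuron's preferred orientation and its response latency. All numbers here (tuning width, firing rate, latency, frame counts) are arbitrary choices for the sketch, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def reverse_correlation(pref_deg=90.0, latency=5, n_frames=20000):
    """Toy reverse-correlation experiment: a model V1 neuron's spike
    probability peaks at pref_deg, `latency` frames after the stimulus;
    the analysis recovers both the latency and the preference."""
    orientations = rng.choice(np.arange(0.0, 180.0, 15.0), size=n_frames)
    # 180-deg-periodic tuning curve (von Mises on the doubled angle)
    drive = np.exp(2.0 * np.cos(np.deg2rad(2.0 * (orientations - pref_deg))))
    rate = 0.2 * drive / drive.max()              # spike prob per frame
    spikes = rng.random(n_frames) < np.roll(rate, latency)
    spike_idx = np.flatnonzero(spikes)

    # at each candidate lag, measure how selective the spike-triggered
    # orientation distribution is (circular vector strength)
    best_lag, best_strength, best_pref = 0, -1.0, 0.0
    for lag in range(10):
        idx = spike_idx[spike_idx >= lag]
        vec = np.mean(np.exp(1j * np.deg2rad(2.0 * orientations[idx - lag])))
        if abs(vec) > best_strength:
            best_lag = lag
            best_strength = abs(vec)
            best_pref = (np.rad2deg(np.angle(vec)) / 2.0) % 180.0
    return best_lag, best_pref
```

Because successive stimulus frames are statistically independent, selectivity appears only at the lag matching the neuron's true latency, which is how the time-course of orientation selectivity can be read out under different luminance-contrast conditions.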


Frontiers in Computational Neuroscience | 2014

Feedforward object-vision models only tolerate small image variations compared to human.

Masoud Ghodrati; Amirhossein Farzmahdi; Karim Rajaei; Reza Ebrahimpour; Seyed-Mahdi Khaligh-Razavi


arXiv: Neurons and Cognition | 2015

A specialized face-processing network consistent with the representational geometry of monkey face patches.

Amirhossein Farzmahdi; Karim Rajaei; Masoud Ghodrati; Reza Ebrahimpour; Seyed-Mahdi Khaligh-Razavi

Collaboration


Dive into Masoud Ghodrati's collaborations.

Top Co-Authors

Seyed-Mahdi Khaligh-Razavi
Cognition and Brain Sciences Unit

Amirhossein Tavanaei
University of Louisiana at Lafayette

Anthony S. Maida
University of Louisiana at Lafayette