Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Moran Cerf is active.

Publication


Featured research published by Moran Cerf.


Journal of Vision | 2009

Faces and text attract gaze independent of the task: Experimental data and computer model

Moran Cerf; E. Paxon Frady; Christof Koch

Previous studies of eye gaze have shown that when looking at images containing human faces, observers tend to rapidly focus on the facial regions. But is this true of other high-level image features as well? Here we investigate the extent to which natural scenes containing faces, text elements, and cell phones (as a suitable control) attract attention, by tracking the eye movements of subjects in two types of tasks: free viewing and search. We observed that subjects in free-viewing conditions look at faces and text 16.6 and 11.1 times more, respectively, than at similar regions normalized for size and position. In terms of attracting gaze, text is almost as effective as faces. Furthermore, it is difficult to avoid looking at faces and text even when doing so imposes a cost. We also found that subjects took longer to make their initial saccade when they were told to avoid faces/text and their saccades landed on a non-face/non-text object. We refine a well-known bottom-up computer model of saliency-driven attention that includes conspicuity maps for color, orientation, and intensity by adding high-level semantic information (i.e., the location of faces or text), and demonstrate that this significantly improves the ability to predict eye fixations in natural images. Our enhanced model's predictions yield an area under the ROC curve over 84% for images that contain faces or text when compared against the actual fixation patterns of subjects. This suggests that the primate visual system allocates attention using such an enhanced saliency map.
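
The abstract's evaluation idea (scoring a saliency map by the area under the ROC curve over fixated versus non-fixated locations) can be illustrated with a short sketch. This is not the authors' code; the map, fixation coordinates, and the face-region boost below are all hypothetical, and random control locations stand in for the non-fixated distribution.

```python
# Minimal sketch: AUC of a saliency map's prediction of fixations, treating
# saliency at fixated pixels as positives and saliency at random control
# pixels as negatives. All data below are fabricated for illustration.
import numpy as np
from sklearn.metrics import roc_auc_score

def fixation_auc(saliency_map, fixations, n_controls=1000, seed=0):
    """saliency_map: 2-D array; fixations: list of (row, col) pixel coords."""
    rng = np.random.default_rng(seed)
    h, w = saliency_map.shape
    pos = np.array([saliency_map[r, c] for r, c in fixations])
    neg = saliency_map[rng.integers(0, h, n_controls),
                       rng.integers(0, w, n_controls)]
    labels = np.concatenate([np.ones(pos.size), np.zeros(neg.size)])
    return roc_auc_score(labels, np.concatenate([pos, neg]))

saliency = np.random.rand(480, 640)     # hypothetical bottom-up saliency map
saliency[100:200, 250:350] += 1.0       # hypothetical face-channel boost
print(fixation_auc(saliency, [(150, 300), (120, 280)]))
```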


The Journal of Neuroscience | 2008

Latency and selectivity of single neurons indicate hierarchical processing in the human medial temporal lobe

Florian Mormann; Simon Kornblith; Rodrigo Quian Quiroga; Alexander Kraskov; Moran Cerf; Itzhak Fried; Christof Koch

Neurons in the temporal lobe of both monkeys and humans show selective responses to classes of visual stimuli and even to specific individuals. In this study, we investigate the latency and selectivity of visually responsive neurons recorded from microelectrodes in the parahippocampal cortex, entorhinal cortex, hippocampus, and amygdala of human subjects during a visual object presentation task. During 96 experimental sessions in 35 subjects, we recorded from a total of 3278 neurons. Of these units, 398 responded selectively to one or more of the presented stimuli. Mean response latencies were substantially larger than those reported in monkeys. We observed a highly significant correlation between the latency and the selectivity of these neurons: the longer the latency, the greater the selectivity. In particular, parahippocampal neurons were found to respond significantly earlier and less selectively than those in the other three regions. Regional analysis showed significant correlations between latency and selectivity within the parahippocampal cortex, entorhinal cortex, and hippocampus, but not within the amygdala. The later and more selective responses tended to be generated by cells with sparse baseline firing rates, and vice versa. Our results provide direct evidence for hierarchical processing of sensory information at the interface between the visual pathway and the limbic system, by which increasingly refined and specific representations of stimulus identity are generated over time along the anatomic pathways of the medial temporal lobe.
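
To make the latency/selectivity analysis concrete, here is a minimal sketch assuming a PSTH per unit and a Treves-Rolls-style selectivity index; the thresholds, bin sizes, and all numbers are assumptions, not the paper's actual pipeline.

```python
# Illustrative sketch under stated assumptions (not the paper's methods):
# estimate each unit's response latency from its PSTH, compute a sparseness-
# based selectivity index over stimuli, and rank-correlate the two.
import numpy as np
from scipy.stats import spearmanr

def latency_ms(psth, baseline_bins=20, bin_ms=10, z_thresh=3.0):
    """Latency = first post-onset bin exceeding baseline mean + z_thresh SDs.
    psth: trial-averaged firing rate per bin, onset at index baseline_bins."""
    mu, sd = psth[:baseline_bins].mean(), psth[:baseline_bins].std() + 1e-9
    above = np.where(psth[baseline_bins:] > mu + z_thresh * sd)[0]
    return above[0] * bin_ms if above.size else np.nan

def selectivity_index(mean_rates):
    """Treves-Rolls-style index: 1 = responds to a single stimulus,
    0 = equal response to all (one common definition; an assumption here)."""
    r = np.asarray(mean_rates, float)
    n = r.size
    a = (r.sum() / n) ** 2 / ((r ** 2).sum() / n + 1e-12)
    return (1 - a) / (1 - 1 / n)

# With per-unit latencies and selectivities collected over a population:
latencies = [220, 310, 450, 180, 390]        # fabricated values (ms)
selectivities = [0.3, 0.5, 0.8, 0.2, 0.7]    # fabricated indices
rho, p = spearmanr(latencies, selectivities)
print(f"rho = {rho:.2f}, p = {p:.3f}")       # positive rho: later => more selective
```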


Nature | 2010

On-line, voluntary control of human temporal lobe neurons.

Moran Cerf; Nikhil Thiruvengadam; Florian Mormann; Alexander Kraskov; Rodrigo Quian Quiroga; Christof Koch; Itzhak Fried

Daily life continually confronts us with an exuberance of external, sensory stimuli competing with a rich stream of internal deliberations, plans and ruminations. The brain must select one or more of these for further processing. How this competition is resolved across multiple sensory and cognitive regions is not known; nor is it clear how internal thoughts and attention regulate this competition. Recording from single neurons in patients implanted with intracranial electrodes for clinical reasons, here we demonstrate that humans can regulate the activity of their neurons in the medial temporal lobe (MTL) to alter the outcome of the contest between external images and their internal representation. Subjects looked at a hybrid superposition of two images representing familiar individuals, landmarks, objects or animals and had to enhance one image at the expense of the other, competing image. Simultaneously, the spiking activity of their MTL neurons in different subregions and hemispheres was decoded in real time to control the content of the hybrid. Subjects reliably regulated, often on the first trial, the firing rate of their neurons, increasing the rate of some while simultaneously decreasing the rate of others. They did so by focusing on one image, which gradually became clearer on the computer screen in front of their eyes, and thereby overriding sensory input. On the basis of the firing of these MTL neurons, the dynamics of the competition between visual images in the subject’s mind was visualized on an external display.
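
The closed-loop setup can be sketched at a conceptual level. Everything below is hypothetical (the paper does not specify its decoder at this granularity): firing rates of units preferring image A versus image B drive the visibility of A in the hybrid shown back to the subject.

```python
# Conceptual sketch of a closed-loop feedback rule (names and the update
# rule are assumptions, not the authors' decoder).
import numpy as np

def update_blend(alpha, rate_a, rate_b, gain=0.05):
    """alpha in [0, 1] is image A's visibility; it rises when units selective
    for A out-fire units selective for B, and falls otherwise."""
    alpha += gain * np.tanh(rate_a - rate_b)
    return float(np.clip(alpha, 0.0, 1.0))

alpha = 0.5  # start from an even superposition of the two images
for rate_a, rate_b in [(12, 4), (15, 3), (9, 8)]:   # fabricated rates (Hz)
    alpha = update_blend(alpha, rate_a, rate_b)
    # hybrid = alpha * image_a + (1 - alpha) * image_b  # rendered each frame
    print(f"visibility of image A: {alpha:.2f}")
```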


Nature Neuroscience | 2011

A category-specific response to animals in the right human amygdala

Florian Mormann; Julien Dubois; Simon Kornblith; Milica Milosavljevic; Moran Cerf; Matias J. Ison; Naotsugu Tsuchiya; Alexander Kraskov; Rodrigo Quian Quiroga; Ralph Adolphs; Itzhak Fried; Christof Koch

The amygdala is important in emotion, but it remains unknown whether it is specialized for certain stimulus categories. We analyzed responses recorded from 489 single neurons in the amygdalae of 41 neurosurgical patients and found a categorical selectivity for pictures of animals in the right amygdala. This selectivity appeared to be independent of emotional valence or arousal and may reflect the importance that animals held throughout our evolutionary past.


International Journal of Advertising | 2008

First attention then intention: Insights from computational neuroscience of vision

Milica Milosavljevic; Moran Cerf

Attention is a critical construct for anyone involved in marketing. However, research on attention is currently lacking in the marketing discipline. This is perhaps due to inherent difficulties in measuring attention. The current paper accentuates the importance of better understanding attention, and suggests studying attention as a two-component construct consisting of equally important bottom-up and top-down processes. While research on top-down attention has recently been undertaken by Pieters and Wedel (2004; 2007), the current paper introduces the field of computational neuroscience and its research on visual attention as a useful framework for studying bottom-up attention.


Journal of Neurophysiology | 2011

Selectivity of pyramidal cells and interneurons in the human medial temporal lobe

Matias J. Ison; Florian Mormann; Moran Cerf; Christof Koch; Itzhak Fried; Rodrigo Quian Quiroga

Neurons in the medial temporal lobe (MTL) respond selectively to pictures of specific individuals, objects, and places. However, the underlying mechanisms leading to such a degree of stimulus selectivity are largely unknown. A necessary step to move forward in this direction involves the identification and characterization of the different neuron types present in MTL circuitry. We show that putative principal cells recorded in vivo from the human MTL are more selective than putative interneurons. Furthermore, we report that putative hippocampal pyramidal cells exhibit the highest degree of selectivity within the MTL, reflecting the hierarchical processing of visual information. We interpret these differences in selectivity as a plausible mechanism for generating sparse responses.
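
As an illustration of how such a comparison might be run, the sketch below contrasts selectivity indices of the two putative cell classes with a nonparametric test; the classification into pyramidal cells and interneurons is assumed to have been done elsewhere (e.g., from spike shape and baseline rate), and all index values are fabricated.

```python
# Sketch under assumptions: compare selectivity between putative pyramidal
# cells and interneurons with a Mann-Whitney U test.
import numpy as np
from scipy.stats import mannwhitneyu

sel_pyramidal   = np.array([0.81, 0.74, 0.90, 0.66, 0.85])  # fabricated
sel_interneuron = np.array([0.35, 0.48, 0.29, 0.52, 0.41])  # fabricated

# One-sided test of the claim that pyramidal cells are more selective.
u, p = mannwhitneyu(sel_pyramidal, sel_interneuron, alternative='greater')
print(f"U = {u:.1f}, p = {p:.3f}")
```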


Journal of Neurophysiology | 2010

Responses of human medial temporal lobe neurons are modulated by stimulus repetition.

Carlos Pedreira; Florian Mormann; Alexander Kraskov; Moran Cerf; Itzhak Fried; Christof Koch; Rodrigo Quian Quiroga

Recent studies have reported the presence of single neurons with strong responses to visual inputs in the human medial temporal lobe. Here we show how repeated stimulus presentation (photos of celebrities and familiar individuals, landmark buildings, animals, and objects) modulates the firing rate of these cells: a consistent decrease in the neural activity was registered as images were repeatedly shown during experimental sessions. The effect of repeated stimulus presentation was not the same for all medial temporal lobe areas. These findings are consistent with the view that medial temporal lobe neurons link visual percepts to declarative memory.
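
One simple way to quantify the reported decrease, sketched below under our own assumptions rather than the authors' methods, is to regress a unit's per-presentation spike count on the repetition number and look for a negative slope.

```python
# Minimal sketch: repetition suppression as a negative slope of spike count
# versus repetition number. The counts below are fabricated.
import numpy as np
from scipy.stats import linregress

repetition = np.arange(1, 7)                     # 6 presentations of one photo
spike_count = np.array([14, 11, 10, 8, 8, 7])    # fabricated responses

fit = linregress(repetition, spike_count)
print(f"slope = {fit.slope:.2f} spikes/repetition, p = {fit.pvalue:.3f}")
# A reliably negative slope across units would indicate repetition suppression.
```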


Social Neuroscience | 2011

Comparing social attention in autism and amygdala lesions: Effects of stimulus and task condition

Elina Birmingham; Moran Cerf; Ralph Adolphs

The amygdala plays a critical role in orienting gaze and attention to socially salient stimuli. Previous work has demonstrated that SM, a patient with rare bilateral amygdala lesions, fails to fixate and make use of information from the eyes in faces. Amygdala dysfunction has also been implicated as a contributing factor in autism spectrum disorders (ASD), consistent with some reports of reduced eye fixations in ASD. Yet, detailed comparisons between ASD and patients with amygdala lesions have not been undertaken. Here we carried out such a comparison, using eye tracking while participants viewed complex social scenes that contained faces. We presented participants with three task conditions. In the Neutral task, participants had to determine what kind of room the scene took place in. In the Describe task, participants described the scene. In the Social Attention task, participants inferred where people in the scene were directing their attention. SM spent less time looking at the eyes and much more time looking at the mouths than control subjects, consistent with earlier findings. There was also a trend for the ASD group to spend less time on the eyes, although this depended on the particular image and task. Whereas controls and SM looked more at the eyes when the task required social attention, the ASD group did not. This pattern of impairments suggests that SM looks less at the eyes because of a failure in stimulus-driven attention to social features, whereas individuals with ASD look less at the eyes because they are generally insensitive to socially relevant information and fail to modulate attention as a function of task demands. We conclude that the source of the social attention impairment in ASD may arise upstream from the amygdala, rather than in the amygdala itself.
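
The dwell-time comparison (eyes versus mouth) can be illustrated with a region-of-interest sketch; the fixation triples and ROI boxes below are fabricated, and a real study would use per-image, hand-drawn ROIs.

```python
# Sketch under simple assumptions: looking behaviour as the share of
# fixation time falling inside rectangular eye/mouth regions of interest.
def roi_dwell_fraction(fixations, roi):
    """fixations: (x, y, duration_ms) triples; roi: (x0, y0, x1, y1) box."""
    x0, y0, x1, y1 = roi
    total = sum(d for _, _, d in fixations)
    inside = sum(d for x, y, d in fixations if x0 <= x <= x1 and y0 <= y <= y1)
    return inside / total if total else 0.0

fixes = [(310, 180, 250), (320, 190, 400), (305, 340, 300)]       # fabricated
eyes_roi, mouth_roi = (280, 150, 360, 220), (285, 310, 355, 370)  # hypothetical
print("eyes:", roi_dwell_fraction(fixes, eyes_roi))
print("mouth:", roi_dwell_fraction(fixes, mouth_roi))
```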


Attention in Cognitive Systems | 2009

Decoding What People See from Where They Look: Predicting Visual Stimuli from Scanpaths

Moran Cerf; Jonathan Harel; Alex Huth; Wolfgang Einhäuser; Christof Koch

Saliency algorithms are commonly evaluated by how well they correlate with the overt attentional shifts, corresponding to eye movements, made by observers viewing an image. In this study, we investigated whether saliency maps could be used to predict which image observers were viewing, given only scanpath data. The results were strong: in an experiment with 441 trials, each presenting 2 candidate images together with scanpath data (pooled over 9 subjects) belonging to one unknown image of the pair, the correct image was selected in 304 trials (69%), a fraction significantly above chance, but much lower than the 82.4% correctness rate achieved using scanpaths from individual subjects. This leads us to propose a new metric for quantifying the importance of saliency map features, based on discriminability between images, as well as a new method for comparing existing saliency map efficacy metrics. This has potential application for other kinds of predictions, e.g., categories of image content, or even subject class.
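
The decoding idea can be sketched as a likelihood comparison: given a scanpath, pick the candidate image whose normalized saliency map assigns the highest probability to the fixated locations. The paper's exact classifier may differ; this is only one plausible reading.

```python
# Hypothetical sketch of scanpath-based image decoding via saliency maps.
import numpy as np

def decode_image(scanpath, saliency_maps):
    """scanpath: list of (row, col); saliency_maps: one 2-D array per
    candidate image (normalized inside). Returns the winning image index."""
    scores = []
    for smap in saliency_maps:
        p = smap / smap.sum()
        # Log-likelihood of the fixations under this map (floor avoids log 0).
        scores.append(sum(np.log(p[r, c] + 1e-12) for r, c in scanpath))
    return int(np.argmax(scores))

maps = [np.random.rand(480, 640) for _ in range(2)]  # fabricated candidates
print("decoded index:", decode_image([(10, 20), (100, 200)], maps))
```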


Vision Research | 2007

Observers are consistent when rating image conspicuity

Moran Cerf; Daniel Cleary; Robert J. Peters; Wolfgang Einhäuser; Christof Koch

Human perception of an image's conspicuity depends on the stimulus itself and on the observer's semantic interpretation. We investigated the relative contribution of the former, sensory-driven component. Participants viewed sequences of images from five different classes (fractals, overhead satellite imagery, grayscale and colored natural scenes, and magazine covers) and graded each numerically according to its perceived conspicuity. We found significant consistency in this rating within and between observers for all image categories. In a subsequent recognition memory test, performance was significantly above chance for all categories, with the weakest memory for satellite imagery, and reaching near ceiling for magazine covers. When repeating the experiment after one year, ratings remained consistent within each observer and category, despite the absence of explicit scene memory. Our findings suggest that the rating of image conspicuity is driven by image-immanent, sensory factors common to all observers.
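
The consistency analysis can be illustrated with a minimal correlation sketch; the rating vectors below are fabricated, and the paper's actual statistics may differ.

```python
# Minimal sketch (assumed analysis, not the authors' code): between-observer
# consistency of conspicuity ratings as a Pearson correlation over images.
import numpy as np
from scipy.stats import pearsonr

ratings_obs1 = np.array([7, 3, 9, 5, 6, 2, 8])   # fabricated ratings
ratings_obs2 = np.array([6, 4, 9, 5, 7, 3, 7])   # same images, other observer

r, p = pearsonr(ratings_obs1, ratings_obs2)
print(f"between-observer consistency: r = {r:.2f}, p = {p:.3f}")
# The same computation on one observer's ratings a year apart would give
# the within-observer consistency reported in the abstract.
```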

Collaboration


Dive into Moran Cerf's collaborations.

Top Co-Authors

Christof Koch
Allen Institute for Brain Science

Florian Mormann
California Institute of Technology

Itzhak Fried
Tel Aviv Sourasky Medical Center