Dan Nemrodov
University of Toronto
Publications
Featured research published by Dan Nemrodov.
NeuroImage | 2014
Dan Nemrodov; Thomas Anderson; Frank F. Preston; Roxane J. Itier
Eyes are central to face processing; however, their role in early face encoding, as reflected by the N170 ERP component, is unclear. Using eye tracking to enforce fixation on specific facial features, we found that the N170 was larger for fixation on the eyes than for fixation on the forehead, nasion, nose or mouth, which all yielded similar amplitudes. This eye sensitivity was seen in both upright and inverted faces and was lost in eyeless faces, demonstrating that it was due to the presence of eyes at the fovea. Upright eyeless faces elicited the largest N170 at nose fixation. Importantly, the N170 face inversion effect (FIE) was strongly attenuated in eyeless faces when fixation was on the eyes, less attenuated for nose fixation, and normal when fixation was on the mouth. These results suggest that the impact of eye removal on the N170 FIE is a function of the angular distance between the fixated feature and the eye location. We propose the Lateral Inhibition, Face Template and Eye Detector based (LIFTED) model, which accounts for all the present N170 results, including the FIE and its interaction with eye removal. Although eyes elicit the largest N170 response, reflecting the activity of an eye detector, the processing of upright faces is holistic and entails an inhibitory mechanism from neurons coding parafoveal information onto neurons coding foveal information. The LIFTED model provides a neuronal account of the holistic and featural processing involved in upright and inverted faces and offers precise predictions for further testing.
British Journal of Psychology | 2011
Dan Nemrodov; Roxane J. Itier
The current study employed a rapid adaptation procedure to test the neuronal mechanisms of the face inversion effect (FIE) on the early face-sensitive event-related potential (ERP) component N170. Five categories of face-related stimuli (isolated eyes, isolated mouths, eyeless faces, mouthless faces, and full faces) and houses were presented in upright and inverted orientations as adaptors for inverted full face test stimuli. Strong adaptation was found for all face-related stimuli except mouths. The adaptation effect was larger for inverted than upright stimuli, but only when eyes were present. These results underline an important role of eyes in early face processing. A mechanism of eye-dependent orientation sensitivity during the structural encoding stage of faces is proposed.
NeuroImage | 2012
Dan Nemrodov; Roxane J. Itier
Rapid adaptation is an adaptation procedure in which adaptors and test stimuli are presented in rapid succession. The current study tested the validity of this method for early ERP components by investigating the specificity of the adaptation effect on the face-sensitive N170 ERP component across multiple test stimuli. Experiments 1 and 2 showed identical response patterns for house and upright face test stimuli using the same adaptor stimuli. The results were also identical to those reported in a previous study using inverted face test stimuli (Nemrodov and Itier, 2011). In Experiment 3 all possible adaptor-test combinations between upright face, house, chair and car stimuli were used and no interaction between adaptor and test category, expected in the case of test-specific adaptation, was found. These results demonstrate that the rapid adaptation paradigm does not produce category-specific adaptation effects around 170-200 ms following test stimulus onset, a necessary condition for the interpretation of adaptation results. These results suggest the rapid categorical adaptation paradigm does not work.
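The logic of Experiment 3, in which category-specific adaptation would surface as an adaptor-by-test interaction, can be illustrated with a small numerical sketch. The mean amplitudes below are invented for illustration and are not data from the study:

```python
import numpy as np

# Hypothetical mean N170 amplitudes (microvolts) for each adaptor x test pairing;
# rows = adaptor category, columns = test category (face, house, chair, car).
amps = np.array([
    [-4.0, -3.1, -3.0, -2.9],   # face adaptor
    [-2.8, -3.2, -2.7, -2.6],   # house adaptor
    [-2.9, -2.8, -3.3, -2.7],   # chair adaptor
    [-2.7, -2.6, -2.8, -3.1],   # car adaptor
])

# Category-specific adaptation predicts an extra effect on the diagonal,
# where the adaptor category matches the test category. Remove the row and
# column main effects and inspect the residual interaction term.
grand = amps.mean()
row = amps.mean(axis=1, keepdims=True)
col = amps.mean(axis=0, keepdims=True)
interaction = amps - row - col + grand

# Average residual for matching adaptor-test pairs.
diag_effect = np.diag(interaction).mean()
print(round(diag_effect, 3))
```

In this invented table the diagonal residual is clearly non-zero, which is the signature a category-specific paradigm would be expected to produce; the study found no such adaptor-by-test interaction in the real data.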
Accident Analysis & Prevention | 2008
Tova Rosenbloom; Dan Nemrodov; Adar Ben-Eliyahu; Ehud Eldror
Children's actual performance of visual timing tasks is possibly deficient, and road-crossing training programs focusing on visual timing elements yield questionable improvement in performance. The present study focused on conceptual, rather than perceptual, examination of the visual timing elements of distance and speed, as integrated into appraisals of risks related to a traffic scenario. Preschool children, third-grade children and adults appraised pedestrian fear and danger associated with four scenarios conceptually depicted using a table-top model. Each scenario described either a child or an adult pedestrian approached by a vehicle at various distances (near/far) and speeds (slow/fast). Results suggest that whereas the adult subjects integrated the danger and fear appraisals by giving separate weights to both distance and speed concepts, preschoolers failed to properly realize the danger associated with speed, and third-graders failed to integrate both concepts in their appraisals. In addition, children seem to be unaware of their underprivileged pedestrian status compared to adult pedestrians, as evidenced by similar appraisal patterns for both pedestrian age groups. The safety implications of these findings are discussed.
NeuroImage | 2016
Dan Nemrodov; Matthias Niemeier; Jenkin Ngo Yin Mok; Adrian Nestor
An extensive body of work documents the time course of neural face processing in the human visual cortex. However, the majority of this work has focused on specific temporal landmarks, such as the N170 and N250 components, derived through univariate analyses of EEG data. Here, we take on a broader evaluation of ERP signals related to individual face recognition as we attempt to move beyond the leading theoretical and methodological framework through the application of pattern analysis to ERP data. Specifically, we investigate the spatiotemporal profile of identity recognition across variation in emotional expression. To this end, we apply pattern classification to ERP signals both in time, for any single electrode, and in space, across multiple electrodes. Our results confirm the significance of traditional ERP components in face processing. At the same time, though, they support the idea that the temporal profile of face recognition is incompletely described by such components. First, we show that signals associated with different facial identities can be discriminated from each other outside the scope of these components, as early as 70 ms following stimulus presentation. Next, electrodes associated with traditional ERP components as well as, critically, those not associated with such components are shown to contribute information to stimulus discriminability. And last, the levels of ERP-based pattern discrimination are found to correlate with recognition accuracy across subjects, confirming the relevance of these methods for bridging brain and behavior data. Altogether, the current results shed new light on the fine-grained time course of neural face processing and showcase the value of novel pattern-analysis methods for investigating fundamental aspects of visual recognition.
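The time-resolved pattern classification described above can be sketched as a sliding-window decoder applied to ERP epochs. The sketch below uses simulated data and a simple nearest-centroid classifier; the array shapes, window size and injected identity effect are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ERP epochs: trials x electrodes x time samples (assumed sizes).
n_trials, n_electrodes, n_times = 80, 8, 120
labels = np.repeat([0, 1], n_trials // 2)          # two facial identities
X = rng.normal(size=(n_trials, n_electrodes, n_times))
# Inject an identity-specific signal from sample 30 onward.
X[labels == 1, :, 30:] += 0.6

def decode_timecourse(X, y, window=5):
    """Nearest-centroid decoding accuracy in each sliding time window,
    training on even-indexed trials and testing on odd-indexed trials."""
    train = np.arange(0, len(y), 2)
    test = np.arange(1, len(y), 2)
    accs = []
    for t in range(X.shape[2] - window):
        # Spatiotemporal pattern: electrodes x window samples, flattened.
        P = X[:, :, t:t + window].reshape(len(y), -1)
        c0 = P[train][y[train] == 0].mean(axis=0)
        c1 = P[train][y[train] == 1].mean(axis=0)
        d0 = np.linalg.norm(P[test] - c0, axis=1)
        d1 = np.linalg.norm(P[test] - c1, axis=1)
        pred = (d1 < d0).astype(int)
        accs.append((pred == y[test]).mean())
    return np.array(accs)

acc = decode_timecourse(X, labels)
print(acc[:20].mean(), acc[40:].mean())
```

Decoding accuracy stays near chance in the pre-signal windows and rises once the identity-specific signal appears, mirroring the logic of discriminating facial identities from ERP patterns over time.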
Brain and Language | 2011
Dan Nemrodov; Yuval Harpaz; Daniel C. Javitt; Michal Lavidor
This study examined the capability of the left hemisphere (LH) and the right hemisphere (RH) to perform a visual recognition task independently as formulated by the Direct Access Model (Fernandino, Iacoboni, & Zaidel, 2007). Healthy native Hebrew speakers were asked to categorize nouns and non-words (created from nouns by transposing two middle letters) into man-made and natural categories while their performance and ERPs were recorded. The stimuli were presented parafoveally to the right and left visual fields. As predicted by the Direct Access Model, ERP data showed that both the left hemisphere and right hemisphere were able to differentiate between words and non-words as early as 170 ms post-stimulus; these results were significant only for the contralaterally presented stimuli. The N1 component, which is considered to reflect orthographic processing, was larger in both hemispheres in response to the contralateral than the ipsilateral presented stimuli. This finding provides evidence for the RH capability to access higher level lexical information at the early stages of visual word recognition, thus lending weight to arguments for the relatively independent nature of this process.
Perceptual and Motor Skills | 2006
Tova Rosenbloom; Dan Nemrodov; Adar Ben Eliyahu
The main objective of the present study was to explore the yielding behavior of Israeli drivers. A series of observations were carried out at a busy crosswalk during rush hour to determine the association between demographic factors, i.e., the sex and age of both pedestrians and drivers, and the rate of compliance with yielding regulations. The observed yielding rate did not exceed 53%. Drivers in the 26–50 age range, unlike drivers in other age groups, tended to yield at a higher rate to pedestrians of their own age group.
Scientific Reports | 2017
Chi-Hsun Chang; Dan Nemrodov; Andy C. H. Lee; Adrian Nestor
Visual memory for faces has been extensively researched, especially regarding the main factors that influence face memorability. However, what we remember exactly about a face, namely, the pictorial content of visual memory, remains largely unclear. The current work aims to elucidate this issue by reconstructing face images from both perceptual and memory-based behavioural data. Specifically, our work builds upon and further validates the hypothesis that visual memory and perception share a common representational basis underlying facial identity recognition. To this end, we derived facial features directly from perceptual data and then used such features for image reconstruction separately from perception and memory data. Successful levels of reconstruction were achieved in both cases for newly-learned faces as well as for familiar faces retrieved from long-term memory. Theoretically, this work provides insights into the content of memory-based representations while, practically, it may open the path to novel applications, such as computer-based ‘sketch artists’.
NeuroImage | 2019
Dan Nemrodov; Marlene Behrmann; Matthias Niemeier; Natalia Drobotenko; Adrian Nestor
The significance of shape and surface information for face perception is well established, yet their relative contribution to recognition and their neural underpinnings await clarification. Here, we employ image reconstruction to retrieve, assess and visualize such information using behavioral, electroencephalography and functional magnetic resonance imaging data. Our results indicate that both shape and surface information can be successfully recovered from each modality but that the latter is better recovered than the former, consistent with its key role for face representations. Further, shape and surface information exhibit similar spatiotemporal profiles, rely on the extraction of specific visual features, such as eye shape or skin tone, and reveal a systematic representational structure, albeit with more cross-modal consistency for shape than surface. More generally, the present work illustrates a novel approach to relating and comparing different modalities in terms of perceptual information content. Thus, our results help elucidate the representational basis of individual face recognition while, methodologically, they showcase the utility of image reconstruction and clarify its reliance on diagnostic visual information.
Highlights:
- Face shape and surface information is recovered from behavioral, EEG and fMRI data.
- Surface information is recovered better than shape from empirical data.
- Shape information is recovered more consistently across modalities.
- Shape and surface exhibit similar spatiotemporal profiles of neural processing.
- Eye shape and skin tone play key roles in individual face representation.
eNeuro | 2018
Dan Nemrodov; Matthias Niemeier; Ashutosh Patel; Adrian Nestor
Uncovering the neural dynamics of facial identity processing along with its representational basis outlines a major endeavor in the study of visual processing. To this end, here, we record human electroencephalography (EEG) data associated with viewing face stimuli; then, we exploit spatiotemporal EEG information to determine the neural correlates of facial identity representations and to reconstruct the appearance of the corresponding stimuli. Our findings indicate that multiple temporal intervals support: facial identity classification, face space estimation, visual feature extraction and image reconstruction. In particular, we note that both classification and reconstruction accuracy peak in the proximity of the N170 component. Further, aggregate data from a larger interval (50–650 ms after stimulus onset) support robust reconstruction results, consistent with the availability of distinct visual information over time. Thus, theoretically, our findings shed light on the time course of face processing while, methodologically, they demonstrate the feasibility of EEG-based image reconstruction.
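The pipeline sketched in this abstract, estimating a face space from neural data and reconstructing a stimulus as a weighted combination of visual features, can be caricatured with a linear toy model. All sizes, variable names and the linear generative assumption below are illustrative, not the authors' actual method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: 'images' are flattened pixel vectors generated from a latent
# face space; 'eeg' patterns relate to the same latent coordinates through
# an unknown linear mapping plus noise (all of this is assumed, not real data).
n_faces, n_pix, n_feat, n_chan = 100, 64, 5, 20
coords = rng.normal(size=(n_faces, n_feat))      # latent face-space coordinates
basis = rng.normal(size=(n_feat, n_pix))         # pixel-level visual features
images = coords @ basis
mixing = rng.normal(size=(n_feat, n_chan))
eeg = coords @ mixing + 0.1 * rng.normal(size=(n_faces, n_chan))

# 1) Learn a mapping from EEG patterns to face-space coordinates on known faces.
# 2) Project a held-out face's EEG pattern into the face space.
# 3) Reconstruct its image as a coordinate-weighted sum of the pixel features.
train, test = slice(0, n_faces - 1), n_faces - 1
W, *_ = np.linalg.lstsq(eeg[train], coords[train], rcond=None)
est_coords = eeg[test] @ W
reconstruction = est_coords @ basis

# The reconstruction should resemble the true held-out image.
corr = np.corrcoef(reconstruction, images[test])[0, 1]
print(round(corr, 3))
```

With a near-linear mapping and modest noise, the least-squares estimate of the held-out face's coordinates suffices for a pixel-level reconstruction that correlates highly with the true image; the interesting empirical question the paper addresses is when, in the EEG time course, such information becomes available.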