Elana Zion-Golumbic
Hebrew University of Jerusalem
Publications
Featured research published by Elana Zion-Golumbic.
NeuroImage | 2007
David Anaki; Elana Zion-Golumbic; Shlomo Bentin
Despite ample exploration, the nature of the neural mechanisms underlying human expertise in face perception is still undetermined. Here we examined the response of two electrophysiological signals, the N170 ERP and induced gamma-band activity (>20 Hz), to face orientation and familiarity across two blocks, one in which face identity was task-relevant and one in which it was not. N170 amplitude to inverted faces was higher than to upright faces and was not influenced by face familiarity or its task relevance. In contrast, induced gamma activity was higher for upright than for inverted faces and for familiar than for unfamiliar faces. The effect of face inversion was found in a lower gamma frequency band (25-50 Hz), whereas familiarity affected amplitudes in a higher gamma frequency band (50-70 Hz). For gamma, the relevance of face identity to the task modulated both the inversion and familiarity effects. These findings pinpoint three functionally dissociated neural mechanisms involved in face processing, namely detection, configural analysis, and recognition.
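For readers unfamiliar with the distinction, "induced" gamma-band activity is the portion of band-limited power that is not phase-locked to stimulus onset, so it is typically estimated after removing the trial-averaged ERP. The following minimal sketch illustrates that step on synthetic single-channel data, using the two gamma ranges named in the abstract (25-50 Hz and 50-70 Hz); the sampling rate, epoch length, and filter settings are assumptions for illustration, not the authors' pipeline.

```python
# Minimal sketch of an induced gamma-band power estimate (not the authors' pipeline).
# Assumptions: single-channel epochs, 512 Hz sampling rate, synthetic data.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power(epochs, sfreq, low, high):
    """Mean Hilbert-envelope power of band-pass filtered epochs (n_trials, n_times)."""
    b, a = butter(4, [low / (sfreq / 2), high / (sfreq / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=-1)
    return (np.abs(hilbert(filtered, axis=-1)) ** 2).mean(axis=0)

sfreq = 512.0
rng = np.random.default_rng(0)
epochs = rng.standard_normal((40, int(sfreq)))        # 40 trials x 1 s of "EEG"

# Induced activity: remove the phase-locked (evoked) part before estimating power.
induced = epochs - epochs.mean(axis=0, keepdims=True)

low_gamma  = band_power(induced, sfreq, 25, 50)       # 25-50 Hz, inversion-sensitive band
high_gamma = band_power(induced, sfreq, 50, 70)       # 50-70 Hz, familiarity-sensitive band
print(low_gamma.shape, high_gamma.shape)
```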
Journal of Cognitive Neuroscience | 2010
Elana Zion-Golumbic; Marta Kutas; Shlomo Bentin
Prior semantic knowledge facilitates episodic recognition memory for faces. To examine the neural manifestation of the interplay between semantic and episodic memory, we investigated neuroelectric dynamics during the creation (study) and the retrieval (test) of episodic memories for famous and nonfamous faces. Episodic memory effects were evident in several EEG frequency bands: theta (4–8 Hz), alpha (9–13 Hz), and gamma (40–100 Hz). Activity in these bands was differentially modulated by preexisting semantic knowledge and by episodic memory, implicating their different functional roles in memory. More specifically, theta activity and alpha suppression were larger for old compared to new faces at test regardless of fame, but were both larger for famous faces during study. This pattern of selective semantic effects suggests that the theta and alpha responses, which are primarily associated with episodic memory, reflect utilization of semantic information only when it is beneficial for task performance. In contrast, gamma activity decreased between the first (study) and second (test) presentation of a face, but overall was larger for famous than nonfamous faces. Hence, the gamma rhythm seems to be primarily related to activation of preexisting neural representations that may contribute to the formation of new episodic traces. Taken together, these data provide new insights into the complex interaction between semantic and episodic memory for faces and the neural dynamics associated with mnemonic processes.
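The "alpha suppression" reported here is commonly quantified as event-related desynchronization (ERD): the percentage change in band power relative to a pre-stimulus baseline. Below is a minimal sketch of that computation for the alpha band given in the abstract (9-13 Hz) on synthetic data; the sampling rate and baseline window are assumed for illustration and are not the authors' parameters.

```python
# Sketch of event-related (de)synchronization (ERD/ERS) for one frequency band.
# Negative values = suppression relative to baseline. Synthetic data, assumed parameters.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

sfreq = 250.0
n_trials, n_times = 60, int(2 * sfreq)                # 2 s epochs: 0.5 s baseline + 1.5 s post-stimulus
rng = np.random.default_rng(1)
epochs = rng.standard_normal((n_trials, n_times))

b, a = butter(4, [9 / (sfreq / 2), 13 / (sfreq / 2)], btype="band")   # alpha band (9-13 Hz)
power = np.abs(hilbert(filtfilt(b, a, epochs, axis=-1), axis=-1)) ** 2

baseline = power[:, : int(0.5 * sfreq)].mean()        # mean alpha power in the 0.5 s baseline
erd_percent = 100 * (power.mean(axis=0) - baseline) / baseline
print("Mean post-stimulus ERD%:", erd_percent[int(0.5 * sfreq):].mean())
```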
NeuroImage | 2008
Elana Zion-Golumbic; Tal Golan; David Anaki; Shlomo Bentin
Previous studies demonstrated that induced EEG activity in the gamma band (iGBA) plays an important role in object recognition and is modulated by stimulus familiarity and its compatibility with pre-existing representations. In the present study we investigated the modulation of iGBA by the degree of familiarity and perceptual expertise that observers have with stimuli from different categories. Specifically, we compared iGBA in response to human faces with iGBA in response to stimuli with which subjects have no expertise (ape faces, human hands, buildings and watches). iGBA elicited by human faces was higher and peaked earlier than that elicited by all other categories, which did not differ significantly from each other. These findings can be accounted for by two characteristics of perceptual expertise. One is the activation of a richer, stronger and, therefore, more easily accessible mental representation of human faces. The second is the more detailed perceptual processing necessary for within-category distinctions, which is the hallmark of perceptual expertise. In addition, the sensitivity of iGBA to human but not ape faces was contrasted with the face-sensitive N170 effect, which was similar for human and ape faces. In concert with previous studies, this dissociation suggests a multi-level neuronal model of face recognition, manifested by these two electrophysiological measures and discussed in this paper.
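The comparison "higher and peaked earlier" rests on extracting peak amplitude and peak latency from a condition-averaged time course within a search window. A minimal sketch of that extraction step, on a synthetic time course with an assumed window and sampling rate, is shown below.

```python
# Sketch: peak amplitude and latency of a band-power time course within a search window.
# Synthetic data; the 200-600 ms window and 500 Hz sampling rate are assumptions.
import numpy as np

sfreq = 500.0
times = np.arange(-0.2, 0.8, 1 / sfreq)               # epoch from -200 to 800 ms
rng = np.random.default_rng(2)
gamma_power = np.exp(-((times - 0.3) ** 2) / 0.01) + 0.05 * rng.standard_normal(times.size)

window = (times >= 0.2) & (times <= 0.6)              # search window for the gamma peak
peak_idx = np.argmax(gamma_power[window])
peak_amplitude = gamma_power[window][peak_idx]
peak_latency_ms = times[window][peak_idx] * 1000

print(f"peak amplitude = {peak_amplitude:.2f}, latency = {peak_latency_ms:.0f} ms")
```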
Human Brain Mapping | 2013
Zaifeng Gao; Abraham Goldstein; Yuval Harpaz; Myriam Hansel; Elana Zion-Golumbic; Shlomo Bentin
EEG studies suggested that the N170 ERP and gamma-band responses to faces reflect early and later stages of a multiple-level face-perception mechanism, respectively. However, these conclusions should be considered cautiously because EEG-recorded gamma may be contaminated by noncephalic activity such as microsaccades. Moreover, EEG studies of gamma cannot easily reveal its intracranial sources. Here we recorded MEG rather than EEG, assessed the sources of the M170 and gamma oscillations using a beamformer, and explored the sensitivity of these neural manifestations to global, featural and configural information in faces. The M170 was larger in response to faces and face components than in response to watches. Scrambling the configuration of the inner components of the face, even when presented without the face contour, reduced and delayed the M170. The amplitude of MEG gamma oscillations (30-70 Hz) was higher than baseline during an epoch between 230 and 570 ms from stimulus onset and was particularly sensitive to the configuration of the stimuli, regardless of their category. However, in the lower part of this frequency range (30-40 Hz) only physiognomic stimuli elevated the MEG above baseline. Both the M170 and gamma were generated in a posterior-ventral network including the fusiform, inferior-occipital and lingual gyri, all in the right hemisphere. The generation of gamma involved additional sources in the visual system, bilaterally. We suggest that the evoked M170 manifests a face-perception mechanism based on the global characteristics of a face, whereas the induced gamma oscillations are associated with the integration of visual input into a pre-existing coherent perceptual representation.
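Source estimation with a beamformer, as used here, amounts to building a spatial filter from the sensor covariance and the lead field of each candidate source location. The sketch below shows the standard unit-gain LCMV weight formula on synthetic numbers; the sensor count, covariance, and lead field are made up, and this is not the authors' MEG implementation.

```python
# Minimal sketch of LCMV ("beamformer") spatial-filter weights for one source location:
# w = C^-1 L / (L^T C^-1 L). Synthetic covariance and lead field, assumed dimensions.
import numpy as np

rng = np.random.default_rng(3)
n_sensors = 248                                       # assumed MEG sensor count
data = rng.standard_normal((n_sensors, 10_000))       # stand-in for band-limited MEG data
cov = np.cov(data) + 1e-6 * np.eye(n_sensors)         # regularized sensor covariance

leadfield = rng.standard_normal(n_sensors)            # forward field of one hypothetical source
cov_inv = np.linalg.inv(cov)
weights = cov_inv @ leadfield / (leadfield @ cov_inv @ leadfield)

source_timecourse = weights @ data                    # beamformer estimate of source activity
print(source_timecourse.shape)
```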
Neuropsychologia | 2014
Sanne ten Oever; Charles E. Schroeder; David Poeppel; Nienke van Atteveldt; Elana Zion-Golumbic
Temporal structure in the environment often has predictive value for anticipating the occurrence of forthcoming events. In this study we investigated the influence of two types of predictive temporal information on the perception of near-threshold auditory stimuli: (1) intrinsic temporal rhythmicity within an auditory stimulus stream and (2) temporally predictive visual cues. We hypothesized that combining predictive temporal information within and across modality should decrease the threshold at which sounds are detected, beyond the advantage provided by each information source alone. Two experiments were conducted in which participants had to detect tones in noise. Tones were presented in either rhythmic or random sequences and were preceded by a temporally predictive visual signal in half of the trials. We show that detection intensities are lower for rhythmic (vs. random) and audiovisual (vs. auditory-only) presentation, independent of response bias, and that this effect is even greater for rhythmic audiovisual presentation. These results suggest that both types of temporal information are used to optimally process sounds that occur at expected points in time (resulting in enhanced detection), and that multiple temporal cues are combined to improve temporal estimates. Our findings underscore the flexibility and proactivity of the perceptual system, which uses within- and across-modality temporal cues to anticipate upcoming events and process them optimally.
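Reporting detection effects "independent of response bias" usually means separating sensitivity from criterion in signal-detection terms. The sketch below computes d' and the criterion c from hit and false-alarm counts; the counts are invented for illustration, and the correction for extreme rates is one common convention, not necessarily the one used in the study.

```python
# Sketch of a bias-free sensitivity measure (d') and response criterion (c) from hit
# and false-alarm counts. The counts below are made up for illustration.
from scipy.stats import norm

def dprime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity and criterion with a simple correction for 0/1 rates."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    c = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
    return d, c

print(dprime(hits=42, misses=18, false_alarms=7, correct_rejections=53))
```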
Trends in Cognitive Sciences | 2012
Elana Zion-Golumbic; Charles E. Schroeder
Recent findings by Mesgarani and Chang demonstrate that signals in auditory cortex can reconstruct the spectrotemporal patterns of attended speech tokens better than those of ignored ones. These results help extend the study of attention into the domain of natural speech, posing numerous questions and challenges for future research.
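The "reconstruction" referred to here is typically implemented as a backward (decoding) model: a regularized linear mapping from multichannel neural responses at several time lags back to a feature of the attended speech, such as its envelope. The sketch below is a generic ridge-regression version on synthetic data; the channel count, lags, and regularization value are assumptions, and it is not the decoder used by Mesgarani and Chang.

```python
# Generic backward-model sketch: ridge regression from lagged neural channels to a
# speech-envelope target. Synthetic data; lags and regularization are illustrative.
import numpy as np

rng = np.random.default_rng(4)
sfreq, n_channels, n_samples = 100, 32, 6000
neural = rng.standard_normal((n_samples, n_channels))
envelope = neural[:, 0] * 0.5 + 0.1 * rng.standard_normal(n_samples)  # toy "attended" envelope

lags = range(0, 20)                                   # 0-190 ms of lags at 100 Hz
X = np.hstack([np.roll(neural, lag, axis=0) for lag in lags])  # np.roll wraps; fine for a toy example
lam = 1e2                                             # ridge regularization
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)

reconstruction = X @ w
r = np.corrcoef(reconstruction, envelope)[0, 1]       # reconstruction accuracy
print(f"reconstruction r = {r:.2f}")
```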
Human Brain Mapping | 2015
Niv Noy; Stephan Bickel; Elana Zion-Golumbic; Michal Harel; Tal Golan; Ido Davidesco; Catherine A. Schevon; Guy M. McKhann; Robert R. Goodman; Charles E. Schroeder; Ashesh D. Mehta; Rafael Malach
Despite an extensive body of work, it is still not clear how short-term maintenance of information is implemented in the human brain. Most prior research has focused on “working memory”, which typically involves the storage of a number of items and requires the use of a phonological loop and focused attention during the delay period between encoding and retrieval. These studies largely support a model of enhanced activity in the delay interval as the central mechanism underlying working memory. However, multi-item working memory constitutes only a subset of the storage phenomena that may occur during daily life. A common task in naturalistic situations is short-term memory of a single item, for example, blindly reaching for a previously placed cup of coffee. Little is known about such single-item, effortless storage in the human brain. Here, we examined the dynamics of brain responses during a single-item maintenance task, using intracranial recordings (ECoG) from electrodes implanted in patients for clinical purposes. Our results reveal that active electrodes were dominated by transient short-latency visual and motor responses, reflected in broadband high-frequency power increases in occipito-temporal, frontal, and parietal cortex. Only a very small set of electrodes showed activity during the early part of the delay period. Interestingly, no cortical site displayed a significant activation lasting to the response time. These results suggest that single-item encoding is characterized by transient high-frequency ECoG responses, while the maintenance of information during the delay period may be mediated by mechanisms requiring only low levels of neuronal activation.
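"Broadband high frequency power" in ECoG is commonly estimated by band-passing each electrode in a high-gamma range, taking the Hilbert envelope, and comparing task windows against a pre-stimulus baseline. The sketch below does this for synthetic single-electrode trials; the 70-150 Hz band and the window boundaries are assumptions, not the authors' analysis settings.

```python
# Sketch: broadband high-gamma (70-150 Hz) power in a delay window vs. baseline
# for one ECoG electrode. Synthetic data; band edges and windows are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

sfreq = 1000.0
rng = np.random.default_rng(5)
trials = rng.standard_normal((30, int(3 * sfreq)))    # 30 trials, 3 s each (1 s baseline, then delay)

b, a = butter(4, [70 / (sfreq / 2), 150 / (sfreq / 2)], btype="band")
envelope_power = np.abs(hilbert(filtfilt(b, a, trials, axis=-1), axis=-1)) ** 2

baseline = envelope_power[:, : int(1.0 * sfreq)].mean(axis=1)   # per-trial baseline power
delay    = envelope_power[:, int(1.5 * sfreq):].mean(axis=1)    # per-trial delay-period power

# Paired comparison across trials (a permutation or t-test would be used in practice).
print("mean delay/baseline ratio:", (delay / baseline).mean())
```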
Frontiers in Psychology | 2015
Nienke van Atteveldt; Gabriella Musacchia; Elana Zion-Golumbic; Pejman Sehatpour; Daniel C. Javitt; Charles E. Schroeder
The brain’s fascinating ability to adapt its internal neural dynamics to the temporal structure of the sensory environment is becoming increasingly clear. It is thought to be metabolically beneficial to align ongoing oscillatory activity to the relevant inputs in a predictable stream, so that they enter at optimal processing phases of the spontaneously occurring rhythmic excitability fluctuations. However, some contexts have a more predictable temporal structure than others. Here, we tested the hypothesis that the processing of rhythmic sounds is more efficient than the processing of irregularly timed sounds. To do this, we simultaneously measured functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) while participants detected oddball target sounds in alternating blocks of rhythmic (i.e., with equal inter-stimulus intervals) or random (i.e., with randomly varied inter-stimulus intervals) tone sequences. Behaviorally, participants detected target sounds faster and more accurately when they were embedded in rhythmic streams. The fMRI response in the auditory cortex was stronger during random compared to rhythmic tone sequence processing. Simultaneously recorded N1 responses showed larger peak amplitudes and longer latencies for tones in the random (vs. the rhythmic) streams. These results provide complementary evidence for more efficient neural and perceptual processing in temporally predictable sensory contexts.
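The manipulation at the heart of this design is the contrast between isochronous and jittered stimulus timing. The short sketch below generates onset times for rhythmic (equal inter-stimulus interval) and random (jittered) tone sequences; the mean interval and jitter range are placeholders rather than the study's parameters.

```python
# Sketch: onset times for a rhythmic (fixed ISI) vs. random (jittered ISI) tone sequence.
# Mean ISI and jitter range are illustrative placeholders, not the study's parameters.
import numpy as np

rng = np.random.default_rng(6)
n_tones, mean_isi = 20, 1.5                           # seconds

rhythmic_isis = np.full(n_tones, mean_isi)            # equal inter-stimulus intervals
random_isis = rng.uniform(0.5 * mean_isi, 1.5 * mean_isi, n_tones)  # randomly varied ISIs

rhythmic_onsets = np.cumsum(rhythmic_isis)
random_onsets = np.cumsum(random_isis)
print(rhythmic_onsets[:5], random_onsets[:5])
```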
Multisensory Research | 2013
Sanne ten Oever; Charles E. Schroeder; David Poeppel; Nienke van Atteveldt; Elana Zion-Golumbic
Temporal structure in the environment often has predictive value for anticipating the occurrence of forthcoming events. In this study we compared the influence of two types of predictive temporal information on auditory perception: (1) intrinsic temporal rhythmicity of an auditory stimulus stream and (2) temporally predictive visual cues. We hypothesized that combining predictive temporal information within and across modality would improve auditory detection, beyond the advantage provided by each information source alone. We presented streams of tones at either increasing or decreasing intensities until participants reported that they could hear, or could no longer hear, the tones. Tones were presented in either rhythmic or random sequences and were preceded by a temporally predictive visual flash in half of the trials. We show that detection thresholds are lower for rhythmic (vs. random) and audiovisual (vs. auditory-only) presentation, and that this effect is additive for rhythmic audiovisual presentation in both paradigms. These behavioral results suggest that both types of temporal information are used in parallel to prepare the perceptual system for upcoming stimuli and to optimally interact with the environment. Our findings underscore the flexibility and proactivity of the perceptual system, which combines within- and across-modality temporal cues to anticipate upcoming events and process them optimally.
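Presenting tones at increasing or decreasing intensities until the report flips is a method-of-limits procedure, in which the detection threshold is estimated from the crossover intensities of ascending and descending runs. The sketch below illustrates that estimate with invented crossover values; the units and step size are assumptions.

```python
# Sketch of a method-of-limits threshold estimate: average the intensities at which the
# report flips in ascending and descending runs. Values below are made up for illustration.
import numpy as np

# Intensity (in arbitrary dB steps) at which the participant's report changed, per run.
ascending_crossovers  = np.array([12, 11, 13, 12])    # "now I hear it" in ascending runs
descending_crossovers = np.array([10, 9, 11, 10])     # "I no longer hear it" in descending runs

threshold = np.concatenate([ascending_crossovers, descending_crossovers]).mean()
print(f"estimated detection threshold ≈ {threshold:.1f} dB (arbitrary reference)")
```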
Neuropsychologia | 2016
Sanne ten Oever; Charles E. Schroeder; David Poeppel; Nienke van Atteveldt; Elana Zion-Golumbic