Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Marcela Peña is active.

Publication


Featured research published by Marcela Peña.


The Journal of Neuroscience | 2007

Synchronization of Neural Activity across Cortical Areas Correlates with Conscious Perception

Lucia Melloni; Carlos A. Molina; Marcela Peña; David Torres; Wolf Singer; Eugenio Rodriguez

Subliminal stimuli can be deeply processed and activate similar brain areas as consciously perceived stimuli. This raises the question of which signatures of neural activity critically differentiate conscious from unconscious processing. Transient synchronization of neural activity has been proposed as a neural correlate of conscious perception. Here we test this proposal by comparing the electrophysiological responses related to the processing of visible and invisible words in a delayed matching-to-sample task. Both perceived and nonperceived words caused a similar increase of local (gamma) oscillations in the EEG, but only perceived words induced a transient long-distance synchronization of gamma oscillations across widely separated regions of the brain. After this transient period of temporal coordination, the electrographic signatures of conscious and unconscious processes continued to diverge. Only words reported as perceived induced (1) enhanced theta oscillations over frontal regions during the maintenance interval, (2) an increase of the P300 component of the event-related potential, and (3) an increase in power and phase synchrony of gamma oscillations before the anticipated presentation of the test word. We propose that the critical process mediating access to conscious perception is the early transient global increase of phase synchrony of oscillatory activity in the gamma frequency range.
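
The long-distance synchrony described above is commonly quantified with a phase-locking measure between distant electrode pairs. The sketch below only illustrates that idea and is not the authors' pipeline: the sampling rate, gamma band limits, and synthetic data are assumptions, and the phase-locking value is computed over time within a single trial rather than across trials as study-grade analyses typically do.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def gamma_plv(x, y, fs, band=(40.0, 60.0)):
    """Phase-locking value between two signals in a gamma band (single-trial variant)."""
    # Band-pass both channels in the gamma range (zero-phase filtering).
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    xf, yf = filtfilt(b, a, x), filtfilt(b, a, y)
    # Instantaneous phases from the analytic signals.
    phase_x, phase_y = np.angle(hilbert(xf)), np.angle(hilbert(yf))
    # Length of the mean phase-difference vector: 1 = perfectly constant phase lag.
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Toy usage with synthetic data: two noisy 50 Hz oscillations sharing a fixed phase lag.
fs = 500
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 50 * t + 0.8) + 0.5 * np.random.randn(t.size)
print(f"gamma PLV ~ {gamma_plv(x, y, fs):.2f}")
```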


Nature Neuroscience | 2014

Neuroscience and education: prime time to build the bridge

Mariano Sigman; Marcela Peña; Andrea Paula Goldin; Sidarta Ribeiro

As neuroscience gains social traction and entices media attention, the notion that education has much to benefit from brain research becomes increasingly popular. However, it has been argued that the fundamental bridge toward education is cognitive psychology, not neuroscience. We discuss four specific cases in which neuroscience synergizes with other disciplines to serve education, ranging from very general physiological aspects of human learning such as nutrition, exercise and sleep, to brain architectures that shape the way we acquire language and reading, and neuroscience tools that increasingly allow the early detection of cognitive deficits, especially in preverbal infants. Neuroscience methods, tools and theoretical frameworks have broadened our understanding of the mind in a way that is highly relevant to educational practice. Although the bridge's cement is still fresh, we argue why it is prime time to march over it.


The Journal of Neuroscience | 2012

Earlier speech exposure does not accelerate speech acquisition.

Marcela Peña; Janet F. Werker; Ghislaine Dehaene-Lambertz

Critical periods in language acquisition have been discussed primarily with reference to studies of people who are deaf or bilingual. Here, we provide evidence on the opening of sensitivity to the linguistic environment by studying the response to a change of phoneme at a native and nonnative phonetic boundary in full-term and preterm human infants using event-related potentials. Full-term infants show a decline in their discrimination of nonnative phonetic contrasts between 9 and 12 months of age. Because the womb filters out high frequencies, many phonemes are strongly degraded in utero. Preterm infants thus receive earlier and richer exposure to broadcast speech. We find that preterm infants do not take advantage of this enriched linguistic environment: the decrease in amplitude of the mismatch response to a nonnative change of phoneme at the end of the first year of life was dependent on maturational age and not on the duration of exposure to broadcast speech. The shaping of phonological representations by the environment is thus strongly constrained by brain maturation factors.


NeuroImage | 2009

Investigating the neural correlates of continuous speech computation with frequency-tagged neuroelectric responses.

Marco Buiatti; Marcela Peña; Ghislaine Dehaene-Lambertz

In order to learn an oral language, humans have to discover words from a continuous signal. Streams of artificial monotonous speech can be readily segmented based on statistical analysis of the syllable distribution. This parsing is considerably improved when acoustic cues, such as subliminal pauses, are added, suggesting that a different mechanism is involved. Here we used a frequency-tagging approach to explore the neural mechanisms underlying word learning while listening to continuous speech. High-density EEG was recorded in adults listening to a concatenation of either random syllables or tri-syllabic artificial words, with or without subliminal pauses added every three syllables. Peaks in the EEG power spectrum at the one-syllable and three-syllable occurrence frequencies were used to tag the perception of a monosyllabic or tri-syllabic structure, respectively. Word streams elicited the suppression of the one-syllable frequency peak, which was steadily present during random streams, suggesting that syllables are no longer perceived as isolated segments but are bound to adjacent syllables. Crucially, three-syllable frequency peaks were only observed during word streams with pauses, and were positively correlated with the explicit recall of the detected words. This result shows that pauses facilitate a fast, explicit and successful extraction of words from continuous speech, and that the frequency-tagging approach is a powerful tool to track brain responses to different hierarchical units of the speech structure.
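
The frequency-tagging logic described above can be sketched in a few lines: estimate the EEG power spectrum during continuous listening and read out power at the syllable rate and at one third of that rate (the tri-syllabic word rate). The rates, sampling parameters, and synthetic signal below are placeholders, not values from the paper.

```python
import numpy as np
from scipy.signal import welch

def tagged_power(eeg, fs, syllable_rate, syllables_per_word=3):
    """Spectral power at the syllable rate and at the (tri-syllabic) word rate."""
    word_rate = syllable_rate / syllables_per_word
    # Long Welch segments give the frequency resolution needed to separate
    # peaks that lie only a fraction of a hertz apart.
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 20))
    syllable_power = psd[np.argmin(np.abs(freqs - syllable_rate))]
    word_power = psd[np.argmin(np.abs(freqs - word_rate))]
    return syllable_power, word_power

# Toy usage: a signal carrying energy at a 4 Hz "syllable" rate and a 4/3 Hz "word" rate.
fs = 250
t = np.arange(0, 120, 1 / fs)
eeg = (np.sin(2 * np.pi * 4.0 * t)
       + 0.6 * np.sin(2 * np.pi * (4.0 / 3) * t)
       + np.random.randn(t.size))
print(tagged_power(eeg, fs, syllable_rate=4.0))
```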


Journal of Cognitive Neuroscience | 2012

Brain oscillations during spoken sentence processing

Marcela Peña; Lucia Melloni

Spoken sentence comprehension relies on rapid and effortless temporal integration of speech units displayed at different rates. Temporal integration refers to how chunks of information perceived at different time scales are linked together by the listener in mapping speech sounds onto meaning. The neural implementation of this integration remains unclear. This study explores the role of short and long windows of integration in accessing meaning from long samples of speech. In a cross-linguistic study, we explore the time course of oscillatory brain activity between 1 and 100 Hz, recorded using EEG, during the processing of native and foreign languages. We compare oscillatory responses in a group of Italian and Spanish native speakers while they attentively listen to Italian, Japanese, and Spanish utterances, played either forward or backward. The results show that both groups of participants display a significant increase in gamma band power (55–75 Hz) only when they listen to their native language played forward. The increase in gamma power starts around 1000 msec after the onset of the utterance and decreases by its end, resembling the time course of access to meaning during speech perception. In contrast, changes in low-frequency power show similar patterns for both native and foreign languages. We propose that gamma band power reflects a temporal binding phenomenon concerning the coordination of neural assemblies involved in accessing meaning of long samples of speech.
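
The band-limited power time course that this comparison rests on can be illustrated with a short sketch: band-pass the signal in the 55-75 Hz range reported above and take the squared analytic amplitude relative to a pre-stimulus baseline. Apart from the band limits, everything here (sampling rate, epoch length, synthetic data) is an assumption for illustration only.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power_timecourse(eeg, fs, band=(55.0, 75.0), baseline_s=0.5):
    """Time-resolved band power of one utterance-locked epoch, relative to baseline."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg)
    power = np.abs(hilbert(filtered)) ** 2            # instantaneous band power
    baseline = power[: int(baseline_s * fs)].mean()   # pre-stimulus reference
    return power / baseline                           # relative power over time

# Toy usage: gamma power that ramps up ~1 s after "utterance onset" (t = 0).
fs = 500
t = np.arange(-0.5, 4.0, 1 / fs)
eeg = (t > 1.0) * np.sin(2 * np.pi * 65 * t) + np.random.randn(t.size)
relative_power = band_power_timecourse(eeg, fs)
print(f"mean relative gamma power after 1 s: {relative_power[t > 1.0].mean():.1f}")
```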


Brain and Language | 2015

Electrophysiological evidence of statistical learning of long-distance dependencies in 8-month-old preterm and full-term infants.

C. Kabdebon; Marcela Peña; M. Buiatti; Ghislaine Dehaene-Lambertz

Using electroencephalography, we examined 8-month-old infants' ability to discover a systematic dependency between the first and third syllables of successive words, concatenated into a monotonous speech stream, and to subsequently generalize this regularity to new items presented in isolation. Full-term and preterm infants, while exposed to the stream, displayed a significant entrainment (phase-locking) to the syllabic and word frequencies, demonstrating that they were sensitive to the word unit. The acquisition of the systematic dependency defining words was confirmed by the significantly different neural responses to rule-words and part-words subsequently presented during the test phase. Finally, we observed a correlation between syllabic entrainment during learning and the difference in phase coherence between the test conditions (rule-words vs. part-words), suggesting that temporal processing of the syllable unit might be crucial in linguistic learning. No group difference was observed, suggesting that non-adjacent statistical computations are already robust at 8 months, even in preterm infants, and thus develop during the first year of life, earlier than expected from behavioral studies.
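
The entrainment measure described here is essentially phase-locking of the EEG to the stimulation frequencies across epochs, often summarized as inter-trial phase coherence at the syllable and word rates. A minimal sketch of that statistic follows; the epoching, rates, and synthetic data are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def itc_at_frequency(epochs, fs, freq):
    """Inter-trial phase coherence at one frequency (0 = random phases, 1 = perfect locking).

    epochs : 2-D array (n_trials, n_samples) of word-onset-locked EEG epochs
    """
    n_samples = epochs.shape[1]
    freqs = np.fft.rfftfreq(n_samples, d=1 / fs)
    idx = np.argmin(np.abs(freqs - freq))           # FFT bin closest to the target frequency
    spectra = np.fft.rfft(epochs, axis=1)[:, idx]   # one complex value per trial
    phases = spectra / np.abs(spectra)              # unit-length phase vectors
    return np.abs(phases.mean())                    # length of the mean vector across trials

# Toy usage: 60 epochs with a partly phase-consistent 4 Hz ("syllabic") component.
rng = np.random.default_rng(0)
fs, n_trials = 250, 60
t = np.arange(0, 2.0, 1 / fs)
epochs = np.array([np.sin(2 * np.pi * 4 * t + 0.2 * rng.standard_normal())
                   + rng.standard_normal(t.size) for _ in range(n_trials)])
print(f"ITC at the 4 Hz syllabic rate: {itc_at_frequency(epochs, fs, 4.0):.2f}")
```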


BMC Psychiatry | 2012

Executive attention impairment in first-episode schizophrenia

Gricel Orellana; Andrea Slachevsky; Marcela Peña

Background: We compared the attention abilities of a group of first-episode schizophrenia (FES) patients and a group of healthy participants using the Attention Network Test (ANT), a standard procedure that estimates the functional state of three neural networks controlling the efficiency of three different attentional behaviors, i.e., alerting (achieving and maintaining a state of high sensitivity to incoming stimuli), orienting (ability to select information from sensory input), and executive attention (mechanisms for resolving conflict among thoughts, feelings, and actions). Methods: We evaluated 22 FES patients from 17 to 29 years of age with a recent history of a single psychotic episode treated only with atypical neuroleptics, and 20 healthy persons matched with FES patients by sex, age, and educational level as the control group. Attention was estimated using the ANT, in which participants indicate whether a central horizontal arrow is pointing to the left or the right. The central arrow may be preceded by spatial or temporal cues denoting where and when the arrow will appear, and may be flanked by other arrows (hereafter, flankers) pointing in the same or the opposite direction. Results: The efficiency of the alerting, orienting, and executive networks was estimated by measuring how reaction time was influenced by congruency between temporal, spatial, and flanker cues. We found that the control group demonstrated significantly greater attention efficiency than FES patients only in the executive attention network. Conclusions: FES patients are impaired in executive attention but not in alerting or orienting attention, suggesting that executive attention deficit may be a primary impairment during the progression of the disease.
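
The three network scores that the ANT yields are simple differences between mean reaction times in the cue and flanker conditions (larger alerting and orienting scores mean larger cue benefits; a larger executive score means a larger conflict cost). The sketch below follows that standard scoring scheme; the condition labels and reaction times are made up for illustration.

```python
from statistics import mean

def ant_network_scores(rt):
    """ANT network effects (ms) from per-condition lists of correct-trial reaction times."""
    m = {condition: mean(times) for condition, times in rt.items()}
    return {
        "alerting":  m["no_cue"] - m["double_cue"],       # benefit of a temporal warning cue
        "orienting": m["center_cue"] - m["spatial_cue"],  # benefit of a spatially informative cue
        "executive": m["incongruent"] - m["congruent"],   # cost of conflicting flanker arrows
    }

# Illustrative (made-up) reaction times in milliseconds.
example = {
    "no_cue":      [620, 640, 610],
    "double_cue":  [580, 590, 600],
    "center_cue":  [600, 605, 595],
    "spatial_cue": [560, 570, 555],
    "congruent":   [570, 575, 580],
    "incongruent": [680, 690, 700],
}
print(ant_network_scores(example))
```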


Psychological Science | 2014

Gaze Following Is Accelerated in Healthy Preterm Infants

Marcela Peña; Diana Arias; Ghislaine Dehaene-Lambertz

Gaze following is an essential human communication cue that orients the attention of two interacting people to the same external object. This capability is robustly observed after 7 months of age in full-term infants. Do healthy preterm infants benefit from their early exposure to face-to-face interactions with other humans to acquire this capacity sooner than full-term infants of the same chronological age, despite their immature brains? In two different experiments, we demonstrated that 7-month-old preterm infants performed like 7-month-old full-term infants (with whom they shared the same chronological age) and not like 4-month-old full-term infants (with whom they shared the same postmenstrual age). The duration of exposure to visual experience thus appears to have a greater impact on the development of early gaze following than does postmenstrual age.


Psychopathology | 2016

Mother-Infant Face-to-Face Interaction: The Communicative Value of Infant-Directed Talking and Singing

Diana Arias; Marcela Peña

Background: Across cultures, healthy infants show a high interest in infant-directed (ID) talking and singing. Despite ID talking and ID singing being very similar in physical properties, infants respond differently to each of them. The mechanisms underpinning these different responses are still under discussion. Methods: This study explored the behavioral (n = 26) and brain (n = 14) responses of 6- to 8-month-old infants to ID talking and ID singing during a face-to-face interaction with their own mother. Behavioral response was analyzed from offline video coding, and brain response was estimated from the analysis of electrophysiological recordings. Results: We found that during ID talking, infants displayed a significantly higher number of visual contacts, vocalizations, and body movements than during ID singing. Moreover, only during ID talking were the number of visual contacts and vocalizations positively correlated with the number of questions and pauses in the mother's speech. Conclusions: Our results suggest that ID talking provides infants with specific cues that allow them not only to react to maternal stimulation, but also to act toward her, displaying a rudimentary version of turn-taking behavior. Brain activity partially supported that interpretation. The relevance of our results for bonding is discussed.


Frontiers in Psychology | 2016

Rhythm on Your Lips

Marcela Peña; Alan Langus; César Gutiérrez; Daniela Huepe-Artigas; Marina Nespor

The Iambic-Trochaic Law (ITL) accounts for speech rhythm: sounds are grouped as Iambs when they alternate in duration, and as Trochees when they alternate in pitch and/or intensity. The two rhythms signal word order, one of the basic syntactic properties of language. We investigated the extent to which Iambic and Trochaic phrases could be recognized auditorily and visually, when visual stimuli engage lip reading. Our results show that both rhythmic patterns were recognized from both auditory and visual stimuli, suggesting that speech rhythm has a multimodal representation. We further explored whether participants could match Iambic and Trochaic phrases across the two modalities. We found that participants auditorily familiarized with Trochees, but not with Iambs, were more accurate in recognizing visual targets, while participants visually familiarized with Iambs, but not with Trochees, were more accurate in recognizing auditory targets. The latter results suggest an asymmetric processing of speech rhythm: in the auditory domain, changes in either pitch or intensity are better perceived and represented than changes in duration, while in the visual domain changes in duration are better processed and represented than changes in pitch, raising important questions about domain-general and specialized mechanisms for speech rhythm processing.

Collaboration


Dive into Marcela Peña's collaborations.

Top Co-Authors

Diana Arias, Pontifical Catholic University of Chile
Eugenio Rodriguez, Pontifical Catholic University of Chile
Andrea Paula Goldin, National Scientific and Technical Research Council
Mariano Sigman, Torcuato di Tella University
Sidarta Ribeiro, Federal University of Rio Grande do Norte
Alexis M. Kalergis, Pontifical Catholic University of Chile