
Publications


Featured research published by Cyrille Magne.


Journal of Cognitive Neuroscience | 2006

Musician Children Detect Pitch Violations in Both Music and Language Better than Nonmusician Children: Behavioral and Electrophysiological Approaches

Cyrille Magne; Daniele Schön; Mireille Besson

The idea that extensive musical training can influence processing in cognitive domains other than music has received considerable attention from the educational system and the media. Here we analyzed behavioral data and recorded event-related brain potentials (ERPs) from 8-year-old children to test the hypothesis that musical training facilitates pitch processing not only in music but also in language. We used a parametric manipulation of pitch so that the final notes or words of musical phrases or sentences were congruous, weakly incongruous, or strongly incongruous. Musician children outperformed nonmusician children in the detection of the weak incongruity in both music and language. Moreover, the greatest differences in the ERPs of musician and nonmusician children were also found for the weak incongruity: whereas for musician children, early negative components developed in music and late positive components in language, no such components were found for nonmusician children. Finally, comparison of these results with previous ones from adults suggests that some aspects of pitch processing are in effect earlier in music than in language. Thus, the present results reveal positive transfer effects between cognitive domains and shed light on the time course and neural basis of the development of prosodic and melodic processing.


Frontiers in Psychology | 2011

EEG Correlates of Song Prosody: A New Look at the Relationship between Linguistic and Musical Rhythm.

Reyna L. Gordon; Cyrille Magne; Edward W. Large

Song composers incorporate linguistic prosody into their music when setting words to melody, a process called “textsetting.” Composers tend to align the expected stress of the lyrics with strong metrical positions in the music. The present study was designed to explore the idea that temporal alignment helps listeners to better understand song lyrics by directing listeners’ attention to instances where strong syllables occur on strong beats. Three types of textsettings were created by aligning metronome clicks with all, some, or none of the strong syllables in sung sentences. Electroencephalographic recordings were taken while participants listened to the sung sentences (primes) and performed a lexical decision task on subsequent words and pseudowords (targets, presented visually). Comparison of misaligned and well-aligned sentences showed that temporal alignment between strong/weak syllables and strong/weak musical beats was associated with modulations of induced beta and evoked gamma power, which have been shown to fluctuate with rhythmic expectancies. Furthermore, targets that followed well-aligned primes elicited greater induced alpha and beta activity, and better lexical decision task performance, compared with targets that followed misaligned and varied sentences. Overall, these findings suggest that alignment of linguistic stress and musical meter in song enhances musical beat tracking and comprehension of lyrics by synchronizing neural activity with strong syllables. This approach may begin to explain the mechanisms underlying the relationship between linguistic and musical rhythm in songs, and how rhythmic attending facilitates learning and recall of song lyrics. Moreover, the observations reported here coincide with a growing number of studies reporting interactions between the linguistic and musical dimensions of song, which likely stem from shared neural resources for processing music and speech.


PLOS ONE | 2010

Words and Melody Are Intertwined in Perception of Sung Words: EEG and Behavioral Evidence

Reyna L. Gordon; Daniele Schön; Cyrille Magne; Corine Astésano; Mireille Besson

Language and music, two uniquely human cognitive abilities, are combined in song, rendering it an ecological model for comparing speech and music cognition. The present study was designed to determine whether words and melodies in song are processed interactively or independently, and to examine the influence of attention on the processing of words and melodies in song. Event-Related brain Potentials (ERPs) and behavioral data were recorded while non-musicians listened to pairs of sung words (prime and target) presented in four experimental conditions: same word, same melody; same word, different melody; different word, same melody; different word, different melody. Participants were asked to attend to either the words or the melody, and to perform a same/different task. In both attentional tasks, different word targets elicited an N400 component, as predicted based on previous results. Most interestingly, different melodies (sung with the same word) elicited an N400 component followed by a late positive component. Finally, ERP and behavioral data converged in showing interactions between the linguistic and melodic dimensions of sung words. The finding that the N400 effect, a well-established marker of semantic processing, was modulated by musical melody in song suggests that variations in musical features affect word processing in sung language. Implications of the interactions between words and melody are discussed in light of evidence for shared neural processing resources between the phonological/semantic aspects of language and the melodic/harmonic aspects of music.


Brain and Language | 2016

Speech rhythm sensitivity and musical aptitude: ERPs and individual differences

Cyrille Magne; Deanna K. Jordan; Reyna L. Gordon

This study investigated the electrophysiological markers of rhythmic expectancy during speech perception. In addition, given the large literature showing overlaps between cognitive and neural resources recruited for language and music, we considered a relation between musical aptitude and individual differences in speech rhythm sensitivity. Twenty adults were administered a standardized assessment of musical aptitude, and EEG was recorded as participants listened to sequences of four bisyllabic words for which the stress pattern of the final word either matched or mismatched the stress pattern of the preceding words. Words with unexpected stress patterns elicited an increased fronto-central mid-latency negativity. In addition, rhythm aptitude significantly correlated with the size of the negative effect elicited by unexpected iambic words, the least common type of stress pattern in English. The present results suggest shared neurocognitive resources for speech rhythm and musical rhythm.


Eurasip Journal on Audio, Speech, and Music Processing | 2006

Electrophysiological Study of Algorithmically Processed Metric/Rhythmic Variations in Language and Music

Sølvi Ystad; Cyrille Magne; Snorre Farner; Grégory Pallone; Mitsuko Aramaki; Mireille Besson; Richard Kronland-Martinet

This work is the result of an interdisciplinary collaboration between scientists from the fields of audio signal processing, phonetics, and cognitive neuroscience aimed at studying the perception of modifications in meter, rhythm, semantics, and harmony in language and music. A special time-stretching algorithm was developed to work with natural speech. In the language part, French sentences ending with tri-syllabic congruous or incongruous words, metrically modified or not, were created. In the music part, short melodies made of triplets, rhythmically and/or harmonically modified, were built. These stimuli were presented to a group of listeners who were asked to focus their attention either on meter/rhythm or semantics/harmony and to judge whether or not the sentences/melodies were acceptable. Language ERP analyses indicate that semantically incongruous words are processed independently of the subjects' attention, thus arguing for automatic semantic processing. In addition, metric incongruities seem to influence semantic processing. Music ERP analyses show that rhythmic incongruities are processed independently of attention, revealing automatic processing of rhythm in music.


Advances in psychology | 2008

A Dynamical Framework for Human Skill Learning

Cyrille Magne; J. A. Scott Kelso

This chapter presents an outline of a theory of learning derived from coordination dynamics. A key concept of coordination dynamics is that each individual enters the learning environment not as a blank slate but with preferences and preexisting capabilities. In the language of coordination dynamics, such predispositions and susceptibilities are referred to as intrinsic dynamics. This concept does not necessarily refer to innate mechanisms, but rather to the set of capacities that exist at the time the new task is to be learned. The constraints imposed by the learning environment, the task to be learned, the learner's intention, and so on constitute a source of behavioral or functional information. Functional information and intrinsic dynamics are complementary aspects of coordination dynamics. The chapter reviews some generic principles derived from empirical work at the behavioral level and explores how these findings are supported at the level of brain structure and function.


Frontiers in Psychology | 2016

Editorial: Overlap of Neural Systems for Processing Language and Music.

McNeel G. Jantzen; Edward W. Large; Cyrille Magne

The relationship between musical training and speech perception has intrigued researchers in language and music for decades, from Bever and Chiarello's (1974) work emphasizing hemispheric specialization to Tallal and Gaab's (2006) findings of shared neural circuitry. Recent studies demonstrating neural overlap for processing speech and music, and enhanced speech perception and production in musicians, suggest that these regions may be inextricably intertwined (Sammler et al., 2007; Wong P.C. et al., 2007; Wong P. et al., 2007; Rogalsky et al., 2011; Schulze et al., 2011). Patel's OPERA hypothesis and Hickok and Poeppel's (2000, 2007) neuroanatomical models continue to evolve and guide this field of research. However, the extent of neural overlap between music and speech remains hotly debated (Norman-Haignere et al., 2015; Peretz et al., 2015), with surprisingly little empirical research exploring specific neural homologs and analogs. Emerging evidence suggests that shared processes likely exist throughout development, depend upon an individual's acoustic experiences, and are affected by developmental trajectories. Moreover, developing theories that address the neural and developmental interaction between music and language processing, in conjunction with the broad availability of sophisticated tools for quantifying brain activity and dynamics, offers the perfect opportunity for researchers to address these key empirical questions. Taken together, this field of research has begun to elucidate the complex dynamics of overlapping neural areas for processing language and music. This special issue highlights the development of this overlap in early childhood and explores how the interaction between language and musical training enhances cognitive functioning in adults. This E-Book comprises 10 opinion, perspective, and research papers that focus on the overlap of neural systems for processing language and music.
Eight of these papers report original research and new findings that support overlapping neural systems for processing language and music. LaCroix et al. performed a meta-analysis of 171 neuroimaging studies to examine the role of context in processing music and language. Their findings suggest that observed neural overlaps for speech and music might be task-dependent. Fogel et al. developed a novel method for studying and quantifying predictions in musical tasks that is consistent with language tasks. Their melodic cloze probability task can be used to test computational models of melodic expectation and allows for a more precise examination of the relationship between predictive mechanisms in music and language. Using a garden-path design, Jung et al. demonstrated that rhythmic expectancy is crucial to the interaction of processing musical and linguistic syntax. Additionally, their findings support the incorporation of dynamic models of attentional entrainment into existing theories of musical and linguistic syntactic processing. Margulis et al. used the speech-to-song illusion to examine the role of pronunciation difficulty and temporal regularity. Their finding that difficult-to-pronounce languages, rather than differing temporal intervals, elicited a stronger speech-to-song illusion suggests a stronger speech representation for native and easy-to-pronounce languages. Miles et al. demonstrated that females have an advantage for recognizing familiar musical melodies. They believe this advantage is related to superior declarative memory, which may underlie the storage and knowledge of both the mental lexicon in language (e.g., Ullman, 2001) and some aspects of familiar melodies in music (Miranda and Ullman, 2007). Two papers report findings that musical training during development enhances literacy skills, including phonological awareness and reading fluency, via neural mechanisms for both language and music (Dege et al.; Gordon et al.).
Moreover, Dege and colleagues provide evidence that music production and music perception are associated with multiple precursors of reading. Finally, Lolli et al. examined the effect of sound frequency on judgments of emotion in speech by congenital amusics. Using both high- and low-pass filtered speech in a pitch discrimination and emotion identification task, their findings demonstrate the important role of low-frequency information in identifying the emotional content of speech. In addition to these eight research papers, there are two perspective and opinion papers that emphasize the affective and emotive commonalities between music and language (Lehmann and Paquette; Omigie). Lehmann and Paquette provide a neurobehavioral approach for examining cross-domain processing of musical and vocal emotions, suggesting that studying cochlear implant users may allow for a richer understanding of the neural overlap between music and language. Omigie (2015) provides evolutionary evidence for shared underlying neural mechanisms for our emotive responses to music and literature. This E-Book provides a comprehensive snapshot of the research examining the complex overlap of neural systems for processing language and music. Both musical experience and training enhance the development of linguistic representations, emotion perception, and other cognitive skills. Furthermore, the research presented here contributes to current knowledge of neuroplastic reorganization and repair in clinical populations, and may aid in the design of new and more effective rehabilitative protocols.


Restorative Neurology and Neuroscience | 2007

Influence of musical expertise and musical training on pitch processing in music and language

Mireille Besson; Daniele Schön; Sylvain Moreno; Andreia Santos; Cyrille Magne


Journal of Cognitive Neuroscience | 2011

Musicians and the metric structure of words

Céline Marie; Cyrille Magne; Mireille Besson


NeuroImage | 2010

Similar cerebral networks in language, music and song perception

Daniele Schön; Reyna L. Gordon; Aurélie Campagne; Cyrille Magne; Corine Astésano; Jean-Luc Anton; Mireille Besson

Collaboration


Dive into Cyrille Magne's collaborations.

Top Co-Authors

Daniele Schön | Aix-Marseille University

Edward W. Large | University of Connecticut

McNeel G. Jantzen | Western Washington University

Snorre Farner | Norwegian University of Science and Technology

Deanna K. Jordan | Middle Tennessee State University

Kelly J. Jantzen | Western Washington University