
Publication


Featured research published by Daniele Schön.


Journal of Cognitive Neuroscience | 2006

Musician Children Detect Pitch Violations in Both Music and Language Better than Nonmusician Children: Behavioral and Electrophysiological Approaches

Cyrille Magne; Daniele Schön; Mireille Besson

The idea that extensive musical training can influence processing in cognitive domains other than music has received considerable attention from the educational system and the media. Here we analyzed behavioral data and recorded event-related brain potentials (ERPs) from 8-year-old children to test the hypothesis that musical training facilitates pitch processing not only in music but also in language. We used a parametric manipulation of pitch so that the final notes or words of musical phrases or sentences were congruous, weakly incongruous, or strongly incongruous. Musician children outperformed nonmusician children in the detection of the weak incongruity in both music and language. Moreover, the greatest differences in the ERPs of musician and nonmusician children were also found for the weak incongruity: whereas for musician children, early negative components developed in music and late positive components in language, no such components were found for nonmusician children. Finally, comparison of these results with previous ones from adults suggests that some aspects of pitch processing are in effect earlier in music than in language. Thus, the present results reveal positive transfer effects between cognitive domains and shed light on the time course and neural basis of the development of prosodic and melodic processing.


Cognition | 2008

Songs as an aid for language acquisition

Daniele Schön; Maud Boyer; Sylvain Moreno; Mireille Besson; Isabelle Peretz; Régine Kolinsky

In previous research, Saffran and colleagues [Saffran, J. R., Aslin, R. N., & Newport, E. L. (1996). Statistical learning by 8-month-old infants. Science, 274, 1926-1928; Saffran, J. R., Newport, E. L., & Aslin, R. N. (1996). Word segmentation: The role of distributional cues. Journal of Memory and Language, 35, 606-621] have shown that adults and infants can use the statistical properties of syllable sequences to extract words from continuous speech. They also showed that a similar learning mechanism operates with musical stimuli [Saffran, J. R., Johnson, E. K., Aslin, R. N., & Newport, E. L. (1999). Statistical learning of tone sequences by human infants and adults. Cognition, 70, 27-52]. In this work, we combined linguistic and musical information and compared language learning based on speech sequences to language learning based on sung sequences. We hypothesized that, compared to speech sequences, a consistent mapping of linguistic and musical information would enhance learning. Results confirmed the hypothesis, showing a strong learning facilitation of song compared to speech. Most importantly, the present results show that learning a new language, especially in the first learning phase wherein one needs to segment new words, may largely benefit from the motivational and structuring properties of music in song.
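The statistical-learning mechanism described above rests on transitional probabilities between adjacent syllables: within a word the next syllable is highly predictable, while across a word boundary it is not. A minimal sketch of this idea (the toy syllables, the two invented "words", and the 0.9 boundary threshold are illustrative assumptions, not the materials of the study):

```python
from collections import defaultdict

def transitional_probabilities(stream):
    """Estimate P(next syllable | current syllable) for adjacent pairs."""
    pair_counts = defaultdict(int)
    first_counts = defaultdict(int)
    for a, b in zip(stream, stream[1:]):
        pair_counts[(a, b)] += 1
        first_counts[a] += 1
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

def segment(stream, tps, threshold):
    """Posit a word boundary wherever the forward TP drops below threshold."""
    words, current = [], [stream[0]]
    for a, b in zip(stream, stream[1:]):
        if tps[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Toy "language": two trisyllabic words concatenated without pauses
w1, w2 = ["tu", "pi", "ro"], ["go", "la", "bu"]
order = [w1, w2, w2, w1, w1, w2, w1, w2]
stream = [syl for word in order for syl in word]

tps = transitional_probabilities(stream)
words = segment(stream, tps, threshold=0.9)
print(words)  # recovers 'tupiro' and 'golabu' in their order of occurrence
```

On this toy stream, within-word transitions have a probability of 1.0 while cross-boundary transitions are markedly lower, so thresholding the dips recovers the embedded words, mirroring how listeners are thought to exploit statistical structure to segment speech.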


Human Brain Mapping | 2007

Single-trial analysis of oddball event-related potentials in simultaneous EEG-fMRI

Christian G. Bénar; Daniele Schön; Stephan Grimault; Bruno Nazarian; Boris Burle; Muriel Roth; Jean-Michel Badier; Patrick Marquis; Catherine Liégeois-Chauvel; Jean-Luc Anton

There has recently been a growing interest in the use of simultaneous electroencephalography (EEG) and functional MRI (fMRI) for evoked activity in cognitive paradigms, thereby obtaining functional datasets with both high spatial and temporal resolution. The simultaneous recording permits obtaining event‐related potentials (ERPs) and MR images in the same environment, conditions of stimulation, and subject state; it also enables tracing the joint fluctuations of EEG and fMRI signals. The goal of this study was to investigate the possibility of tracking the trial‐to‐trial changes in event‐related EEG activity, and of using this information as a parameter in fMRI analysis. We used an auditory oddball paradigm and obtained single‐trial amplitude and latency features from the EEG acquired during fMRI scanning. The single‐trial P300 latency presented significant correlation with parameters external to the EEG (target‐to‐target interval and reaction time). Moreover, we obtained significant fMRI activations for the modulation by P300 amplitude and latency, both at the single‐subject and at the group level. Our results indicate that, in line with other studies, the EEG can bring a new dimension to the field of fMRI analysis by providing fine temporal information on the fluctuations in brain activity.
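The parametric analysis described above treats each trial's P300 measure as a weight on that trial's fMRI event. A minimal sketch of one common way to build such a modulated regressor (the double-gamma HRF shape, the mean-centring step, and the example onsets and amplitudes are illustrative assumptions, not the paper's exact pipeline):

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(tr, duration=32.0):
    """SPM-style double-gamma haemodynamic response, sampled every `tr` seconds."""
    t = np.arange(0, duration, tr)
    peak = gamma.pdf(t, 6)          # positive response peaking around 5 s
    undershoot = gamma.pdf(t, 16)   # late negative undershoot
    hrf = peak - undershoot / 6.0
    return hrf / hrf.sum()

def parametric_regressor(onsets, modulator, n_scans, tr):
    """Stick function at event onsets, scaled by the mean-centred single-trial
    modulator (e.g. P300 amplitude), convolved with the canonical HRF."""
    sticks = np.zeros(n_scans)
    mod = np.asarray(modulator, dtype=float)
    mod = mod - mod.mean()          # centre so the regressor carries only trial-to-trial variation
    for onset, m in zip(onsets, mod):
        sticks[int(round(onset / tr))] += m
    return np.convolve(sticks, canonical_hrf(tr))[:n_scans]

# Hypothetical example: 4 target trials with single-trial P300 amplitudes (in µV)
onsets = [10.0, 40.0, 70.0, 100.0]
amplitudes = [8.0, 12.0, 6.0, 10.0]
reg = parametric_regressor(onsets, amplitudes, n_scans=60, tr=2.0)
```

Such a regressor would be entered into the GLM alongside an unmodulated event regressor, so that its fit isolates variance tied to trial-to-trial fluctuations of the ERP feature rather than to the mere occurrence of targets.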


Cerebral Cortex | 2013

Music Training for the Development of Speech Segmentation

Clément François; Julie Chobert; Mireille Besson; Daniele Schön

The role of music training in fostering brain plasticity and developing high cognitive skills, notably linguistic abilities, is of great interest from both a scientific and a societal perspective. Here, we report results of a longitudinal study over 2 years using both behavioral and electrophysiological measures and a test-training-retest procedure to examine the influence of music training on speech segmentation in 8-year-old children. Children were pseudo-randomly assigned to either music or painting training and were tested on their ability to extract meaningless words from a continuous flow of nonsense syllables. While no between-group differences were found before training, both behavioral and electrophysiological measures showed improved speech segmentation skills across testing sessions for the music group only. These results show that music training directly causes facilitation in speech segmentation, thereby pointing to the importance of music for speech perception and, more generally, for children's language development. Finally, these results have strong implications for promoting the development of music-based remediation strategies for children with language-based learning impairments.


Neuroreport | 2005

Brain regions involved in the recognition of happiness and sadness in music

Stéphanie Khalfa; Daniele Schön; Jean-Luc Anton; Catherine Liégeois-Chauvel

Here, we used functional magnetic resonance imaging to test for the lateralization of the brain regions specifically involved in the recognition of negatively and positively valenced musical emotions. The manipulation of two major musical features (mode and tempo), resulting in the variation of emotional perception along the happiness–sadness axis, was shown to principally involve subcortical and neocortical brain structures, which are known to intervene in emotion processing in other modalities. In particular, the minor mode (sad excerpts) involved the left orbito and mid-dorsolateral frontal cortex, which does not confirm the valence lateralization model. We also show that the recognition of emotions elicited by variations of the two perceptual determinants relies on both common (BA 9) and distinct neural mechanisms.


Cerebral Cortex | 2011

Musical Expertise Boosts Implicit Learning of Both Musical and Linguistic Structures

Clément François; Daniele Schön

Musical training is known to modify auditory perception and related cortical organization. Here, we show that these modifications may extend to higher cognitive functions and generalize to processing of speech. Previous studies have shown that adults and newborns can segment a continuous stream of linguistic and nonlinguistic stimuli based only on probabilities of occurrence between adjacent syllables or tones. In the present experiment, we used an artificial (sung) language learning design coupled with an electrophysiological approach. While behavioral results were not clear-cut in showing an effect of expertise, event-related potential (ERP) data showed that musicians learned both the musical and the linguistic structures of the sung language better than nonmusicians did. We discuss these findings in terms of practice-related changes in auditory processing, stream segmentation, and memory processes.


Journal of Cognitive Neuroscience | 2011

Enhanced passive and active processing of syllables in musician children

Julie Chobert; Céline Marie; Clément François; Daniele Schön; Mireille Besson

The aim of this study was to examine the influence of musical expertise in 9-year-old children on passive (as reflected by MMN) and active (as reflected by discrimination accuracy) processing of speech sounds. Musician and nonmusician children were presented with a sequence of syllables that included standards and deviants in vowel frequency, vowel duration, and voice onset time (VOT). Both the passive and the active processing of duration and VOT deviants were enhanced in musician compared with nonmusician children. Moreover, although no effect was found on the passive processing of frequency, active frequency discrimination was enhanced in musician children. These findings are discussed in terms of common processing of acoustic features in music and speech and of positive transfer of training from music to the more abstract phonological representations of speech units (syllables).


Neuropsychologia | 2012

Rhythmic priming enhances the phonological processing of speech

Nia Cason; Daniele Schön

While natural speech does not possess the same degree of temporal regularity found in music, there is recent evidence to suggest that temporal regularity enhances speech processing. The aim of this experiment was to examine whether speech processing would be enhanced by the prior presentation of a rhythmical prime. We recorded electrophysiological (EEG) and behavioural (reaction time) data while participants listened to nonsense words preceded by a simple rhythm. Results showed that speech processing was enhanced by the temporal expectations generated by the prime. Interestingly, the beat and metrical structure of the prime had an effect on different ERP components elicited by the following word (N100, P300). These results indicate that using a music-like rhythmic prime matched to the prosodic features of speech enhances phonological processing of spoken words, and they thus reveal a cross-domain effect of musical rhythm on the processing of speech rhythm.


Frontiers in Psychology | 2011

Musical Expertise and Statistical Learning of Musical and Linguistic Structures

Daniele Schön; Clément François

Adults and infants can use the statistical properties of syllable sequences to extract words from continuous speech. Here we present a review of a series of electrophysiological studies investigating (1) speech segmentation resulting from exposure to spoken and sung sequences; (2) the extraction of linguistic versus musical information from a sung sequence; and (3) differences between musicians and non-musicians in both linguistic and musical dimensions. The results show that segmentation is better after exposure to sung compared to spoken material and, moreover, that the linguistic structure is better learned than the musical structure when using sung material. In addition, musical expertise facilitates the learning of both linguistic and musical structures. Finally, an electrophysiological approach, which directly measures brain activity, appears to be more sensitive than a behavioral one.


Acta Psychologica | 2013

Rhythm implicitly affects temporal orienting of attention across modalities

Deirdre Bolger; Wiebke Trost; Daniele Schön

Here we present two experiments investigating the implicit orienting of attention over time by entrainment to an auditory rhythmic stimulus. In the first experiment, participants carried out detection and discrimination tasks with auditory and visual targets while listening to an isochronous auditory sequence, which acted as the entraining stimulus. For the second experiment, we used musical extracts as the entraining stimulus and tested the resulting strength of entrainment with a visual discrimination task. Both experiments used reaction times as a dependent variable. By manipulating the appearance of targets across four selected metrical positions of the auditory entraining stimulus, we were able to observe how entraining to a rhythm modulates behavioural responses. The fact that our results were independent of modality gives new insight into cross-modal interactions between auditory and visual modalities in the context of dynamic attending to auditory temporal structure.

Collaboration


Daniele Schön's top co-authors and their affiliations.

Jean-Luc Anton

Aix-Marseille University


Muriel Roth

Aix-Marseille University


Cyrille Magne

Centre national de la recherche scientifique
