Bénédicte Poulin-Charronnat
University of Burgundy
Publications
Featured research published by Bénédicte Poulin-Charronnat.
Quarterly Journal of Experimental Psychology | 2005
Emmanuel Bigand; Barbara Tillmann; Bénédicte Poulin-Charronnat; D. Manderlier
Using short and long contexts, the present study investigated musical priming effects that are based on chord repetition and harmonic relatedness. A musical target (a chord) was preceded by either an identical prime or a different but harmonically related prime. In contrast to words, pictures, and environmental sounds, chord processing was not facilitated by repetition. Experiments 1 and 2 using single-chord primes showed either no significant difference between chord repetition and harmonic relatedness or facilitated processing for harmonically related targets. Experiment 3 using longer prime contexts showed that musical priming depended more on the musical function of the target in the preceding context than on target repetition. The effect of musical function was decreased, but not qualitatively changed, by chord repetition. The outcome of this study challenges predictions of sensory approaches and supports a cognitive approach of musical priming.
Journal of Cognitive Neuroscience | 2006
Bénédicte Poulin-Charronnat; Emmanuel Bigand; Stefan Koelsch
The present study investigates the effect of a change in syntactic-like musical function on event-related brain potentials (ERPs). Eight-chord piano sequences were presented to musically expert and novice listeners. Instructed to watch a movie and to ignore the musical sequences, the participants had to react when a chord was played with an instrument other than the piano. Participants were not informed that the relevant manipulation was the musical function of the last chord (target) of the sequences. The target chord acted either as a syntactically stable tonic chord (i.e., a C major chord in the key of C major) or as a less syntactically stable subdominant chord (i.e., a C major chord in the key of G major). The critical aspect of the results related to the impact such a manipulation had on the ERPs. An N5-like frontal negative component was found to be larger for subdominant than for tonic chords and attained significance only in musically expert listeners. These findings suggest that the subdominant chord is more difficult to integrate with the previous context than the tonic chord (as indexed by the observed N5) and that the processing of a small change in musical function occurs in an automatic way in musically expert listeners. The present results are discussed in relation to previous studies investigating harmonic violations with ERPs.
Quarterly Journal of Experimental Psychology | 2010
Barbara Tillmann; Bénédicte Poulin-Charronnat
Our study investigated whether newly acquired auditory structure knowledge allows listeners to develop perceptual expectations for future events. To that end, we introduced a new experimental approach that combines implicit learning and priming paradigms. Participants were first exposed to structured tone sequences without being told about the underlying artificial grammar. They then made speeded judgements on a perceptual feature of target tones in new sequences (i.e., in-tune/out-of-tune judgements). The target tones respected or violated the structure of the artificial grammar and were thus supposed to be expected or unexpected. In this priming task, grammatical tones were processed faster and more accurately than ungrammatical ones. This processing advantage was observed for an experimental group performing a memory task during the exposure phase, but was not observed for a control group lacking the exposure phase (Experiment 1). It persisted when participants performed an in-tune/out-of-tune detection task during exposure (Experiment 2). This finding suggests that the acquisition of new structure knowledge not only influences grammaticality judgements on entire sequences (as previously shown in implicit learning research), but also supports the development of perceptual expectations that influence the processing of single events. It further establishes the priming paradigm as a means of implicit access to acquired artificial structure knowledge.
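To give a concrete sense of the kind of materials involved, a finite-state artificial grammar such as those used in implicit-learning studies can be sketched as follows. The states and tone labels below are invented for illustration; this is not the grammar used in the study.

```python
# Toy finite-state grammar: each state maps to a list of
# (emitted tone, next state) transitions; next state None ends a sequence.
import random

GRAMMAR = {
    0: [("C", 1), ("E", 2)],
    1: [("G", 2), ("A", 3)],
    2: [("D", 3)],
    3: [("C", None), ("G", 1)],
}

def generate(max_len=8, seed=None):
    """Emit one tone sequence by walking the grammar from state 0."""
    rng = random.Random(seed)
    state, tones = 0, []
    while state is not None and len(tones) < max_len:
        tone, state = rng.choice(GRAMMAR[state])
        tones.append(tone)
    return tones

def is_grammatical_prefix(tones):
    """True if the sequence is a prefix of some grammar-generated sequence."""
    states = {0}
    for tone in tones:
        nxt_states = set()
        for s in states:
            if s is None:
                continue  # a finished sequence admits no further tones
            for t, nxt in GRAMMAR[s]:
                if t == tone:
                    nxt_states.add(nxt)
        if not nxt_states:
            return False
        states = nxt_states
    return True
```

Grammatical and ungrammatical target tones in the priming task correspond, in this sketch, to continuations that `is_grammatical_prefix` accepts or rejects.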
Frontiers in Psychology | 2011
Lisianne Hoch; Bénédicte Poulin-Charronnat; Barbara Tillmann
Recent research has suggested that music and language processing share neural resources, leading to new hypotheses about interference in the simultaneous processing of these two structures. The present study investigated the effect of a musical chord's tonal function on syntactic processing (Experiment 1) and semantic processing (Experiment 2) using a cross-modal paradigm and controlling for acoustic differences. Participants read sentences and performed a lexical decision task on the last word, which was syntactically or semantically expected or unexpected. The simultaneously presented (task-irrelevant) musical sequences ended on either an expected tonic or a less-expected subdominant chord. Experiment 1 revealed interactive effects between music-syntactic and linguistic-syntactic processing. Experiment 2 showed only main effects of both music-syntactic and linguistic-semantic expectations. An additional analysis over the two experiments revealed that linguistic violations interacted with musical violations, though not differently for the two types of linguistic violation. The present findings are discussed in light of currently available data on the processing of music as well as of syntax and semantics in language, leading to the hypothesis that resources might be shared for structural integration processes and sequencing.
Psychonomic Bulletin & Review | 2013
Pierre Perruchet; Bénédicte Poulin-Charronnat
A theoretical landmark in the growing literature comparing language and music is the shared syntactic integration resource hypothesis (SSIRH; e.g., Patel, 2008), which posits that the successful processing of linguistic and musical materials relies, at least partially, on the mastery of a common syntactic processor. Supporting the SSIRH, Slevc, Rosenberg, and Patel (Psychonomic Bulletin & Review 16(2):374–381, 2009) recently reported data showing enhanced syntactic garden path effects when the sentences were paired with syntactically unexpected chords, whereas the musical manipulation had no reliable effect on the processing of semantic violations. The present experiment replicated Slevc et al.'s (2009) procedure, except that syntactic garden paths were replaced with semantic garden paths. We observed the very same interactive pattern of results. These findings suggest that the element underpinning the interactions is the garden path configuration, rather than the implication of an alleged syntactic module. We suggest that a different amount of attentional resources is recruited to process each type of linguistic manipulation, hence modulating the resources left available for the processing of music and, consequently, the effects of musical violations.
Frontiers in Systems Neuroscience | 2014
Emmanuel Bigand; Charles Delbé; Bénédicte Poulin-Charronnat; Marc Leman; Barbara Tillmann
During the last decade, it has been argued that (1) music processing involves syntactic representations similar to those observed in language, and (2) music and language share similar syntactic-like processes and neural resources. This claim is important for understanding the origin of music and language abilities and, furthermore, it has clinical implications. The Western musical system, however, is rooted in psychoacoustic properties of sound, and this is not the case for linguistic syntax. Accordingly, musical syntax processing could be parsimoniously understood as an emergent property of auditory memory rather than a property of abstract processing similar to linguistic processing. To support this view, we simulated numerous empirical studies that investigated the processing of harmonic structures, using a model based on the accumulation of sensory information in auditory memory. The simulations revealed that most of the musical syntax manipulations used with behavioral and neurophysiological methods, as well as with developmental and cross-cultural approaches, can be accounted for by the auditory memory model. This led us to question whether current research on musical syntax can really be compared with linguistic processing. Our simulations also raise methodological and theoretical challenges for studying musical syntax while disentangling confounded low-level sensory influences. In order to investigate syntactic abilities in music comparable to language, research should preferentially use musical material with structures that circumvent the tonal effect exerted by psychoacoustic properties of sounds.
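The general idea behind such auditory-memory accounts can be sketched in a few lines. The following toy model is an assumption-laden illustration, not the model used in the paper: chords leave pitch-class traces in an exponentially decaying buffer, and a target chord's "expectedness" is its overlap with the accumulated trace, with no syntactic rules anywhere in the computation.

```python
# Toy sensory-memory model of harmonic priming (illustration only):
# accumulate pitch-class vectors with exponential decay, then score a
# target chord by its normalized overlap with the decaying trace.
import math

PITCH_CLASSES = 12

def chord_vector(pitch_classes):
    v = [0.0] * PITCH_CLASSES
    for pc in pitch_classes:
        v[pc] = 1.0
    return v

def accumulate(chords, decay=0.5):
    """Accumulate chord vectors, decaying the trace at each new event."""
    trace = [0.0] * PITCH_CLASSES
    for chord in chords:
        trace = [decay * t + c for t, c in zip(trace, chord_vector(chord))]
    return trace

def overlap(trace, target):
    """Cosine similarity between the memory trace and a target chord."""
    tv = chord_vector(target)
    dot = sum(a * b for a, b in zip(trace, tv))
    norm = math.sqrt(sum(a * a for a in trace)) * math.sqrt(sum(b * b for b in tv))
    return dot / norm if norm else 0.0

# Context in C major: C (0,4,7), F (5,9,0), G (7,11,2).
context = [[0, 4, 7], [5, 9, 0], [7, 11, 2]]
trace = accumulate(context)
tonic = [0, 4, 7]     # C major: shares pitch classes with the context
distant = [1, 5, 8]   # Db major: little pitch-class overlap
print(overlap(trace, tonic) > overlap(trace, distant))  # True
```

A purely sensory score of this kind already favors the tonic over a harmonically distant chord, which is the crux of the argument that many "syntactic" effects may be confounded with low-level sensory overlap.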
Acta Psychologica | 2014
Pierre Perruchet; Bénédicte Poulin-Charronnat; Barbara Tillmann; Ronald Peereman
There is ample evidence that infants are able to exploit statistical cues to discover the words of their language. However, how they do so is the object of enduring debate. The prevalent position is that words are extracted from the prior computation of statistics, in particular the transitional probabilities between syllables. As an alternative, chunk-based models posit that the sensitivity to statistics results from other processes, whereby many potential chunks are considered as candidate words and then selected as a function of their relevance. These two classes of models have proven difficult to dissociate. We propose here a procedure that leads to contrasting predictions regarding the influence of a first language, L1, on the segmentation of a second language, L2. Simulations run with PARSER (Perruchet & Vinter, 1998), a chunk-based model, predict that when the words of L1 become word-external transitions of L2, learning of L2 should be impaired, even falling below chance level, at least before extensive exposure to L2 reverses the effect. In the same condition, a transitional-probability-based model predicts above-chance performance whatever the duration of exposure to L2. PARSER's predictions were confirmed by experimental data: Performance on a two-alternative forced-choice test between words and part-words from L2 was significantly below chance, even though part-words were less cohesive in terms of transitional probabilities than words.
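The contrast between the two model classes hinges on a simple statistic. As a rough illustration (not the authors' code; the syllables and the two-word toy language below are invented), the transitional probabilities between adjacent syllables in a continuous stream can be computed as follows:

```python
# Transitional probabilities (TPs) between adjacent syllables:
# TP(a -> b) = count(a followed by b) / count(a in non-final position).
from collections import Counter

def transitional_probabilities(stream):
    """Return P(next | current) for each adjacent syllable pair."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Toy "language": two words, ba-bi and tu-da, concatenated in a stream.
stream = ["ba", "bi", "tu", "da", "ba", "bi", "ba", "bi", "tu", "da"]
tps = transitional_probabilities(stream)
print(tps[("ba", "bi")])  # within-word transition: 1.0
print(tps[("bi", "tu")])  # between-word transition: below 1.0
```

A TP-based segmenter keeps tracking these conditional probabilities whatever the input, whereas a chunk-based model such as PARSER carries its previously formed units into the new material, which is what produces the diverging predictions about L1 interference on L2.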
Trends in Cognitive Sciences | 2006
Emmanuel Bigand; Barbara Tillmann; Bénédicte Poulin-Charronnat
Music and language have rules governing the structural organization of events. By analogy to language, these rules are referred to as the ‘syntactic rules’ of music. Does this analogy imply that the brain actually performs syntactic computations on musical structures, similar to those for language and based on a specialized module [1–3]? In contrast to linguistic syntax, which involves abstract computation between words, rules governing musical syntax are rooted in psychoacoustic properties of sound: syntactically related events are related on a sensory level and involve only weak acoustical deviance.
Experimental Psychology | 2013
Laurent Grégoire; Pierre Perruchet; Bénédicte Poulin-Charronnat
The usual color-word Stroop task, as well as most other Stroop-like paradigms, has provided invaluable information on the automaticity of word reading. However, investigating automaticity through reading alone has inherent limitations. This study explored whether a Stroop-like effect could be obtained by replacing word reading with note naming in musicians. Note naming shares with word reading the crucial advantage of being intensively practiced over years by musicians, thereby allowing the investigation of levels of automatism that are out of reach of laboratory settings. But note naming provides much greater flexibility in manipulating practice. For instance, even though training in musical notation is often conducted in parallel with the acquisition of literacy skills during childhood, there are enough exceptions that it can easily be decoupled from age. Supporting the possibility of exploiting note naming as a new tool for investigating automatisms, musicians asked to process note names written inside note pictures in incongruent positions on a staff were significantly slowed down in both a go/no-go task (Experiment 1) and a verbal task (Experiment 2), relative to a condition in which note names were printed inside note pictures in congruent positions.
Journal of Experimental Psychology: Learning, Memory and Cognition | 2009
Charlotte Desmet; Bénédicte Poulin-Charronnat; Philippe Lalitte; Pierre Perruchet
In a recent study, G. Kuhn and Z. Dienes (2005) reported that participants previously exposed to a set of musical tunes generated by a biconditional grammar subsequently preferred new tunes that respected the grammar over new ungrammatical tunes. Because the study and test tunes did not share any chunks of adjacent intervals, this result may be construed as straightforward evidence for the implicit learning of a structure that was only governed by nonlocal dependency rules. It is shown here that the grammar modified the statistical distribution of perceptually salient musical events, such as the probability that tunes covered an entire octave. When the influence of these confounds was removed, the effect of grammaticality disappeared.