Charles Delbé
University of Burgundy
Publications
Featured research published by Charles Delbé.
Frontiers in Systems Neuroscience | 2014
Emmanuel Bigand; Charles Delbé; Bénédicte Poulin-Charronnat; Marc Leman; Barbara Tillmann
During the last decade, it has been argued (1) that music processing involves syntactic representations similar to those observed in language, and (2) that music and language share similar syntactic-like processes and neural resources. This claim is important for understanding the origin of music and language abilities and, furthermore, it has clinical implications. The Western musical system, however, is rooted in the psychoacoustic properties of sound, which is not the case for linguistic syntax. Accordingly, musical syntax processing could be understood more parsimoniously as an emergent property of auditory memory rather than as abstract processing similar to linguistic processing. To support this view, we simulated numerous empirical studies that investigated the processing of harmonic structures, using a model based on the accumulation of sensory information in auditory memory. The simulations revealed that most of the musical syntax manipulations used with behavioral and neurophysiological methods, as well as with developmental and cross-cultural approaches, can be accounted for by the auditory memory model. This led us to question whether current research on musical syntax can really be compared with linguistic processing. Our simulations also raise methodological and theoretical challenges for studying musical syntax while disentangling confounding low-level sensory influences. To investigate syntactic abilities in music comparable to those in language, research should preferentially use musical material with structures that circumvent the tonal effect exerted by the psychoacoustic properties of sounds.
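The core mechanism invoked here, accumulation of sensory information in auditory memory, can be illustrated with a short sketch. This is a minimal illustration in the spirit of a leaky-integrator echoic memory, not the model actually used in the paper: the decay constants, the `leaky_integrate` helper, and the cosine-similarity measure of tonal fit are all illustrative assumptions.

```python
import numpy as np

def leaky_integrate(frames, tau, dt=0.01):
    """Accumulate pitch-image frames in a leaky (echoic) memory.

    frames : (n_frames, n_channels) array of periodicity-pitch images
    tau    : decay time constant in seconds (short -> local image,
             long -> global tonal context)
    """
    alpha = np.exp(-dt / tau)               # per-frame decay factor
    memory = np.zeros(frames.shape[1])
    out = np.zeros(frames.shape)
    for i, frame in enumerate(frames):
        memory = alpha * memory + (1.0 - alpha) * frame
        out[i] = memory
    return out

def tonal_fit(frames, tau_local=0.1, tau_global=4.0):
    """Cosine similarity between a short-decay (local) and a long-decay
    (global) pitch image at each time step: high values mean the current
    sound fits the tonal context accumulated in auditory memory."""
    local = leaky_integrate(frames, tau_local)
    glob = leaky_integrate(frames, tau_global)
    num = (local * glob).sum(axis=1)
    den = np.linalg.norm(local, axis=1) * np.linalg.norm(glob, axis=1)
    return num / np.maximum(den, 1e-12)
```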
PLOS ONE | 2011
Emmanuel Bigand; Charles Delbé; Yannick Gérard; Barbara Tillmann
The present study investigated the minimum amount of auditory stimulation that allows differentiation of spoken voices, instrumental music, and environmental sounds. Three new findings are reported. (1) All stimuli were categorized above chance level from 50-ms segments. (2) With peak-level normalization, music and voices were accurately categorized from segments as short as 20 ms. When the root-mean-square (RMS) energy of the stimuli was equalized, voice stimuli were recognized better than music and environmental sounds. (3) Further psychoacoustical analyses suggest that the categorization of extremely brief auditory stimuli depends on the variability of their spectral envelopes within the stimulus set. These last two findings challenge the interpretation of the voice-superiority effect reported in previously published studies and support a more parsimonious interpretation in terms of an emergent property of auditory categorization processes.
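The two normalization schemes compared here are standard audio operations. Below is a minimal sketch of how such gating and level-matching might look; the target levels, sampling rate, and the random-noise stand-in for a recording are assumptions for illustration only, not the study's stimuli or code.

```python
import numpy as np

def peak_normalize(x, peak=0.9):
    """Scale a segment so its maximum absolute amplitude equals `peak`."""
    return x * (peak / np.max(np.abs(x)))

def rms_equalize(x, target_rms=0.1):
    """Scale a segment so its root-mean-square energy equals `target_rms`."""
    return x * (target_rms / np.sqrt(np.mean(x ** 2)))

# Example: gate a 50-ms segment from a longer signal sampled at 44.1 kHz
fs = 44100
signal = np.random.default_rng(0).standard_normal(fs)  # stand-in for audio
segment = signal[: int(0.050 * fs)]                     # 50-ms segment
print(np.max(np.abs(peak_normalize(segment))))          # -> 0.9
print(np.sqrt(np.mean(rms_equalize(segment) ** 2)))     # -> 0.1
```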
Cortex | 2014
Catherine Liégeois-Chauvel; Christian Bénar; Julien Krieg; Charles Delbé; Patrick Chauvel; Bernard Giusiano; Emmanuel Bigand
Music is a sound structure of remarkable acoustical and temporal complexity. Although it cannot denote specific meaning, it is one of the most potent and universal stimuli for inducing mood. How the auditory and limbic systems interact, and whether this interaction is lateralized when feeling emotions related to music, remains unclear. We studied the functional correlation between the auditory cortex (AC) and amygdala (AMY) through intracerebral recordings from both hemispheres in a single patient while she listened attentively to musical excerpts, which we compared to passive listening to a sequence of pure tones. While the left primary and secondary auditory cortices (PAC and SAC) showed larger increases in gamma-band responses than the right side, only the right side showed emotion-modulated gamma oscillatory activity. Intra- and inter-hemispheric correlations were observed between the auditory areas and the AMY during the delivery of a sequence of pure tones. In contrast, a strikingly right-lateralized functional network between the AC and the AMY was observed for the musical excerpts the patient experienced as happy, sad, and peaceful. Interestingly, excerpts experienced as angry, which the patient disliked, were associated with widespread de-correlation between all the structures. These results suggest that the right auditory-limbic interactions result from the formation of oscillatory networks that bind the activities of the network nodes into coherence patterns, resulting in the emergence of a feeling.
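As a rough illustration of the kind of measure involved, the sketch below computes a correlation between the gamma-band amplitude envelopes of two recording channels. The band limits, filter order, and Pearson correlation are assumptions made for the example; the paper's actual functional-correlation analysis is not reproduced here.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def gamma_envelope(x, fs, band=(30.0, 80.0), order=4):
    """Gamma-band amplitude envelope of one intracerebral channel."""
    b, a = butter(order, band, btype="bandpass", fs=fs)
    return np.abs(hilbert(filtfilt(b, a, x)))

def envelope_correlation(x, y, fs):
    """Pearson correlation between the gamma envelopes of two channels,
    a simple stand-in for a functional-correlation measure."""
    ex, ey = gamma_envelope(x, fs), gamma_envelope(y, fs)
    return np.corrcoef(ex, ey)[0, 1]

# Toy example: two noisy channels sharing a common 40-Hz oscillation
fs = 1000
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
common = np.sin(2 * np.pi * 40 * t) * (1 + 0.5 * np.sin(2 * np.pi * 0.2 * t))
ac = common + rng.standard_normal(t.size)    # "auditory cortex" channel
amy = common + rng.standard_normal(t.size)   # "amygdala" channel
print(envelope_correlation(ac, amy, fs))
```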
Musicae Scientiae | 2009
Freya Bailes; Charles Delbé
The report provides a brief account of an experiment whose control conditions produced interestingly counter-intuitive results. The method adapted priming techniques to explore whether imagining well-known melodies would facilitate perceptual discrimination of congruent compared with incongruent melodic continuations in a syllable identification task. This was shown to be the case, but in a subsequent control experiment, imagining an irrelevant lure melody also produced a priming effect. The persistent priming effect apparently related the target sequence to the aurally presented, nonadjacent opening notes, and not to the intervening mental image. Several statistical analyses of the pitch relationships in match and mismatch targets were performed, and a further experiment is reported in which participants explicitly selected between match and mismatch versions of the stimuli for fit within the prime context. It seems that the pitch proximity of the first target note to the final note of the sounded prime may be responsible for the priming effect. Further research to explain the phenomenon is outlined, including experiments that test the strength of melodic priming governed by pitch proximity by systematically varying the delay between prime and target.
Annee Psychologique | 2008
Charles Delbé; Robert M. French; Emmanuel Bigand
An unexpected asymmetry in a visual category-learning task in young infants was observed by Quinn, Eimas and Rosenkrantz (1993). A series of experimental results and simulations showed that this asymmetry is due to a perceptual inclusion of the visual category "cat" within the category "dog", which originates in the greater variability of the distributions of visual attributes in the dog category compared with the cat category (Mareschal & French, 1997; Mareschal, French, & Quinn, 2000; French, Mermillod, Quinn, & Mareschal, 2001; French, Mareschal, Mermillod, & Quinn, 2004). In the present study, we investigated whether this asymmetric categorization phenomenon could be replicated in the auditory domain. We therefore designed a series of sequential auditory stimuli analogous to the visual stimuli of Quinn et al. Two experiments with adult listeners appear to demonstrate a comparable asymmetric categorization effect in the auditory modality. Moreover, connectionist simulations confirm that bottom-up perceptual processes are largely at the origin of our behavioral results.
Archive | 2010
Emmanuel Bigand; Charles Delbé
The dissociation between implicit and explicit cognition has a long history in psychology. As early as 1920, Clark Hull (25) investigated the learning of Chinese ideographs and identified a process of concept formation by abstraction of common elements, a process that occurs without the subjects having explicit knowledge of these regularities. Perceptual learning is another example of a process that takes place largely in the absence of awareness of the rules governing environmental stimulation. Helmholtz (24) was one of the first to refer to implicit inferences made by the perceptual system and to perceptual learning. Some years later, the distinction between implicit and explicit cognition helped mark the end of behaviourist psychology. At that time, Tolman (74) reported an experiment that was difficult to account for within the framework of Skinner's conditioning theories (69). In this experiment, rats were placed in a complex maze and had to learn to reach food at the exit. Not surprisingly, the rats receiving positive reinforcement learned faster than a control group of rats that never received food at the exit. The novel point of Tolman's study was to define a third group of rats, for which no food was available at the exit during the first part of the experiment. According to the behaviourist school, this group was not supposed to learn anything, and it was indeed shown to behave exactly like the control group. In the second part of the experiment, this third group started to receive food at the exit. It was expected that learning would begin with this trial, and that rats of the third group would start improving their performance in the same way as rats of the first experimental group had.
Proceedings of the Tenth Neural Computation and Psychology Workshop | 2008
Charles Delbé
In this study, a psychoacoustical and connectionist modeling framework is proposed for the investigation of musical cognition. It is suggested that music perception involves the manipulation of (1) sensory representations that correlate with psychoacoustical features of the stimulus, and (2) abstract representations of the statistical regularities underlying a particular musical syntax. In the implicit learning domain, sensory features have been shown to interact with the processes involved in extracting the regularities governing combinations of musical events in a stream [e.g., 1]. Furthermore, in a more ecological context, it is well known that the traditional Western tonal system has sought a strong convergence between sensory and syntactic factors. The present research investigates the effects of the sensory coding simulated by an auditory model of pitch perception [2] on the representations of sequential regularities developed in a recurrent connectionist model.

According to Arbib, the brain can be described as a "layered, somatotopic, distributed computer". The auditory cortex provides an excellent example of a somatotopic processing array, as it shows tonotopic (pitch-dependent) organization across multiple processing stages. In an effort to model the somatotopic maps found in the cerebral cortex, Kohonen [3] developed the Self-Organizing Map (SOM). Although it has produced very good results with static inputs, it is often pointed out in the literature [4, 5] that the standard SOM is not designed for time-domain processing. Yet music, like language, is a highly structured domain in which a set of principles governs the combination of discrete structural elements into sequences. These combinatorial principles can be observed at multiple levels, such as the formation of chords, chord progressions, and keys, and they give rise to various temporal dependencies between elements. Extensions of the SOM that learn temporal dynamics have been proposed [4, 5]. These models contain a useful idea: recurrent temporal feedback, in addition to the purely spatial recurrent excitation/inhibition used in the conventional SOM. Each incoming signal is thus associated with a contextual signal that reflects the current state of the map. Hence, these models can maintain state and memory based on past input while engaging in self-organizing learning and context recognition, making them strong candidates for modeling the processes involved in music cognition.

Through a series of simulations, I first investigate this type of connectionist network as a model of human sequential learning. I then show that sensory signals shape the representations of sequences developed by these recurrent models. The strength with which sensory and contextual signals interact during learning determines the type of topology realized in the topographic maps (i.e., spatially or temporally defined signal topology). More specifically, the interactions between bottom-up representations built by an auditory model of pitch perception and the emergent syntactic-like knowledge of recurrent self-organizing networks operating on these sensory signals can account for several experimental results in the music cognition literature. Furthermore, the general learning dynamics of the maps could explain the developmental interaction of the sensory and syntactic characteristics of the musical environment. When units become more and more specialized in the time domain, sensory effects tend to become weaker compared with syntactic effects.
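The recurrent-SOM idea described above lends itself to a compact sketch. The following is a minimal illustration, not the model used in the study: the `RecurrentSOM` class, its 1-D map topology, and the `alpha`/`beta` weighting of sensory versus contextual distance are assumptions chosen to mirror the stated mechanism, in which each input is matched jointly against an input prototype and the map's previous activation.

```python
import numpy as np

class RecurrentSOM:
    """Minimal recurrent self-organizing map.

    Each unit stores an input prototype `w` and a context prototype `c`;
    the contextual signal is the map's own activation at the previous
    step, so winning units come to encode an input in its sequential
    context. `alpha` and `beta` set the relative weight of the sensory
    and contextual signals during matching.
    """

    def __init__(self, n_units, input_dim, alpha=0.5, beta=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.standard_normal((n_units, input_dim)) * 0.1
        self.c = rng.standard_normal((n_units, n_units)) * 0.1
        self.alpha, self.beta = alpha, beta
        self.activation = np.zeros(n_units)   # context for the next step

    def step(self, x, lr=0.1, sigma=2.0):
        # Combined sensory + contextual matching distance per unit
        d = (self.alpha * ((self.w - x) ** 2).sum(axis=1)
             + self.beta * ((self.c - self.activation) ** 2).sum(axis=1))
        winner = int(np.argmin(d))
        # Gaussian neighborhood on a 1-D map topology
        idx = np.arange(len(d))
        h = np.exp(-((idx - winner) ** 2) / (2 * sigma ** 2))
        # Move prototypes toward the current input and previous context
        self.w += lr * h[:, None] * (x - self.w)
        self.c += lr * h[:, None] * (self.activation - self.c)
        self.activation = np.exp(-d)          # new contextual signal
        return winner
```

In this sketch, raising `beta` relative to `alpha` makes units specialize on temporal context, shifting the map from a spatially toward a temporally defined topology, the trade-off the abstract links to weakening sensory effects relative to syntactic ones.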
Journal of Experimental Psychology: Human Perception and Performance | 2010
Frederic Marmel; Barbara Tillmann; Charles Delbé
Music Perception: An Interdisciplinary Journal | 2008
Stéphanie Khalfa; Charles Delbé; Emmanuel Bigand; Emmanuelle Reynaud; Patrick Chauvel; Catherine Liégeois-Chauvel
Proceedings of the Annual Meeting of the Cognitive Science Society | 2006
Emmanuel Bigand; Charles Delbé; Robert M. French