Publication


Featured research published by Jean-Julien Aucouturier.


Journal of New Music Research | 2003

Representing Musical Genre: A State of the Art

Jean-Julien Aucouturier; François Pachet

Musical genre is probably the most popular music descriptor. In the context of large musical databases and Electronic Music Distribution, genre is therefore a crucial piece of metadata for the description of music content. However, genre is intrinsically ill-defined, and attempts at defining genre precisely tend to end up as circular, ungrounded projections of fantasies. Is genre an intrinsic attribute of music titles, like, say, tempo? Or is genre an extrinsic description of the whole piece? In this article, we discuss the various approaches to representing musical genre, and propose to classify them into three main categories: manual, prescriptive and emergent approaches. We discuss the pros and cons of each approach, and illustrate our study with results from the Cuidado IST project.


International Conference on Multimedia and Expo | 2002

Scaling up music playlist generation

Jean-Julien Aucouturier; François Pachet

The issue of automatically generating sequences of music titles that satisfy arbitrary criteria, such as user preferences, has gained interest recently because of its numerous applications in the field of electronic music distribution. The approaches proposed so far suffer from two main drawbacks: reduced expressiveness and an inability to handle large music catalogues. In this paper we present a system able to automatically produce music playlists from large, real catalogues (up to 200,000 titles) while handling arbitrarily complex criteria. We describe the basic algorithm and its adaptation to playlist generation, and report on experiments performed in the context of the European project Cuidado.
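For illustration, the sketch below shows how a playlist can be generated by local search over a cost function that counts constraint violations. This is not the paper's actual algorithm; the catalogue fields, the two constraints, and all parameters are invented assumptions.

```python
# Minimal sketch of constraint-driven playlist generation by local search.
# Catalogue fields, constraints and parameters are illustrative, not the
# system described in the paper.
import random

catalogue = [
    {"title": f"track{i}",
     "tempo": random.randint(60, 180),
     "genre": random.choice(["rock", "jazz", "electro"])}
    for i in range(10000)
]

def cost(playlist):
    """Number of constraint violations: an all-jazz playlist with smooth tempo changes."""
    c = sum(1 for t in playlist if t["genre"] != "jazz")        # genre constraint
    c += sum(abs(a["tempo"] - b["tempo"]) > 20                  # tempo-continuity constraint
             for a, b in zip(playlist, playlist[1:]))
    return c

def generate(length=10, iterations=5000):
    playlist = random.sample(catalogue, length)
    for _ in range(iterations):
        if cost(playlist) == 0:
            break                                   # all constraints satisfied
        i = random.randrange(length)                # pick a slot to repair
        candidate = playlist.copy()
        candidate[i] = random.choice(catalogue)     # try a replacement title
        if cost(candidate) <= cost(playlist):       # keep non-worsening moves
            playlist = candidate
    return playlist

print([t["title"] for t in generate()])
```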


International Conference on Neural Information Processing | 2008

Making a Robot Dance to Music Using Chaotic Itinerancy in a Network of FitzHugh-Nagumo Neurons

Jean-Julien Aucouturier; Yuta Ogai; Takashi Ikegami

We propose a technique to make a robot execute free, solitary dance movements to music, in a manner that simulates the dynamic alternation between synchronisation and autonomy typically observed in human behaviour. In contrast with previous approaches, we preprogram neither the dance patterns nor their alternation, but rather build basic dynamics into the robot and let the behaviour emerge in a seemingly autonomous manner. The robot's motor commands are generated in real time by converting the output of a neural network that processes a sequence of pulses corresponding to the beats of the music being danced to. The spiking behaviour of individual neurons is controlled by a biologically inspired model (FitzHugh-Nagumo). Under appropriate parameters, the network generates chaotic itinerant behaviour among low-dimensional local attractors. A robot controlled this way exhibits a variety of motion styles, some periodic and strongly coupled to the musical rhythm and others more independent, as well as spontaneous jumps from one style of motion to the next. The resulting behaviour is completely deterministic (as the solution of a non-linear dynamical system), adaptive to the music being played, and, we argue, an interesting compromise between synchronisation and autonomy.
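As an illustration of the neural building block involved, the sketch below integrates a single FitzHugh-Nagumo neuron driven by beat-like input pulses using the Euler method. The parameter values, pulse timing, spike threshold, and the idea of mapping spikes to motor commands in the final comment are assumptions for illustration, not the paper's controller, which uses a network of such neurons.

```python
# Minimal sketch: one FitzHugh-Nagumo neuron driven by periodic "beat" pulses,
# integrated with the Euler method. Parameters are illustrative only.
import numpy as np

a, b, tau = 0.7, 0.8, 12.5                    # classic FHN parameters (excitable regime)
dt, steps = 0.05, 20000                       # integration step and duration
beat_period, pulse_len, amp = 50.0, 2.0, 1.0  # beat-like input pulses

v, w = -1.0, -0.5                             # initial membrane and recovery variables
trace = np.empty(steps)
for k in range(steps):
    t = k * dt
    I = amp if (t % beat_period) < pulse_len else 0.0   # pulse on each "beat"
    dv = v - v**3 / 3 - w + I                 # fast (membrane) variable
    dw = (v + a - b * w) / tau                # slow (recovery) variable
    v, w = v + dt * dv, w + dt * dw           # Euler step
    trace[k] = v

# Upward threshold crossings mark spikes; in a full controller, the network's
# spike pattern would be converted into motor commands for the robot.
spikes = np.flatnonzero((trace[1:] > 1.0) & (trace[:-1] <= 1.0))
print(f"{len(spikes)} spikes over {steps * dt:.0f} time units")
```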


Journal of New Music Research | 2008

Introduction–From Genres to Tags: A Little Epistemology of Music Information Retrieval Research

Jean-Julien Aucouturier; Elias Pampalk

Surprise: the large-scale description of music did not quite happen as expected. Most of the information is annotated manually (no automated analysis) and unstructured (no taxonomy), in a collaborative, dynamic and unmoderated process (unlike a centralized library). Millions of users routinely connect to websites such as last.fm, MusicStrands, MusicBrainz or Pandora, where they enter free descriptions (a.k.a. tags) of the music they like or dislike. Each user's tags are available for all to see and influence the way other users describe or look for music. The result is a collaborative repository of musical knowledge of a size and richness unheard of so far. "The Beatles" used to be "British pop". What they are now is something akin to Figure 1.


Frontiers in Human Neuroscience | 2013

Music improves verbal memory encoding while decreasing prefrontal cortex activity: an fNIRS study.

Laura Ferreri; Jean-Julien Aucouturier; Makii Muthalib; Emmanuel Bigand; Aurélia Bugaiska

Listening to music engages the whole brain, thus stimulating cognitive performance in a range of not purely musical activities such as language and memory tasks. This article addresses an ongoing debate on the link between music and memory for words. While evidence from healthy and clinical populations suggests that music listening can improve verbal memory in a variety of situations, it is still unclear which specific memory process is affected and how. This study was designed to explore the hypothesis that music specifically benefits the encoding part of verbal memory tasks, by providing a richer context for encoding and therefore placing less demand on the dorsolateral prefrontal cortex (DLPFC). Twenty-two healthy young adults underwent functional near-infrared spectroscopy (fNIRS) imaging of their bilateral DLPFC while encoding words against either a music or a silent background. Behavioral data confirmed the facilitating effect of a music background during encoding on subsequent item recognition. fNIRS results revealed significantly greater activation of the left hemisphere during encoding (in line with the HERA model of memory lateralization) and a sustained, bilateral decrease of activity in the DLPFC in the music condition compared to silence. These findings suggest that music modulates the role played by the DLPFC during verbal encoding, and open perspectives for applications to clinical populations with prefrontal impairments, such as elderly adults or Alzheimer's patients.


Multimedia Tools and Applications | 2006

The Cuidado music browser: an end-to-end electronic music distribution system

François Pachet; Jean-Julien Aucouturier; Amaury La Burthe; Aymeric Zils; Anthony Beurivé

The IST project Cuidado, which ran from January 2001 to December 2003, produced the first entirely automatic chain for extracting and exploiting musical metadata for browsing music. The Sony CSL laboratory is primarily interested in popular music browsing in large-scale catalogues. First, we are interested in human-centred issues related to browsing "Popular Music": popular here means that the music accessed is widely distributed and known to many listeners. Second, we consider "popular browsing" of music, i.e., making music accessible to non-specialists (music lovers) and allowing the sharing of musical tastes and information within communities, departing from the usual single-user view of digital libraries. The project covers all areas of the music-to-listener chain: music description (descriptor extraction from the music signal and data mining techniques), similarity-based access and novel music retrieval methods such as automatic sequence generation, and user interface issues. This paper describes the scientific and technical issues at stake, and the results obtained.
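To make the descriptor-extraction and similarity-based-access steps concrete, here is a minimal sketch in the spirit of such a chain, not the Cuidado implementation: it summarizes each track by MFCC statistics and ranks a catalogue by descriptor distance. The librosa-based feature choice, the Euclidean distance, and the file names are illustrative assumptions.

```python
# Minimal sketch of timbre-descriptor extraction and similarity-based access.
# File names are placeholders; requires the numpy and librosa packages.
import numpy as np
import librosa

def timbre_descriptor(path, n_mfcc=13):
    """Summarize a track's timbre as the mean and variance of its MFCCs."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.var(axis=1)])

def most_similar(query_path, catalogue_paths, k=5):
    """Rank catalogue tracks by Euclidean distance between timbre descriptors."""
    query = timbre_descriptor(query_path)
    dists = [(np.linalg.norm(query - timbre_descriptor(p)), p)
             for p in catalogue_paths]
    return sorted(dists)[:k]

# Usage (placeholder file names):
# print(most_similar("query.mp3", ["a.mp3", "b.mp3", "c.mp3"]))
```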


Journal of Intelligent Information Systems | 2013

Seven problems that keep MIR from attracting the interest of cognition and neuroscience

Jean-Julien Aucouturier; Emmanuel Bigand

Despite a decade and a half of research and an impressive body of knowledge on how to represent and process musical audio signals, the discipline of Music Information Retrieval (MIR) still does not enjoy broad recognition outside of computer science. In music cognition and neuroscience in particular, where MIR's contribution could be most needed, MIR technologies are scarcely ever utilized, when they are not simply brushed aside as irrelevant. This, we contend here, is the result of a series of misunderstandings between the two fields, rooted in deeply different methodologies and assumptions that are not often made explicit. A collaboration between a MIR researcher and a music psychologist, this article attempts to clarify some of these assumptions, and offers suggestions on how to adapt some of MIR's most emblematic signal processing paradigms, evaluation procedures and application scenarios to the new challenges brought forth by the natural sciences of music.


Proceedings of the National Academy of Sciences of the United States of America | 2016

Covert digital manipulation of vocal emotion alter speakers’ emotional states in a congruent direction

Jean-Julien Aucouturier; Petter Johansson; Lars Hall; Rodrigo Segnini; Lolita Mercadié; Katsumi Watanabe

Significance: We created a digital audio platform to covertly modify the emotional tone of participants' voices toward happiness, sadness, or fear while they talked. Independent listeners perceived the transformations as natural examples of emotional speech, but the participants remained unaware of the manipulation, indicating that we do not continuously monitor our own emotional signals. Instead, as a consequence of listening to their altered voices, the emotional state of the participants changed in congruence with the emotion portrayed. This result is the first evidence, to our knowledge, of peripheral feedback on emotional experience in the auditory domain. The finding is of great significance because the mechanisms behind the production of vocal emotion are virtually unknown.

Research has shown that people often exert control over their emotions. By modulating expressions, reappraising feelings, and redirecting attention, they can regulate their emotional experience. These findings have contributed to a blurring of the traditional boundaries between cognitive and emotional processes, and it has been suggested that emotional signals are produced in a goal-directed way and monitored for errors like other intentional actions. However, this interesting possibility has never been experimentally tested. To this end, we created a digital audio platform to covertly modify the emotional tone of participants' voices toward happiness, sadness, or fear while they talked. The results showed that the audio transformations were perceived as natural examples of the intended emotions, yet the great majority of the participants remained unaware that their own voices were being manipulated. This finding indicates that people are not continuously monitoring their own voice to make sure that it meets a predetermined emotional target. Instead, as a consequence of listening to their altered voices, the emotional state of the participants changed in congruence with the emotion portrayed, as measured by both self-report and skin conductance level. This change is the first evidence, to our knowledge, of peripheral feedback effects on emotional experience in the auditory domain. As such, our result reinforces the wider framework of self-perception theory: that we often use the same inferential strategies to understand ourselves as those that we use to understand others.
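As a rough offline illustration of one ingredient of such a voice transformation, and not the real-time platform described in the paper, which manipulates several vocal cues with low latency, the snippet below applies a small upward pitch shift to a recorded utterance. The file names and shift amount are assumptions.

```python
# Offline sketch: slightly raise the pitch of a recorded utterance, a cue often
# associated with happier-sounding speech. Requires librosa and soundfile;
# "speech.wav" is a placeholder input file.
import librosa
import soundfile as sf

y, sr = librosa.load("speech.wav", sr=None, mono=True)
shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=0.5)  # +50 cents
sf.write("speech_shifted.wav", shifted, sr)
```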


Journal of New Music Research | 2006

Jamming with Plunderphonics: Interactive concatenative synthesis of music

Jean-Julien Aucouturier; François Pachet

This paper proposes to use techniques of concatenative sound synthesis in the context of real-time music interaction. We describe a system that generates an audio track by concatenating audio segments extracted from pre-existing musical files. The track can be controlled in real time by specifying high-level properties (or constraints) that hold on metadata about the audio segments. A constraint-satisfaction mechanism, based on local search, selects the audio segments that best match those constraints at any time. We describe the real-time aspects of the system, notably the asynchronous adding and removing of constraints, and report on several constraints and controllers designed for the system. We illustrate the system with several application examples, notably a virtual drummer able to interact with a human musician in real time.
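The sketch below illustrates the general idea of constraint-driven segment selection, not the system's actual implementation: at each step, the segment whose metadata best satisfies the current constraints, including a continuity term, is appended to the track, and a constraint can be changed mid-generation as a stand-in for asynchronous user control. The metadata fields, cost terms, and parameters are invented for illustration, and audio I/O is omitted.

```python
# Minimal sketch of constraint-driven concatenative selection over segment metadata.
import random

segments = [{"id": i,
             "energy": random.random(),                 # illustrative metadata
             "pitch": random.choice(range(40, 80))}
            for i in range(500)]

constraints = {"energy": 0.8, "pitch": 60}              # high-level targets (controllable)

def cost(seg, prev):
    c = abs(seg["energy"] - constraints["energy"])      # target-energy constraint
    c += abs(seg["pitch"] - constraints["pitch"]) / 40  # target-pitch constraint
    if prev is not None:
        c += abs(seg["energy"] - prev["energy"])        # continuity constraint
    return c

track, prev = [], None
for step in range(16):
    if step == 8:
        constraints["energy"] = 0.2                     # controller lowers the energy target
    pool = random.sample(segments, 50)                  # candidate pool for local search
    best = min(pool, key=lambda s: cost(s, prev))       # best-matching segment right now
    track.append(best["id"])
    prev = best

print(track)
```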


Journal of the Acoustical Society of America | 2011

Segmentation of expiratory and inspiratory sounds in baby cry audio recordings using hidden Markov models.

Jean-Julien Aucouturier; Yulri Nonaka; Kentaro Katahira; Kazuo Okanoya

The paper describes an application of machine learning techniques to identify expiratory and inspiratory phases in audio recordings of human baby cries. Crying episodes were recorded from 14 infants, spanning four vocalization contexts in their first 12 months of age; recordings from three individuals were annotated manually to identify expiratory and inspiratory sounds and used as training examples to automatically segment the recordings of the other 11 individuals. The proposed algorithm uses a hidden Markov model architecture, in which state likelihoods are estimated either with Gaussian mixture models or by converting the classification decisions of a support vector machine. The algorithm yields up to 95% classification precision (86% average) and generalizes over different babies, ages, and vocalization contexts. The technique offers an opportunity to quantify expiration duration, crying rate, and other time-related characteristics of baby crying for screening, diagnosis, and research purposes over large populations of infants.
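For illustration, the sketch below decodes expiration/inspiration-like states with a two-state HMM whose emissions are Gaussian mixtures, in the spirit of (but not reproducing) the paper's pipeline. The MFCC features, model sizes, and file names are assumptions, and the model is fit unsupervised for brevity, whereas the paper trained on manually annotated recordings.

```python
# Minimal sketch: 2-state GMM-HMM segmentation of cry recordings.
# Requires numpy, librosa and hmmlearn; file names are placeholders.
import numpy as np
import librosa
from hmmlearn.hmm import GMMHMM

def features(path):
    """Frame-level MFCC features from a cry recording, shape (n_frames, 13)."""
    y, sr = librosa.load(path, sr=16000, mono=True)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T

# Fit a 2-state HMM with Gaussian-mixture emissions on a few recordings.
train = [features(p) for p in ["baby01.wav", "baby02.wav", "baby03.wav"]]
X, lengths = np.vstack(train), [len(t) for t in train]

model = GMMHMM(n_components=2, n_mix=4, covariance_type="diag", n_iter=50)
model.fit(X, lengths)

# Decode a new recording: the Viterbi state sequence segments frames into
# two classes, interpretable as expiration vs. inspiration after inspection.
states = model.predict(features("baby04.wav"))
print(states[:100])
```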

Collaboration


Dive into Jean-Julien Aucouturier's collaborations.

Top Co-Authors

Mark B. Sandler

Queen Mary University of London

Pascal Belin

Université de Montréal
