Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Mathieu Barthet is active.

Publication


Featured research published by Mathieu Barthet.


Computer Music Modeling and Retrieval | 2012

Music Emotion Recognition: From Content- to Context-Based Models

Mathieu Barthet; György Fazekas; Mark B. Sandler

The striking ability of music to elicit emotions assures its prominent status in human culture and everyday life. Music is often enjoyed and sought for its ability to induce or convey emotions, which may manifest in anything from a slight variation in mood to changes in our physical condition and actions. Consequently, research on how we might associate musical pieces with emotions and, more generally, how music brings about an emotional response is attracting ever-increasing attention. First, this paper provides a thorough review of studies on the relation between music and emotions from different disciplines. We then propose new insights to enhance automated music emotion recognition models using recent results from psychology, musicology, affective computing, semantic technologies and music information retrieval.


International Conference on Multimedia and Expo | 2013

Semantic models of musical mood: Comparison between crowd-sourced and curated editorial tags

Pasi Saari; Mathieu Barthet; György Fazekas; Tuomas Eerola; Mark B. Sandler

Social media services such as Last.fm provide crowd-sourced mood tags, which are a rich but often noisy source of information. In contrast, editorial annotations from production music libraries are meant to be incisive in nature. We compare the efficiency of these two data sources in capturing semantic information on mood expressed by music. First, a semantic computing technique devised for mood-related tags in large datasets is applied to Last.fm and I Like Music (ILM) corpora separately (250,000 tracks each). The resulting semantic estimates are then correlated with listener ratings of arousal, valence and tension. High correlations (Spearman's rho) are found between the track positions in the dimensional mood spaces and listener ratings using both data sources (0.60 < rs < 0.70). In addition, the use of curated editorial data provides a statistically significant improvement compared to crowd-sourced data for predicting moods perceived in music.
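
The correlation step described above can be illustrated with a minimal sketch. This is not the authors' code: the mood estimates and listener ratings below are invented placeholders, and the sketch simply computes Spearman's rho between tag-derived valence estimates and listener ratings, which is the comparison the abstract reports.

    # Minimal sketch (hypothetical data): correlate tag-derived mood estimates
    # with listener ratings using Spearman's rho, as in the abstract above.
    from scipy.stats import spearmanr

    # Invented placeholder values for a handful of tracks.
    tag_based_valence = [0.12, -0.40, 0.75, 0.05, -0.22, 0.63, -0.55, 0.30]
    listener_valence = [0.20, -0.35, 0.80, -0.10, -0.30, 0.55, -0.60, 0.25]

    rho, p_value = spearmanr(tag_based_valence, listener_valence)
    print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f})")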


IEEE Transactions on Affective Computing | 2016

Genre-Adaptive Semantic Computing and Audio-Based Modelling for Music Mood Annotation

Pasi Saari; György Fazekas; Tuomas Eerola; Mathieu Barthet; Olivier Lartillot; Mark B. Sandler

This study investigates whether taking genre into account is beneficial for automatic music mood annotation in terms of the core affects (valence, arousal, and tension), as well as several other mood scales. Novel techniques employing genre-adaptive semantic computing and audio-based modelling are proposed. A technique called ACTwg employs genre-adaptive semantic computing of mood-related social tags, whereas ACTwg-SLPwg combines semantic computing and audio-based modelling, both in a genre-adaptive manner. The proposed techniques are experimentally evaluated at predicting listener ratings related to a set of 600 popular music tracks spanning multiple genres. The results show that ACTwg outperforms a semantic computing technique that does not exploit genre information, and ACTwg-SLPwg outperforms conventional techniques and other genre-adaptive alternatives. In particular, improvements in the prediction rates are obtained for the valence dimension, which is typically the most challenging core affect dimension for audio-based annotation. The specificity of genre categories is not crucial for the performance of ACTwg-SLPwg. The study also presents analytical insights into inferring a concise tag-based genre representation for genre-adaptive music mood analysis.
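
The genre-adaptive idea (fit one mood model per genre, then blend the genre-specific predictions by a track's genre membership) can be sketched roughly as below. This is not the ACTwg-SLPwg implementation: the features, genre weights and plain ridge regressors are stand-in assumptions used only to make the weighting scheme concrete.

    # Hypothetical sketch of genre-adaptive mood prediction (not the paper's code):
    # one regressor per genre, predictions blended by genre membership weights.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    genres = ["rock", "jazz", "electronic"]

    # Invented training data: audio feature vectors and valence ratings per genre.
    train = {g: (rng.normal(size=(50, 8)), rng.uniform(-1, 1, size=50)) for g in genres}
    models = {g: Ridge(alpha=1.0).fit(X, y) for g, (X, y) in train.items()}

    def predict_valence(features, genre_weights):
        """Blend genre-specific predictions by the track's genre membership weights."""
        total = sum(genre_weights.values())
        return sum(w * models[g].predict(features[None, :])[0]
                   for g, w in genre_weights.items()) / total

    track_features = rng.normal(size=8)
    print(predict_valence(track_features, {"rock": 0.7, "electronic": 0.3}))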


Archive | 2013

From Sounds to Music and Emotions

Mitsuko Aramaki; Mathieu Barthet; Richard Kronland-Martinet; Sølvi Ystad

The constant growth of online music datasets and applications has required advances in MIR research. Music genres and annotated mood have received much attention in the last decades as descriptors for content-based systems. However, their inherent relationship is rarely explored in the literature. Here, we investigate whether or not the presence of tonal and rhythmic motifs in the melody can be used for establishing a relationship between genres and subjective aspects such as mood, dynamism and emotion. Our approach uses a symbolic representation of music and is applied to eight different genres.
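
As an illustration of the kind of symbolic analysis the abstract refers to, the sketch below counts pitch-interval motifs in a melody given as a plain list of MIDI pitches. The melody and motif length are invented; this is not the study's actual feature set.

    # Hypothetical sketch: count pitch-interval n-grams ("tonal motifs") in a
    # symbolic melody. The MIDI pitch sequence below is an invented example.
    from collections import Counter

    melody = [60, 62, 64, 62, 60, 62, 64, 65, 64, 62, 60]
    intervals = [b - a for a, b in zip(melody, melody[1:])]

    def interval_ngrams(seq, n=3):
        """Return counts of length-n interval motifs."""
        return Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))

    for motif, count in interval_ngrams(intervals).most_common(3):
        print(motif, count)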


IEEE Transactions on Audio, Speech, and Language Processing | 2013

Automatic Ontology Generation for Musical Instruments Based on Audio Analysis

Sefki Kolozali; Mathieu Barthet; György Fazekas; Mark B. Sandler

In this paper we present a novel hybrid system that involves a formal method of automatic ontology generation for web-based audio signal processing applications. An ontology is seen as a knowledge management structure that represents domain knowledge in a machine-interpretable format. It describes concepts and relationships within a particular domain, in our case, the domain of musical instruments. However, the different tasks of ontology engineering, including manual annotation, hierarchical structuring and organization of data, can be laborious and challenging. For these reasons, we investigate how the process of creating ontologies can be made less dependent on human supervision by exploring concept analysis techniques in a Semantic Web environment. In this study, various musical instruments, from wind to string families, are classified using timbre features extracted from audio. To obtain models of the analysed instrument recordings, we use K-means clustering to determine an optimised codebook of Line Spectral Frequencies (LSFs), or Mel-Frequency Cepstral Coefficients (MFCCs). Two classification techniques, based on Multi-Layer Perceptron (MLP) neural networks and Support Vector Machines (SVM), were tested. Then, Formal Concept Analysis (FCA) is used to automatically build the hierarchical structure of musical instrument ontologies. Finally, the generated ontologies are expressed using the Web Ontology Language (OWL). System performance was evaluated under natural recording conditions using databases of isolated notes and melodic phrases. Analyses of Variance (ANOVA) were conducted with the feature and classifier attributes as independent variables and the musical instrument recognition F-measure as the dependent variable. Based on these statistical analyses, a detailed comparison between musical instrument recognition models is made to investigate their effects on the automatic ontology generation system. The proposed system is general and also applicable to other research fields that are related to ontologies and the Semantic Web.
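
The codebook-plus-classifier stage of the pipeline can be sketched roughly as follows. The random matrices stand in for LSF or MFCC frames, the class labels and codebook size are arbitrary, and the FCA and OWL generation stages are omitted; this is only a sketch of the clustering-then-classification idea, not the paper's system.

    # Hypothetical sketch of the codebook + classification stage (not the paper's code):
    # build a K-means codebook over spectral frames, represent each recording as a
    # histogram of codeword occurrences, then classify instruments with an SVM.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    n_codewords = 16

    # Invented "MFCC frames": 40 recordings x 100 frames x 13 coefficients,
    # with one label per recording (0 = wind, 1 = string).
    frames = rng.normal(size=(40, 100, 13))
    labels = np.repeat([0, 1], 20)

    codebook = KMeans(n_clusters=n_codewords, n_init=10, random_state=0)
    codebook.fit(frames.reshape(-1, 13))

    def histogram(recording):
        """Histogram of codeword assignments for one recording's frames."""
        codes = codebook.predict(recording)
        return np.bincount(codes, minlength=n_codewords) / len(codes)

    X = np.stack([histogram(r) for r in frames])
    clf = SVC(kernel="rbf").fit(X, labels)
    print("training accuracy:", clf.score(X, labels))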


Human Factors in Computing Systems | 2016

A Participatory Live Music Performance with the Open Symphony System

Kate Hayes; Mathieu Barthet; Yongmeng Wu; Leshao Zhang; Nick Bryan-Kinns

Our Open Symphony system reimagines the music experience for a digital age, fostering alliances between performer, audience and our digital selves. Open Symphony enables live participatory music performance where the audience actively engages in the music creation process. This is made possible by using state-of-the-art web technologies and data visualisation techniques. Through collaborations with local performers we will conduct a series of interactive music performances, revolutionizing the performance experience for both performers and audiences. The system throws open music-creating possibilities to every participant and is a genuinely novel way to demonstrate Human-Computer Interaction through computer-supported cooperative creation and multimodal music and visual perception.


Audio Mostly Conference | 2015

Moodplay: an interactive mood-based musical experience

Mathieu Barthet; György Fazekas; Alo Allik; Mark B. Sandler

Moodplay is a system that allows users to collectively control music and lighting effects to express desired emotions. The interaction is based on the Mood Conductor participatory performance system that uses web, data visualisation and affective computing technologies. We explore how artificial intelligence, semantic web and audio synthesis can be combined to provide new personalised and immersive musical experiences. Participants can choose degrees of energy and pleasantness to shape the music played using a web interface. Semantic Web technologies have been embedded in the system to query mood coordinates from a triple store using a SPARQL endpoint and to connect to external linked data sources for metadata.
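
Querying mood coordinates from a triple store over a SPARQL endpoint, as mentioned above, could look roughly like this sketch. The endpoint URL and the ex:valence/ex:arousal vocabulary are invented placeholders, not the actual Moodplay schema.

    # Hypothetical sketch of a SPARQL query for mood coordinates. The endpoint
    # and vocabulary are placeholders, not the real Moodplay triple store.
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("http://example.org/moodplay/sparql")  # placeholder endpoint
    sparql.setReturnFormat(JSON)
    sparql.setQuery("""
        PREFIX ex: <http://example.org/mood#>
        SELECT ?track ?valence ?arousal WHERE {
            ?track ex:valence ?valence ;
                   ex:arousal ?arousal .
        } LIMIT 10
    """)

    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["track"]["value"], row["valence"]["value"], row["arousal"]["value"])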


Computer Music Modeling and Retrieval | 2013

Novel Methods in Facilitating Audience and Performer Interaction Using the Mood Conductor Framework

György Fazekas; Mathieu Barthet; Mark B. Sandler

While listeners’ emotional response to music is the subject of numerous studies, less attention is paid to the dynamic emotion variations due to the interaction between artists and audiences in live improvised music performances. By opening a direct communication channel from audience members to performers, the Mood Conductor system provides an experimental framework to study this phenomenon. Mood Conductor facilitates interactive performances and thus also has an inherent entertainment value. The framework allows audience members to send emotional directions using their mobile devices in order to “conduct” improvised performances. Emotion coordinates indicated by the audience in the arousal-valence space are aggregated and clustered to create a video projection. This is used by the musicians as guidance, and provides visual feedback to the audience. Three different systems were developed and tested within our framework so far. These systems were trialled in several public performances with different ensembles. Qualitative and quantitative evaluations demonstrated that musicians and audiences were highly engaged with the system, and provided new insights enabling future improvements of the framework.
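
The aggregation step described above (clustering audience arousal-valence votes before projecting them) can be sketched as below. The vote coordinates and cluster count are invented, and reporting the centroid of the largest cluster is only one plausible reading of "aggregated and clustered", not the Mood Conductor implementation.

    # Hypothetical sketch: cluster audience arousal-valence votes and report the
    # centroid of the largest cluster. The votes below are invented.
    import numpy as np
    from sklearn.cluster import KMeans

    votes = np.array([          # (arousal, valence) pairs sent from audience devices
        [0.8, 0.6], [0.7, 0.5], [0.9, 0.7],   # an energetic, positive group
        [-0.4, -0.3], [-0.5, -0.2],           # a calmer, negative group
        [0.1, 0.9],
    ])

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(votes)
    largest = np.bincount(km.labels_).argmax()
    print("dominant direction (arousal, valence):", km.cluster_centers_[largest])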


Computer Music Modeling and Retrieval | 2010

Speech/music discrimination in audio podcast using structural segmentation and timbre recognition

Mathieu Barthet; Steven Hargreaves; Mark B. Sandler

We propose two speech/music discrimination methods using timbre models and measure their performance on a 3-hour database of radio podcasts from the BBC. In the first method, the machine-estimated classifications obtained with an automatic timbre recognition (ATR) model are post-processed using median filtering. The classification system (LSF/K-means) was trained using two different taxonomic levels, a high-level one (speech, music), and a lower-level one (male and female speech, classical, jazz, rock & pop). The second method combines automatic structural segmentation and timbre recognition (ASS/ATR). The ASS evaluates the similarity between feature distributions (MFCC, RMS) using HMM and soft K-means algorithms. Both methods were evaluated at the semantic (relative correct overlap, RCO) and temporal (boundary retrieval F-measure) levels. The ASS/ATR method obtained the best results (average RCO of 94.5% and boundary F-measure of 50.1%). These performances compared favourably with those obtained by an SVM-based technique providing a good benchmark of the state of the art.
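
The median-filtering post-processing used in the first method can be illustrated with a minimal sketch: a noisy frame-wise speech/music label sequence is smoothed so that isolated misclassifications are removed. The label sequence and window length are invented for the example.

    # Hypothetical sketch of median-filter post-processing on frame-wise
    # speech (0) / music (1) labels. The labels and window size are invented.
    import numpy as np
    from scipy.signal import medfilt

    frame_labels = np.array([0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1])
    smoothed = medfilt(frame_labels.astype(float), kernel_size=5)

    print("raw:     ", frame_labels)
    print("smoothed:", smoothed.astype(int))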


Proceedings of the 1st International Workshop on Digital Libraries for Musicology | 2014

Big Data for Musicology

Tillman Weyde; Stephen Cottrell; Jason Dykes; Emmanouil Benetos; Daniel Wolff; Dan Tidhar; Alexander Kachkaev; Mark D. Plumbley; Simon Dixon; Mathieu Barthet; Nicolas Gold; Samer A. Abdallah; Aquiles Alancar-Brayner; Mahendra Mahey; Adam Tovell

Digital music libraries and collections are growing quickly and are increasingly made available for research. We argue that the use of large data collections will enable a better understanding of music performance and music in general, which will benefit areas such as music search and recommendation, music archiving and indexing, music production and education. However, to achieve these goals it is necessary to develop new musicological research methods, to create and adapt the necessary technological infrastructure, and to find ways of working with legal limitations. Most of the necessary basic technologies exist, but they need to be brought together and applied to musicology. We aim to address these challenges in the Digital Music Lab project, and we feel that with suitable methods and technology Big Music Data can provide new opportunities to musicology.

Collaboration


Dive into Mathieu Barthet's collaboration.

Top Co-Authors

Mark B. Sandler, Queen Mary University of London
György Fazekas, Queen Mary University of London
Sefki Kolozali, Queen Mary University of London
Sølvi Ystad, Aix-Marseille University
Jason Dykes, City University London
Luca Turchet, Queen Mary University of London