Publication


Featured research published by György Fazekas.


Journal of New Music Research | 2010

An Overview of Semantic Web Activities in the OMRAS2 Project

György Fazekas; Yves Raimond; Kurt Jacobson; Mark B. Sandler

The use of cultural information is becoming increasingly important in music information research, especially in music retrieval and recommendation. While this information is widely available on the Web, it is most commonly published using proprietary Web Application Programming Interfaces (APIs). The Linked Data community aims to resolve the incompatibilities between these diverse data sources by building a Web of data using Semantic Web technologies. The OMRAS2 project has made several important contributions to this effort by developing an ontological framework and numerous software tools, as well as publishing music-related data on the Semantic Web. These data and tools have found use even beyond their originally intended scope. In this paper, we first provide a broad overview of the Semantic Web technologies underlying this work. We describe the Music Ontology, an open-ended framework for communicating musical information on the Web, and show how this framework can be extended to describe specific sub-domains such as music similarity, content-based audio features, musicological data and studio production. We describe several datasets that have been published and data sources that have been adapted using this framework. Finally, we provide application examples ranging from software libraries to end-user Web applications.
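
As an illustration of the kind of structured description the Music Ontology enables, the sketch below builds a small RDF graph for a track and its artist with the Python rdflib library and serialises it as Turtle. The resource URIs are hypothetical placeholders and the snippet is a minimal illustration, not code from the OMRAS2 project.

    from rdflib import Graph, Namespace, URIRef, Literal
    from rdflib.namespace import RDF, FOAF

    # Music Ontology namespace (mo) plus Dublin Core for the title.
    MO = Namespace("http://purl.org/ontology/mo/")
    DC = Namespace("http://purl.org/dc/elements/1.1/")

    g = Graph()
    g.bind("mo", MO)
    g.bind("dc", DC)
    g.bind("foaf", FOAF)

    # Hypothetical resource URIs, used purely for illustration.
    track = URIRef("http://example.org/track/1")
    artist = URIRef("http://example.org/artist/1")

    g.add((track, RDF.type, MO.Track))
    g.add((track, DC.title, Literal("Example Track")))
    g.add((artist, RDF.type, MO.MusicArtist))
    g.add((artist, FOAF.name, Literal("Example Artist")))
    g.add((track, FOAF.maker, artist))

    print(g.serialize(format="turtle"))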


Computer Music Modeling and Retrieval | 2012

Music Emotion Recognition: From Content- to Context-Based Models

Mathieu Barthet; György Fazekas; Mark B. Sandler

The striking ability of music to elicit emotions assures its prominent status in human culture and everyday life. Music is often enjoyed and sought for its ability to induce or convey emotions, which may manifest in anything from a slight variation in mood to changes in our physical condition and actions. Consequently, research on how we might associate musical pieces with emotions and, more generally, how music brings about an emotional response is attracting ever-increasing attention. First, this paper provides a thorough review of studies on the relation between music and emotions from different disciplines. We then propose new insights to enhance automated music emotion recognition models using recent results from psychology, musicology, affective computing, semantic technologies and music information retrieval.


International Conference on Multimedia and Expo | 2013

Semantic models of musical mood: Comparison between crowd-sourced and curated editorial tags

Pasi Saari; Mathieu Barthet; György Fazekas; Tuomas Eerola; Mark B. Sandler

Social media services such as Last.fm provide crowd-sourced mood tags, which are a rich but often noisy source of information. In contrast, editorial annotations from production music libraries are meant to be incisive in nature. We compare the efficiency of these two data sources in capturing semantic information on mood expressed by music. First, a semantic computing technique devised for mood-related tags in large datasets is applied to Last.fm and I Like Music (ILM) corpora separately (250,000 tracks each). The resulting semantic estimates are then correlated with listener ratings of arousal, valence and tension. High correlations (Spearman's rho) are found between the track positions in the dimensional mood spaces and the listener ratings using both data sources (0.60 < rs < 0.70). In addition, the use of curated editorial data provides a statistically significant improvement over crowd-sourced data for predicting moods perceived in music.
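
A minimal sketch of the correlation step described above, using scipy's spearmanr on made-up toy values rather than the Last.fm or ILM data:

    import numpy as np
    from scipy.stats import spearmanr

    # Toy example: positions of 8 tracks on a semantic valence axis and the
    # corresponding listener valence ratings (synthetic values for illustration).
    semantic_valence = np.array([0.12, 0.85, 0.40, 0.66, 0.20, 0.93, 0.55, 0.30])
    listener_valence = np.array([1.8, 6.9, 3.5, 5.2, 2.4, 7.1, 4.8, 3.0])

    rho, p_value = spearmanr(semantic_valence, listener_valence)
    print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f})")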


IEEE Transactions on Affective Computing | 2016

Genre-Adaptive Semantic Computing and Audio-Based Modelling for Music Mood Annotation

Pasi Saari; György Fazekas; Tuomas Eerola; Mathieu Barthet; Olivier Lartillot; Mark B. Sandler

This study investigates whether taking genre into account is beneficial for automatic music mood annotation in terms of the core affects valence, arousal and tension, as well as several other mood scales. Novel techniques employing genre-adaptive semantic computing and audio-based modelling are proposed. A technique called ACTwg employs genre-adaptive semantic computing of mood-related social tags, whereas ACTwg-SLPwg combines semantic computing and audio-based modelling, both in a genre-adaptive manner. The proposed techniques are experimentally evaluated in predicting listener ratings for a set of 600 popular music tracks spanning multiple genres. The results show that ACTwg outperforms a semantic computing technique that does not exploit genre information, and that ACTwg-SLPwg outperforms conventional techniques and other genre-adaptive alternatives. In particular, improvements in the prediction rates are obtained for the valence dimension, which is typically the most challenging core affect dimension for audio-based annotation. The specificity of the genre categories is not crucial for the performance of ACTwg-SLPwg. The study also presents analytical insights into inferring a concise tag-based genre representation for genre-adaptive music mood analysis.
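
The genre-adaptive idea, conditioning mood prediction on genre instead of using one global model, can be illustrated with a deliberately generic sketch: train one regressor per genre and blend their outputs by the track's genre probabilities. The code below uses scikit-learn on synthetic data and is not an implementation of ACTwg or SLPwg.

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    genres = ["rock", "jazz", "electronic"]

    # Synthetic training data: toy audio features and valence ratings per genre.
    models = {}
    for genre in genres:
        X = rng.normal(size=(100, 8))                      # 8 toy audio features
        y = X[:, 0] * 0.5 + rng.normal(scale=0.1, size=100)
        models[genre] = Ridge(alpha=1.0).fit(X, y)

    def predict_valence(features, genre_probs):
        """Blend per-genre predictions, weighted by the track's genre probabilities."""
        return sum(p * models[genre].predict(features[None, :])[0]
                   for genre, p in genre_probs.items())

    track_features = rng.normal(size=8)
    print(predict_valence(track_features, {"rock": 0.6, "jazz": 0.1, "electronic": 0.3}))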


IEEE International Conference on Semantic Computing | 2016

The Mobile Audio Ontology: Experiencing Dynamic Music Objects on Mobile Devices

Florian Thalmann; Alfonso Pérez Carrillo; György Fazekas; Geraint A. Wiggins; Mark B. Sandler

This paper presents the Mobile Audio Ontology, a semantic audio framework for the design of novel music consumption experiences on mobile devices. The framework is based on the concept of the Dynamic Music Object, which is an amalgamation of audio files, structural and analytical information extracted from the audio, and information about how it should be rendered in real time. The Mobile Audio Ontology allows producers and distributors to specify a great variety of ways of playing back music in controlled indeterministic as well as adaptive and interactive ways. Users can map mobile sensor data, user interface controls, or autonomous control units hidden from the listener to any musical parameter exposed in the definition of a Dynamic Music Object. These mappings can also be made dependent on semantic and analytical information extracted from the audio.
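
The mapping idea, sensor or interface values driving a musical parameter exposed by a Dynamic Music Object, can be sketched outside the ontology in plain Python. The class and parameter names below are invented for illustration and are not taken from the Mobile Audio Ontology.

    # Hypothetical dynamic music object exposing a couple of mappable parameters.
    class DynamicMusicObject:
        def __init__(self):
            self.parameters = {"tempo_scale": 1.0, "reverb_mix": 0.2}

        def set_parameter(self, name, value):
            self.parameters[name] = value

    def make_mapping(dmo, parameter, low, high):
        """Return a callback that maps a normalised 0..1 sensor value onto [low, high]."""
        def on_sensor(value):
            dmo.set_parameter(parameter, low + (high - low) * value)
        return on_sensor

    dmo = DynamicMusicObject()
    on_accelerometer = make_mapping(dmo, "tempo_scale", 0.8, 1.2)
    on_accelerometer(0.75)                   # e.g. a tilt reading of 0.75
    print(dmo.parameters["tempo_scale"])     # -> 1.1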


IEEE Transactions on Audio, Speech, and Language Processing | 2013

Automatic Ontology Generation for Musical Instruments Based on Audio Analysis

Sefki Kolozali; Mathieu Barthet; György Fazekas; Mark B. Sandler

In this paper we present a novel hybrid system that involves a formal method of automatic ontology generation for web-based audio signal processing applications. An ontology is seen as a knowledge management structure that represents domain knowledge in a machine-interpretable format. It describes concepts and relationships within a particular domain, in our case the domain of musical instruments. However, the different tasks of ontology engineering, including manual annotation, hierarchical structuring and organization of data, can be laborious and challenging. For these reasons, we investigate how the process of creating ontologies can be made less dependent on human supervision by exploring concept analysis techniques in a Semantic Web environment. In this study, various musical instruments, from wind to string families, are classified using timbre features extracted from audio. To obtain models of the analysed instrument recordings, we use K-means clustering to determine an optimised codebook of Line Spectral Frequencies (LSFs) or Mel-Frequency Cepstral Coefficients (MFCCs). Two classification techniques, based on Multi-Layer Perceptron (MLP) neural networks and Support Vector Machines (SVM), were tested. Then, Formal Concept Analysis (FCA) is used to automatically build the hierarchical structure of musical instrument ontologies. Finally, the generated ontologies are expressed using the Web Ontology Language (OWL). System performance was evaluated under natural recording conditions using databases of isolated notes and melodic phrases. Analyses of variance (ANOVA) were conducted with the feature and classifier attributes as independent variables and the musical instrument recognition F-measure as the dependent variable. Based on these statistical analyses, a detailed comparison between musical instrument recognition models is made to investigate their effects on the automatic ontology generation system. The proposed system is general and also applicable to other research fields related to ontologies and the Semantic Web.
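
To make the feature-modelling step concrete, the sketch below computes frame-level MFCCs with librosa and fits a K-means codebook with scikit-learn. The input signal is a synthetic tone and the codebook size is an arbitrary placeholder, not the settings used in the study.

    import numpy as np
    import librosa
    from sklearn.cluster import KMeans

    # Synthetic stand-in for an instrument recording: a 440 Hz tone with vibrato.
    sr = 22050
    t = np.linspace(0, 2.0, int(2.0 * sr), endpoint=False)
    y = 0.5 * np.sin(2 * np.pi * 440 * t + 2 * np.sin(2 * np.pi * 5 * t))

    # Frame-level MFCCs (13 coefficients per frame), transposed to frames x coefficients.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T

    # K-means codebook summarising the timbre of the recording.
    codebook = KMeans(n_clusters=8, n_init=10, random_state=0).fit(mfcc)
    print(codebook.cluster_centers_.shape)   # -> (8, 13)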


International Conference on Acoustics, Speech, and Signal Processing | 2016

Hybrid music recommender using content-based and social information

Paulo Chiliguano; György Fazekas

A vast number of Internet resources are available today, including songs, albums, playlists and podcasts, which a user cannot discover without a tool that filters the items they might consider relevant. Several recommendation techniques have been developed since the Internet explosion to achieve this filtering task. In an attempt to recommend relevant songs to users, we propose a hybrid recommender that considers real-world user information and a high-level representation of audio data. We use a deep learning technique, convolutional deep neural networks, to represent an audio segment as an n-dimensional vector whose dimensions give the probability that the segment belongs to a specific music genre. To capture the listening behavior of a user, we investigate a state-of-the-art technique, estimation of distribution algorithms. The designed hybrid music recommender outperforms a traditional content-based recommender in prediction accuracy.
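
The matching step of such a recommender can be sketched with plain numpy: rank items by the similarity between a user's genre-probability profile and each item's genre vector. The five-genre vectors below are made up, and the CNN and estimation-of-distribution components of the paper are not reproduced.

    import numpy as np

    # Hypothetical 5-genre probability vectors produced by some genre classifier.
    items = {
        "track_a": np.array([0.70, 0.10, 0.10, 0.05, 0.05]),
        "track_b": np.array([0.05, 0.60, 0.20, 0.10, 0.05]),
        "track_c": np.array([0.10, 0.15, 0.10, 0.50, 0.15]),
    }

    # A user profile in the same genre space, e.g. averaged over listened tracks.
    user_profile = np.array([0.60, 0.20, 0.10, 0.05, 0.05])

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Rank candidate items by similarity to the user's profile.
    ranking = sorted(items, key=lambda k: cosine(user_profile, items[k]), reverse=True)
    print(ranking)   # -> ['track_a', 'track_b', 'track_c']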


Audio Mostly Conference | 2015

Moodplay: an interactive mood-based musical experience

Mathieu Barthet; György Fazekas; Alo Allik; Mark B. Sandler

Moodplay is a system that allows users to collectively control music and lighting effects to express desired emotions. The interaction is based on the Mood Conductor participatory performance system, which uses web, data visualisation and affective computing technologies. We explore how artificial intelligence, Semantic Web and audio synthesis technologies can be combined to provide new personalised and immersive musical experiences. Participants can choose degrees of energy and pleasantness to shape the music played using a web interface. Semantic Web technologies have been embedded in the system to query mood coordinates from a triple store using a SPARQL endpoint and to connect to external linked data sources for metadata.
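
The triple-store query described above can be sketched with rdflib; the graph below is an in-memory stand-in and the ex:valence / ex:arousal vocabulary is hypothetical, not the actual Moodplay schema. The same SELECT query could be sent to a remote SPARQL endpoint instead.

    from rdflib import Graph, Namespace, Literal, URIRef

    # In-memory stand-in for the triple store, with a hypothetical mood vocabulary.
    EX = Namespace("http://example.org/mood#")
    g = Graph()
    for i, (valence, arousal) in enumerate([(0.8, 0.7), (0.2, 0.9), (0.6, 0.6)]):
        track = URIRef(f"http://example.org/track/{i}")
        g.add((track, EX.valence, Literal(valence)))
        g.add((track, EX.arousal, Literal(arousal)))

    # Select tracks in the high-valence, high-arousal region of the mood space.
    query = """
        PREFIX ex: <http://example.org/mood#>
        SELECT ?track ?valence ?arousal WHERE {
            ?track ex:valence ?valence ;
                   ex:arousal ?arousal .
            FILTER(?valence > 0.5 && ?arousal > 0.5)
        }
    """
    for track, valence, arousal in g.query(query):
        print(track, valence, arousal)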


Computer Music Modeling and Retrieval | 2013

Novel Methods in Facilitating Audience and Performer Interaction Using the Mood Conductor Framework

György Fazekas; Mathieu Barthet; Mark B. Sandler

While listeners’ emotional response to music is the subject of numerous studies, less attention has been paid to the dynamic emotion variations due to the interaction between artists and audiences in live improvised music performances. By opening a direct communication channel from audience members to performers, the Mood Conductor system provides an experimental framework to study this phenomenon. Mood Conductor facilitates interactive performances and thus also has an inherent entertainment value. The framework allows audience members to send emotional directions using their mobile devices in order to “conduct” improvised performances. Emotion coordinates indicated by the audience in the arousal-valence space are aggregated and clustered to create a video projection. This is used by the musicians as guidance, and provides visual feedback to the audience. Three different systems have been developed and tested within our framework so far. These systems were trialled in several public performances with different ensembles. Qualitative and quantitative evaluations demonstrated that musicians and audiences were highly engaged with the system, and yielded new insights enabling future improvements of the framework.
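
The aggregation step, grouping the audience's arousal-valence indications so that a dominant direction can be projected, can be illustrated with a short clustering sketch. The points are synthetic and the choice of K-means here is illustrative rather than a description of the deployed systems.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)

    # Synthetic audience input: (valence, arousal) points in [-1, 1]^2,
    # forming two rough groups of intentions sent from mobile devices.
    points = np.vstack([
        rng.normal(loc=[0.6, 0.7], scale=0.1, size=(40, 2)),
        rng.normal(loc=[-0.4, -0.2], scale=0.1, size=(25, 2)),
    ])

    # Cluster the indications; the most populated centroid could drive the projection.
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
    dominant = km.cluster_centers_[np.argmax(np.bincount(km.labels_))]
    print("dominant (valence, arousal):", np.round(dominant, 2))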


Metadata and Semantics Research | 2016

MusicWeb: Music discovery with open linked semantic metadata

Mariano Mora-Mcginity; Alo Allik; György Fazekas; Mark B. Sandler

This paper presents MusicWeb, a novel platform for music discovery that links music artists within a web-based application. MusicWeb provides a browsing experience using connections that are either extra-musical or tangential to music, such as the artists’ political affiliation or social influence, or intra-musical, such as the artists’ main instrument or most favoured musical key. The platform integrates open linked semantic metadata from various Semantic Web, music recommendation and social media data sources. Artists are linked by various commonalities such as style, geographical location, instrumentation and record label, as well as by more obscure categories, for instance artists who have received the same award, shared the same fate, or belonged to the same organisation. These connections are further enhanced by thematic analysis of journal articles and blog posts, and by content-based similarity measures focussing on high-level musical categories.
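
One of the link types mentioned above, artists who share a record label, can be sketched as a SPARQL query against a public linked-data source. The example below uses the DBpedia endpoint with SPARQLWrapper; the endpoint, properties and seed artist are illustrative and are not necessarily the sources or schema used by MusicWeb.

    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setReturnFormat(JSON)
    sparql.setQuery("""
        PREFIX dbo: <http://dbpedia.org/ontology/>
        PREFIX dbr: <http://dbpedia.org/resource/>
        SELECT DISTINCT ?other WHERE {
            dbr:Radiohead dbo:recordLabel ?label .
            ?other dbo:recordLabel ?label ;
                   a dbo:Band .
            FILTER(?other != dbr:Radiohead)
        }
        LIMIT 10
    """)

    # Bands sharing at least one record label with the seed artist.
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["other"]["value"])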

Collaboration


Dive into György Fazekas's collaborations.

Top Co-Authors

Mark B. Sandler (Queen Mary University of London)
Mathieu Barthet (Queen Mary University of London)
Thomas Wilmering (Queen Mary University of London)
Alo Allik (Queen Mary University of London)
Simon Dixon (Queen Mary University of London)
Beici Liang (Queen Mary University of London)
Florian Thalmann (Queen Mary University of London)
Sefki Kolozali (Queen Mary University of London)
Brecht De Man (Queen Mary University of London)
Dawn A. A. Black (Queen Mary University of London)