
Publication


Featured research published by Marc Leman.


Proceedings of the IEEE | 2008

Content-Based Music Information Retrieval: Current Directions and Future Challenges

Michael A. Casey; Remco C. Veltkamp; Masataka Goto; Marc Leman; Christophe Rhodes; Malcolm Slaney

The steep rise in music downloading over CD sales has created a major shift in the music industry away from physical media formats and towards online products and services. Music is one of the most popular types of online information and there are now hundreds of music streaming and download services operating on the World-Wide Web. Some of the music collections available are approaching the scale of ten million tracks and this has posed a major challenge for searching, retrieving, and organizing music content. Research efforts in music information retrieval have involved experts from music perception, cognition, musicology, engineering, and computer science engaged in truly interdisciplinary activity that has resulted in many proposed algorithmic and methodological solutions to music search using content-based methods. This paper outlines the problems of content-based music information retrieval and explores the state-of-the-art methods using audio cues (e.g., query by humming, audio fingerprinting, content-based music retrieval) and other cues (e.g., music notation and symbolic representation), and identifies some of the major challenges for the coming years.
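As a loose illustration of the content-based idea described above (not any specific system from the survey), the sketch below folds a magnitude spectrum into a 12-bin chroma vector and ranks a toy collection by cosine similarity to a query. The function names, FFT size, and sample rate are illustrative assumptions:

```python
import numpy as np

def chroma_profile(signal, sr=8000, n_fft=2048):
    """Fold the magnitude spectrum into 12 pitch classes (a crude chroma vector)."""
    spectrum = np.abs(np.fft.rfft(signal, n_fft))
    freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)
    chroma = np.zeros(12)
    for f, mag in zip(freqs[1:], spectrum[1:]):   # skip the DC bin
        pitch_class = int(round(12 * np.log2(f / 440.0))) % 12
        chroma[pitch_class] += mag
    return chroma / (np.linalg.norm(chroma) + 1e-12)

def rank_by_similarity(query, collection):
    """Return collection indices sorted by cosine similarity to the query."""
    q = chroma_profile(query)
    sims = [float(np.dot(q, chroma_profile(track))) for track in collection]
    return sorted(range(len(collection)), key=lambda i: sims[i], reverse=True)

sr = 8000
t = np.arange(sr) / sr
tone = lambda f: np.sin(2 * np.pi * f * t)
query = tone(440.0)                         # A4
collection = [tone(660.0), tone(440.0)]     # E5, A4
print(rank_by_similarity(query, collection))  # the matching A4 track ranks first: [1, 0]
```

Real systems of the kind surveyed operate on far richer features (fingerprints, melodic contours, symbolic data), but the pipeline shape, extract a compact descriptor and rank by similarity, is the same.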


IEEE MultiMedia | 2005

Communicating expressiveness and affect in multimodal interactive systems

Antonio Camurri; Gualtiero Volpe; G. De Poli; Marc Leman

Multisensory integrated expressive environments is a framework for mixed reality applications in the performing arts such as interactive dance, music, or video installations. MIEEs address the expressive aspects of nonverbal human communication. We present the multilayer conceptual framework of MIEEs, algorithms for expressive content analysis and processing, and MIEEs-based art applications.


Music Perception: An Interdisciplinary Journal | 2000

An Auditory Model of the Role of Short-Term Memory in Probe-Tone Ratings

Marc Leman

Auditory modeling is used to investigate the role of short-term memory in probe-tone experiments. A framework for auditory modeling is first defined, based on a distinction between auditory images, processes, and stimulus-driven inferences. Experiments I and II of the probe-tone experiments described by C. Krumhansl and E. Kessler (1982) are simulated. The results show that a short-term memory model, working on echoic images of periodicity pitch, may account for the probe-tone ratings. The simulations challenge the claim that probe-tone experiments provide evidence that listeners familiar with Western music have abstracted tonal hierarchies in a long-term memory.
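The short-term-memory idea can be caricatured as a leaky integrator over feature frames: each new frame is added to an exponentially decaying trace, and a probe is rated by its similarity to that trace. This is a rough illustration of echoic memory, not Leman's actual model; the 12-dimensional pitch-class "images" and the half-life value are invented for the example:

```python
import numpy as np

def echoic_trace(images, half_life_frames=10.0):
    """Leaky integration of a sequence of feature frames:
    each new frame is added to an exponentially decaying trace."""
    decay = 0.5 ** (1.0 / half_life_frames)
    trace = np.zeros_like(images[0], dtype=float)
    for frame in images:
        trace = decay * trace + frame
    return trace

def probe_rating(context_frames, probe_frame):
    """Similarity of the probe to the short-term-memory trace of the context."""
    trace = echoic_trace(context_frames)
    return float(np.dot(trace, probe_frame) /
                 (np.linalg.norm(trace) * np.linalg.norm(probe_frame) + 1e-12))

# Toy 12-dimensional pitch-class "images": a C major context.
def pc(i):
    v = np.zeros(12)
    v[i] = 1.0
    return v

context = [pc(0), pc(4), pc(7), pc(0)]   # C E G C
print(probe_rating(context, pc(0)) > probe_rating(context, pc(1)))  # True
```

The point of the abstract is precisely that a decaying trace like this, with no long-term tonal hierarchy stored anywhere, can already reproduce probe-tone rating patterns.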


Frontiers in Psychology | 2014

Action-based effects on music perception

Pieter-Jan Maes; Marc Leman; Caroline Palmer; Marcelo M. Wanderley

The classical, disembodied approach to music cognition conceptualizes action and perception as separate, peripheral processes. In contrast, embodied accounts of music cognition emphasize the central role of the close coupling of action and perception. It is a commonly established fact that perception spurs action tendencies. We present a theoretical framework that captures the ways in which the human motor system and its actions can reciprocally influence the perception of music. The cornerstone of this framework is the common coding theory, postulating a representational overlap in the brain between the planning, the execution, and the perception of movement. The integration of action and perception in so-called internal models is explained as a result of associative learning processes. Characteristic of internal models is that they allow intended or perceived sensory states to be transferred into corresponding motor commands (inverse modeling), and vice versa, to predict the sensory outcomes of planned actions (forward modeling). Embodied accounts typically refer to inverse modeling to explain action effects on music perception (Leman, 2007). We extend this account by pinpointing forward modeling as an alternative mechanism by which action can modulate perception. We provide an extensive overview of recent empirical evidence in support of this idea. Additionally, we demonstrate that motor dysfunctions can cause perceptual disabilities, supporting the main idea of the paper that the human motor system plays a functional role in auditory perception. The finding that music perception is shaped by the human motor system and its actions suggests that the musical mind is highly embodied. However, we advocate for a more radical approach to embodied (music) cognition in the sense that it needs to be considered as a dynamical process, in which aspects of action, perception, introspection, and social interaction are of crucial importance.


PLOS ONE | 2013

Activating and relaxing music entrains the speed of beat synchronized walking

Marc Leman; Dirk Moelants; Matthias Varewyck; Frederik Styns; Leon van Noorden; Jean-Pierre Martens

Inspired by a theory of embodied music cognition, we investigate whether music can entrain the speed of beat synchronized walking. If human walking is in synchrony with the beat and all musical stimuli have the same duration and the same tempo, then differences in walking speed can only be the result of music-induced differences in stride length, thus reflecting the vigor or physical strength of the movement. Participants walked in an open field in synchrony with the beat of 52 different musical stimuli all having a tempo of 130 beats per minute and a meter of 4 beats. The walking speed was measured as the walked distance during a time interval of 30 seconds. The results reveal that some music is ‘activating’ in the sense that it increases the speed, and some music is ‘relaxing’ in the sense that it decreases the speed, compared to the spontaneous walked speed in response to metronome stimuli. Participants are consistent in their observation of qualitative differences between the relaxing and activating musical stimuli. Using regression analysis, it was possible to set up a predictive model using only four sonic features that explain 60% of the variance. The sonic features capture variation in loudness and pitch patterns at periods of three, four and six beats, suggesting that expressive patterns in music are responsible for the effect. The mechanism may be attributed to an attentional shift, a subliminal audio-motor entrainment mechanism, or an arousal effect, but further study is needed to figure this out. Overall, the study supports the hypothesis that recurrent patterns of fluctuation affecting the binary meter strength of the music may entrain the vigor of the movement. The study opens up new perspectives for understanding the relationship between entrainment and expressiveness, with the possibility to develop applications that can be used in domains such as sports and physical rehabilitation.
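The regression step can be illustrated with ordinary least squares on synthetic data; the feature values and weights below are invented stand-ins for the paper's four sonic features, not the published data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the experiment: 52 stimuli, 4 sonic features each
# (e.g., loudness/pitch pattern strengths at different beat periods -- hypothetical values).
n_stimuli, n_features = 52, 4
X = rng.normal(size=(n_stimuli, n_features))
true_w = np.array([0.8, -0.5, 0.3, 0.2])
speed = X @ true_w + rng.normal(scale=0.6, size=n_stimuli)  # walking-speed deviation

# Ordinary least squares with an intercept term.
A = np.column_stack([np.ones(n_stimuli), X])
coef, *_ = np.linalg.lstsq(A, speed, rcond=None)

pred = A @ coef
r2 = 1.0 - np.sum((speed - pred) ** 2) / np.sum((speed - speed.mean()) ** 2)
print(f"R^2 = {r2:.2f}")
```

With only a handful of predictors, the explained-variance figure (R²) plays the same role as the 60% reported in the study: it quantifies how much of the speed variation the sonic features capture.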


Musical gestures : sound, movement, and meaning | 2010

Musical gestures: Concepts and methods in research

Alexander Refsum Jensenius; Marcelo M. Wanderley; Rolf Inge Godøy; Marc Leman

In the last decade, cognitive science underwent a change of paradigm by bringing human movement into the focus of research. Concepts such as ‘embodiment’ and ‘enactive’ have been proposed as core concepts reflecting the role of the human body in complex processes such as action and perception, and the interaction of mind and physical environment (Varela et al., 1991; Noë, 2004). In music research, human movement has often been related to the notion of gesture. The reason is that many musical activities (performance, conducting, dancing) involve body movements that evoke meanings, and therefore these movements are called gestures. In Camurri et al. (2005), musical gestures are addressed from the viewpoint of their expressive character. However, there are many ways in which music-related body movements can be approached, measured, described and applied. Accordingly, there are many ways in which musical gestures are meaningful. Given the different contexts in which gestures appear, and their close relationship to movement and meaning, one may be tempted to say that the notion of gesture is too broad, ill-defined and perhaps too vague. Yet the use of this notion is very convenient in modern music research, because it builds a bridge between movement and meaning. A closer look at the term gesture reveals its potential as a core notion that provides access to central issues in action/perception processes and in mind/environment interactions.


Journal of New Music Research | 2005

Prediction of Musical Affect Using a Combination of Acoustic Structural Cues

Marc Leman; Valery Vermeulen; Liesbeth De Voogdt; Dirk Moelants; Micheline Lesaffre

This study explores whether musical affect attribution can be predicted by a linear combination of acoustical structural cues. To that aim, a database of sixty musical audio excerpts was compiled and analyzed at three levels: judgments of affective content by subjects; judgments of structural content by musicological experts (i.e., “manual structural cues”), and extraction of structural content by an auditory-based computer algorithm (called: acoustical structural cues). In Study I, an affect space was constructed with Valence (gay-sad), Activity (tender-bold) and Interest (exciting-boring) as the main dimensions, using the responses of a hundred subjects. In Study II manual and acoustical structural cues were analyzed and compared. Manual structural cues such as loudness and articulation could be accounted for in terms of a combination of acoustical structural cues. In Study III, the subjective responses of eight individual subjects were analyzed using the affect space obtained in Study I, and modeled in terms of the structural cues obtained in Study II, using linear regression modeling. This worked better for the Activity dimension than for the Valence dimension, while the Interest dimension could not be accounted for. Overall, manual structural cues worked better than acoustical structural cues. In a final assessment study, a selected set of acoustical structural cues was used for building prediction models. The results indicate that musical affect attribution can partly be predicted using a combination of acoustical structural cues. Future research may focus on non-linear approaches, elaboration of dataset and subjects, and refinement of acoustical structural cue extraction.


Signal Processing | 2010

Access to ethnic music: Advances and perspectives in content-based music information retrieval

Olmo Cornelis; Micheline Lesaffre; Dirk Moelants; Marc Leman

Access to digital music collections is nowadays facilitated by content-based methods that allow the retrieval of music on the basis of intrinsic properties of audio, in addition to advanced metadata processing. However, access to ethnic music remains problematic, as this music does not always correspond to the Western concepts that underlie the currently available content-based methods. In this paper, we examine the literature on access to ethnic music, while focusing on the reasons why the existing techniques fail or fall short of expectations and what can be done about it. The paper considers a review of the work on signals and feature extraction, on symbolic and semantic information processing, and on metadata and context tools. An overview is given of several European ethnic music archives and related ongoing research projects. Problems are highlighted and suggestions of the ways in which to improve access to ethnic music collections are given.


Journal of New Music Research | 1994

Schema-based tone center recognition of musical signals

Marc Leman

This paper presents a model of schema-based tone center recognition of musical signals. A distinction is made between passive and active schema-based recognition. Passive recognition assumes the use of a schema as a template, that is, a pattern with which the new information is correlated. Active recognition assumes an active role of the schema itself. The model is part of a theory of musical morphology whose aim is to provide an operational account of music cognition in terms of physiological acoustics (psychoacoustics) and self-organization theory (dynamic systems theory). The model is an example of nonsymbolic research in music imagination and has applications for music analysis as well as interactive music making.
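Passive, template-based recognition of the kind the abstract distinguishes can be sketched with the well-known Krumhansl-Kessler probe-tone profiles: correlate the input pitch-class distribution against the template rotated to each of the 12 keys. This illustrates the template idea only (major keys, symbolic input), not the paper's auditory-signal model:

```python
import numpy as np

# Krumhansl-Kessler major-key probe-tone profile (given here for C major),
# a standard template in key-finding work.
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])

def tone_center(pc_distribution):
    """Passive schema recognition: correlate the input pitch-class
    distribution with the template rotated to each of the 12 keys."""
    scores = [np.corrcoef(pc_distribution, np.roll(MAJOR, k))[0, 1]
              for k in range(12)]
    return int(np.argmax(scores))

# Toy input: energy on the C major triad (C, E, G).
dist = np.zeros(12)
dist[[0, 4, 7]] = 1.0
print(tone_center(dist))  # 0 -> C
```

Active recognition, in the paper's sense, would let the schema itself evolve with the input rather than remain a fixed template, which a correlation sketch like this cannot capture.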


Journal of the Acoustical Society of America | 2006

Model-based sound synthesis of the guqin

Henri Penttinen; Jyri Pakarinen; Vesa Välimäki; Mikael Laurson; Henbing Li; Marc Leman

This paper presents a model-based sound synthesis algorithm for the Chinese plucked string instrument called the guqin. The instrument is fretless, which enables smooth pitch glides from one note to another. A version of the digital waveguide synthesis approach is used, where the string length is time-varying and its energy is scaled properly. A body model filter is placed in cascade with the string model. Flageolet tones are synthesized with the so-called ripple filter structure, which is an FIR comb filter in the delay line of a digital waveguide model. In addition, signal analysis of recorded guqin tones is presented. Friction noise produced by gliding the finger across the soundboard has a harmonic structure and is proportional to the gliding speed. For pressed tones, one end of a vibrating string is terminated either by the nail of the thumb or a fingertip. The tones terminated with a fingertip decay faster than those terminated with a thumb. Guqin tones are slightly inharmonic and they exhibit phantom partials. The synthesis model takes into account these characteristic features of the instrument and is able to reproduce them. The synthesis model will be used for rule-based synthesis of guqin music.
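The digital-waveguide principle behind such a model can be illustrated with the classic Karplus-Strong algorithm: a delay line initialized with noise and fed back through a lowpass averager. This is the basic fixed-length plucked-string waveguide only; the guqin model's time-varying string length, body filter, and ripple filter are omitted:

```python
import numpy as np

def karplus_strong(freq, sr=44100, dur=1.0, damping=0.996):
    """Simplified plucked-string waveguide (Karplus-Strong): a noise-filled
    delay line recirculated through a two-point lowpass averager."""
    n = int(sr * dur)
    delay = int(sr / freq)                  # delay-line length sets the pitch
    line = np.random.default_rng(0).uniform(-1, 1, delay)
    out = np.empty(n)
    for i in range(n):
        out[i] = line[i % delay]
        # two-point average = gentle lowpass; damping controls the decay time
        line[i % delay] = damping * 0.5 * (line[i % delay] + line[(i + 1) % delay])
    return out

tone = karplus_strong(220.0)   # roughly A3; write to a WAV file to listen
print(len(tone))               # 44100 samples at sr=44100, dur=1.0
```

A pitch glide of the kind the guqin requires would be obtained by varying `delay` over time (with interpolated, fractional delay and energy scaling), which is exactly the extension the paper describes.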
