
Publication


Featured research published by Roberto Bresin.


Musicae Scientiae | 2001

Toward a Computational Model of Expression in Music Performance: The GERM Model

Patrik N. Juslin; Anders Friberg; Roberto Bresin

This article presents a computational model of expression in music performance: the GERM model. The purpose of the GERM model is to (a) describe the principal sources of variability in music performance, (b) emphasize the need to integrate different aspects of performance in a common model, and (c) provide some preliminaries (germ = a basis from which a thing may develop) for a computational model that simulates the different aspects. Drawing on previous research on performance, we propose that performance expression derives from four main sources of variability: (1) Generative Rules, which function to convey the generative structure in a musical manner (e.g., Clarke, 1988; Sundberg, 1988); (2) Emotional Expression, which is governed by the performer's expressive intention (e.g., Juslin, 1997a); (3) Random Variations, which reflect internal timekeeper variance and motor delay variance (e.g., Gilden, 2001; Wing and Kristofferson, 1973); and (4) Movement Principles, which prescribe that certain features of the performance are shaped in accordance with biological motion (e.g., Shove and Repp, 1995). A preliminary version of the GERM model was implemented by means of computer synthesis. Synthesized performances were evaluated by musically trained participants in a listening test. The results from the test support a decomposition of expression in terms of the GERM model. Implications for future research on music performance are discussed.
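The four-component decomposition described in the abstract can be sketched as additive sources of per-note timing deviation. This is an illustrative toy, not the GERM model itself: the component values and the Gaussian noise parameter are invented placeholders, and the actual model's generative rules are far richer.

```python
import random

def germ_deviation(generative, emotional, movement, jitter_sd=5.0, seed=None):
    """Sum four GERM-style components (in ms) for one note.

    generative: deviation from structure-conveying Generative Rules
    emotional:  deviation from the performer's Emotional Expression
    movement:   deviation shaped by Movement Principles
    jitter_sd:  std. dev. of the Random Variations (timekeeper/motor noise)
    """
    rng = random.Random(seed)
    noise = rng.gauss(0.0, jitter_sd)  # Random Variations component
    return generative + emotional + movement + noise

# One note: +10 ms phrase-final lengthening, +5 ms expressive intention,
# -2 ms motion shaping, plus reproducible random noise.
print(round(germ_deviation(10.0, 5.0, -2.0, seed=1), 1))
```

With `jitter_sd=0.0` the deterministic components simply add, which mirrors the additive decomposition the listening test supports.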


IEEE MultiMedia | 2003

Sounding objects

Davide Rocchesso; Roberto Bresin; Mikael Fernström

Interactive systems, virtual environments, and information display applications need dynamic sound models rather than faithful audio reproductions. This implies three levels of research: auditory perception, physics-based sound modeling, and expressive parametric control. Parallel progress along these three lines leads to effective auditory displays that can complement or substitute visual displays. This article aims to shed some light on how psychologists, computer scientists, acousticians, and engineers can work together and address these and other questions arising in sound design for interactive multimedia systems.


PLOS ONE | 2013

A systematic review of mapping strategies for the sonification of physical quantities.

Gaël Dubus; Roberto Bresin

The field of sonification has progressed greatly over the past twenty years and currently constitutes an established area of research. This article aims at exploiting and organizing the knowledge accumulated in previous experimental studies to build a foundation for future sonification works. A systematic review of these studies may reveal trends in sonification design, and therefore support the development of design guidelines. To this end, we have reviewed and analyzed 179 scientific publications related to sonification of physical quantities. Using a bottom-up approach, we set up a list of conceptual dimensions belonging to both physical and auditory domains. Mappings used in the reviewed works were identified, forming a database of 495 entries. Frequency of use was analyzed among these conceptual dimensions as well as higher-level categories. Results confirm two hypotheses formulated in a preliminary study: pitch is by far the most used auditory dimension in sonification applications, and spatial auditory dimensions are almost exclusively used to sonify kinematic quantities. To detect successful as well as unsuccessful sonification strategies, assessment of mapping efficiency conducted in the reviewed works was considered. Results show that a proper evaluation of sonification mappings is performed only in a marginal proportion of publications. Additional aspects of the publication database were investigated: historical distribution of sonification works is presented, projects are classified according to their primary function, and the sonic material used in the auditory display is discussed. Finally, a mapping-based approach for characterizing sonification is proposed.
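The review finds pitch to be by far the most used auditory dimension. A common mapping of the kind catalogued there can be sketched as a linear scaling of a physical quantity onto a pitch range, here expressed as MIDI note numbers and converted to frequency with standard equal temperament. The value ranges below are arbitrary examples, not taken from the reviewed works.

```python
def sonify_to_pitch(value, v_min, v_max, midi_lo=48, midi_hi=84):
    """Linearly map a value in [v_min, v_max] to a MIDI note number."""
    t = (value - v_min) / (v_max - v_min)
    return midi_lo + t * (midi_hi - midi_lo)

def midi_to_hz(midi):
    """Equal-temperament conversion, A4 (MIDI 69) = 440 Hz."""
    return 440.0 * 2 ** ((midi - 69) / 12)

# Sonify a temperature of 25 degrees on a 0-50 degree scale:
note = sonify_to_pitch(25.0, 0.0, 50.0)   # midpoint of the range -> MIDI 66
print(round(midi_to_hz(note), 1))          # 370.0 (F#4)
```

The exponential MIDI-to-Hz step matters: mapping the quantity linearly in pitch (note number) rather than in raw frequency matches how pitch intervals are perceived.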


Journal of New Music Research | 1998

Artificial neural networks based models for automatic performance of musical scores

Roberto Bresin

This article briefly summarises the author's research on automatic performance, started at CSC (Centro di Sonologia Computazionale, University of Padua) and continued at TMH-KTH (Speech, Music Hear ...


Journal of New Music Research | 1998

Musical punctuation on the microlevel: Automatic identification and performance of small melodic units*

Anders Friberg; Roberto Bresin; Lars Frydén; Johan Sundberg

In this investigation we use the term musical punctuation for the marking of melodic structure by commas inserted at the boundaries that separate small structural units. Two models are presented th ...


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2009

Sound design and perception in walking interactions

Yon Visell; Federico Fontana; Bruno L. Giordano; Rolf Nordahl; Stefania Serafin; Roberto Bresin

This paper reviews the state of the art in the display and perception of walking generated sounds and tactile vibrations, and their current and potential future uses in interactive systems. As non-visual information sources that are closely linked to human activities in diverse environments, such signals are capable of communicating about the spaces we traverse and activities we encounter in familiar and intuitive ways. However, in order for them to be effectively employed in human-computer interfaces, significant knowledge is required in areas including the perception of acoustic signatures of walking, and the design, engineering, and evaluation of interfaces that utilize them. Much of this expertise has accumulated in recent years, although many questions remain to be explored. We highlight past work and current research directions in this multidisciplinary area of investigation, and point to potential future trends.


Journal of New Music Research | 2000

Articulation strategies in expressive piano performance - Analysis of legato, staccato, and repeated notes in performances of the Andante movement of Mozart's Sonata in G major (K 545)

Roberto Bresin; G. U. Battel

Articulation strategies applied by pianists in expressive performances of the same score are analysed. Measurements of key overlap time and its relation to the inter-onset-interval are collected for notes marked legato and staccato in the first sixteen bars of the Andante movement of W. A. Mozart’s Piano Sonata in G major, K 545. Five pianists played the piece nine times. First, they played in a way that they considered optimal. In the remaining eight performances they were asked to represent different expressive characters, as specified in terms of different adjectives. Legato, staccato, and repeated-note articulation applied by the right hand was examined by means of statistical analysis. Although the results varied considerably between pianists, some trends could be observed. The pianists generally used similar strategies in the renderings intended to represent different expressive characters. Legato was played with a key overlap ratio that depended on the inter-onset-interval (IOI). Staccato tones had an approximate duration of 40% of the IOI. Repeated notes were played with a duration of about 60% of the IOI. The results seem useful as a basis for articulation rules in grammars for automatic piano performance.
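The average duration ratios reported in the abstract (staccato sounding for roughly 40% of the inter-onset interval, repeated notes for roughly 60%) suggest simple articulation rules of the kind the authors propose for automatic performance. The sketch below is illustrative only; the study's legato rule depends on the IOI in a way the abstract does not specify, so it is omitted here.

```python
# Duration ratios taken from the reported averages in the study.
STACCATO_RATIO = 0.40   # sounding duration / IOI for staccato tones
REPEATED_RATIO = 0.60   # sounding duration / IOI for repeated notes

def tone_duration(ioi_ms, articulation):
    """Return a sounding duration in ms for a note with the given IOI."""
    if articulation == "staccato":
        return STACCATO_RATIO * ioi_ms
    if articulation == "repeated":
        return REPEATED_RATIO * ioi_ms
    raise ValueError(f"unknown articulation: {articulation}")

# A quarter note at 120 BPM has an IOI of 500 ms:
print(tone_duration(500, "staccato"))  # 200.0
print(tone_duration(500, "repeated"))  # 300.0
```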


Human Factors in Computing Systems | 2008

Sonic interaction design: sound, information and experience

Davide Rocchesso; Stefania Serafin; Frauke Behrendt; Nicola Bernardini; Roberto Bresin; Gerhard Eckel; Karmen Franinovic; Thomas Hermann; Sandra Pauletto; Patrick Susini; Yon Visell

Sonic Interaction Design (SID) is an emerging field that is positioned at the intersection of auditory display, ubiquitous computing, interaction design, and interactive arts. SID can be used to describe practice and inquiry into any of various roles that sound may play in the interaction loop between users and artifacts, services, or environments, in applications that range from the critical functionality of an alarm, to the artistic significance of a musical creation. This field is devoted to the privileged role the auditory channel can assume in exploiting the convergence of computing, communication, and interactive technologies. An over-emphasis on visual displays has constrained the development of interactive systems that are capable of making more appropriate use of the auditory modality. Today the ubiquity of computing and communication resources allows us to think about sounds in a proactive way. This workshop puts a spotlight on such issues in the context of the emerging domain of SID.


Frontiers in Psychology | 2013

Emotional expression in music: Contribution, linearity, and additivity of primary musical cues

Tuomas Eerola; Anders Friberg; Roberto Bresin

The aim of this study is to manipulate musical cues systematically to determine the aspects of music that contribute to emotional expression, and whether these cues operate in additive or interactive fashion, and whether the cue levels can be characterized as linear or non-linear. An optimized factorial design was used with six primary musical cues (mode, tempo, dynamics, articulation, timbre, and register) across four different music examples. Listeners rated 200 musical examples according to four perceived emotional characters (happy, sad, peaceful, and scary). The results exhibited robust effects for all cues and the ranked importance of these was established by multiple regression. The most important cue was mode followed by tempo, register, dynamics, articulation, and timbre, although the ranking varied across the emotions. The second main result suggested that most cue levels contributed to the emotions in a linear fashion, explaining 77–89% of variance in ratings. Quadratic encoding of cues did lead to minor but significant increases of the models (0–8%). Finally, the interactions between the cues were non-existent suggesting that the cues operate mostly in an additive fashion, corroborating recent findings on emotional expression in music (Juslin and Lindström, 2010).
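The additive, linear, no-interaction model the study supports can be sketched as a weighted sum of standardized cue levels. The weights below are hypothetical placeholders chosen only to respect the reported ranking for one emotion (mode most important, then tempo, register, dynamics, articulation, timbre); they are not the study's fitted regression coefficients.

```python
# Hypothetical standardized weights, ordered by the reported cue ranking.
weights = {
    "mode": 0.45, "tempo": 0.30, "register": 0.15,
    "dynamics": 0.10, "articulation": 0.08, "timbre": 0.05,
}

def predict_rating(cue_levels):
    """Additive model: each cue contributes independently and linearly,
    with no interaction terms between cues."""
    return sum(weights[cue] * level for cue, level in cue_levels.items())

# Standardized cue levels for one hypothetical musical example:
example = {"mode": 1.0, "tempo": 0.5, "register": 0.0,
           "dynamics": -0.5, "articulation": 1.0, "timbre": 0.0}
print(round(predict_rating(example), 2))  # 0.63
```

The absence of cross-terms in `predict_rating` is exactly what "additive" means here: the finding that interactions were non-existent implies the contribution of, say, tempo does not change with the level of mode.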


Journal of New Music Research | 2003

Attempts to Reproduce a Pianist's Expressive Timing with Director Musices Performance Rules

Johan Sundberg; Anders Friberg; Roberto Bresin

The Director Musices generative grammar of music performance is a system of context dependent rules that automatically introduces expressive deviation in performances of input score files. A number of these rules concern timing. In this investigation the ability of such rules to reproduce a professional pianist’s timing deviations from nominal note inter-onset-intervals is examined. Rules affecting tone inter-onset-intervals were first tested one by one for the various sections of the excerpt, and then in combinations. Results were evaluated in terms of the correlation between the deviations made by the pianist and by the rule system. It is found that rules reflecting the phrase structure produced high correlations in some sections. On the other hand, some rules failed to produce significant correlation with the pianist’s deviations, and thus seemed irrelevant to the particular performance analysed. It is concluded that phrasing was a prominent principle in this performance and that rule combinations have to change between sections in order to match this pianist’s deviations.
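The evaluation method described above, comparing rule-generated timing deviations with a pianist's by correlation over per-note IOI deviations, can be sketched with a plain Pearson correlation. The deviation vectors below are invented for illustration and are not data from the study.

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Invented per-note IOI deviations in ms, pianist vs. rule system:
pianist_dev = [12.0, -8.0, 5.0, 20.0, -15.0, 3.0]
rule_dev    = [10.0, -5.0, 4.0, 18.0, -12.0, 1.0]
print(round(pearson(pianist_dev, rule_dev), 3))
```

In the study this coefficient is computed per section, which is why the conclusion can distinguish sections where phrase-structure rules match the pianist well from sections where they do not.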

Collaboration


Dive into Roberto Bresin's collaborations.

Top Co-Authors

Anders Friberg, Royal Institute of Technology
Gaël Dubus, Royal Institute of Technology
Marco Fabiani, Royal Institute of Technology
Ludvig Elblaus, Royal Institute of Technology
Emma Frid, Royal Institute of Technology
Davide Rocchesso, Ca' Foscari University of Venice