Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Antonio Rodà is active.

Publication


Featured research published by Antonio Rodà.


Journal of New Music Research | 1998

Note‐by‐note analysis of the influence of expressive intentions and musical structure in violin performance

Giovanni De Poli; Antonio Rodà; Alvise Vidolin

This paper describes an analysis of how the performance of a score differs when musicians are requested to play with differing expression. A professional violinist was asked to play short pieces of music in different versions expressing light, heavy, soft, hard, bright, and dark intentions. For comparison, a normal, standard performance was recorded. Note‐by‐note analysis allowed us to identify the variations of the main acoustic parameters as a consequence of varying the expressive intentions. It was possible to identify two distinct expressive sources. The first refers to the musical structure of the period, its division into phrases, and the continuous alternation of tension and relaxation points. The second depends on the expressive intentions that the musician wants to convey to the listeners.


Journal of New Music Research | 2003

An Abstract Control Space for Communication of Sensory Expressive Intentions in Music Performance

Sergio Canazza; Giovanni De Poli; Antonio Rodà; Alvise Vidolin

Expressiveness is not an extravagance: it plays a critical role in rational decision-making, in perception, in human interaction, in human emotions, and in human intelligence. These facts, combined with the development of new informatics systems able to recognize and understand different kinds of signals, open new areas for research. A new model is suggested for computer understanding of the sensory expressive intentions of a human performer, and both theoretical and practical applications are described for human-computer interaction, perceptual information retrieval, creative arts, and entertainment. Recent studies demonstrated that, by suitably modifying the systematic deviations introduced by the musician, it is possible to convey different expressive contents, such as expressive intentions and/or emotions. We present a space that can be used as a user interface. It represents, at an abstract level, the expressive content and the interaction between the performer and an expressive synthesizer.
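As an illustration of the idea of an abstract control space, the sketch below blends expressive presets placed at the corners of a 2D plane using inverse-distance weighting. The preset names, positions, parameters, and blending rule are all invented for the example; they are not the space or mapping actually described in the paper.

```python
import numpy as np

# Hypothetical expressive presets placed at the corners of a 2D control
# space; each maps to performance parameters (tempo and loudness factors).
# All names, positions, and values are invented for this illustration.
PRESETS = {
    "bright": (np.array([1.0, 1.0]),   {"tempo": 1.10, "loudness": 1.20}),
    "dark":   (np.array([-1.0, -1.0]), {"tempo": 0.90, "loudness": 0.80}),
    "soft":   (np.array([-1.0, 1.0]),  {"tempo": 0.95, "loudness": 0.70}),
    "hard":   (np.array([1.0, -1.0]),  {"tempo": 1.05, "loudness": 1.30}),
}

def control_point_to_params(point):
    """Blend preset parameters with inverse-distance weighting."""
    point = np.asarray(point, dtype=float)
    weights, params = [], []
    for pos, p in PRESETS.values():
        d = np.linalg.norm(point - pos)
        if d < 1e-9:                      # exactly on a preset: return it
            return dict(p)
        weights.append(1.0 / d)
        params.append(p)
    w = np.array(weights) / sum(weights)
    return {k: float(sum(wi * p[k] for wi, p in zip(w, params)))
            for k in params[0]}
```

Moving the control point continuously then produces a continuous change in the synthesized expressive parameters, which is the interface behaviour the abstract describes.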


Proceedings of the IEEE | 2004

Modeling and control of expressiveness in music performance

Sergio Canazza; G. De Poli; Carlo Drioli; Antonio Rodà; Alvise Vidolin

Expression is an important aspect of music performance. It is the added value of a performance and is part of the reason that music is interesting to listen to and sounds alive. Understanding and modeling expressive content communication is important for many engineering applications in information technology. For example, in multimedia products, textual information is enriched by means of graphical and audio objects. In this paper, we present an original approach to modify the expressive content of a performance in a gradual way, both at the symbolic and signal levels. To this purpose, we discuss a model that applies a smooth morphing among performances with different expressive content, adapting the audio expressive character to the user's desires. Morphing can be realized with a wide range of graduality (from abrupt to very smooth), allowing adaptation of the system to different situations. The sound rendering is obtained by interfacing the expressiveness model with a dedicated postprocessing environment, which allows for the transformation of the event cues. The processing is based on the organized control of basic audio effects. Among the basic effects used, an original method for the spectral processing of audio is introduced.
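The idea of morphing between two performances with controllable graduality can be sketched at the symbolic level as follows. The note parametrization (a dict of cues per note) and the tanh-shaped transition curve are assumptions made for illustration; they are not the model the paper describes.

```python
import math

def morph(perf_a, perf_b, alpha, sharpness=1.0):
    """Morph note-wise parameters from perf_a toward perf_b.

    perf_a, perf_b: lists of per-note dicts with the same keys
    (e.g. {"ioi": ..., "velocity": ...}), one entry per note.
    alpha in [0, 1] moves from perf_a to perf_b; sharpness controls
    graduality (near 1 = almost linear, large = abrupt switch at 0.5).
    """
    # Shape alpha with an odd, saturating curve so the endpoints are
    # reached exactly and the midpoint stays at 0.5.
    w = (math.tanh(sharpness * (2 * alpha - 1)) / math.tanh(sharpness) + 1) / 2
    return [{k: (1 - w) * a[k] + w * b[k] for k in a}
            for a, b in zip(perf_a, perf_b)]
```

Sweeping alpha over time yields a gradual transition between the two expressive characters; raising sharpness makes the same sweep feel like an abrupt switch, which matches the "from abrupt to very smooth" range mentioned in the abstract.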


IEEE MultiMedia | 2000

Audio Morphing Different Expressive Intentions for Multimedia Systems

Sergio Canazza; Giovanni De Poli; Carlo Drioli; Antonio Rodà; Alvise Vidolin

Web Extras: sample audio files and a demo of the audio authoring tool (download Real Jukebox to listen to the mp3 files):

- Sonatina in sol (by Beethoven), played neutral (without any expressive intentions)
- Expressive performance of Sonatina in sol, generated by the model in a symbolic way (that is, as a MIDI file)
- Sonata K545 (by Mozart), played neutral (without any expressive intentions)
- Expressive performance of Sonata K545, generated by the model in a symbolic way (that is, as a MIDI file)
- Expressive performance of Sonata in A Major Op. V (by Corelli), generated by the audio authoring tool (using the audio postprocessing tool)


Computational Intelligence and Neuroscience | 2013

On the role of auditory feedback in robot-assisted movement training after stroke: review of the literature

Giulio Rosati; Antonio Rodà; Federico Avanzini; Stefano Masiero

The goal of this paper is to address a topic that is rarely investigated in the literature of technology-assisted motor rehabilitation: the integration of auditory feedback in the rehabilitation device. After a brief introduction on rehabilitation robotics, the main concepts of auditory feedback are presented, together with relevant approaches, techniques, and technologies available in this domain. Current uses of auditory feedback in the context of technology-assisted rehabilitation are then reviewed. In particular, a comparative quantitative analysis over a large corpus of the recent literature suggests that the potential of auditory feedback in rehabilitation systems is currently largely underexploited. Finally, several scenarios are proposed in which the use of auditory feedback may contribute to overcoming some of the main limitations of current rehabilitation systems, in terms of user engagement, development of acute-phase and home rehabilitation devices, learning of more complex motor tasks, and improvement of activities of daily living.


Journal of New Music Research | 2006

Communicating expressive intentions with a single piano note

Filippo Bonini Baraldi; Giovanni De Poli; Antonio Rodà

We analysed how expressive intentions are communicated and perceived in a special context of musical production: improvisation on a single piano note. Two experiments were designed in order to find relations between performers' expressive intentions, four acoustical parameters (pitch, intensity, articulation, and rhythmic density), and listeners' perception of expressive content. Differences between musicians and non-musicians were analysed as well. In the first experiment, 6 performers (3 musicians and 3 non-musicians) improvised on a digital piano according to 8 expressive intentions. The experiment was planned in 4 phases, progressively limiting the musical means available to the performer. In all phases, improvisations were limited to only one piano note. In the second experiment, listeners described the performers' improvisations by means of adjective ratings. Results support the position that a few low-level parameters, mainly intensity and rhythmic density, are important factors in the communication of expressive content from the performer to the listener, and that listeners recognize most expressive intentions even when very few acoustical parameters are used.
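A minimal sketch of how the low-level cues discussed above (intensity, articulation, rhythmic density) could be computed from a list of note events. The event format and the exact definitions below are assumptions for illustration, not the study's measurement procedure.

```python
def describe_improvisation(notes):
    """Summarise a one-note improvisation by three low-level cues.

    notes: onset-sorted list of (onset_s, duration_s, velocity) tuples.
    Pitch is omitted because all events share the same piano note.
    """
    onsets = [n[0] for n in notes]
    span = (onsets[-1] - onsets[0]) or 1.0          # avoid division by zero
    density = len(notes) / span                     # notes per second
    intensity = sum(n[2] for n in notes) / len(notes)  # mean MIDI velocity
    # Articulation: sounding duration relative to the inter-onset interval
    # (values near 1 = legato, much less than 1 = staccato).
    iois = [b - a for a, b in zip(onsets, onsets[1:])]
    articulation = sum(d / i for (_, d, _), i in zip(notes, iois)) / len(iois)
    return {"density": density, "intensity": intensity,
            "articulation": articulation}
```

Comparing these summaries across the 8 expressive intentions is the kind of analysis that would expose intensity and rhythmic density as the dominant cues.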


ACM Multimedia | 2010

Toward an automatically generated soundtrack from low-level cross-modal correlations for automotive scenarios

Marco Cristani; Anna Pesarin; Carlo Drioli; Vittorio Murino; Antonio Rodà; Michele Grapulin; Nicu Sebe

In this paper, we propose a novel recommendation policy for driving scenarios. While driving a car, listening to an audio track may enrich the atmosphere, conveying emotions that let the driver sense a more arousing experience. Here, we introduce a recommendation policy that, given a video sequence taken by a camera mounted onboard a car, chooses the most suitable audio piece from a predetermined set of melodies. The mixing mechanism takes inspiration from a set of generic qualitative aesthetic rules for cross-modal linking, realized by associating audio and video features. The contribution of this paper is to translate such qualitative rules into quantitative terms, learning cross-modal statistical correlations from an extensive training dataset and validating them thoroughly. In this way, we are able to define which audio and video features correlate best (i.e., promoting or rejecting some aesthetic rules), and what their correlation intensities are. This knowledge is then employed in the realization of the recommendation policy. A set of user studies illustrates and validates the policy, encouraging further developments toward a real implementation in an automotive application.
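The notion of measuring cross-modal statistical correlations between audio and video feature series can be illustrated with a plain Pearson-correlation sketch. This is a generic illustration of the underlying statistic, not the learning method used in the paper.

```python
import numpy as np

def cross_modal_correlation(audio_feats, video_feats):
    """Pearson correlation between every audio and video feature series.

    audio_feats, video_feats: arrays of shape (n_features, n_timesteps)
    (a single series may be passed as a 1-D array).  Returns an
    (n_audio, n_video) matrix of correlation coefficients in [-1, 1].
    """
    a = np.atleast_2d(np.asarray(audio_feats, dtype=float))
    v = np.atleast_2d(np.asarray(video_feats, dtype=float))
    # Standardize each feature series (zero mean, unit variance), then
    # the normalized dot product is exactly the Pearson coefficient.
    a = (a - a.mean(axis=1, keepdims=True)) / a.std(axis=1, keepdims=True)
    v = (v - v.mean(axis=1, keepdims=True)) / v.std(axis=1, keepdims=True)
    return a @ v.T / a.shape[1]
```

Feature pairs with large |correlation| are the candidates for quantitative cross-modal linking rules; pairs near zero would be rejected.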


Entertainment Computing | 2013

Entertaining listening by means of the Stanza Logo-Motoria: an Interactive Multimodal Environment

Serena Zanolla; Sergio Canazza; Antonio Rodà; Antonio Camurri; Gualtiero Volpe

This article presents an Interactive Multimodal Environment (IME), the Stanza Logo-Motoria, designed to support learning in primary schools. In particular, we describe the use of this system as a tool (a) to practice listening to English as a Second Language (ESL) and (b) to enable children with severe disabilities to engage in interactive listening. We document the ongoing experimentation of the Stanza Logo-Motoria in ESL lessons and report its encouraging results. Moreover, we explain how it may be possible, by means of the Stanza Logo-Motoria, to redesign traditional learning environments so that pupils experience listening as an active and engaging activity.


IEEE Transactions on Affective Computing | 2014

Clustering Affective Qualities of Classical Music: Beyond the Valence-Arousal Plane

Antonio Rodà; Sergio Canazza; Giovanni De Poli

The important role of the valence and arousal dimensions in representing and recognizing affective qualities in music is well established. There is less evidence for the contribution of secondary dimensions such as potency, tension, and energy. In particular, previous studies failed to find significant relations between computable musical features and affective dimensions other than valence and arousal. Here we present two experiments aiming at assessing how musical features, directly computable from complex audio excerpts, are related to secondary emotion dimensions. To this aim, we imposed some constraints on the musical features, namely modality and tempo, of the stimuli. The results show that although arousal and valence dominate for many musical features, it is possible to identify features, in particular Roughness, Loudness, and Spectral Flux, that are significantly related to the potency dimension. As far as we know, this is the first study to gain more insight into affective potency in the music domain by using real music recordings and a computational approach.
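Spectral flux, one of the features found to relate to potency, is commonly defined as the positive change in magnitude spectrum between consecutive analysis frames. The sketch below uses that common definition; the paper's exact feature extraction (frame sizes, windowing, normalization) may differ.

```python
import numpy as np

def spectral_flux(signal, frame_len=1024, hop=512):
    """Frame-wise spectral flux of a mono signal.

    Returns one value per consecutive frame pair: the mean positive
    difference between the two frames' magnitude spectra, so steady
    tones give values near zero and spectral changes give peaks.
    """
    signal = np.asarray(signal, dtype=float)
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    mags = np.array([np.abs(np.fft.rfft(signal[i * hop:i * hop + frame_len]
                                        * window))
                     for i in range(n_frames)])
    diff = np.diff(mags, axis=0)              # change between frame pairs
    return np.mean(np.maximum(diff, 0.0), axis=1)
```

On a steady sine tone the flux stays near zero, while a sudden change of spectral content (e.g. a new pitch entering) produces a clear peak, which is why the feature tracks timbral and textural change.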


ACM Transactions on Applied Perception | 2015

The Role of Individual Difference in Judging Expressiveness of Computer-Assisted Music Performances by Experts

Giovanni De Poli; Sergio Canazza; Antonio Rodà; Emery Schubert

Computational systems for generating expressive musical performances have been studied for several decades now. These models are generally evaluated by comparing their predictions with actual performances, both from a performance parameter and a subjective point of view, often focusing on very specific aspects of the model. However, little is known about how listeners evaluate the generated performances and what factors influence their judgement and appreciation. In this article, we present two studies, conducted during two dedicated workshops, to start understanding how the audience judges entire performances employing different approaches to generating musical expression. In the preliminary study, 40 participants completed a questionnaire in response to five different computer-generated and computer-assisted performances, rating preference and describing the expressiveness of the performances. In the second, “GATM” (Gruppo di Analisi e Teoria Musicale) study, 23 participants also completed the Music Cognitive Style questionnaire. Results indicated that music systemizers tend to describe musical expression in terms of the formal aspects of the music, and music empathizers tend to report expressiveness in terms of emotions and characters. However, high systemizers did not differ from high empathizers in their mean preference score across the five pieces. We also concluded that listeners tend not to focus on the basic technical aspects of playing when judging computer-assisted and computer-generated performances. Implications for the significance of individual differences in judging musical expression are discussed.

Collaboration


Dive into Antonio Rodà's collaborations.

Top Co-Authors


Emery Schubert

University of New South Wales
