Publication


Featured research published by Rodrigo Schramm.


Brazilian Symposium on Computer Graphics and Image Processing | 2004

Rectangle detection based on a windowed Hough transform

Cláudio Rosito Jung; Rodrigo Schramm

The problem of detecting rectangular structures in images arises in many applications, from building extraction in aerial images to particle detection in cryo-electron microscopy. This paper proposes a new technique for rectangle detection using a windowed Hough transform. Every pixel of the image is scanned, and a sliding window is used to compute the Hough transform of small regions of the image. Peaks of the Hough image (which correspond to line segments) are then extracted, and a rectangle is detected when four extracted peaks satisfy certain geometric conditions. Experimental results indicate that the proposed technique produced promising results for both synthetic and natural images.
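The geometric conditions on the four Hough peaks can be sketched as follows: a rectangle corresponds to two pairs of parallel line segments (equal angle, rho values symmetric about the window centre) whose orientations differ by 90 degrees. This is an illustrative sketch of that idea, not the authors' implementation; the function name and tolerances are assumptions.

```python
import numpy as np

def is_rectangle(peaks, ang_tol=np.deg2rad(3), rho_tol=2.0):
    """Check whether four Hough peaks (theta, rho), with rho measured from
    the window centre, satisfy the rectangle conditions: two pairs of
    parallel lines (equal theta, symmetric rho) at 90 degrees to each other.
    Hypothetical sketch, not the paper's implementation."""
    if len(peaks) != 4:
        return False
    # Sort by angle so the two parallel pairs sit next to each other.
    p = sorted(peaks, key=lambda tr: tr[0])
    (t1, r1), (t2, r2), (t3, r3), (t4, r4) = p
    pair_a = abs(t1 - t2) < ang_tol and abs(r1 + r2) < rho_tol  # symmetric about centre
    pair_b = abs(t3 - t4) < ang_tol and abs(r3 + r4) < rho_tol
    alpha = abs((t1 + t2) / 2 - (t3 + t4) / 2)  # angle between the two pairs
    perpendicular = abs(alpha - np.pi / 2) < ang_tol
    return pair_a and pair_b and perpendicular
```

For an axis-aligned rectangle centred in the window, the four peaks would be, e.g., (0, ±w/2) and (π/2, ±h/2), which the check accepts.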


Proceedings of the 2014 International Workshop on Movement and Computing | 2014

Gesture in performance with traditional musical instruments and electronics: Use of embodied music cognition and multimodal motion capture to design gestural mapping strategies

Federico Visi; Rodrigo Schramm; Eduardo Reck Miranda

This paper describes the implementation of gestural mapping strategies for performance with a traditional musical instrument and electronics. The approach is informed by embodied music cognition and functional categories of musical gestures. Within this framework, gestures are not seen as means of control subordinate to the resulting musical sounds, but as significant elements that contribute to the formation of musical meaning, much as auditory features do. Moreover, the ecological knowledge of the instrument's gestural repertoire is taken into account, as it defines the action-sound relationships between instrument and performer and helps form expectations in listeners. Mapping strategies from a case study of electric guitar performance are then illustrated, describing what motivated the choice of a multimodal motion capture system and how different solutions were adopted in light of both gestural meaning formation and technical constraints.


IEEE Transactions on Multimedia | 2015

Dynamic Time Warping for Music Conducting Gestures Evaluation

Rodrigo Schramm; Cláudio Rosito Jung; Eduardo Reck Miranda

Musical performance by an ensemble of performers often requires a conductor. This paper presents a tool to aid the study of basic conducting gestures, also known as meter-mimicking gestures, performed by beginners. It is based on the automatic detection of musical metrics and their subdivisions by analysis of hand gestures. Musical metrics are represented by visual conducting patterns performed by the hands, which are tracked using an RGB-D camera. These patterns are recognized and evaluated using a probabilistic framework based on dynamic time warping (DTW). There are two main contributions in this work. Firstly, a new metric is proposed for the DTW, allowing better alignment between two gesture movements without the use of explicit local maxima. Secondly, the time precision of the conducting gesture is extracted directly from the warping path, and its accuracy is evaluated by a confidence measure. Experimental results indicate that the classification scheme represents an improvement over other existing related approaches.
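The alignment at the core of this approach can be sketched with textbook DTW: an accumulated-cost matrix is filled by dynamic programming, and the warping path (from which the paper extracts timing precision) can be recovered by backtracking. This is generic DTW under a simple absolute-difference cost, not the paper's modified metric.

```python
import numpy as np

def dtw(a, b):
    """Classic dynamic time warping between two 1-D gesture signals.
    Returns the accumulated alignment cost; the warping path can be
    recovered by backtracking through D. Textbook sketch, not the
    authors' DTW variant."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])  # local distance between samples
            # Extend the cheapest of the three admissible predecessor cells.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Two gestures that trace the same shape at different speeds align with near-zero cost, which is what makes DTW suitable for comparing a beginner's conducting pattern against a template.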


Computer Music Modeling and Retrieval | 2015

Analysis of Mimed Violin Performance Movements of Neophytes

Federico Visi; Esther Coorevits; Rodrigo Schramm; Eduardo Reck Miranda

Body movement and embodied knowledge play an important part in how we express and understand music. The gestures of a musician playing an instrument are part of a shared knowledge that contributes to musical expressivity by building expectations and influencing perception. In this study, we investigate the extent to which the movement vocabulary of violin performance is part of the embodied knowledge of individuals with no experience in playing the instrument. We asked people who cannot play the violin to mime a performance along with an audio excerpt recorded by an expert. They do so using a silent violin, specifically modified to be more accessible to neophytes. Preliminary motion data analyses suggest that, despite the individuality of each performance, there is a certain consistency among participants in terms of overall rhythmic resonance with the music and movement in response to melodic phrasing. Individualities and commonalities are then analysed using Functional Principal Component Analysis.


International Conference on Acoustics, Speech, and Signal Processing | 2014

Temporally coherent stereo matching using kinematic constraints

Rodrigo Schramm; Cláudio Rosito Jung

This paper explores a simple yet effective way to generate temporally coherent disparity maps from binocular video sequences based on kinematic constraints. Given the disparity map at a certain frame, the proposed approach computes the set of possible disparity values for each pixel in the subsequent frame, assuming a maximum displacement constraint (in world coordinates) allowed for each object. These disparity sets are then used to guide the stereo matching procedure in the subsequent frame, generating a temporally coherent disparity map. Experimental results indicate that the proposed approach produces temporally coherent disparity maps comparable to or better than competitive methods.
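The kinematic constraint above can be sketched from the standard stereo relation d = f·B/Z: a bound on how far an object may move in depth between frames translates into an admissible disparity interval for each pixel in the next frame. This is an illustrative sketch of that idea; the function and parameter names are assumptions, not the authors' notation.

```python
def disparity_bounds(d, f, B, dz):
    """Given disparity d at frame t, focal length f, baseline B, and a
    maximum allowed depth displacement dz (world units) between frames,
    return the admissible disparity interval (d_min, d_max) for frame t+1.
    Illustrative sketch of the kinematic-constraint idea."""
    Z = f * B / d                  # depth from the stereo relation d = f*B/Z
    z_near = max(Z - dz, 1e-6)     # the object may move at most dz closer...
    z_far = Z + dz                 # ...or dz farther away
    return f * B / z_far, f * B / z_near  # nearer depth -> larger disparity
```

The stereo matcher at frame t+1 would then search only disparities inside this interval, which is what yields temporally coherent maps.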


ACM Transactions on Multimedia Computing, Communications, and Applications | 2017

Audiovisual Tool for Solfège Assessment

Rodrigo Schramm; Helena de Souza Nunes; Cláudio Rosito Jung

Solfège is a general technique used in the music learning process that involves the vocal performance of melodies, respecting the timing and duration of the musical sounds specified in the score, properly associated with meter-mimicking performed by hand movement. This article presents an audiovisual approach for automatic assessment of this relevant musical study practice. The proposed system combines the gesture of meter-mimicking (video information) with the melodic transcription (audio information), where hand movement works as a metronome, controlling the time flow (tempo) of the musical piece. Thus, meter-mimicking is used to align the music score (ground truth) with the sung melody, allowing assessment even in time-dynamic scenarios. Audio analysis is applied to obtain the melodic transcription of the sung notes, and the solfège performances are evaluated by a set of Bayesian classifiers generated from real evaluations by expert listeners.


Computer Music Modeling and Retrieval | 2015

3CMS: An Interactive Decision System for Live Performance

Rodrigo Schramm; Helena de Souza Nunes; Leonardo de Assis Nunes; Federico Visi; Eduardo Reck Miranda

A machine system is designed to analyze musical aspects of a live performance, allowing an interactive and dynamic flow of new expressions and opening up new compositional forms and multimodal methods. The focus of this approach is to measure the expressiveness of distinct characters during the performance of the musical piece while decisions are made by the machine. This multimodal approach is implemented in the musical piece Três Microcanções de Câmara – Essência Pierrot, Atitude Arlequim, (In)Decisão Colombina (3CMS), where audio features and body motion are used by the algorithm to choose a particular musical ending.


New Interfaces for Musical Expression | 2014

Use of Body Motion to Enhance Traditional Musical Instruments

Federico Visi; Rodrigo Schramm; Eduardo Reck Miranda


Human Technology | 2017

Musical Instruments, Body Movement, Space, and Motion Data: Music as an Emergent Multimodal Choreography

Federico Visi; Esther Coorevits; Rodrigo Schramm; Eduardo Reck Miranda


Audio Engineering Society Conference: 2017 AES International Conference on Semantic Audio | 2017

Automatic Transcription of a Cappella Recordings from Multiple Singers

Rodrigo Schramm; Emmanouil Benetos

Collaboration


Top co-authors of Rodrigo Schramm:

Federico Visi, Plymouth State University
Cláudio Rosito Jung, Universidade Federal do Rio Grande do Sul
Helena de Souza Nunes, Universidade Federal do Rio Grande do Sul
Emmanouil Benetos, Queen Mary University of London
A Antoine, Plymouth State University