Panos Papiotis
Pompeu Fabra University
Publications
Featured research published by Panos Papiotis.
Journal of New Music Research | 2014
Marco Marchini; Rafael Ramirez; Panos Papiotis; Esteban Maestre
Computational approaches for modelling expressive music performance have produced systems that emulate music expression, but few steps have been taken in the domain of ensemble performance. In this paper, we propose a novel method for building computational models of ensemble expressive performance and show how this method can be applied to derive new insights about collaboration among musicians. In order to address the problem of interdependence among musicians, we propose the introduction of inter-voice contextual attributes. We evaluate the method on data extracted from multi-modal recordings of string quartet performances in two different conditions: solo and ensemble. We used machine-learning algorithms to produce computational models for predicting intensity, timing deviations, vibrato extent, and bowing speed of each note. As a result, the introduced inter-voice contextual attributes generally improved the prediction of the expressive parameters. Furthermore, results on attribute selection show that the models trained on ensemble recordings made greater use of inter-voice contextual attributes than those trained on solo recordings.
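The core idea admits a compact illustration. Below is a minimal sketch, not the authors' code: synthetic per-note attributes stand in for a voice's own score features, hypothetical inter-voice attributes are appended, and a random-forest regressor (one plausible choice of learner) is evaluated with and without the inter-voice context.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_notes = 500

# Per-note "solo" attributes: stand-ins for pitch, duration, metrical position.
own = rng.normal(size=(n_notes, 3))

# Hypothetical inter-voice attributes, e.g. pitch interval and onset offset
# relative to the concurrent note in each of the other three voices.
inter = rng.normal(size=(n_notes, 6))

# Synthetic target standing in for note intensity: partly driven by the
# inter-voice context, as the ensemble condition would suggest.
intensity = (own @ np.array([0.5, -0.3, 0.2])
             + inter @ np.array([0.4, 0.0, 0.1, 0.0, -0.2, 0.0])
             + rng.normal(scale=0.1, size=n_notes))

for name, X in [("solo attributes only", own),
                ("solo + inter-voice", np.hstack([own, inter]))]:
    r2 = cross_val_score(RandomForestRegressor(n_estimators=100, random_state=0),
                         X, intensity, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.2f}")
```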
Frontiers in Psychology | 2014
Panos Papiotis; Marco Marchini; Alfonso Perez-Carrillo; Esteban Maestre
In a musical ensemble such as a string quartet, the musicians interact and influence each other's actions in several aspects of the performance simultaneously in order to achieve a common aesthetic goal. In this article, we present and evaluate a computational approach for measuring the degree to which these interactions exist in a given performance. We recorded a number of string quartet exercises under two experimental conditions (solo and ensemble), acquiring both audio and bowing motion data. Numerical features in the form of time series were extracted from the data as performance descriptors representative of four distinct dimensions of the performance: Intonation, Dynamics, Timbre, and Tempo. Four different interdependence estimation methods (two linear and two nonlinear) were applied to the extracted features in order to assess the overall level of interdependence between the four musicians. The obtained results suggest that it is possible to correctly discriminate between the two experimental conditions by quantifying interdependence between the musicians in each of the studied performance dimensions; the nonlinear methods appear to perform best for most of the numerical features tested. Moreover, by using the solo recordings as a reference against which the ensemble recordings are contrasted, it is feasible to compare the amount of interdependence that is established between the musicians in a given performance dimension across all exercises, and to relate the results to the underlying goal of each exercise. We discuss our findings in the context of ensemble performance research, the current limitations of our approach, and the ways in which it can be expanded and consolidated.
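As a rough illustration of the estimation step (my own sketch; the paper's actual estimators are not reproduced here), the snippet below compares a linear measure (Pearson correlation) with a nonlinear one (mutual information) on two synthetic, nonlinearly coupled feature contours:

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 2000)

# Two synthetic pitch-deviation contours: the second follows the first
# through a nonlinear (squashing) coupling plus noise.
violin1 = np.sin(2 * np.pi * 0.3 * t) + 0.2 * rng.normal(size=t.size)
violin2 = np.tanh(2 * violin1) + 0.3 * rng.normal(size=t.size)

r, _ = pearsonr(violin1, violin2)                    # linear estimate
mi = mutual_info_regression(violin1.reshape(-1, 1),  # nonlinear estimate
                            violin2, random_state=0)[0]
print(f"Pearson r = {r:.2f}, mutual information = {mi:.2f} nats")
```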
IEEE Virtual Reality Conference | 2017
Ilias Bergstrom; Sergio Azevedo; Panos Papiotis; Nuno Saldanha; Mel Slater
We describe an experiment that explores the contribution of auditory and other features to the illusion of plausibility in a virtual environment that depicts the performance of a string quartet. 'Plausibility' refers to the component of presence that is the illusion that the perceived events in the virtual environment are really happening. The features studied were: Gaze (the musicians either ignored the participant or sometimes looked towards and followed the participant's movements), Sound Spatialization (Mono, Stereo, Spatial), Auralization (no sound reflections, reflections corresponding to a room larger than the one perceived, reflections that exactly matched the virtual room), and Environment (no sound from outside of the room, birdsong and wind corresponding to the outside scene). We adopted a methodology based on color matching theory, in which 20 participants were first able to assess their feeling of plausibility in the environment with each of the four features at its highest setting. Then, on five occasions, participants started from a low setting on all features and were able to make transitions from one system configuration to another until they matched their original feeling of plausibility. From these transitions a Markov transition matrix was constructed, along with the probabilities of a match conditional on feature configuration. The results show that Environment and Gaze were individually the most important factors influencing the level of plausibility. The highest-probability transitions were those that improved Environment and Gaze, followed by Auralization and Spatialization. We present this work both as a contribution to the methodology of assessing presence without questionnaires and as a demonstration of how various aspects of a musical performance can influence plausibility.
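The transition-matrix construction itself is straightforward; below is a minimal sketch under assumed data, with each observed move between system configurations counted and each row normalized to probabilities:

```python
import numpy as np

n_states = 4  # hypothetical: one index per feature configuration
# Each pair is one observed transition (from_state, to_state).
transitions = [(0, 1), (1, 2), (1, 2), (2, 3), (0, 2), (2, 2)]

counts = np.zeros((n_states, n_states))
for src, dst in transitions:
    counts[src, dst] += 1

row_sums = counts.sum(axis=1, keepdims=True)
P = np.divide(counts, row_sums, out=np.zeros_like(counts),
              where=row_sums > 0)  # rows with no outgoing data stay zero
print(P)  # row-stochastic Markov transition matrix
```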
Proceedings of the 3rd International Symposium on Movement and Computing | 2016
Carles Fernandes Julià; Panos Papiotis; Sebastián Mealla Cincuegrani; Sergi Jordà
Interaction designers often use machine learning tools to generate intuitive mappings between complex inputs and outputs. These tools are usually trained live, which is not always feasible or practical. We combine RepoVizz, an online repository and visualizer for multimodal data, with a suite of Interactive Machine Learning tools to demonstrate a technical solution for prototyping multimodal interactions that decouples the data acquisition step from the model training step. In this way, different input data set-ups can easily be replicated, shared, and tested for their capability to control complex outputs, without the need to repeat the technical set-up.
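The decoupling idea can be sketched as follows (an illustration only; the actual RepoVizz and IML tool APIs are not shown): data recorded in an earlier session is loaded offline and used to train a mapping model, which can then serve incoming frames at interaction time.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(2)

# Stand-ins for a previously recorded session: multimodal sensor frames
# and the output parameters demonstrated alongside them. In practice
# these would be fetched from a repository such as RepoVizz.
sensor_frames = rng.normal(size=(300, 8))  # (n_frames, n_sensor_channels)
output_params = rng.normal(size=(300, 2))  # (n_frames, n_output_params)

# Model training is decoupled from acquisition: it happens offline here.
model = KNeighborsRegressor(n_neighbors=5).fit(sensor_frames, output_params)

# At interaction time, incoming frames are mapped through the trained model.
def map_frame(frame: np.ndarray) -> np.ndarray:
    return model.predict(frame.reshape(1, -1))[0]

print(map_frame(rng.normal(size=8)))
```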
12th International Conference on Music Perception and Cognition | 2012
Panos Papiotis; Marco Marchini; Esteban Maestre
Archive | 2012
Marco Marchini; Panos Papiotis; Esteban Maestre
The 3rd International Conference on Music & Emotion, Jyväskylä, Finland, June 11-15, 2013 | 2013
Marco Marchini; Rafael Ramirez; Panos Papiotis; Esteban Maestre
Proceedings of the 14th International Conference on Digital Audio Effects (DAFx-11) | 2011
Panos Papiotis; Esteban Maestre; Marco Marchini; Alfonso Pérez
Proceedings of the International Symposium on Performance Science 2013 | 2013
Panos Papiotis; Marco Marchini; Esteban Maestre
The 3rd International Conference on Music & Emotion, Jyväskylä, Finland, June 11-15, 2013 | 2013
Panos Papiotis; Perfecto Herrera; Marco Marchini