Graham Percival
University of Victoria
Publications
Featured research published by Graham Percival.
IEEE Transactions on Audio, Speech, and Language Processing | 2014
Graham Percival; George Tzanetakis
Algorithms for musical tempo estimation have become increasingly complicated in recent years. These algorithms typically utilize two fundamental properties of musical rhythm: some features of the audio signal are self-similar at periods related to the underlying rhythmic structure, and rhythmic events tend to be spaced regularly in time. We present a streamlined tempo estimation method (stem) that distills ideas from previous work by reducing the number of steps, parameters, and modeling assumptions while retaining good accuracy. This method is designed for music with a constant or near-constant tempo. The proposed method either outperforms or has similar performance to many existing state-of-the-art algorithms. Self-similarity is captured through autocorrelation of the onset strength signal (OSS), and time regularity is captured through cross-correlation of the OSS with regularly spaced pulses. Our findings are supported by the most comprehensive evaluation of tempo estimation algorithms to date in terms of the number of datasets and tracks considered. During the process we have also corrected ground truth annotations for the datasets considered. All the data, the annotations, the evaluation code, and three different implementations (C++, Python, MATLAB) of the proposed algorithm are provided in order to support reproducibility.
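The two-stage idea from the abstract can be sketched in a few lines: score candidate tempo lags by autocorrelating the onset strength signal (self-similarity), then rescore the strongest candidates by cross-correlating the OSS with ideal pulse trains at every phase (time regularity). This is a minimal illustration of the two principles, not the authors' released implementation; the parameter values and candidate count are illustrative assumptions.

```python
import numpy as np

def estimate_tempo(oss, oss_sr=172.0, bpm_range=(50, 210)):
    """Two-stage tempo estimate from a precomputed onset strength signal.

    Stage 1: autocorrelation of the OSS scores candidate lags (self-similarity).
    Stage 2: cross-correlation of the OSS with regularly spaced pulse trains
    at each candidate lag scores time regularity; the winning lag gives BPM.
    """
    n = len(oss)
    # Stage 1: autocorrelation over the lags that fall inside the BPM range.
    lag_min = int(round(oss_sr * 60.0 / bpm_range[1]))
    lag_max = int(round(oss_sr * 60.0 / bpm_range[0]))
    ac = np.array([np.dot(oss[:n - lag], oss[lag:])
                   for lag in range(lag_min, lag_max + 1)])
    # Keep the strongest self-similarity peaks as tempo candidates.
    candidates = np.argsort(ac)[-10:] + lag_min
    # Stage 2: score each candidate by cross-correlating the OSS with an
    # impulse train of that period, trying every possible phase offset.
    best_lag, best_score = int(candidates[0]), -np.inf
    pulses = np.zeros(n)
    for lag in candidates:
        for phase in range(lag):
            pulses[:] = 0.0
            pulses[phase::lag] = 1.0
            score = np.dot(oss, pulses)
            if score > best_score:
                best_score, best_lag = score, int(lag)
    return 60.0 * oss_sr / best_lag
```

On a synthetic OSS with impulses every 50 frames at an OSS rate of 100 Hz, the estimate comes out at 120 BPM; the phase search in stage 2 is what lets the regular-pulse score prefer the true period over its multiples.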
international conference on acoustics, speech, and signal processing | 2013
George Tzanetakis; Graham Percival
Tempo estimation is a fundamental problem in music information retrieval. It also forms the basis of other types of rhythmic analysis such as beat tracking and pattern detection. There is a large body of work in tempo estimation using a variety of different approaches that differ in their accuracy as well as their complexity. Fundamentally they take advantage of two properties of musical rhythm: 1) the music signal tends to be self-similar at periodicities related to the underlying rhythmic structure, 2) rhythmic events tend to be spaced regularly in time. We propose an algorithm for tempo estimation that is based on these two properties. We have tried to reduce the number of steps, parameters and modeling assumptions while retaining good performance and causality. The proposed approach outperforms a large number of existing tempo estimation methods and has similar performance to the best-performing ones. We believe that we have conducted the most comprehensive evaluation to date of tempo induction algorithms in terms of the number of datasets and tracks.
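Both tempo papers operate on an onset strength signal derived from the audio. One common way to compute such a signal (a standard technique, not necessarily the authors' exact front end; the frame and hop sizes here are illustrative) is half-wave-rectified spectral flux: sum the positive frame-to-frame increases in the log-magnitude spectrum.

```python
import numpy as np

def onset_strength(signal, sr=44100, frame=1024, hop=128):
    """Onset strength signal via half-wave-rectified spectral flux.

    Returns the OSS and its effective sampling rate (sr / hop).
    """
    window = np.hanning(frame)
    n_frames = 1 + (len(signal) - frame) // hop
    prev = None
    oss = []
    for i in range(n_frames):
        chunk = signal[i * hop : i * hop + frame] * window
        mag = np.log1p(np.abs(np.fft.rfft(chunk)))   # compressed magnitude
        if prev is not None:
            # Keep only increases in energy: onsets, not offsets.
            oss.append(np.maximum(mag - prev, 0.0).sum())
        prev = mag
    return np.asarray(oss), sr / hop
```

On a test signal that is silent and then switches to a steady sinusoid, the OSS peaks in the frames straddling the switch-on point and falls back near zero once the tone is steady, which is the behaviour the autocorrelation stage relies on.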
acm multimedia | 2009
Yinsheng Zhou; Zhonghua Li; Dillion Tan; Graham Percival; Ye Wang
The computational power and sensory capabilities of mobile devices are increasing dramatically, making them suitable for real-time sound synthesis and a variety of musical expressions. In this paper, we demonstrate a novel mobile music making system which leverages the ubiquity, ultra-mobility, and multi-modality of mobile devices (iPod touch) for people to create and compose music collaboratively. Unlike conventional music making applications, which generate music on a single mobile device with a preset sound and interface, our system connects several players in a group over a wireless LAN, letting them create music with different sounds and interfaces. Finally, the performance can be recorded as a single music file and played back later. The paper also describes application scenarios for this collaborative music making system as directions for future research.
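A system like this needs the connected devices to exchange musical events over the LAN. The sketch below shows one plausible shape for that exchange, a JSON-encoded note event sent as a UDP datagram over loopback; the message fields and transport are illustrative assumptions, not the paper's actual protocol.

```python
import json
import socket

def make_note_event(player, pitch, velocity, timestamp):
    """Encode one illustrative note event as a JSON datagram payload."""
    return json.dumps({"player": player, "pitch": pitch,
                       "velocity": velocity, "t": timestamp}).encode("utf-8")

# Minimal loopback demonstration: one socket stands in for a listening peer.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))            # the OS picks a free port
recv_sock.settimeout(5.0)
addr = recv_sock.getsockname()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(make_note_event("player-1", 60, 100, 0.0), addr)

data, _ = recv_sock.recvfrom(4096)
event = json.loads(data.decode("utf-8"))    # the peer reconstructs the event
recv_sock.close()
send_sock.close()
```

A real ensemble would broadcast to every peer and timestamp events against a shared clock so the recorded performance can be merged into a single file.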
Journal of New Music Research | 2017
Jordan B. L. Smith; Jun Kato; Satoru Fukayama; Graham Percival; Masataka Goto
There is considerable interest in music-based games and apps. However, in existing games, music generally serves as an accompaniment or as a reward for progress. We set out to design a game where paying attention to the music would be essential to making deductions and solving the puzzle. The result is the CrossSong Puzzle, a novel type of music-based logic puzzle that integrates musical and logical reasoning. The game presents a player with a grid of tiles, each representing a mash-up of excerpts from two different songs. The goal is to rearrange the tiles so that each row and column plays a continuous musical excerpt. To create puzzles, we implemented an algorithm to automatically identify a set of song fragments to fill a grid such that each tile contains an acceptable mash-up. We present several optimisations to speed up the search for high-quality grids. We also discuss the iterative design of the game’s interface and present the results of a user evaluation of the final design. Finally, we present some insights learned from the experience which we believe are important to developing music-based puzzle games that are entertaining, feasible, and genuinely challenging to one’s musical thinking.
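The core constraint, that each row and each column must play a continuous excerpt, can be stated compactly. In the sketch below, a tile pairs a fragment from a "row song" with a fragment from a "column song"; a grid is solved when every row's fragments are consecutive pieces of a single song, and likewise for every column. This is a simplified reconstruction of the constraint for illustration, not the authors' data structures or search algorithm.

```python
def is_solved(grid):
    """Check the row/column continuity constraint on a grid of tiles.

    A tile is ((row_song, i), (col_song, j)): it mashes up fragment i of a
    row song with fragment j of a column song.  The grid is solved when each
    row's excerpts are consecutive fragments of one song, and each column's
    excerpts are too.
    """
    def consecutive(excerpts):
        songs = {song for song, _ in excerpts}
        indices = [i for _, i in excerpts]
        return (len(songs) == 1 and
                indices == list(range(indices[0], indices[0] + len(indices))))

    rows_ok = all(consecutive([tile[0] for tile in row]) for row in grid)
    cols_ok = all(consecutive([tile[1] for tile in col]) for col in zip(*grid))
    return rows_ok and cols_ok
```

The puzzle generator's job is then to find real song fragments that fill the grid so that every tile is also an acceptable-sounding mash-up, which is where the search optimisations described in the paper come in.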
acm multimedia | 2013
Graham Percival; Nick Bailey; George Tzanetakis
This work improves the realism of synthesis and performance of string quartet music by generating audio through physical modelling of the violins, viola, and cello. To perform music with the physical models, virtual musicians interpret the musical score and generate actions which control the physical models. The resulting audio and haptic signals are examined with support vector machines, which adjust the bowing parameters in order to establish and maintain a desirable timbre. This intelligent feedback control is trained with human input, but after the initial training is completed, the virtual musicians perform autonomously. The system can synthesize and control different instruments of the same type (e.g., multiple distinct violins) and has been tested on two distinct string quartets (total of 8 violins, 2 violas, 2 cellos). In addition to audio, the system creates a video animation of the instruments performing the sheet music.
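The feedback-control idea, classify the current timbre and nudge a bowing parameter until the tone is acceptable, can be shown in miniature. The real system trains support vector machines on human-labelled audio and haptic features; in this sketch a trivial threshold rule stands in for the trained classifier, and the force values and step size are made-up illustrative numbers.

```python
def adjust_bow_force(classify, force, step=0.05, iterations=50):
    """Illustrative feedback loop over one bowing parameter.

    classify(force) plays the role of the trained SVM: it returns -1 for a
    weak, surfacey tone, +1 for a scratchy, over-pressed tone, and 0 for a
    desirable timbre.  The loop nudges the force until the verdict is 0.
    """
    for _ in range(iterations):
        verdict = classify(force)
        if verdict == 0:
            break
        force -= verdict * step   # too strong -> reduce force; too weak -> increase
    return force

def toy_classifier(force):
    """Stand-in for the SVM: pretend forces in [0.8, 1.0] give a good tone."""
    if force < 0.8:
        return -1
    if force > 1.0:
        return +1
    return 0
```

After the initial human-guided training, a loop of this shape is what lets the virtual musicians keep their own timbre in check while performing autonomously.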
acm multimedia | 2007
Graham Percival; Ye Wang; George Tzanetakis
human factors in computing systems | 2011
Yinsheng Zhou; Graham Percival; Xinxi Wang; Ye Wang; Shengdong Zhao
international computer music conference | 2007
Matthias Robine; Graham Percival; Mathieu Lagrange
acm multimedia | 2010
Yinsheng Zhou; Graham Percival; Xinxi Wang; Ye Wang; Shengdong Zhao
international symposium/conference on music information retrieval | 2007
Ajay Kapur; Graham Percival; Mathieu Lagrange; George Tzanetakis
Collaboration
National Institute of Advanced Industrial Science and Technology