Satoru Fukayama
University of Tokyo
Publication
Featured research published by Satoru Fukayama.
Journal of New Music Research | 2013
Stanislaw Andrzej Raczynski; Satoru Fukayama; Emmanuel Vincent
Most melody harmonization systems use the generative hidden Markov model (HMM), which models the relation between the hidden chords and the observed melody. Relations to other variables, such as tonality or metric structure, are handled by training multiple HMMs or are ignored. In this paper, we propose a discriminative approach that combines multiple probabilistic models of various musical variables via model interpolation. We evaluate our models in terms of their cross-entropy and their performance in harmonization experiments. The proposed model achieved up to 5% (absolute) higher chord root accuracy than the reference musicological rule-based harmonizer.
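The interpolation idea can be illustrated with a minimal sketch. This is a hedged example, not the authors' actual system: the two component models, their chord distributions, and the interpolation weights below are all hypothetical, and log-linear interpolation is one common way to combine probabilistic models discriminatively.

```python
import math

def interpolate_models(model_probs, weights):
    """Log-linearly interpolate chord distributions from several models.

    model_probs: list of dicts mapping chord label -> probability
    weights: interpolation weights, one per model
    """
    chords = set().union(*(p.keys() for p in model_probs))
    # Weighted sum of log-probabilities for each candidate chord.
    scores = {c: sum(w * math.log(p.get(c, 1e-12))
                     for p, w in zip(model_probs, weights))
              for c in chords}
    # Renormalize back into a probability distribution.
    z = sum(math.exp(s) for s in scores.values())
    return {c: math.exp(s) / z for c, s in scores.items()}

# Hypothetical per-model chord distributions for one melody position:
melody_model = {"C": 0.6, "G": 0.3, "F": 0.1}     # melody-conditioned model
tonality_model = {"C": 0.5, "G": 0.2, "F": 0.3}   # tonality-conditioned model
combined = interpolate_models([melody_model, tonality_model], [0.7, 0.3])
best = max(combined, key=combined.get)  # "C"
```

The weights control how much each musical variable influences the chord choice; in practice they would be tuned on held-out data.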
Archive | 2013
Tae Hun Kim; Satoru Fukayama; Takuya Nishimoto; Shigeki Sagayama
In this chapter, we discuss how to render expressive polyphonic piano music through a statistical approach. Generating polyphonic expression is an important element in achieving automatic expressive piano performance, since the piano is a polyphonic instrument. We start by discussing the features of polyphonic piano expression and present a method for modeling it based on an approximation involving melodies and harmonies. An experimental evaluation indicates that performances generated with the proposed method achieved polyphonic expression and created an impression of expressiveness. In addition, performances generated with models trained on different performances were perceptually distinguishable by human listeners. Finally, we introduce an automatic expressive piano system called Polyhymnia, which won first place in the autonomous section of the Performance Rendering Contest for Computer Systems (RenCon) 2010.
Journal of the Acoustical Society of America | 2016
Satoru Fukayama; Masataka Goto
We present a method for music emotion recognition that adaptively aggregates regression models. Music emotion recognition is the task of estimating how music affects the emotion of a listener; the approach works by mapping acoustic features into a space that represents emotions. Previous research has centered on finding effective acoustic features, or on applying multi-stage regression to aggregate results obtained with different acoustic features. However, once the regression models are trained, the aggregation is fixed and cannot adapt to acoustic signals with different musical properties. We show that the most effective feature differs depending on which emotion is being estimated, and propose a method that adapts the importance of each feature when estimating emotion. We exploit the variance obtained with Gaussian process regression to measure the confidence of the estimated result from each regression model. The formula for aggregating results from different r...
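One standard way to turn per-model predictive variance into an adaptive weight is inverse-variance weighting, sketched below. The numbers are hypothetical and this is only an illustration of the confidence-weighting idea, not the paper's exact aggregation formula (which is truncated in the abstract above).

```python
def aggregate(estimates, variances):
    """Combine per-model emotion estimates, weighting each estimate by the
    inverse of its predictive variance (lower variance = higher confidence).
    """
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, estimates)) / total

# Hypothetical valence estimates from three feature-specific GP regressors,
# with the predictive variance each Gaussian process reports:
estimates = [0.2, 0.6, 0.4]
variances = [0.05, 0.20, 0.10]
result = aggregate(estimates, variances)  # pulled toward the low-variance model
```

Because the variances come from each Gaussian process at prediction time, the weights change per input signal, which is what makes the aggregation adaptive rather than fixed.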
7th Sound and Music Computing Conference, SMC 2010 | 2010
Satoru Fukayama; Kei Nakatsuma; Shinji Sako; Takuya Nishimoto; Shigeki Sagayama
International Conference on Entertainment Computing | 2009
Satoru Fukayama; Kei Nakatsuma; Shinji Sako; Tae Hun Kim; Si Wei Qin; Takuho Nakano; Takuya Nishimoto; Shigeki Sagayama
International Symposium/Conference on Music Information Retrieval | 2015
Graham Percival; Satoru Fukayama; Masataka Goto
International Computer Music Conference | 2012
Satoru Fukayama; Daisuke Saito; Shigeki Sagayama
New Interfaces for Musical Expression | 2011
Tae Hun Kim; Satoru Fukayama; Takuya Nishimoto; Shigeki Sagayama
International Symposium/Conference on Music Information Retrieval | 2017
Shunya Ariga; Satoru Fukayama; Masataka Goto
Archive | 2015
Jordan B. L. Smith; Graham Percival; Jun Kato; Masataka Goto; Satoru Fukayama
Collaboration
Satoru Fukayama's collaborations include the National Institute of Advanced Industrial Science and Technology.