
Publications


Featured research published by Makiko Sadakata.


Journal of Computer Assisted Learning | 2006

Development of real-time visual feedback assistance in singing training: a review

D. Hoppe; Makiko Sadakata; Peter Desain

This article reviews four real-time visual feedback computer tools for singing lessons (SINGAD, ALBERT, SING & SEE, and WinSINGAD) and the research carried out to evaluate their usefulness. We report on the development of user functions and the usability of these computer-assisted learning tools. Both quantitative and qualitative studies confirm the efficacy of real-time visual feedback in improving singing abilities. Based on these findings, we suggest further quantitative investigations of (1) the detailed effect of visual feedback on performance accuracy and on the learning process, and (2) the interactions between improvement of musical performance and the type of visual feedback, the amount of information it presents, the skill level of the user, and the teacher's role.


NeuroImage | 2011

Name that tune: decoding music from the listening brain.

Rebecca Schaefer; Jason Farquhar; Yvonne Blokland; Makiko Sadakata; Peter Desain

In the current study we use electroencephalography (EEG) to detect heard music from the brain signal, hypothesizing that the time structure in music makes it especially suitable for decoding perception from EEG. Excluding music with vocals, we classified the perception of seven different musical fragments of about three seconds each, both within and across participants, using only time-domain information (the event-related potential, ERP). The best individual results are 70% correct in a seven-class problem using single trials, and with multiple trials we achieve 100% correct after six presentations of the stimulus. When classifying across participants, a maximum rate of 53% was reached, supporting a general representation of each musical fragment across participants. While for some music stimuli the amplitude envelope correlated well with the ERP, this was not true for all stimuli. Aspects of the stimuli that may contribute to the differences between the EEG responses to the pieces of music are discussed.
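The single-trial, time-domain classification described in this abstract can be illustrated with a toy nearest-centroid (template-matching) sketch: average the training trials of each class into an ERP template, then assign a new trial to the class whose template it correlates with best. The data, class labels, and correlation metric below are hypothetical stand-ins, not the authors' actual pipeline.

```python
# Toy sketch of single-trial ERP classification by template matching
# (nearest centroid under a correlation metric). All data are synthetic
# stand-ins; the study's real pipeline is more elaborate.

def correlate(a, b):
    """Pearson correlation between two equal-length signals."""
    n = len(a)
    ma = sum(a) / n
    mb = sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb) if va and vb else 0.0

def build_templates(trials_by_class):
    """Average the training trials of each class into one ERP template."""
    templates = {}
    for label, trials in trials_by_class.items():
        n = len(trials)
        templates[label] = [sum(t[i] for t in trials) / n
                            for i in range(len(trials[0]))]
    return templates

def classify(trial, templates):
    """Assign the class whose template correlates best with the trial."""
    return max(templates, key=lambda label: correlate(trial, templates[label]))
```

Averaging multiple trials before classification, as in the 100%-after-six-presentations result, amounts to passing the mean of several trials to `classify` instead of a single one.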


Acta Psychologica | 2011

Enhanced perception of various linguistic features by musicians: A cross-linguistic study

Makiko Sadakata; Kaoru Sekiyama

Two cross-linguistic experiments comparing musicians and non-musicians were performed in order to examine whether musicians have enhanced perception of specific acoustical features of speech in a second language (L2). These discrimination and identification experiments examined the perception of various speech features; namely, the timing and quality of Japanese consonants, and the quality of Dutch vowels. We found that musical experience was more strongly associated with discrimination performance than with identification performance. The enhanced perception was observed not only with respect to L2 but also to L1, and it was most pronounced for Japanese consonant timing. These findings suggest the following: 1) musicians exhibit enhanced early acoustical analysis of speech, 2) musical training does not automatically enhance the perception of all acoustic features equally, and 3) musicians may enjoy an advantage in the perception of acoustical features that are important in both language and music, such as pitch and timing.


Journal of the Acoustical Society of America | 2013

High stimulus variability in nonnative speech learning supports formation of abstract categories: Evidence from Japanese geminates

Makiko Sadakata; James M. McQueen

This study reports effects of a high-variability training procedure on nonnative learning of a Japanese geminate-singleton fricative contrast. Thirty native speakers of Dutch took part in a 5-day training procedure in which they identified geminate and singleton variants of the Japanese fricative /s/. Participants were trained with either many repetitions of a limited set of words recorded by a single speaker (low-variability training) or with fewer repetitions of a more variable set of words recorded by multiple speakers (high-variability training). Both types of training enhanced identification of speech but not of nonspeech materials, indicating that learning was domain specific. High-variability training led to superior performance in identification but not in discrimination tests, and supported better generalization of learning as shown by transfer from the trained fricatives to the identification of untrained stops and affricates. Variability thus helps nonnative listeners to form abstract categories rather than to enhance early acoustic analysis.


Journal of New Music Research | 2008

Real-Time Visual Feedback for Learning to Perform Short Rhythms with Variations in Timing and Loudness

Makiko Sadakata; David Hoppe; Alex Brandmeyer; Renee Timmers; Peter Desain

According to learning theories and empirical observations, communication between teachers and students is a crucial factor in effective learning of musical expression. One possibility for improving this communication could be the introduction of visual feedback (VFB) in the lesson. In the current study, a new type of real-time VFB is proposed, which represents changes in acoustical parameters (loudness and timing) as parameters of an abstract visual image (size and shape). We evaluated the effects of using VFB on imitations of timing and loudness deviations in simple rhythmic patterns. We also studied how learned skills transfer to the same task with new rhythms, as well as to new tasks. Twenty-four amateur musicians participated in the experiment, which included both imitation and perception tasks. Results indicated that the VFB helped improve the imitation of loudness patterns, while it did not enhance the learning of timing patterns. Analysis of transfer-of-learning effects indicated that learned rhythm-imitation skills transferred when tasks were similar: skills transferred to the same task (imitating new rhythms) but not to a new task (perception).


Frontiers in Psychology | 2014

Individual aptitude in Mandarin lexical tone perception predicts effectiveness of high-variability training

Makiko Sadakata; James M. McQueen

Although the high-variability training method can enhance learning of non-native speech categories, this can depend on individuals’ aptitude. The current study asked how general the effects of perceptual aptitude are by testing whether they occur with training materials spoken by native speakers and whether they depend on the nature of the to-be-learned material. Forty-five native Dutch listeners took part in a 5-day training procedure in which they identified bisyllabic Mandarin pseudowords (e.g., asa) pronounced with different lexical tone combinations. The training materials were presented to different groups of listeners at three levels of variability: low (many repetitions of a limited set of words recorded by a single speaker), medium (fewer repetitions of a more variable set of words recorded by three speakers), and high (similar to medium but with five speakers). Overall, variability did not influence learning performance, but this was due to an interaction with individuals’ perceptual aptitude: increasing variability hindered improvements in performance for low-aptitude perceivers while it helped improvements in performance for high-aptitude perceivers. These results show that the previously observed interaction between individuals’ aptitude and effects of degree of variability extends to natural tokens of Mandarin speech. This interaction was not found, however, in a closely matched study in which native Dutch listeners were trained on the Japanese geminate/singleton consonant contrast. This may indicate that the effectiveness of high-variability training depends not only on individuals’ aptitude in speech perception but also on the nature of the categories being acquired.


Psychological Research-psychologische Forschung | 2011

Learning expressive percussion performance under different visual feedback conditions

Alex Brandmeyer; Renee Timmers; Makiko Sadakata; Peter Desain

A study was conducted to test the effect of two different forms of real-time visual feedback on expressive percussion performance. Conservatory percussion students performed imitations of recorded teacher performances while receiving either high-level feedback on the expressive style of their performances, low-level feedback on the timing and dynamics of the performed notes, or no feedback. The high-level feedback was based on a Bayesian analysis of the performances, while the low-level feedback was based on the raw participant timing and dynamics data. Results indicated that neither form of feedback led to significantly smaller timing and dynamics errors. However, high-level feedback did lead to a higher proficiency in imitating the expressive style of the target performances, as indicated by a probabilistic measure of expressive style. We conclude that, while potentially disruptive to timing processes involved in music performance due to extraneous cognitive load, high-level visual feedback can improve participant imitations of expressive performance features.


Psychology of Music | 2017

Attention to affective audio-visual information: Comparison between musicians and non-musicians

Janne Weijkamp; Makiko Sadakata

Individuals with more musical training repeatedly demonstrate enhanced auditory perception abilities. The current study examined how these enhanced auditory skills interact with attention to affective audio-visual stimuli. A total of 16 participants with more than 5 years of musical training (musician group) and 16 participants with less than 2 years of musical training (non-musician group) took part in a version of the audio-visual emotional Stroop test, using happy, neutral, and sad emotions. Participants were presented with congruent and incongruent combinations of face and voice stimuli while judging the emotion of either the face or the voice. As predicted, musicians were less susceptible than non-musicians to interference from visual information on auditory emotion judgments: they were more accurate at judging auditory emotions in the presence of both congruent and incongruent visual information. Musicians were also more accurate than non-musicians at identifying visual emotions when presented with concurrent auditory information. Thus, musicians were less influenced than non-musicians by congruent and incongruent information in a non-target modality. The results suggest that musical training influences audio-visual information processing.


Frontiers in Human Neuroscience | 2016

The Enhanced Musical Rhythmic Perception in Second Language Learners

M. Paula Roncaglia-Denissen; Drikus A. Roor; Ao Chen; Makiko Sadakata

Previous research suggests that mastering languages with distinct rather than similar rhythmic properties enhances musical rhythmic perception. This study investigates whether learning a second language (L2) contributes to enhanced musical rhythmic perception in general, regardless of the rhythmic properties of the first and second languages. Additionally, we investigated whether this perceptual enhancement could be alternatively explained by exposure to musical rhythmic complexity, such as the use of compound meter in Turkish music. Finally, we investigated whether an enhancement of musical rhythmic perception could be observed among L2 learners whose first language relies heavily on pitch information, as is the case with tonal languages. We therefore tested Turkish, Dutch, and Mandarin L2 learners of English and Turkish monolinguals on their musical rhythmic perception. Participants' phonological and working memory capacities, melodic aptitude, years of formal musical training, and daily exposure to music were assessed to account for cultural and individual differences that could affect their rhythmic ability. Our results suggest that mastering an L2, rather than exposure to musical rhythmic complexity, explains individuals' enhanced musical rhythmic perception. An even stronger enhancement was observed for L2 learners whose first and second languages differ in their rhythmic properties, as the enhanced performance of Turkish compared with Dutch L2 learners of English seems to suggest. This stronger enhancement appears even among L2 learners whose first language relies heavily on pitch information, as the performance of Mandarin L2 learners of English indicates. Our findings provide further support for cognitive transfer between the language and music domains.


Frontiers in Neuroscience | 2013

Decoding of single-trial auditory mismatch responses for online perceptual monitoring and neurofeedback

Alex Brandmeyer; Makiko Sadakata; Loukianos Spyrou; James M. McQueen; Peter Desain

Multivariate pattern classification methods are increasingly applied to neuroimaging data in the context of both fundamental research and in brain-computer interfacing approaches. Such methods provide a framework for interpreting measurements made at the single-trial level with respect to a set of two or more distinct mental states. Here, we define an approach in which the output of a binary classifier trained on data from an auditory mismatch paradigm can be used for online tracking of perception and as a neurofeedback signal. The auditory mismatch paradigm is known to induce distinct perceptual states related to the presentation of high- and low-probability stimuli, which are reflected in event-related potential (ERP) components such as the mismatch negativity (MMN). The first part of this paper illustrates how pattern classification methods can be applied to data collected in an MMN paradigm, including discussion of the optimization of preprocessing steps, the interpretation of features and how the performance of these methods generalizes across individual participants and measurement sessions. We then go on to show that the output of these decoding methods can be used in online settings as a continuous index of single-trial brain activation underlying perceptual discrimination. We conclude by discussing several potential domains of application, including neurofeedback, cognitive monitoring and passive brain-computer interfaces.
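The core idea of this abstract, turning a binary classifier's output into a graded single-trial index, can be sketched minimally. Below, a mean-difference linear discriminant (a crude, hypothetical stand-in for the trained classifiers described here, with synthetic two-feature "trials") produces a continuous score whose sign and magnitude could drive online monitoring or neurofeedback.

```python
# Minimal sketch: a mean-difference linear discriminant whose continuous
# output serves as a graded single-trial index (e.g., for neurofeedback).
# Synthetic illustration only, not the authors' trained classifier.

def train_discriminant(standards, deviants):
    """Weight vector = difference of class means; bias centers the boundary
    at the midpoint between the two class means."""
    n_feat = len(standards[0])
    mean_s = [sum(t[i] for t in standards) / len(standards) for i in range(n_feat)]
    mean_d = [sum(t[i] for t in deviants) / len(deviants) for i in range(n_feat)]
    w = [d - s for d, s in zip(mean_d, mean_s)]
    midpoint = [(d + s) / 2 for d, s in zip(mean_d, mean_s)]
    b = -sum(wi * mi for wi, mi in zip(w, midpoint))
    return w, b

def score(trial, w, b):
    """Continuous index: positive leans 'deviant', negative 'standard';
    the magnitude gives a graded confidence usable as a feedback signal."""
    return sum(wi * xi for wi, xi in zip(w, trial)) + b
```

Thresholding `score` at zero recovers the binary decision, while the raw value provides the continuous index of perceptual discrimination that the online setting requires.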

Collaboration


Dive into Makiko Sadakata's collaborations.

Top Co-Authors

Peter Desain, Radboud University Nijmegen
Alex Brandmeyer, Radboud University Nijmegen
D. Hoppe, Radboud University Nijmegen
Jason Farquhar, Radboud University Nijmegen