Mayumi Adachi
Hokkaido University
Publications
Featured research published by Mayumi Adachi.
Journal of Experimental Psychology: General | 2002
E. Glenn Schellenberg; Mayumi Adachi; Kelly T. Purdy; Margaret C. McKinnon
Melodic expectancies among children and adults were examined. In Experiment 1, adults, 11-year-olds, and 8-year-olds rated how well individual test tones continued fragments of melodies. In Experiment 2, 11-, 8-, and 5-year-olds sang continuations to 2-tone stimuli. Response patterns were analyzed using 2 models of melodic expectancy. Despite having fewer predictor variables, the 2-factor model (E. G. Schellenberg, 1997) equaled or surpassed the implication-realization model (E. Narmour, 1990) in predictive accuracy. Listeners of all ages expected the next tone in a melody to be proximate in pitch to the tone heard most recently. Older listeners also expected reversals of pitch direction, specifically for tones that changed direction after a disruption of proximity and for tones that formed symmetric patterns.
Psychology of Music | 1998
Mayumi Adachi; Sandra E. Trehub
Children 4-12 years of age (N = 160) were recorded (audio and video) as they sang two versions of a familiar song, once in an attempt to make an adult listener happy and once to make her sad. Coding of gestural, vocal, linguistic and musical devices revealed that children used all of these means to portray contrastive emotions. Regardless of age or singing skill, children relied primarily on expressive devices used in interpersonal communication (e.g. tempo, facial expression) and made relatively little use of music-specific devices (e.g. legato). Moreover, they used a greater variety of expressive devices in their sad performances than in their happy performances. Finally, age-related changes reflected the influence of maturity, socialisation and musical knowledge.
Music Perception: An Interdisciplinary Journal | 2000
Mayumi Adachi; Sandra E. Trehub
Adults and children were exposed to separate visual and auditory cues from paired renditions of familiar songs by young, untrained singers who attempted to express happiness and sadness. Same-age children and adults decoded the expressive intentions of 8- to 10-year-old singers with comparable accuracy (Experiment 1). For performances by 6- to 7-year-old singers, same-age children were less accurate decoders than were adults (Experiment 2). The younger performers also provided poorer cues to the intended emotion than did the older performers. Moreover, 6- to 7-year-olds were less accurate than 8- to 10-year-olds at decoding the performances of 8- to 10-year-old singers (Experiment 3). The findings indicate that, although young children successfully produce and interpret happy and sad versions of familiar songs, 6- to 7-year-old children are less proficient than are 8- to 10-year-old children and adults.
Music Perception: An Interdisciplinary Journal | 2012
Haruka Shoda; Mayumi Adachi
We explored how a pianist manipulates his upper body according to his interpretation of music. We asked a professional pianist to perform artistic, deadpan, and exaggerated renditions of two structurally contrasting pieces. The pianist's affective interpretations clearly differentiated the three renditions. The artistic rendition, representing the true nature of the piece, was compared to the contrived deadpan and exaggerated renditions. The pianist's range of body movement in the artistic rendition differed from the other two for a fast, energetic piece, whereas it differed only from the deadpan rendition for a slow, romantic piece. The pianist highlighted the structural contrasts within the artistic rendition by manipulating his range of body movement and by coordinating the variations between body movement and the temporal/dynamical projection of tones.
PLOS ONE | 2016
Haruka Shoda; Mayumi Adachi; Tomohiro Umeda
We investigated how audience members' physiological reactions differ as a function of listening context (i.e., live versus recorded music). Thirty-seven audience members were assigned to one of seven pianists' performances and listened to his/her live performances of six pieces (fast and slow pieces by Bach, Schumann, and Debussy). Approximately 10 weeks after the live performance, each audience member returned to the same room and listened to the recorded performances of the same pianist via speakers. We recorded the audience members' electrocardiograms while they listened to the performances in both conditions, and analyzed their heart rates and the spectral features of heart-rate variability (i.e., HF/TF, LF/HF). Results showed that the audience's heart rate was higher for the faster piece than the slower piece only in the live condition. Compared with the recorded condition, the audience's sympathovagal balance (LF/HF) was lower while their vagal nervous system (HF/TF) was activated more in the live condition, which appears to suggest that sharing the ongoing musical moments with the pianist reduces the audience's physiological stress. The results are discussed in terms of the audience's superior attention and temporal entrainment to live performance.
Japanese Psychological Research | 2004
Mayumi Adachi; Sandra E. Trehub; Jun-ichi Abe
Archive | 2012
Mayumi Adachi; Sandra E. Trehub
Psychomusicology: Music, Mind and Brain | 2011
Mayumi Adachi; Sandra E. Trehub
Archive | 2004
Mayumi Adachi; Yukari Chino
Archive | 2012
Mayumi Adachi