Lena Quinto
Macquarie University
Publications
Featured research published by Lena Quinto.
Cognition & Emotion | 2008
William Forde Thompson; Frank A. Russo; Lena Quinto
We examined whether facial expressions of performers influence the emotional connotations of sung materials, and whether attention is implicated in the audio-visual integration of affective cues. In Experiment 1, participants judged the emotional valence of audio-visual presentations of sung intervals. Performances were edited such that auditory and visual information conveyed congruent or incongruent affective connotations. In the single-task condition, participants judged the emotional connotation of sung intervals. In the dual-task condition, participants judged the emotional connotation of intervals while performing a secondary task. Judgements were influenced by melodic cues and facial expressions, and the effects were undiminished by the secondary task. Experiment 2 involved identical conditions, but participants were instructed to base judgements on auditory information alone. Again, facial expressions influenced judgements, and the effect was undiminished by the secondary task. The results suggest that visual aspects of music performance are automatically and preattentively registered and integrated with auditory cues.
Attention Perception & Psychophysics | 2010
Lena Quinto; William Forde Thompson; Frank A. Russo; Sandra E. Trehub
The importance of visual cues in speech perception is illustrated by the McGurk effect, whereby a speaker’s facial movements affect speech perception. The goal of the present study was to evaluate whether the McGurk effect is also observed for sung syllables. Participants heard and saw sung instances of the syllables /ba/ and /ga/ and then judged the syllable they perceived. Audio-visual stimuli were congruent or incongruent (e.g., auditory /ba/ presented with visual /ga/). The stimuli were presented as spoken, sung in an ascending and descending triad (C E G G E C), or sung in an ascending and descending triad that returned to a semitone above the tonic (C E G G E C#). Results revealed no differences in the proportion of fusion responses between spoken and sung conditions, confirming that cross-modal phonemic information is integrated similarly in speech and song.
Frontiers in Psychology | 2013
Lena Quinto; William Forde Thompson; Felicity Louise Keating
Many acoustic features convey emotion similarly in speech and music. Researchers have established that acoustic features such as pitch height, tempo, and intensity carry important emotional information in both domains. In this investigation, we examined the emotional significance of melodic and rhythmic contrasts between successive syllables or tones in speech and music, referred to as Melodic Interval Variability (MIV) and the normalized Pairwise Variability Index (nPVI). The spoken stimuli were 96 tokens expressing the emotions of irritation, fear, happiness, sadness, tenderness, or no emotion. The music stimuli were 96 phrases, played with or without performance expression and composed with the intention of communicating the same emotions. Results showed that nPVI, but not MIV, operates similarly in music and speech. Spoken stimuli, but not musical stimuli, were characterized by changes in MIV as a function of intended emotion. The results suggest that these measures may signal emotional intentions differently in speech and music.
Psychology of Music | 2014
Lena Quinto; William Forde Thompson; Alan Taylor
In this investigation, eight highly trained musicians communicated emotions through composition, performance expression, or the combination of the two. In the performance condition, they performed melodies with the intention of expressing six target emotions: anger, fear, happiness, neutral, sadness, and tenderness. In the composition condition, they composed melodies to express the same six emotions. The notated compositions were then played digitally without performance expression. In the combined condition, musicians performed the melodies they composed to convey the target emotions. Forty-two listeners heard the stimuli and attempted to decode the emotions in a forced-choice paradigm. Decoding accuracy varied significantly as a function of the channel of communication. Fear was comparatively well decoded in the composition condition, whereas anger was comparatively well decoded in the performance condition. Happiness and sadness were comparatively well decoded in all three channels of communication. A principal component analysis of the cues used by musicians clarified the distinct approaches adopted in composition and performance to differentiate emotional intentions. The results confirm that composition and performance involve the manipulation of distinct cues and have different emotional capabilities.
Frontiers in Psychology | 2014
Lena Quinto; William Forde Thompson; Christian Kroos; Caroline Palmer
Archive | 2011
William Forde Thompson; Lena Quinto
International Journal of Psychophysiology | 2016
Britta Biedermann; Peter de Lissa; Yatin Mahajan; Vince Polito; Nicolas A Badcock; Michael H. Connors; Lena Quinto; Linda Larsen; Genevieve McArthur
Psychomusicology: Music, Mind and Brain | 2013
Lena Quinto; William Forde Thompson
International Journal of Synthetic Emotions | 2012
Lena Quinto; William Forde Thompson
Archive | 2006
William Forde Thompson; Frank A. Russo; Lena Quinto