Publication


Featured research published by Steven R. Livingstone.


Computer Music Journal | 2010

Changing musical emotion: A computational rule system for modifying score and performance

Steven R. Livingstone; Ralf Muhlberger; Andrew R. Brown; William Forde Thompson

When communicating emotion in music, composers and performers encode their expressive intentions through the control of basic musical features such as pitch, loudness, timbre, mode, and articulation. The extent to which emotion can be controlled through the systematic manipulation of these features has not been fully examined. In this paper we present CMERS, a Computational Music Emotion Rule System for the control of perceived musical emotion that modifies features at the levels of score and performance in real time. CMERS's performance was evaluated in two rounds of perceptual testing. In Experiment 1, 20 participants continuously rated the perceived emotion of 15 music samples generated by CMERS. Three musical works were used, each with five emotional variations (normal, happy, sad, angry, and tender). The emotion intended by CMERS was correctly identified 78% of the time, with significant shifts in valence and arousal also recorded, regardless of each work's original emotion.
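The published rule set is not reproduced here, but a minimal hypothetical Python sketch can illustrate the general idea of a rule table that maps a target emotion to coarse score-level (register) and performance-level (tempo, loudness) adjustments. All rule values, names, and the Note representation below are assumptions for illustration; score-level mode changes, which CMERS also performs, are omitted.

from dataclasses import dataclass

@dataclass
class Note:
    pitch: int       # MIDI note number
    velocity: int    # MIDI velocity, used here as a loudness proxy
    duration: float  # duration in seconds

# Illustrative rule table (assumed values, not the published CMERS rules):
# each emotion maps to a tempo scaling, a loudness shift, and a register shift.
RULES = {
    "happy":  {"tempo_scale": 1.15, "velocity_shift": +10, "register_shift": +12},
    "sad":    {"tempo_scale": 0.80, "velocity_shift": -15, "register_shift": -12},
    "angry":  {"tempo_scale": 1.25, "velocity_shift": +20, "register_shift": 0},
    "tender": {"tempo_scale": 0.90, "velocity_shift": -10, "register_shift": 0},
}

def apply_rules(notes, emotion):
    """Return a new note list with the target emotion's adjustments applied."""
    rule = RULES[emotion]
    return [
        Note(
            pitch=n.pitch + rule["register_shift"],
            velocity=max(1, min(127, n.velocity + rule["velocity_shift"])),
            duration=n.duration / rule["tempo_scale"],  # faster tempo -> shorter notes
        )
        for n in notes
    ]

melody = [Note(60, 64, 0.5), Note(64, 64, 0.5), Note(67, 64, 1.0)]
print(apply_rules(melody, "sad"))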


Digital Creativity | 2007

Controlling musical emotionality: an affective computational architecture for influencing musical emotions

Steven R. Livingstone; Ralf Muhlberger; Andrew R. Brown; Andrew Loch

Emotions are a key part of creative endeavours, and a core problem for computational models of creativity. In this paper we discuss an affective computing architecture for the dynamic modification of music with a view to predictably affecting induced musical emotions. Extending previous work on the modification of perceived emotions in music, our system architecture aims to provide reliable control of both perceived and induced musical emotions: its emotionality. A rule-based system is used to modify a subset of musical features at two processing levels, namely score and performance. The interactive model leverages sensed listener affect, adapting the emotionality of the music modifications in real time to help the listener reach a desired emotional state.
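As a rough illustration of the closed-loop idea described above, the sketch below nudges a rule-intensity setting toward a target affective state based on sensed listener affect. The function, parameter names, and the simple proportional update are assumptions for illustration, not part of the published architecture.

def update_rule_intensity(current, sensed, target, gain=0.2):
    """Move rule intensity for each affect dimension toward the target state.

    current, sensed, target: dicts with 'valence' and 'arousal' values in [-1, 1].
    Returns a new intensity dict (a simple proportional update, for illustration).
    """
    return {
        dim: current[dim] + gain * (target[dim] - sensed[dim])
        for dim in ("valence", "arousal")
    }

# Example: the listener is calmer than desired, so arousal-raising rules intensify.
intensity = {"valence": 0.0, "arousal": 0.0}
sensed = {"valence": 0.1, "arousal": -0.4}
target = {"valence": 0.3, "arousal": 0.5}
print(update_rule_intensity(intensity, sensed, target))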


Psychonomic Bulletin & Review | 2010

Facial expressions of singers influence perceived pitch relations

William Forde Thompson; Frank A. Russo; Steven R. Livingstone

In four experiments, we examined whether facial expressions used while singing carry musical information that can be “read” by viewers. In Experiment 1, participants saw silent video recordings of sung melodic intervals and judged the size of the interval they imagined the performers to be singing. Participants discriminated interval sizes on the basis of facial expression and discriminated large from small intervals when only head movements were visible. Experiments 2 and 3 confirmed that facial expressions influenced judgments even when the auditory signal was available. When matched with the facial expressions used to perform a large interval, audio recordings of sung intervals were judged as being larger than when matched with the facial expressions used to perform a small interval. The effect was not diminished when a secondary task was introduced, suggesting that audio-visual integration is not dependent on attention. Experiment 4 confirmed that the secondary task reduced participants’ ability to make judgments that require conscious attention. The results provide the first evidence that facial expressions influence perceived pitch relations.


International Conference on Acoustics, Speech, and Signal Processing | 2014

Automatic detection of expressed emotion in Parkinson's Disease

Shunan Zhao; Frank Rudzicz; Leonardo G. Carvalho; Cesar Marquez-Chin; Steven R. Livingstone

Patients with Parkinson's Disease (PD) frequently exhibit deficits in the production of emotional speech. In this paper, we examine the classification of emotional speech in patients with PD and the classification of PD speech itself. Participants were recorded speaking short statements with different emotional prosody, which were classified with three methods (naïve Bayes, random forests, and support vector machines) using 209 unique auditory features. Feature sets were reduced using simple statistical testing. We achieve accuracies of 65.5% in classifying among the emotions and 73.33% in classifying PD versus control speech. These results may assist in the future development of automated early-detection systems for diagnosing patients with PD.
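The paper's code is not available here; the scikit-learn sketch below shows the general shape of the described pipeline, with statistical feature reduction followed by naïve Bayes, random forest, and SVM classifiers evaluated by cross-validation. The random placeholder data, the choice of a univariate F-test for reduction, and the value k=30 are assumptions.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 209))     # placeholder: 120 utterances x 209 auditory features
y = rng.integers(0, 2, size=120)    # placeholder: PD vs. control labels

classifiers = {
    "naive Bayes": GaussianNB(),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(kernel="rbf"),
}

for name, clf in classifiers.items():
    # Univariate F-test keeps the 30 most discriminative features (assumed k).
    pipe = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=30), clf)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")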


Quarterly Journal of Experimental Psychology | 2015

Common cues to emotion in the dynamic facial expressions of speech and song

Steven R. Livingstone; William Forde Thompson; Marcelo M. Wanderley; Caroline Palmer

Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements, each with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech–song differences. Vocalists' jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and speech–song. Vocalists' emotional movements extended beyond vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists' facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotion identification in voice-only, face-only, and face-and-voice recordings. Emotions were identified poorly in voice-only song but accurately in all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song, while the two channels were equivalent in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, as well as differences in their perception and acoustic-motor production.


Frontiers in Psychology | 2014

The influence of vocal training and acting experience on measures of voice quality and emotional genuineness

Steven R. Livingstone; Deanna Choi; Frank A. Russo

Vocal training through singing and acting lessons is known to modify acoustic parameters of the voice. While the effects of singing training have been well documented, the role of acting experience on the singing voice remains unclear. In two experiments, we used linear mixed models to examine the relationships between the relative amounts of acting and singing experience and the acoustics and perception of the male singing voice. In Experiment 1, 12 male vocalists were recorded while singing with five different emotions, each at two intensities. Acoustic measures of pitch accuracy, jitter, and harmonics-to-noise ratio (HNR) were examined. Decreased pitch accuracy and increased jitter, indicative of a lower "voice quality," were associated with more years of acting experience, while increased pitch accuracy was associated with more years of singing lessons. We hypothesized that the acoustic deviations exhibited by more experienced actors were an intentional technique to increase the genuineness, or truthfulness, of their emotional expressions. In Experiment 2, listeners rated the vocalists' emotional genuineness. Vocalists with more years of acting experience were rated as more genuine than vocalists with less acting experience; no such relationship was found for singing training. Increased genuineness was associated with decreased pitch accuracy, increased jitter, and a higher HNR. These effects may represent a shifting of priorities by male vocalists with acting experience, emphasizing emotional genuineness over pitch accuracy and voice quality in their singing performances.
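A schematic sketch of the kind of linear mixed model described above, written with statsmodels: an acoustic measure (here jitter) predicted by years of acting and singing experience, with a random intercept per vocalist. The variable names, synthetic data, and effect sizes are assumptions for illustration only, not the study's data or code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_vocalists, n_trials = 12, 10
df = pd.DataFrame({
    "vocalist": np.repeat(np.arange(n_vocalists), n_trials),
    "acting_years": np.repeat(rng.integers(0, 15, n_vocalists), n_trials),
    "singing_years": np.repeat(rng.integers(0, 15, n_vocalists), n_trials),
})
# Synthetic jitter values loosely following the reported direction of effects.
df["jitter"] = (0.5 + 0.02 * df["acting_years"] - 0.01 * df["singing_years"]
                + rng.normal(scale=0.1, size=len(df)))

# Fixed effects for experience, random intercept grouped by vocalist.
model = smf.mixedlm("jitter ~ acting_years + singing_years",
                    data=df, groups=df["vocalist"])
print(model.fit().summary())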


Proceedings of the National Academy of Sciences of the United States of America | 2017

Body sway reflects leadership in joint music performance

Andrew Chang; Steven R. Livingstone; Dan J. Bosnyak; Laurel J. Trainor

Significance: People perform tasks in coordination with others in daily life, but the mechanisms are not well understood. Using Granger causality models to examine string quartet dynamics, we demonstrated that musicians assigned as leaders affect other performers more than musicians assigned as followers. These effects were present during performance, when musicians could only hear each other, but were magnified when they could also see each other, indicating that both auditory and visual cues affect nonverbal social interactions. Furthermore, the overall degree of coupling between musicians was positively correlated with ratings of performance success. Thus, we have developed a method for measuring nonverbal interaction in complex situations and have shown that interaction dynamics are affected by social relations and perceptual cues.

The cultural and technological achievements of the human species depend on complex social interactions. Nonverbal interpersonal coordination, or joint action, is a crucial element of social interaction, but the dynamics of nonverbal information flow among people are not well understood. We used joint music making in string quartets, a complex, naturalistic nonverbal behavior, as a model system. Using motion capture, we recorded body sway simultaneously in four musicians, which reflected real-time interpersonal information sharing. We used Granger causality to analyze predictive relationships among the motion time series of the players to determine the magnitude and direction of information flow among the players. We experimentally manipulated which musician was the leader (followers were not informed who was leading) and whether they could see each other, to investigate how these variables affect information flow. We found that assigned leaders exerted significantly greater influence on others and were less influenced by others compared with followers. This effect was present whether or not they could see each other, but was enhanced with visual information, indicating that visual as well as auditory information is used in musical coordination. Importantly, performers' ratings of the "goodness" of their performances were positively correlated with the overall degree of body sway coupling, indicating that communication through body sway reflects perceived performance success. These results confirm that information sharing in a nonverbal joint action task occurs through both auditory and visual cues and that the dynamics of information flow are affected by changing group relationships.
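The sketch below is not the paper's analysis pipeline, but it illustrates the core idea with statsmodels: a pairwise Granger causality test of whether one (synthetic) body-sway time series helps predict another. The signals, the built-in lag of 5 frames, and the maximum tested lag of 10 are arbitrary choices for the example.

import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(2)
n = 600
leader = rng.normal(size=n).cumsum()                            # synthetic sway (random walk)
follower = np.roll(leader, 5) + rng.normal(scale=2.0, size=n)   # lagged, noisy copy of the leader

# Column order matters: the test asks whether column 2 Granger-causes column 1.
data = np.column_stack([follower, leader])
results = grangercausalitytests(data, maxlag=10)

# F-test at lag 5 (the lag built into the synthetic follower signal).
f_stat, p_value, _, _ = results[5][0]["ssr_ftest"]
print(f"lag 5: F = {f_stat:.1f}, p = {p_value:.3g}")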


Journal of the Acoustical Society of America | 2013

Acoustic differences in the speaking and singing voice

Steven R. Livingstone; Katlyn Peck; Frank A. Russo

Speech and song are universal forms of vocal expression that reflect distinct channels of communication. While these two forms of expression share a common means of sound production, differences in the acoustic properties of speech and song have not received significant attention. Here, we present evidence of acoustic differences in the speaking and singing voice. Twenty-four actors were recorded while speaking and singing different statements with five emotions, two emotional intensities, and two repetitions. Acoustic differences between speech and song were found in several parameters, including vocal loudness, spectral properties, and vocal quality. Interestingly, emotion was conveyed similarly in many acoustic features across speech and song. These results demonstrate the entwined nature of speech and song, and provide evidence in support of the shared emergence of speech and song as a form of early proto-language. These recordings form part of our new Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS).
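As a brief illustration (not the study's analysis code), the snippet below extracts two of the acoustic parameter families mentioned above, vocal loudness (frame-wise RMS energy) and a spectral property (spectral centroid), using librosa. A synthetic tone stands in for a recorded utterance, and voice-quality measures such as jitter are omitted.

import numpy as np
import librosa

sr = 22050
y = librosa.tone(220, sr=sr, duration=2.0)                      # placeholder for a recording
y = y + 0.01 * np.random.default_rng(3).normal(size=y.shape)    # add a little noise

rms = librosa.feature.rms(y=y)                                  # frame-wise loudness proxy
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)        # spectral "brightness"

print(f"mean RMS: {rms.mean():.4f}")
print(f"mean spectral centroid: {centroid.mean():.1f} Hz")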


The First International Workshop on Advanced Context Modelling, Reasoning and Management | 2004

Towards a hybrid approach to context modeling, reasoning and interoperation

Karen Henricksen; Steven R. Livingstone; Jadwiga Indulska


Australasian Conference on Interactive Entertainment | 2005

Dynamic response: real-time adaptation for music emotion

Steven R. Livingstone; Andrew R. Brown

Collaboration


Dive into Steven R. Livingstone's collaborations.

Top Co-Authors

Emery Schubert

University of New South Wales


Cesar Marquez-Chin

Toronto Rehabilitation Institute
