Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Frank A. Russo is active.

Publication


Featured research published by Frank A. Russo.


Semiotica | 2005

Seeing music performance: Visual influences on perception and experience

William Forde Thompson; Phil Graham; Frank A. Russo

Abstract: Drawing from ethnographic, empirical, and historical/cultural perspectives, we examine the extent to which visual aspects of music contribute to the communication that takes place between performers and their listeners. First, we introduce a framework for understanding how media and genres shape aural and visual experiences of music. Second, we present case studies of two performances, and describe the relation between visual and aural aspects of performance. Third, we report empirical evidence that visual aspects of performance reliably influence perceptions of musical structure (pitch-related features) and affective interpretations of music. Finally, we trace new and old media trajectories of aural and visual dimensions of music, and highlight how our conceptions, perceptions, and appreciation of music are intertwined with technological innovation and media deployment strategies.


Psychological Science | 2007

Facing the Music

William Forde Thompson; Frank A. Russo

From the phonograph of the 19th century to the iPod today, music technologies have typically isolated the auditory dimension of music, filtering out nonacoustic information and transmitting what most people assume is the essence of music. Yet many esteemed performers over the past century, such as Judy Garland and B.B. King, are renowned for their dramatic use of facial expressions (Thompson, Graham, & Russo, 2005). Are such expressions merely show business, or are they integral to experiencing music? In the investigation reported here, we considered whether the facial expressions and head movements of singers communicate melodic information that can be "read" by viewers. Three trained vocalists were recorded singing ascending melodic intervals. Subjects saw the visual recordings (without sound) and rated the size of the intervals they imagined the performers were singing.


Cognition & Emotion | 2008

Audio-visual integration of emotional cues in song

William Forde Thompson; Frank A. Russo; Lena Quinto

We examined whether facial expressions of performers influence the emotional connotations of sung materials, and whether attention is implicated in audio-visual integration of affective cues. In Experiment 1, participants judged the emotional valence of audio-visual presentations of sung intervals. Performances were edited such that auditory and visual information conveyed congruent or incongruent affective connotations. In the single-task condition, participants judged the emotional connotation of sung intervals. In the dual-task condition, participants judged the emotional connotation of intervals while performing a secondary task. Judgements were influenced by melodic cues and facial expressions and the effects were undiminished by the secondary task. Experiment 2 involved identical conditions but participants were instructed to base judgements on auditory information alone. Again, facial expressions influenced judgements and the effect was undiminished by the secondary task. The results suggest that visual aspects of music performance are automatically and preattentively registered and integrated with auditory cues.


Trends in Amplification | 2004

Hearing Aids and Music

Marshall Chasin; Frank A. Russo

Historically, the primary concern in hearing aid design and fitting has been optimization for speech inputs. Increasingly, however, other types of inputs are being investigated, and this is certainly the case for music. Whether the hearing aid wearer is a musician or merely someone who likes to listen to music, the electronic and electro-acoustic parameters described can be optimized for music as well as for speech. That is, a hearing aid optimally set for music can be optimally set for speech, even though the converse is not necessarily true. Similarities and differences between speech and music as inputs to a hearing aid are described. Many of these lead to the specification of a set of optimal electro-acoustic characteristics. Parameters such as the peak input-limiting level, compression settings (both compression ratio and knee-points), and the number of channels can all deleteriously affect music perception through hearing aids. In other cases, such as noise reduction and feedback control mechanisms, it is not clear how parameters should be set. Regardless of the existence of a "music program," unless the various electro-acoustic parameters are available in a hearing aid, music fidelity will almost always be less than optimal. There are many unanswered questions and hypotheses in this area. Future research by engineers, researchers, clinicians, and musicians will aid in the clarification of these questions and their ultimate solutions.


Journal of the Acoustical Society of America | 2007

Urgency is a non-monotonic function of pulse rate.

Frank A. Russo; Jeffery A. Jones

Magnitude estimation was used to assess the experience of urgency in pulse-train stimuli (pulsed white noise) ranging from 3.13 to 200 Hz. At low pulse rates, pulses were easily resolved. At high pulse rates, pulses fused together leading to a tonal sensation with a clear pitch level. Urgency ratings followed a nonmonotonic (polynomial) function with local maxima at 17.68 and 200 Hz. The same stimuli were also used in response time and pitch scaling experiments. Response times were negatively correlated with urgency ratings. Pitch scaling results indicated that urgency of pulse trains is mediated by the perceptual constructs of speed and pitch.


Attention Perception & Psychophysics | 2005

An Interval Size Illusion: The Influence of Timbre on the Perceived Size of Melodic Intervals

Frank A. Russo; William Forde Thompson

In four experiments, we investigated the influence of timbre on perceived interval size. In Experiment 1, musically untrained participants heard two successive tones and rated the pitch distance between them. Tones were separated by six or seven semitones and varied in timbre. Pitch changes were accompanied by a congruent timbre change (e.g., ascending interval involving a shift from a dull to a bright timbre), an incongruent timbre change (e.g., ascending interval involving a shift from a bright to a dull timbre), or no timbre change. Ratings of interval size were strongly influenced by timbre. The six-semitone interval with a congruent timbre change was perceived to be larger than the seven-semitone interval with an incongruent timbre change (interval illusion). Experiment 2 revealed similar effects for musically trained participants. In Experiment 3, participants compared the size of two intervals presented one after the other. Effects of timbre were again observed, including evidence of an interval illusion. Experiment 4 confirmed that timbre manipulations did not distort the perceived pitch of tones. Changes in timbre can expand or contract the perceived size of intervals without distorting individual pitches. We discuss processes underlying interval size perception and their relation to pitch perception mechanisms.


IEEE Transactions on Haptics | 2009

Designing the Model Human Cochlea: An Ambient Crossmodal Audio-Tactile Display

Maria Karam; Frank A. Russo; Deborah I. Fels

We present a model human cochlea (MHC), a sensory substitution technique and system that translates auditory information into vibrotactile stimuli using an ambient, tactile display. The model is used in the current study to translate music into discrete vibration signals displayed along the back of the body using a chair form factor. Voice coils facilitate the direct translation of auditory information onto the multiple discrete vibrotactile channels, which increases the potential to identify sections of the music that would otherwise be masked by the combined signal. One of the central goals of this work has been to improve accessibility to the emotional information expressed in music for users who are deaf or hard of hearing. To this end, we present our prototype of the MHC, two models of sensory substitution to support the translation of existing and new music, and some of the design challenges encountered throughout the development process. Results of a series of experiments conducted to assess the effectiveness of the MHC are discussed, followed by an overview of future directions for this research.


Psychonomic Bulletin & Review | 2005

The subjective size of melodic intervals over a two-octave range

Frank A. Russo; William Forde Thompson

Musically trained and untrained participants provided magnitude estimates of the size of melodic intervals. Each interval was formed by a sequence of two pitches that differed by between 50 cents (one half of a semitone) and 2,400 cents (two octaves) and was presented in a high or a low pitch register and in an ascending or a descending direction. Estimates were larger for intervals in the high pitch register than for those in the low pitch register and for descending intervals than for ascending intervals. Ascending intervals were perceived as larger than descending intervals when presented in a high pitch register, but descending intervals were perceived as larger than ascending intervals when presented in a low pitch register. For intervals up to an octave in size, differentiation of intervals was greater for trained listeners than for untrained listeners. We discuss the implications for psychophysical pitch scales and models of music perception.


Music and Medicine | 2010

Music Hath Charms: The Effects of Valence and Arousal on Recovery Following an Acute Stressor

Gillian M. Sandstrom; Frank A. Russo

The purpose of this study was to investigate the effects of the valence and arousal dimensions of music over the time course of physiological (skin conductance level and heart rate) and subjective (Subjective Unit of Discomfort score) recovery from an acute stressor. Participants experienced stress after being told to prepare a speech, and were then exposed to happy, peaceful, sad, or agitated music. Music with a positive valence promoted both subjective and physiological recovery better than music with a negative valence, and low-arousal music was more effective than high-arousal music. Repeated measures analyses found that the emotion conveyed by the music affected skin conductance level recovery immediately following the stressor, whereas it affected heart rate recovery in a more sustained fashion. Follow-up tests found that positively valenced low-arousal (i.e., peaceful) music was more effective across the time course than an emotionally neutral control (white noise).

Keywords: arousal, emotion, music, stress, valence


Psychology of Music | 2013

Absorption in music: Development of a scale to identify individuals with strong emotional responses to music

Gillian M. Sandstrom; Frank A. Russo

Despite the rise in research investigating music and emotion over the last decade, there are no validated measures of individual differences in emotional responses to music. We created the Absorption in Music Scale (AIMS), a 34-item measure of individuals’ ability and willingness to allow music to draw them into an emotional experience. It was evaluated with a sample of 166 participants, and exhibits good psychometric properties. The scale converges well with measures of similar constructs, and shows reliability over time. Importantly, in a test of criterion validity, emotional responses to music were correlated with the AIMS scale but not correlated with measures of empathy or music training.

Collaboration


Dive into Frank A. Russo's collaborations.

Top Co-Authors
