Publication


Featured research published by Gary S. Kendall.


Organised Sound | 2014

The Feeling Blend: Feeling and emotion in electroacoustic art

Gary S. Kendall

Starting from the assumption that meaning in electroacoustic music is an outcome of the listener’s mental processes, it is the goal of this essay to explicate the mental processes whereby feeling and emotion contribute to meaning when listening to electroacoustic music. This essay begins with a broad consideration of feeling and emotion with an eye toward artistic experience, spanning from basic emotions to nuanced phenomenal qualities. It then introduces the concept of mental layers in support of the multi-levelled nature of meaning, especially in this case, meaning that is felt as well as comprehended. These two preliminary topics precede the introduction of the feeling blend, an extension of blend theory as presented by Fauconnier and Turner (2002). Core issues for blend theory, such as what constitutes a mental space and what triggers a blend, are reconsidered in the light of practical examples from the literature of electroacoustic music. In conclusion, the feeling blend is proposed as an essential concept to understanding artistic experience and an intrinsic aspect of being human.


Organised Sound | 2006

Juxtaposition and Non-motion: Varèse bridges early modernism to electroacoustic music

Gary S. Kendall

Edgard Varèse's Poème électronique can be viewed as a bridge between early twentieth-century modernism and electroacoustic music. This connection to early modernism is most clearly seen in its use of musical juxtaposition, a favoured technique of early modernist composers, especially those active in Paris. Juxtaposition and non-motion are considered here, particularly in relationship to Smalley's exposition of spectromorphology (Smalley 1986), which in its preoccupation with motion omits any significant consideration of non-motion. Juxtaposition and non-motion have an important history within twentieth-century music, and as an early classic of electroacoustic music, Poème électronique is a particularly striking example of a composition that is rich in juxtapositions similar to those found in passages of early modernist music. Examining Poème électronique through the lens of juxtaposition and non-motion reveals how the organisation of its juxtaposed sounds encourages the experience of sound structure suspended in time.


Computer Music Journal | 1981

Composing from a Geometric Model: Five-Leaf Rose

Gary S. Kendall

When one considers the vast range of possible applications of mathematics to music, composing from a geometric model would appear to be one of the simpler things to do. Certainly the geometry I have used in composing Five-Leaf Rose is conceptually simpler than most applications of set theory or stochastic processes. The purpose of using any such method, of course, is to aid structural integration and unity. What gives this geometric approach useful musical properties is the depth to which it can be applied in a computer-generated composition. The unity is not achieved merely by the idea of a geometric figure, but by having a coordinated pattern of control on many different aspects of the composition. Nearly everything in Five-Leaf Rose, from the formal structure to the acoustic details, is tied to this single model. The one exception, melody, is freely composed against the background of these highly organized elements. The geometric figure as such may not be discernible to the audience, but the musical relationships derived from the figure...
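To make the idea of composing from a geometric model concrete, here is a minimal Python sketch of a five-petaled rose curve (r = cos(5θ)) driving musical parameters. The pitch and duration mapping below is purely illustrative; the abstract does not specify how Kendall actually maps the figure onto the composition.

import math

def five_leaf_rose(theta, petals=5):
    """Polar rose curve r = cos(k * theta); k = 5 gives five petals."""
    return math.cos(petals * theta)

def rose_to_events(n_events=40, base_midi=60, span=24):
    """Map samples of the rose curve to (midi_pitch, duration_s) pairs.

    This mapping is a hypothetical example, not taken from the article.
    """
    events = []
    for i in range(n_events):
        theta = 2 * math.pi * i / n_events
        r = five_leaf_rose(theta)                # ranges over -1.0 .. 1.0
        pitch = base_midi + round(r * span / 2)  # radius -> pitch offset
        dur = 0.25 + 0.5 * abs(r)                # radius -> event duration
        events.append((pitch, round(dur, 3)))
    return events

if __name__ == "__main__":
    for pitch, dur in rose_to_events(8):
        print(f"pitch={pitch:3d}  dur={dur:.3f}s")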


Computer Music Modeling and Retrieval | 2008

The Artistic Play of Spatial Organization: Spatial Attributes, Scene Analysis and Auditory Spatial Schemata

Gary S. Kendall; Mauricio Ardila

Electroacoustic music lacks a definitive vocabulary for describing its spatiality. Not only does it lack a vocabulary for describing the spatial attributes of individual sound sources, it lacks a vocabulary for describing how these attributes participate in artistic expression. Following work by Rumsey, the definition of spatial attributes is examined in the broader context of auditory scene analysis. A limited number of spatial attributes are found to be adequate to characterize the individual levels of organization nested within the auditory scene. These levels are then viewed in relationship to auditory spatial schemata, the recurrent patterns by which listeners understand the behavior of sound in space. In electroacoustic music the interrelationship of spatial attributes and spatial schemata is often engaged in a play of perceptual grouping that blurs and confounds distinctions like source and ensemble. Our ability to describe and categorize these complex interactions depends on having clear concepts and terminology.


Computer Music Journal | 2014

Sound synthesis with auditory distortion products

Gary S. Kendall; Christopher Haworth; Rodrigo F. Cádiz

This article describes methods of sound synthesis based on auditory distortion products, often called combination tones. In 1856, Helmholtz was the first to identify sum and difference tones as products of auditory distortion. Today this phenomenon is well studied in the context of otoacoustic emissions, and the “distortion” is understood as a product of what is termed the cochlear amplifier. These tones have had a rich history in the music of improvisers and drone artists. Until now, the use of distortion tones in technological music has largely been rudimentary and dependent on very high amplitudes in order for the distortion products to be heard by audiences. Discussed here are synthesis methods to render these tones more easily audible and lend them the dynamic properties of traditional acoustic sound, thus making auditory distortion a practical domain for sound synthesis. An adaptation of single-sideband synthesis is particularly effective for capturing the dynamic properties of audio inputs in real time. Also presented is an analytic solution for matching up to four harmonics of a target spectrum. Most interestingly, the spatial imagery produced by these techniques is very distinctive, and over loudspeakers the normal assumptions of spatial hearing do not apply. Audio examples are provided that illustrate the discussion.
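As an illustration of the underlying principle (not the authors' specific methods, which include a single-sideband adaptation and an analytic harmonic-matching solution), the Python sketch below generates two sine primaries whose frequency difference equals a target pitch; the audible difference tone arises in the listener's cochlea rather than in the signal, and requires a fairly high playback level. The carrier frequency and amplitude are assumed values.

import numpy as np

SR = 44100  # sample rate in Hz

def distortion_tone_pair(f_target, f1=2000.0, dur=2.0, amp=0.4):
    """Two sine primaries whose frequency difference equals f_target.

    The "distortion product" at f2 - f1 is produced by the listener's
    auditory system, not present in the signal itself. f1 and amp are
    illustrative choices, not parameters from the article.
    """
    f2 = f1 + f_target
    t = np.arange(int(SR * dur)) / SR
    primaries = amp * (np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t))
    return primaries.astype(np.float32)

if __name__ == "__main__":
    sig = distortion_tone_pair(f_target=220.0)  # difference tone near A3
    print(sig.shape, sig.dtype)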


Computer Music Journal | 1986

A Modular Environment for Sound Synthesis and Composition

Shawn L. Decker; Gary S. Kendall; B.L. Schmidt; M.D. Ludwig; Daniel J. Freed

As our knowledge of sound synthesis grows, it becomes increasingly apparent that no single synthesis strategy can create the wide range of musical timbres desired by composers. Similarly, as we gain experience creating compositional interfaces to synthesis programs, it becomes clear that no single musical input language or user interface can adequately accommodate a wide range of compositional styles and intentions. Thus, the attempt to satisfy these musical demands with a single general-purpose synthesis language fails not only because such programs cannot meet the increasing needs of today's composer, but because they sacrifice efficiency and power for breadth and generality. The need for new strategies becomes obvious when one considers that notions about synthesis and compositional interfaces keep changing year by year and that a great deal of software is constantly discarded.


Journal of the Acoustical Society of America | 1989

A spatial sound processor for headphone and loudspeaker reproduction

William L. Martens; Gary S. Kendall; Martin D. Wilde

A spatial sound processor for stereo headphone and loudspeaker reproduction is described that can position sound elements within a three‐dimensional reverberant space surrounding the listener. Spatial motion of sound sources in three dimensions is created by dynamic filtering based on head‐related transfer functions. Additional filters and delay lines capture air absorption and Doppler shifting as the propagation time is manipulated for both direct and indirect sound. The spatiotemporal distribution of early reflections is captured for a given source/listener orientation: the gain, delay, and directional filtering of simulated reflections are responsive to changes in the specified position and orientation of the sound source and the listener's head in the simulated environment. The spatial processor can be used for headphone reproduction using a head‐tracking device, and can also be used in more typical reproduction settings such as living rooms with stereo loudspeakers. In the latter case, additional processing is employed to stabilize the stereo image and produce a spatially diffuse reverberant surround effect over a wide range of listening positions.
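One element of such a processor, the time-varying propagation delay that yields Doppler shift for moving sources, can be sketched in a few lines of Python. This is a single-path illustration with linearly interpolated fractional delay; it omits the HRTF filtering, air-absorption filters, and early-reflection modelling the abstract describes, and the numerical choices are assumptions.

import numpy as np

SR = 44100   # sample rate in Hz
C = 343.0    # speed of sound in m/s

def variable_delay(signal, distance):
    """Read `signal` through a time-varying delay of distance/C seconds.

    A changing propagation delay produces Doppler shift as a side effect,
    which is the mechanism the abstract describes for moving sources.
    Fractional delays are handled by linear interpolation.
    """
    delay_samples = (np.asarray(distance) / C) * SR
    n = len(signal)
    read_pos = np.clip(np.arange(n) - delay_samples, 0, n - 1)
    i0 = np.floor(read_pos).astype(int)
    i1 = np.minimum(i0 + 1, n - 1)
    frac = read_pos - i0
    return (1.0 - frac) * signal[i0] + frac * signal[i1]

if __name__ == "__main__":
    t = np.arange(SR * 2) / SR
    tone = np.sin(2 * np.pi * 440.0 * t)
    dist = 10.0 - 4.0 * t          # hypothetical source approaching at 4 m/s
    out = variable_delay(tone, dist)
    print(out.shape)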


Journal of the Acoustical Society of America | 2001

Musical considerations in the design of 3‐D‐sound rendering software

Gary S. Kendall; David A. Mann; Scott Robbin; Alan Kendall

The rapid growth of 3‐D‐sound technology has created a need for 3‐D audio tools which can be used by musicians and composers in much the same manner that visual artists use 3‐D graphics tools. Experience with such audio tools reveals some issues of special concern for music. For example, the rapid movement of musical sound sources can create Doppler shifts that produce harsh detunings of pitch, and the realistic rendering of intensity loss with changing distance can cause some musical elements to be buried in the mix. Consider too that individual instruments are best spatialized in different environments—high strings in large reverberant halls, electric basses in small dry rooms—and that some musically useful spatial effects, like stereo decorrelation, are not conveniently produced with environmental models. Spatial sound rendering software needs to provide numerous exceptions to accurate physical modeling in order to adapt to the musical context and it must support a heterogeneous collection of musically...
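Stereo decorrelation, mentioned above as a musically useful effect that falls outside environmental modelling, is commonly produced by filtering one signal through two unit-magnitude, random-phase FIR filters. The Python sketch below shows that generic construction; the filter length and phase design are assumptions rather than parameters taken from the articles listed here (including the 1995 decorrelation article below).

import numpy as np

def decorrelation_filters(length=1024, seed=0):
    """Build a pair of unit-magnitude, random-phase FIR filters.

    Convolving a mono signal with two such filters yields left/right
    channels that sound alike but are mutually decorrelated, widening
    the stereo image. Length and phase construction are illustrative.
    """
    rng = np.random.default_rng(seed)
    filters = []
    for _ in range(2):
        n_bins = length // 2 + 1
        phase = rng.uniform(-np.pi, np.pi, n_bins)
        phase[0] = 0.0       # keep DC real
        phase[-1] = 0.0      # keep Nyquist real (even filter length)
        spectrum = np.exp(1j * phase)       # unit magnitude everywhere
        filters.append(np.fft.irfft(spectrum, n=length))
    return filters

def decorrelate_stereo(mono):
    """Return (left, right) decorrelated copies of a mono signal."""
    h_left, h_right = decorrelation_filters()
    return np.convolve(mono, h_left), np.convolve(mono, h_right)

if __name__ == "__main__":
    x = np.random.default_rng(1).standard_normal(44100)
    left, right = decorrelate_stereo(x)
    corr = np.corrcoef(left[:44100], right[:44100])[0, 1]
    print(f"inter-channel correlation ~ {corr:.3f}")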


Journal of the Acoustical Society of America | 1985

Optimizing control rooms for stereo imagery

Douglas R. Jones; William L. Martens; Gary S. Kendall

Control room designers typically measure and specify rooms according to their physical structure and acoustic properties. They are unable, however, to measure or predict how well the room will support the subjective qualities of stereo imagery produced over loudspeakers. As the quality and salience of stereo imagery improve through the use of more sophisticated recording and processing techniques, control room requirements become more stringent. Beyond speaker placement, there are three primary factors that influence the perception of stereo images: time‐energy‐frequency characteristics of the speakers, spatio‐temporal distribution of early reflections, and the inclusion of acoustic diffraction. These are easily measured through the use of time delay spectrometry (TDS), but at present an adequate model for predicting subjective response from these physical measurements is lacking. Ensuring the perception of optimal stereo imagery requires the application of standardized subjective evaluation techniques. ...


Computer Music Journal | 1995

The Decorrelation of Audio Signals and Its Impact on Spatial Imagery

Gary S. Kendall

Collaboration


Dive into Gary S. Kendall's collaborations.

Top Co-Authors

David A. Mann

University of South Florida

Lonny L. Chu

Northwestern University
