Amy K. Hoover
University of Central Florida
Publications
Featured research published by Amy K. Hoover.
Connection Science | 2009
Amy K. Hoover; Kenneth O. Stanley
The ability of gifted composers like Mozart to create complex multipart musical compositions with relative ease suggests a highly efficient mechanism for generating multiple parts simultaneously. Computational models of human music composition can potentially shed light on how such rapid creativity is possible. This article proposes such a model based on the idea that the multiple threads of a song are temporal patterns that are functionally related, which means that one instrument's sequence is a function of another's. This idea is implemented in a program called NEAT Drummer, which interactively evolves a type of artificial neural network called a compositional pattern-producing network (CPPN) that represents the functional relationship between the instruments and the drums. The main result is that richly textured drum tracks that tightly follow the structure of the original song are easily generated because of their functional relationship to it.
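The core idea above, one instrument's part expressed as a function of another's, can be sketched in a few lines. This is a minimal illustration only: NEAT Drummer evolves a CPPN to define the mapping, whereas here a fixed toy function of note position and pitch stands in for the evolved network.

```python
import math

def drum_from_melody(melody, beats_per_bar=4):
    """Derive a drum-hit velocity in [0, 1] for each melody note.

    `melody` is a list of (position, pitch) pairs; positions are in beats.
    The toy stand-in for a CPPN combines a periodic function of position
    (capturing bar structure) with a squashed function of pitch, so the
    drum output follows the structure of the melody it "hears".
    """
    hits = []
    for pos, pitch in melody:
        bar_phase = math.sin(2 * math.pi * pos / beats_per_bar)
        pitch_term = math.tanh((pitch - 60) / 12.0)  # centered on middle C
        velocity = (bar_phase + pitch_term + 2) / 4  # squash into [0, 1]
        hits.append(round(velocity, 3))
    return hits

# Four quarter notes of an ascending C-major arpeggio (MIDI pitches).
melody = [(0, 60), (1, 64), (2, 67), (3, 72)]
print(drum_from_melody(melody))
```

Because the drum pattern is computed from the melody rather than generated independently, any structural regularity in the input (here, the bar cycle and the rising pitch contour) is inherited by the output, which is the property the article exploits.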
Evo'08 Proceedings of the 2008 conference on Applications of evolutionary computing | 2008
Amy K. Hoover; Michael Rosario; Kenneth O. Stanley
A major challenge in computer-generated music is to produce music that sounds natural. This paper introduces NEAT Drummer, which takes steps toward natural creativity. NEAT Drummer evolves a kind of artificial neural network called a Compositional Pattern Producing Network (CPPN) with the NeuroEvolution of Augmenting Topologies (NEAT) method to produce drum patterns. An important motivation for this work is that instrument tracks can be generated as a function of other song parts, which, when written by humans, provide a scaffold for the remaining auto-generated parts. Thus, NEAT Drummer is initialized with inputs from an existing MIDI song and, through interactive evolution, allows the user to evolve increasingly appealing rhythms for that song. This paper explains how NEAT Drummer processes MIDI inputs and outputs drum patterns. The net effect is that a compelling drum track can be automatically generated and evolved for any song.
genetic and evolutionary computation conference | 2011
Amy K. Hoover; Paul A. Szerlip; Kenneth O. Stanley
While the real-time focus of today's automated accompaniment generators can benefit instrumentalists and vocalists in their practice, improvisation, or performance, an opportunity remains specifically to assist novice composers. This paper introduces a novel approach based on evolutionary computation called functional scaffolding for musical composition (FSMC), which helps the user explore potential accompaniments for existing musical pieces, or scaffolds. The key idea is to produce accompaniment as a function of the scaffold, thereby inheriting its inherent style and texture. To implement this idea, accompaniments are represented by a special type of neural network called a compositional pattern producing network (CPPN), which produces harmonies by elaborating on and exploiting regularities in pitches and rhythms found in the scaffold. This paper focuses on how inexperienced composers can personalize accompaniments by first choosing any MIDI scaffold, then selecting which parts (e.g. the piano, guitar, or bass guitar) the CPPN can hear, and finally customizing and refining the computer-generated accompaniment through an interactive process of selection and mutation of CPPNs called interactive evolutionary computation (IEC). The potential of this approach is demonstrated by following the evolution of a specific accompaniment and studying whether listeners appreciate the results.
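The interactive evolutionary computation (IEC) loop described above can be sketched as a simple select-and-mutate cycle. This is a hedged illustration, not the FSMC implementation: real FSMC mutates CPPN weights and topology via NEAT, whereas here each candidate is a flat parameter list, and a rating callback stands in for the human listener who would normally audition each accompaniment and pick favorites.

```python
import random

def interactive_evolution(rate_fn, pop_size=8, n_params=4,
                          generations=5, mutation_sigma=0.2, seed=0):
    """Evolve parameter vectors toward a listener's preference.

    `rate_fn(candidate) -> float` stands in for the user's judgment:
    higher scores mean the user liked that accompaniment more.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # The "user" ranks the candidates; keep the top half as parents.
        pop.sort(key=rate_fn, reverse=True)
        parents = pop[: pop_size // 2]
        # Refill the population with mutated copies of the favorites.
        children = [[g + rng.gauss(0, mutation_sigma) for g in p]
                    for p in parents]
        pop = parents + children
    return max(pop, key=rate_fn)

# Toy preference: the listener favors parameters near 0.5.
best = interactive_evolution(lambda c: -sum((g - 0.5) ** 2 for g in c))
```

The key design point, mirrored from the paper, is that no fitness function is hard-coded: the only source of selection pressure is the user's repeated choice among candidates, which is what lets the process track subjective musical taste.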
Computer Music Journal | 2014
Amy K. Hoover; Paul A. Szerlip; Kenneth O. Stanley
Many tools for computer-assisted composition contain built-in music-theoretical assumptions that may constrain the output to particular styles. In contrast, this article presents a new musical representation that contains almost no built-in knowledge, but that allows even musically untrained users to generate polyphonic textures derived from the user's own initial compositions. This representation, called functional scaffolding for musical composition (FSMC), exploits a simple yet powerful property of multipart compositions: the patterns of notes and rhythms in different instrumental parts of the same song are functionally related. That is, in principle, one part can be expressed as a function of another. Music in FSMC is represented accordingly as a functional relationship between an existing human composition, or scaffold, and a generated set of one or more additional musical voices. A human user without any musical expertise can then explore how the generated voice (or voices) should relate to the scaffold through an interactive evolutionary process akin to animal breeding. By inheriting the intrinsic style and texture of the piece provided by the user, this approach can generate additional voices for potentially any style of music without the need for extensive musical expertise.
ICCC | 2012
Amy K. Hoover; Paul A. Szerlip; Marie E. Norton; Trevor A. Brindle; Zachary Merritt; Kenneth O. Stanley
Archive | 2011
Amy K. Hoover; Paul A. Szerlip; Kenneth O. Stanley
Archive | 2008
Kenneth O. Stanley; Michael Rosario; Amy K. Hoover
Archive | 2012
Paul A. Szerlip; Amy K. Hoover; Kenneth O. Stanley
Archive | 2007
Amy K. Hoover; Kenneth O. Stanley
ICCC | 2013
Amy K. Hoover; Paul A. Szerlip; Kenneth O. Stanley