Jeffrey J. Scott
Drexel University
Publications
Featured research published by Jeffrey J. Scott.
multimedia signal processing | 2009
Travis M. Doll; Raymond Migneco; Jeffrey J. Scott; Youngmoo E. Kim
The Adobe Flash platform has become the de facto standard for developing and deploying media-rich web applications and games. The relative ease of development and cross-platform architecture of Flash enable designers to rapidly prototype graphically rich interactive applications, but comprehensive support for audio and signal processing has been lacking. ActionScript, the primary development language used for Flash, is poorly suited for DSP algorithms. To address the inherent challenges in integrating interactive audio processing into Flash-based applications, we have developed the DSP Audio Toolkit for Flash, which offers significant performance improvements over algorithms implemented in Java or ActionScript. By developing this toolkit, we hope to open up new possibilities for Flash applications and games, enabling them to use real-time audio processing to drive gameplay and improve the experience of the end user.
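The toolkit itself targets ActionScript and Flash, so any concrete API shown here would be invented; purely as a language-neutral illustration, the Python/NumPy sketch below (hypothetical function name) shows the kind of per-frame loudness and brightness analysis that such a toolkit makes fast enough to drive gameplay in real time.

```python
# Illustrative sketch only -- not the toolkit's ActionScript API.
# Per-frame loudness (RMS) and brightness (spectral centroid), the sort
# of lightweight analysis a game loop could poll every audio frame.
import numpy as np

def analyze_frame(frame, sr):
    """frame: 1-D float array of audio samples; sr: sample rate in Hz."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    rms = np.sqrt(np.mean(frame ** 2))                # loudness proxy
    centroid = (freqs * spectrum).sum() / max(spectrum.sum(), 1e-9)
    return rms, centroid                              # drive game state from these
```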
2009 International IEEE Consumer Electronics Society's Games Innovations Conference | 2009
Raymond Migneco; Travis M. Doll; Jeffrey J. Scott; Christian M. Hahn; Paul J. Diefenbach; Youngmoo E. Kim
In recent years, there has been sharp rise in the number of games on web-based platforms, which are ideal for rapid game development and easy deployment. In a parallel but unrelated trend, music-centric video games that incorporate well-known popular music directly into the gameplay (e.g., Guitar Hero and Rock Band) have attained widespread popularity on console platforms. The limitations of such web-based platforms as Adobe Flash, however, have made it difficult for developers to utilize complex sound and music interaction within web games. Furthermore, the real-time audio processing and synchronization required in music-centric games demands significant computational power and specialized audio algorithms, which have been difficult or impossible to implement using Flash scripting. Taking advantage of features recently added to the platform, including dynamic audio control and C-compilation for near-native performance, we have developed the Audio processing Library for Flash (ALF), providing developers with a library of common audio processing routines and affording web games with a degree of sound interaction previously available only on console or native PC platforms. We also present several audio-intensive games that incorporate ALF to demonstrate its utility. One example performs real-time analysis of songs in a users music library to drive the gameplay, providing a novel form of game-music interaction.
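ALF's actual routines are ActionScript; as a hedged stand-in, the Python sketch below computes spectral flux between consecutive frames, a standard onset detection function of the kind a music-driven game would poll to trigger events. The function names and threshold are illustrative.

```python
# Hedged illustration of onset-event detection via spectral flux,
# standing in for the kind of real-time routine ALF provides to games.
import numpy as np

def spectral_flux(prev_frame, frame):
    P = np.abs(np.fft.rfft(prev_frame))
    F = np.abs(np.fft.rfft(frame))
    # Half-wave rectified magnitude increase across frequency bins.
    return np.maximum(F - P, 0.0).sum()

def onset_events(frames, threshold):
    """frames: list of equal-length 1-D sample arrays; returns frame indices."""
    flux = [spectral_flux(a, b) for a, b in zip(frames[:-1], frames[1:])]
    return [i + 1 for i, f in enumerate(flux) if f > threshold]
```

A game loop would map the returned indices to note-spawn or scoring events, which is essentially the song-driven gameplay the abstract describes.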
acm multimedia | 2014
Matthew Prockup; Jeffrey J. Scott; Youngmoo E. Kim
When listening to music, humans often focus on melodic and rhythmic elements to identify specific songs or genres. While these representations may be quite simple, they still capture and differentiate higher-level aspects of music such as expressive intent and musical style. In this work we seek to extract and represent rhythmic patterns from a polyphonic corpus of audio encompassing a number of styles. A compact feature is designed that probabilistically models rhythmic activations within musical beat divisions through histograms of inter-onset intervals (IOIs). Onset detection functions are calculated from multiple frequency bands of a perceptually motivated filter bank, allowing patterns of lower-pitched and higher-pitched onsets to be described separately. Through a set of supervised and unsupervised experiments, we show that this feature is well suited for a variety of tasks in which quantifying rhythmic style is necessary.
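A minimal sketch of the described feature, assuming librosa for onset and beat tracking and collapsing the paper's multi-band filter bank into a single full-band detection function for brevity; the bin count and IOI range are illustrative choices, not the paper's.

```python
# Sketch of a beat-relative IOI histogram (single band; the paper uses
# a perceptually motivated filter bank with one histogram per band).
import numpy as np
import librosa

def ioi_histogram(y, sr, n_bins=16):
    # Beat tracking gives the beat length used to normalize IOIs.
    tempo, beats = librosa.beat.beat_track(y=y, sr=sr)
    beat_times = librosa.frames_to_time(beats, sr=sr)
    beat_len = np.median(np.diff(beat_times))          # seconds per beat

    # Onset times from librosa's spectral-flux-style detection function.
    onset_times = librosa.onset.onset_detect(y=y, sr=sr, units="time")

    # Inter-onset intervals expressed as fractions of a beat.
    iois = np.diff(onset_times) / beat_len
    # Histogram over (0, 2] beats, covering common beat subdivisions.
    hist, _ = np.histogram(iois, bins=n_bins, range=(0.0, 2.0))
    return hist / max(hist.sum(), 1)                   # probability-like feature
```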
workshop on applications of signal processing to audio and acoustics | 2011
Erik M. Schmidt; Raymond Migneco; Jeffrey J. Scott; Youngmoo E. Kim
In this work we introduce the concept of modeling musical instrument tones as dynamic textures. Dynamic textures are multidimensional signals that exhibit temporally stationary characteristics such that they can be modeled as observations from a linear dynamical system (LDS). Previous work in dynamic textures research has shown that sequences exhibiting such characteristics can in many cases be re-synthesized by an LDS with high accuracy. Here we demonstrate that short-time Fourier transform (STFT) coefficients of certain instrument tones (e.g., piano, guitar) are well modeled under this assumption, and that these instruments can be re-synthesized by an LDS with high fidelity, even using low-dimensional models. Looking ahead to models that can be altered to control pitch and articulation, we analyze the connections between such musical qualities and the linear dynamical system model parameters. Finally, we provide preliminary experiments on altering these qualities through model re-parameterization.
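A compact sketch of the underlying technique, assuming the standard SVD-based LDS identification commonly used for dynamic textures (Doretto et al.): fit x_{t+1} = A x_t, y_t = C x_t to STFT magnitude frames, then re-synthesize by iterating the learned dynamics. Noise terms and the paper's specific model orders are omitted.

```python
# Minimal dynamic-texture sketch: subspace identification of an LDS
# from STFT magnitudes, and re-synthesis by iterating the dynamics.
import numpy as np

def fit_lds(Y, n_states=10):
    """Y: (n_freq_bins, n_frames) array of nonnegative STFT magnitudes."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    C = U[:, :n_states]                                # observation matrix
    X = np.diag(s[:n_states]) @ Vt[:n_states]          # hidden state trajectory
    # Least-squares estimate of the state transition matrix A.
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])
    return A, C, X[:, 0]

def resynthesize(A, C, x0, n_frames):
    frames, x = [], x0
    for _ in range(n_frames):
        frames.append(C @ x)
        x = A @ x
    return np.maximum(np.array(frames).T, 0.0)         # clip to valid magnitudes
```

Altering articulation, as the abstract suggests, would then amount to re-parameterizing A (e.g., scaling its eigenvalues toward or away from the unit circle to change decay).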
acm multimedia | 2014
Jeffrey J. Scott
Access to hardware and software tools for producing music has become commonplace in the digital landscape. While the means to produce music are widely available, significant time must be invested to attain professional results. Mixing multi-channel audio requires techniques and training far beyond the knowledge of the average music software user. Achieving balance and clarity in a mixture comprising a multitude of instrument layers requires experience in evaluating and modifying the individual elements and their sum. Creating a mix involves many technical concerns (level balancing, dynamic range control, stereo panning, spectral balance) as well as artistic decisions (modulation effects, distortion effects, side-chaining, etc.). This work proposes methods to model the relationships between a set of multi-channel audio tracks based on short-time spectral-temporal characteristics and long-term dynamics. The goal is to create a parameterized space based on high-level perceptual cues to drive processing decisions in a multi-track audio setting.
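The paper's parameterized perceptual space is not specified in the abstract; purely as a hedged illustration of the simplest mixing decision it lists, level balancing, the sketch below characterizes each track by short-time RMS and equalizes median loudness. All names are hypothetical and the rule is a crude stand-in for a learned model.

```python
# Hedged illustration (not the paper's model): derive per-track gains
# by equalizing median short-time loudness across a multi-track session.
import numpy as np

def frame_rms(x, frame=2048, hop=1024):
    """x: 1-D float array, assumed longer than one frame."""
    n = 1 + (len(x) - frame) // hop
    return np.array([np.sqrt(np.mean(x[i * hop:i * hop + frame] ** 2))
                     for i in range(n)])

def naive_level_balance(tracks):
    """tracks: list of 1-D float arrays at a common sample rate."""
    loudness = np.array([np.median(frame_rms(t)) for t in tracks])
    gains = loudness.mean() / np.maximum(loudness, 1e-9)
    return gains          # multiply each track by its gain before summing
```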
computer music modeling and retrieval | 2012
Erik M. Schmidt; Matthew Prockup; Jeffrey J. Scott; Brian Dolhansky; Brandon G. Morton; Youngmoo E. Kim
While the organization of music in terms of emotional affect is a natural process for humans, quantifying it empirically proves to be a very difficult task. Consequently, no acoustic feature or combination thereof has emerged as the optimal representation for musical emotion recognition. Due to the subjective nature of emotion, determining whether an acoustic feature domain is informative requires evaluation by human subjects. In this work, we perceptually evaluate two of the most commonly used features in music information retrieval: mel-frequency cepstral coefficients and chroma. Furthermore, to identify emotion-informative feature domains, we explore which musical features are most relevant in determining emotion perceptually, and which acoustic feature domains are most variant or invariant to changes in those features. Finally, given our collected perceptual data, we conduct an extensive computational experiment on emotion prediction accuracy across a large number of acoustic feature domains, investigating pairwise prediction both on a general corpus and on a corpus constrained to contain only specific musical feature transformations.
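A sketch of the two feature domains under comparison, computed with librosa and summarized by simple temporal statistics; the regression setup and emotion ratings in the usage comment are hypothetical stand-ins for the paper's collected perceptual data.

```python
# The two feature domains compared in the study, summarized per clip.
import numpy as np
import librosa
from sklearn.linear_model import Ridge

def clip_features(y, sr):
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)
    # Summarize each domain by its mean and std over time.
    summarize = lambda F: np.concatenate([F.mean(axis=1), F.std(axis=1)])
    return summarize(mfcc), summarize(chroma)

# Hypothetical usage: fit one regressor per domain against emotion
# ratings (e.g., valence/arousal) and compare prediction accuracy.
# X_mfcc = [clip_features(y, sr)[0] for y, sr in clips]
# model = Ridge().fit(X_mfcc, ratings)
```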
Archive | 2010
Youngmoo E. Kim; Erik M. Schmidt; Raymond Migneco; Brandon G. Morton; Patrick Richardson; Jeffrey J. Scott; Jacquelin A. Speck; Douglas Turnbull
international symposium/conference on music information retrieval | 2012
Erik M. Schmidt; Jeffrey J. Scott; Youngmoo E. Kim
international symposium/conference on music information retrieval | 2010
Youngmoo E. Kim; Erik M. Schmidt; Raymond Migneco; Brandon G. Morton; Patrick Richardson; Jeffrey J. Scott; Jacquelin A. Speck; Douglas Turnbull
international symposium/conference on music information retrieval | 2011
Jeffrey J. Scott; Youngmoo E. Kim