Matthew Prockup
Drexel University
Publications
Featured research published by Matthew Prockup.
IEEE MultiMedia | 2013
Matthew Prockup; David Grunberg; Alex Hrybyk; Youngmoo E. Kim
Many people enjoy the symphony, but those without prior training often find it difficult to relate to the music. The authors have developed a system that guides listeners through orchestral performances in real time by presenting time-relevant annotations, much like a personal museum guide. These annotations are authored in partnership with musical experts before a performance to provide appropriate contextual information for a given concert program. Using acoustic features of the music, the system aligns the live performance with a previously time-stamped recording. The aligned position is transmitted to an application on users' handheld devices, which presents the annotations through an intuitive and unobtrusive interface. To assess its utility, the system underwent beta testing with users during orchestra concert broadcasts. It has since been adopted by the Philadelphia Orchestra for use during live concerts in its 2012-2013 subscription season and beyond.
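As a rough illustration of the alignment step the abstract describes, the sketch below aligns a live recording against a time-stamped reference recording using chroma features and offline dynamic time warping via librosa. The authors' actual features and online matcher are not specified in the abstract, so this is a minimal stand-in, not their implementation.

```python
# A minimal sketch of audio-to-audio alignment, assuming librosa is available.
# Chroma + offline DTW approximate the idea; the paper's real-time method and
# exact acoustic features are assumptions here, not the authors' system.
import librosa
import numpy as np

def align_to_reference(live_path, reference_path, hop_length=2048):
    """Map moments of a live recording onto a time-stamped reference recording."""
    y_live, sr = librosa.load(live_path)
    y_ref, _ = librosa.load(reference_path, sr=sr)

    # Chroma features summarize harmonic content and are fairly robust to
    # timbral differences between the concert hall and the reference recording.
    c_live = librosa.feature.chroma_cqt(y=y_live, sr=sr, hop_length=hop_length)
    c_ref = librosa.feature.chroma_cqt(y=y_ref, sr=sr, hop_length=hop_length)

    # DTW finds a monotonic frame-to-frame correspondence between recordings.
    _, warp_path = librosa.sequence.dtw(X=c_live, Y=c_ref, metric='cosine')

    # Convert frame indices to seconds: each (live, reference) pair says "this
    # moment of the performance corresponds to this reference timestamp,"
    # which is what would be broadcast to listeners' handheld devices.
    return np.asarray(warp_path[::-1]) * hop_length / sr
```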
International Conference on Digital Signal Processing | 2011
Youngmoo E. Kim; Alyssa M. Batula; Raymond Migneco; Patrick Richardson; Brian Dolhansky; David Grunberg; Brandon G. Morton; Matthew Prockup; Erik M. Schmidt; Jeffrey J. Scott
Music is an integral part of high school students' daily lives, and most use digital music devices and services. The one-week Summer Music Technology (SMT) program at Drexel University introduces high school underclassmen to music technology to reveal the influence and importance of engineering, science, and mathematics. By engaging participants' affinity for music, we hope to motivate and catalyze curiosity in science and technology. The curriculum emphasizes signal processing concepts, tools, and methods through hands-on activities and individual projects, and it leverages computer-based learning and open-source software in most activities. Since the program began in 2006, SMT has enrolled nearly 100 high school students and further developed the communication and teaching skills of nearly 20 graduate and undergraduate engineering students serving as core instructors. The program also serves to attract students from backgrounds under-represented in engineering, math, and science who may not have considered these fields.
Workshop on Applications of Signal Processing to Audio and Acoustics | 2015
Matthew Prockup; Andreas F. Ehmann; Fabien Gouyon; Erik M. Schmidt; Youngmoo E. Kim
Musical meter and attributes of the rhythmic feel such as swing, syncopation, and danceability are crucial when defining musical style. However, they have attracted relatively little attention from the Music Information Retrieval (MIR) community and, when addressed, have proven difficult to model from music audio signals. In this paper, we propose a number of audio features for modeling meter and rhythmic feel. These features are first evaluated and compared to timbral features in the common task of ballroom genre classification. They are then used to learn individual models for a total of nine rhythmic attributes covering meter and feel, using an industrial-sized corpus of over one million examples labeled by experts from Pandora® Internet Radio's Music Genome Project®. Linear models are shown to be powerful, representing these attributes with high accuracy at scale.
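The modeling setup the abstract describes, linear models over precomputed rhythm descriptors, can be sketched as below. The feature extraction itself is the paper's contribution and is not reproduced; `X` is a hypothetical matrix of rhythm features and `y` a placeholder vector of expert labels.

```python
# A minimal sketch of attribute modeling with linear classifiers, assuming
# scikit-learn. The random placeholder data stands in for the paper's
# million-example expert-labeled corpus, which is not available here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))      # hypothetical rhythm-feature vectors
y = rng.integers(0, 2, size=1000)    # hypothetical labels (e.g., swing vs. straight)

# Linear models scale to corpora of millions of tracks and stay interpretable:
# each coefficient weights a single rhythmic descriptor.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.3f}")
```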
ACM Multimedia | 2014
Matthew Prockup; Jeffrey J. Scott; Youngmoo E. Kim
When listening to music, humans often focus on melodic and rhythmic elements to identify specific songs or genres. While these representations may be quite simple, they still capture and differentiate higher-level aspects of music such as expressive intent and musical style. In this work we seek to extract and represent rhythmic patterns from a polyphonic corpus of audio encompassing a number of styles. A compact feature is designed that probabilistically models rhythmic activations within musical beat divisions through histograms of Inter-Onset Intervals (IOIs). Onset detection functions are calculated from multiple frequency bands of a perceptually motivated filter bank, which allows patterns of lower-pitched and higher-pitched onsets to be described separately. Through a set of supervised and unsupervised experiments, we show that this feature is well suited for a variety of tasks in which quantifying rhythmic style is necessary.
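To make the IOI-histogram idea concrete, the single-band sketch below histograms inter-onset intervals normalized by the beat period, assuming librosa. The paper's actual feature works per band of a perceptual filter bank and models activations within beat divisions probabilistically; this is only the core idea.

```python
# A minimal single-band sketch of an Inter-Onset Interval (IOI) histogram,
# assuming librosa. The multi-band, filter-bank version from the paper is
# not reproduced here.
import librosa
import numpy as np

def ioi_histogram(path, n_bins=16):
    y, sr = librosa.load(path)

    # Beat tracking supplies the tempo context for normalizing IOIs.
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
    beat_period = 60.0 / float(np.atleast_1d(tempo)[0])

    # Detect onsets and measure the gaps between consecutive onsets.
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units='time')
    iois = np.diff(onsets)

    # Express each IOI as a fraction of the beat so that, e.g., eighth notes
    # land near 0.5 regardless of absolute tempo, then histogram the result.
    iois_in_beats = (iois / beat_period) % 2.0   # fold onto a two-beat span
    hist, _ = np.histogram(iois_in_beats, bins=n_bins, range=(0.0, 2.0))
    return hist / max(hist.sum(), 1)  # normalize to a probability-like profile
```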
Web Search and Data Mining | 2018
Samaneh Ebrahimi; Hossein Vahabi; Matthew Prockup; Oriol Nieto
Online audio advertising is a particular form of advertising used abundantly in online music streaming services. These platforms tend to host tens of thousands of unique audio advertisements (ads), so providing high-quality ads ensures a better user experience and results in longer user engagement. The automatic assessment of these ads is therefore an important step toward audio ad ranking and better audio ad creation. In this paper we propose one way to measure audio ad quality using a proxy metric called Long Click Rate (LCR), defined as the total time users engage with the follow-up display ad (shown while the audio ad is playing) divided by the number of impressions. We then focus on predicting audio ad quality using only acoustic features, such as harmony, rhythm, and timbre, extracted from the raw waveform. We discuss how characteristics of the sound can be connected to concepts such as the clarity of the audio ad's message and its trustworthiness. Finally, we propose a new deep learning model for audio ad quality prediction, which outperforms the other discussed models trained on hand-crafted features. To the best of our knowledge, this is the first large-scale study of audio ad quality prediction.
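The LCR proxy follows directly from the definition above: engagement time with the companion display ad, summed per audio ad, divided by that ad's impression count. A minimal sketch, with a hypothetical log schema since the paper's production fields are not specified:

```python
# A minimal sketch of Long Click Rate (LCR) aggregation. The AdImpression
# fields are hypothetical; the paper's actual log schema is an assumption.
from dataclasses import dataclass

@dataclass
class AdImpression:
    ad_id: str
    display_engagement_s: float  # time spent on the companion display ad

def long_click_rate(impressions: list[AdImpression]) -> dict[str, float]:
    """Per-ad LCR = total display-ad engagement time / number of impressions."""
    engagement: dict[str, list[float]] = {}
    for imp in impressions:
        engagement.setdefault(imp.ad_id, []).append(imp.display_engagement_s)
    return {ad: sum(t) / len(t) for ad, t in engagement.items()}

# Example: ad "a1" shown twice, engaged 4 s and 0 s -> LCR of 2.0 s/impression.
print(long_click_rate([AdImpression("a1", 4.0), AdImpression("a1", 0.0)]))
```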
Computer Music Modeling and Retrieval | 2012
Erik M. Schmidt; Matthew Prockup; Jeffrey J. Scott; Brian Dolhansky; Brandon G. Morton; Youngmoo E. Kim
While the organization of music in terms of emotional affect is a natural process for humans, quantifying it empirically proves to be a very difficult task. Consequently, no acoustic feature or combination thereof has emerged as the optimal representation for musical emotion recognition. Due to the subjective nature of emotion, determining whether an acoustic feature domain is informative requires evaluation by human subjects. In this work, we seek to perceptually evaluate two of the most commonly used features in music information retrieval: mel-frequency cepstral coefficients and chroma. Furthermore, to identify emotion-informative feature domains, we explore which musical features are most relevant in determining emotion perceptually, and which acoustic feature domains are most variant or invariant to changes in those features. Finally, given our collected perceptual data, we conduct an extensive computational experiment on emotion prediction accuracy across a large number of acoustic feature domains, investigating pairwise prediction both in the context of a general corpus and in the context of a corpus constrained to contain only specific musical feature transformations.
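The two feature domains under evaluation can be computed as sketched below, assuming librosa. The perceptual study itself (human ratings, controlled musical transformations) cannot be reduced to code; this only shows how the acoustic feature vectors would be obtained.

```python
# A minimal sketch of extracting the two feature domains the paper evaluates:
# mel-frequency cepstral coefficients (timbre) and chroma (harmony), assuming
# librosa. The summary-statistics step is a common convention, not
# necessarily the paper's exact representation.
import librosa
import numpy as np

def _stats(feat):
    # Summarize a frame-level feature as (mean, std) over time, a common
    # track-level representation for emotion regression/classification.
    return np.concatenate([feat.mean(axis=1), feat.std(axis=1)])

def mfcc_and_chroma(path, n_mfcc=20):
    y, sr = librosa.load(path)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # spectral envelope
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)        # pitch-class energy
    return _stats(mfcc), _stats(chroma)
```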
International Symposium/Conference on Music Information Retrieval | 2015
Matthew Prockup; Andreas F. Ehmann; Fabien Gouyon; Erik M. Schmidt; Òscar Celma; Youngmoo E. Kim
Archive | 2012
Erik M. Schmidt; Matthew Prockup; Jeffrey J. Scott; Brian Dolhansky; Brandon G. Morton; Youngmoo E. Kim
International Symposium/Conference on Music Information Retrieval | 2013
Matthew Prockup; Erik M. Schmidt; Jeffrey J. Scott; Youngmoo E. Kim
Archive | 2012
Erik M. Schmidt; Matthew Prockup; Brandon G. Morton; Youngmoo E. Kim