Publications


Featured research published by Dave Headlam.


Music Theory Spectrum | 1997

The ♯IV(♭ V) Hypothesis: Testing the Limits of Schenker's Theory of Tonality

Matthew Brown; Douglas Dempster; Dave Headlam

This paper answers suspicions that Schenkerian theory is powerful only because it is somehow riven with circular assumptions and ad hoc inferences by discovering and testing boundaries to the applicability of the theory. Our ♯IV(♭V) hypothesis is based on Schenkerian theory's prediction that if ♯IV(♭V) sonorities appear in a tonal context, then they are always indirectly related to the tonic; thus, whenever ♯IV(♭V) Stufen are interpreted as being directly related to the tonic, or are not explainable in a convincing way as indirectly related, either the analysis is incorrect or the work must be deemed non-tonal. The direct analytical implications of the hypothesis provide us with a well-defined boundary to tonal practice under a Schenkerian interpretation. We test the hypothesis on a number of pieces containing apparent modulations to ♯IV(♭V). In other words, our knowledge of Schenkerian theory informed our understanding of the music rather than vice versa; it helped us to explain why we might interpret passages one way and not another. We believe that whatever dangers there may be in making analysis subservient to theory, there is an equal danger in neglecting the formal properties of music theories. The moral is that theory and analysis must always be in a constant dialogue: one cannot grow properly without the other.


Workshop on Applications of Signal Processing to Audio and Acoustics | 2009

Polyphonic music transcription employing max-margin classification of spectrographic features

Ren Gang; Mark F. Bocko; Dave Headlam; Justin Lundberg

In this paper we present a transcription method for polyphonic music. The short time Fourier transform is used first to decompose an acoustic signal into sonic partials in a time-frequency representation. In general the segmented partials exhibit distinguishable features if they originate from different “voices” in the polyphonic mix. We define feature vectors and utilize a max-margin classification algorithm to produce classification labels to serve as grouping cues, i.e., to decide which partials should be assigned to each voice. These classification labels are then used in statistically optimal grouping decisions and confidence levels are assigned to each decision. This classification algorithm shows promising results for musical source separation.
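The max-margin grouping step named in this paper's title can be illustrated with a toy linear classifier trained by stochastic gradient descent on the hinge loss. This is a minimal sketch, not the authors' implementation: the feature vectors, labels, and hyperparameters below are invented for illustration.

```python
def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=200):
    """Tiny linear max-margin classifier (hinge loss + L2 penalty, SGD).
    X: list of feature vectors; y: labels in {-1, +1}."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:  # inside the margin: hinge-loss gradient step
                w = [wj + lr * (yi * xj - lam * wj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:           # outside the margin: regularization only
                w = [wj * (1 - lr * lam) for wj in w]
    return w, b

def classify(w, b, x):
    """Sign of the decision function: which voice a partial is assigned to."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Hypothetical partial features: (mean frequency in kHz, onset time in s).
# Voice -1: low, early partials; voice +1: high, late partials.
X = [[0.2, 0.1], [0.3, 0.2], [1.8, 0.9], [2.1, 1.0]]
y = [-1, -1, 1, 1]
w, b = train_linear_svm(X, y)
```

A real system would use a proper SVM solver and far richer spectro-temporal features; the point is only that the learned labels serve as grouping cues for partials.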


International Conference on Digital Signal Processing | 2011

What makes music musical? A framework for extracting performance expression and emotion in musical sound

Ren Gang; Justin Lundberg; Gregory Bocko; Dave Headlam; Mark F. Bocko

We present a framework to provide a quantitative representation of aspects of musical sound that are associated with musical expressiveness and emotions. After a brief introduction to the background of expressive features in music, we introduce a score to audio mapping algorithm based on dynamic time warping, which segments the audio by comparing it to a music score. Expressive feature extraction algorithms are then introduced. The algorithms extract an expressive feature set that includes pitch deviation, loudness, timbre, timing, articulation, and modulation from the segmented audio to construct an expressive feature database. We have demonstrated these tools in the context of solo western classical music, specifically for the solo oboe. We also discuss potential applications to music performance education and music “language” processing.
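Score-to-audio alignment of the kind used in this line of work is commonly built on dynamic time warping. A minimal sketch over symbolic pitch contours (the sequences and the absolute-difference cost below are invented for illustration; real systems align audio feature frames, not MIDI numbers):

```python
def dtw(a, b):
    """Dynamic time warping distance between two 1-D sequences.
    Fills the classic cumulative-cost matrix with steps (i-1,j), (i,j-1), (i-1,j-1)."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# A "score" pitch contour and a "performance" contour holding one note longer:
score = [60, 60, 62, 64]
performance = [60, 60, 60, 62, 62, 64]
print(dtw(score, performance))  # → 0.0: the temporal stretch costs nothing under DTW
```

This tolerance to local tempo variation is exactly why DTW suits segmenting a performance against a score.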


International Conference on Consumer Electronics | 2011

Using real-time adaptive noise masking to mitigate ambient interferences

Ren Gang; Gregory Bocko; Justin Lundberg; Mark F. Bocko; Dave Headlam

A real-time adaptive noise masking method for ambient interference mitigation is proposed. The noise masker is implemented below the auditory masking surface of concurrent music and is thus inaudible. The noise masking parameters are also adapted to ambient interferences and room acoustics to improve masking efficiency.
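The adaptation at the heart of this paper's title can be caricatured per frequency band: the masker must cover the ambient interference yet stay below what the concurrent music can mask. A hedged sketch with invented dB offsets (a real system would use a psychoacoustic masking model, not a fixed headroom):

```python
def adapt_masker(music_db, interference_db, headroom_db=12.0, cover_db=3.0):
    """Per-band masker level selection (illustrative offsets, not the paper's model).
    Emit a masker just loud enough to cover the interference (interference +
    cover_db) but never above what the music can mask (music - headroom_db);
    None marks a band where no inaudible masker can cover the interference."""
    out = []
    for m, i in zip(music_db, interference_db):
        target = i + cover_db       # level needed to cover the interference
        ceiling = m - headroom_db   # stand-in for the auditory masking surface
        out.append(target if target <= ceiling else None)
    return out

# Two hypothetical bands: loud music over quiet noise, quiet music over loud noise.
print(adapt_masker([80, 60], [50, 58]))  # → [53, None]
```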


Journal of the Acoustical Society of America | 2010

The Shape of Musical Sound: Real-time Visualizations of Expressiveness in Music Performance

Ren Gang; Justin Lundberg; Mark F. Bocko; Dave Headlam

Despite the complex multi-dimensional nature of musical expression, in the final analysis musical expression is conveyed by sound. Therefore the expressiveness of music must be present in the sound and should be observable as fundamental and emergent features of the sonic signal. To gain insight into this feature space, a real-time visualization tool has been developed. The fundamental physical features (pitch, dynamic level, and timbre, as represented by spectral energy distribution) are extracted from the audio signal and displayed versus time in a real-time animation. Emergent properties of the sound, such as musical attacks and releases, the dynamic shaping of musical lines, timing of note placements, and the subtle modulation of the tone, loudness, and timbre can be inferred from the fundamental feature set and presented to the user visually. This visualization tool provides a stimulating music performance-learning environment to help musicians achieve their artistic goals more effectively.
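Two of the fundamental tracks such a real-time display needs, dynamic level and a crude timbre proxy, can be computed per frame directly from the samples. A self-contained sketch (the frame length, test tones, and the zero-crossing-rate timbre proxy are illustrative choices, not the paper's feature set):

```python
import math

def frame_features(signal, frame_len=256):
    """Per-frame (RMS level in dB, zero-crossing rate): cheap stand-ins for
    the dynamics and timbre tracks of a real-time visualization."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        rms = math.sqrt(sum(x * x for x in frame) / frame_len)
        level_db = 20 * math.log10(max(rms, 1e-9))
        zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / frame_len
        feats.append((level_db, zcr))
    return feats

# A quiet low-frequency tone followed by a louder, higher one:
sig = [0.1 * math.sin(2 * math.pi * 5 * t / 256) for t in range(256)]
sig += [0.8 * math.sin(2 * math.pi * 40 * t / 256) for t in range(256)]
feats = frame_features(sig)  # second frame: higher level, higher ZCR
```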


International Conference on Consumer Electronics | 2012

Audio phase singularity detection for room acoustics parameter estimation

Ren Gang; Gregory Bocko; Stephen Roessner; Justin Lundberg; Dave Headlam; Mark F. Bocko

An acoustical measurement method is presented for room acoustical parameter estimation. The proposed method collects the response to sinusoidal test signals and detects audio phase singularity points as room acoustical features.


International Conference on Digital Signal Processing | 2011

Generative modeling of temporal signal features using hierarchical probabilistic graphical models

Ren Gang; Gregory Bocko; Justin Lundberg; Dave Headlam; Mark F. Bocko

We propose generative modeling algorithms that analyze the temporal features of non-stationary signals and represent their temporal structural dependencies using hierarchical probabilistic graphical models. First, several template sampling methods are introduced to embed the temporal signal features into multiple instantiations of statistical variables. Then the learning schemes that obtain hierarchical probabilistic graphical models from data instantiations are detailed. Based on the sampled temporal instantiations, multiple probabilistic graphical models are discovered and fit to the signal support regions. The evolution structure of these graphical models is depicted using a higher-level structural model. Finally, performance evaluations based on both simulated datasets and an audio feature dataset are presented.
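As a drastically simplified stand-in for the hierarchical probabilistic graphical models named in this paper's title, a first-order Markov chain over quantized feature labels shows the basic idea of learning temporal dependencies from data instantiations (the labels and sequence below are invented):

```python
from collections import defaultdict

def fit_markov(states):
    """Estimate first-order Markov transition probabilities from one state
    sequence by normalizing bigram counts. A far simpler model than the
    paper's hierarchical graphical models, shown only for the core idea."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

# Hypothetical quantized loudness labels over time:
seq = ["soft", "soft", "loud", "soft", "soft", "loud"]
model = fit_markov(seq)  # e.g. P(loud | soft) = 0.5, P(soft | loud) = 1.0
```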


International Conference on Acoustics, Speech, and Signal Processing | 2010

Reverberation features identification from music recordings using the discrete wavelet transform

Ren Gang; Mark F. Bocko; Dave Headlam

This paper presents a method of extracting reverberation features from music recordings. First, we perform a short time Fourier transform to transform the audio signal into a 2D time-frequency representation in which reverberation features appear as blurring of spectral features in the time dimension. Employing image analysis methods, we may quantitatively estimate the amount of reverberation by transforming the STFT “image” to a wavelet domain where we can perform efficient edge detection and characterization. Experiments demonstrate that quantitative estimates of reverberation time extracted in this way are strongly correlated with physical measurements.
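The quantity being estimated, how quickly spectral energy decays in time, can be illustrated by fitting a line to a decaying dB envelope and extrapolating to a 60 dB drop. This is a crude stand-in for the wavelet-domain edge analysis named in the title, run on an idealized synthetic envelope:

```python
def decay_time_db(envelope_db, frame_dt, drop_db=60.0):
    """RT60-style decay time from a dB envelope via a least-squares line fit.
    envelope_db: per-frame levels in dB; frame_dt: seconds between frames."""
    n = len(envelope_db)
    ts = [i * frame_dt for i in range(n)]
    mt = sum(ts) / n
    me = sum(envelope_db) / n
    slope = (sum((t - mt) * (e - me) for t, e in zip(ts, envelope_db))
             / sum((t - mt) ** 2 for t in ts))       # dB per second (negative)
    return drop_db / -slope                          # seconds to fall drop_db

# A synthetic tail decaying exactly 20 dB per second, sampled every 10 ms:
env = [-20.0 * i * 0.01 for i in range(100)]
rt = decay_time_db(env, 0.01)  # ≈ 3.0 s to fall 60 dB
```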


Journal of the Acoustical Society of America | 2009

Time-frequency Test Signal Synthesis for Acoustic Measurements During Music Concerts

Gang Ren; Mark F. Bocko; Dave Headlam



International Conference on Consumer Electronics | 2013

An additional “Depth” of reverberation helps content stand out: Media content emphasis using audio reverberation effect

Ren Gang; Samarth Hosakere Shivaswamy; Stephen Roessner; Mark F. Bocko; Dave Headlam


Collaboration


An overview of Dave Headlam's collaborations.

Top Co-Authors


Ren Gang

University of Rochester
