Justin Lundberg
University of Rochester
Publications
Featured research published by Justin Lundberg.
Workshop on Applications of Signal Processing to Audio and Acoustics | 2009
Ren Gang; Mark F. Bocko; Dave Headlam; Justin Lundberg
In this paper we present a transcription method for polyphonic music. The short-time Fourier transform is first used to decompose an acoustic signal into sonic partials in a time-frequency representation. In general, the segmented partials exhibit distinguishable features when they originate from different “voices” in the polyphonic mix. We define feature vectors and apply a max-margin classification algorithm to produce classification labels that serve as grouping cues, i.e., that decide which partials should be assigned to each voice. These labels then drive statistically optimal grouping decisions, and a confidence level is assigned to each decision. The classification algorithm shows promising results for musical source separation.
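The front end of this pipeline can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it computes an STFT in plain NumPy and tracks the strongest spectral peaks in each frame as candidate partials. The paper's feature vectors and max-margin classifier are not reproduced, and all signal parameters (sample rate, window length, test frequencies) are invented for the example.

```python
import numpy as np

def stft(x, win_len=1024, hop=256):
    """Short-time Fourier transform with a Hann window (pure NumPy)."""
    window = np.hanning(win_len)
    n_frames = 1 + (len(x) - win_len) // hop
    frames = np.stack([x[i * hop : i * hop + win_len] * window
                       for i in range(n_frames)])
    return np.fft.rfft(frames, axis=1)       # shape: (n_frames, win_len//2 + 1)

def track_partials(spec, n_partials=2):
    """Pick the strongest spectral peaks in each frame as candidate partials."""
    mags = np.abs(spec)
    peaks = np.argsort(mags, axis=1)[:, -n_partials:]   # top bins per frame
    return np.sort(peaks, axis=1)

# Two-voice test signal: 250 Hz and 750 Hz sinusoids at an 8 kHz sample rate
# (both land on exact FFT bins, so each voice yields one clean partial).
sr = 8000
t = np.arange(2 * sr) / sr
x = np.sin(2 * np.pi * 250 * t) + 0.8 * np.sin(2 * np.pi * 750 * t)

spec = stft(x)
partials = track_partials(spec)
freqs = partials * sr / 1024    # bin index -> Hz (1024 = win_len above)
print(freqs[0])                 # → [250. 750.]  one partial per voice
```

In the paper the grouping of partials into voices is decided by the learned classifier; here the "grouping" is just the trivial peak ranking.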
International Conference on Digital Signal Processing | 2011
Ren Gang; Justin Lundberg; Gregory Bocko; Dave Headlam; Mark F. Bocko
We present a framework that provides a quantitative representation of the aspects of musical sound associated with musical expressiveness and emotion. After a brief introduction to the background of expressive features in music, we introduce a score-to-audio mapping algorithm based on dynamic time warping, which segments the audio by comparing it to the music score. Expressive feature extraction algorithms are then introduced. These algorithms extract an expressive feature set, including pitch deviation, loudness, timbre, timing, articulation, and modulation, from the segmented audio to construct an expressive feature database. We have demonstrated these tools in the context of solo Western classical music, specifically for the solo oboe. We also discuss potential applications to music performance education and music “language” processing.
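Score-to-audio alignment by dynamic time warping can be illustrated with a toy example. This is a generic textbook DTW sketch, not the authors' code: the "score" is a hypothetical sequence of MIDI pitches, and the "audio" is the same melody with notes held for varying durations, mimicking expressive timing.

```python
import numpy as np

def dtw_path(score, audio):
    """Classic dynamic time warping between two 1-D feature sequences.

    Returns the accumulated-cost matrix and the optimal warping path
    as (score_index, audio_index) pairs.
    """
    n, m = len(score), len(audio)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(score[i - 1] - audio[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from the end to recover the optimal path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D, path[::-1]

# Hypothetical score (MIDI pitches) and "audio" with stretched note durations.
score = np.array([60.0, 62.0, 64.0, 65.0])
audio = np.array([60.0, 60.0, 62.0, 64.0, 64.0, 64.0, 65.0])
_, path = dtw_path(score, audio)
# Each audio frame maps back to the score note it realizes:
segmentation = [s for s, _ in path]
print(segmentation)  # → [0, 0, 1, 2, 2, 2, 3]
```

The recovered segmentation is exactly the note boundaries needed before per-note expressive features can be extracted.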
International Conference on Consumer Electronics | 2011
Ren Gang; Gregory Bocko; Justin Lundberg; Mark F. Bocko; Dave Headlam
A real-time adaptive noise masking method for ambient interference mitigation is proposed. The noise masker is placed below the auditory masking surface of the concurrent music and is thus inaudible. The masking parameters are also adapted to the ambient interference and the room acoustics to improve masking efficiency.
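A greatly simplified version of the idea can be sketched as follows. A true auditory masking surface requires a psychoacoustic model (critical bands, spreading functions, tonality estimates); this illustration merely keeps each noise bin a fixed margin below the concurrent music spectrum. The function name, margin, and test signals are all invented for the example.

```python
import numpy as np

def shape_masker(music_frame, noise_frame, margin_db=12.0):
    """Scale each noise bin so it sits margin_db below the concurrent
    music spectrum (a crude stand-in for an auditory masking surface)."""
    music_mag = np.abs(np.fft.rfft(music_frame))
    noise_spec = np.fft.rfft(noise_frame)
    ceiling = music_mag * 10 ** (-margin_db / 20)        # allowed masker level
    noise_mag = np.maximum(np.abs(noise_spec), 1e-12)    # avoid divide-by-zero
    scale = np.minimum(1.0, ceiling / noise_mag)         # only attenuate
    return np.fft.irfft(noise_spec * scale, n=len(noise_frame))

# Hypothetical frames: a 440 Hz "music" tone and white noise to be shaped.
sr, n = 8000, 1024
music = np.sin(2 * np.pi * 440 * np.arange(n) / sr)
noise = 0.1 * np.random.default_rng(0).standard_normal(n)
masked = shape_masker(music, noise)
```

After shaping, every bin of the masker spectrum lies at or below the per-bin ceiling derived from the music, which is the inaudibility condition this toy model enforces.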
Journal of the Acoustical Society of America | 2010
Ren Gang; Justin Lundberg; Mark F. Bocko; Dave Headlam
Despite the complex, multi-dimensional nature of musical expression, in the final analysis musical expression is conveyed by sound. Therefore the expressiveness of music must be present in the sound, and it should be observable as fundamental and emergent features of the sonic signal. To gain insight into this feature space, a real-time visualization tool has been developed. The fundamental physical features (pitch, dynamic level, and timbre, as represented by the spectral energy distribution) are extracted from the audio signal and displayed versus time in a real-time animation. Emergent properties of the sound, such as musical attacks and releases, the dynamic shaping of musical lines, the timing of note placements, and the subtle modulation of tone, loudness, and timbre, can be inferred from the fundamental feature set and presented to the user visually. This visualization tool provides a stimulating music performance-learning environment that helps musicians achieve their artistic goals more effectively. ...
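Extraction of the three fundamental features can be illustrated for a single audio frame. This is a generic sketch, not the tool's actual implementation: pitch via an autocorrelation peak, dynamic level as RMS in dB, and timbre summarized by the spectral centroid of the frame. The sample rate and test tone are invented for the example.

```python
import numpy as np

def frame_features(frame, sr):
    """Pitch (Hz), dynamic level (dBFS), and spectral centroid (Hz)
    for one audio frame."""
    # Dynamic level: RMS in dB relative to full scale.
    rms = np.sqrt(np.mean(frame ** 2))
    level_db = 20 * np.log10(rms + 1e-12)
    # Pitch: lag of the strongest autocorrelation peak (after lag 0).
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    min_lag = int(sr / 1000)            # ignore implausible pitches above 1 kHz
    lag = min_lag + np.argmax(ac[min_lag:])
    pitch_hz = sr / lag
    # Timbre: centroid of the magnitude spectrum (one-number energy summary).
    mags = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), 1 / sr)
    centroid_hz = np.sum(freqs * mags) / np.sum(mags)
    return pitch_hz, level_db, centroid_hz

sr = 8000
t = np.arange(2048) / sr
frame = 0.5 * np.sin(2 * np.pi * 250 * t)   # a 250 Hz tone at half scale
pitch, level, centroid = frame_features(frame, sr)
# pitch ≈ 250 Hz, level ≈ -9 dBFS, centroid ≈ 250 Hz for this pure tone
```

A real-time animation would simply run this per hop-sized frame and plot the three curves against time; the emergent properties described above are then read off the shapes of those curves.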
International Conference on Consumer Electronics | 2012
Ren Gang; Gregory Bocko; Stephen Roessner; Justin Lundberg; Dave Headlam; Mark F. Bocko
An acoustical measurement method for room acoustical parameter estimation is presented. The proposed method collects the responses to sinusoidal test signals and detects audio phase singularity points as room acoustical features.
International Conference on Digital Signal Processing | 2011
Ren Gang; Gregory Bocko; Justin Lundberg; Dave Headlam; Mark F. Bocko
We propose generative modeling algorithms that analyze the temporal features of non-stationary signals and represent their temporal structural dependencies using hierarchical probabilistic graphical models. First, several template sampling methods are introduced to embed the temporal signal features into multiple instantiations of statistical variables. Then the learning schemes that obtain hierarchical probabilistic graphical models from these data instantiations are detailed. Based on the sampled temporal instantiations, multiple probabilistic graphical models are discovered and fit to the signal support regions. The evolution structure of these graphical models is depicted using a higher-level structural model. Finally, performance evaluations based on both simulated datasets and an audio feature dataset are presented.
International Conference on Consumer Electronics | 2012
Ren Gang; Gregory Bocko; Justin Lundberg; Stephen Roessner; Mark F. Bocko; Dave Headlam
Semantic musical features are proposed to reflect a listener's understanding of the music, rather than the music signal itself, and to serve as ideal interfaces for musical-“meaning”-based human-computer interaction. The proposed semantic musical features are based on reductive music analysis and musical expressive features.
International Conference on Consumer Electronics | 2011
Ren Gang; Justin Lundberg; Gregory Bocko; Dave Headlam; Mark F. Bocko
The listening quality of noise-cancelling headphones is compromised by distortion of the perceived aural environment. We introduce signal processing algorithms that establish a more amenable aural environment by embedding an early-reverberation signature in the residual noise. We also provide performance evaluations and a summary.
International Conference on Consumer Electronics | 2011
Ren Gang; Gregory Bocko; Justin Lundberg; Dave Headlam; Mark F. Bocko
Musical instruments, both acoustic and electronic, can serve as human control interfaces for electronic game applications. Control parameters are extracted from the instrument performance and then mapped to game control parameters. We also illustrate the details of the response feedback and provide a summary.
Journal of the Acoustical Society of America | 2011
Gang Ren; Dave Headlam; Stephen Roessner; Justin Lundberg; Mark F. Bocko
In music performance, the musician adds artistic expressive elements beyond the information contained in conventional Western music scores. These expressive dimensions, manifested as micro-fluctuations, convey emotions and essential interpretative information; they can be measured and compared quantitatively, over large and small scales, and evaluated for their effect on aspects of the performance. We present a heterogeneous expressive music feature description that includes both inter-note features, which extend over musical phrases composed of several notes, and intra-note features, which represent the internal variations within each musical note. The intra-note features include pitch and pitch deviation, dynamic level, timbre, articulation, and vibrato. The inter-note features include timing and dynamics, as well as timbre, pitch deviation, articulation, and vibrato extending across multiple notes and musical phrases. A complete multi-dimensional feature description for every note is unnecessary because there is a hierarch...