Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Adam M. Stark is active.

Publication


Featured research published by Adam M. Stark.


IEEE Transactions on Audio, Speech, and Language Processing | 2016

Automatic Environmental Sound Recognition: Performance Versus Computational Cost

Siddharth Sigtia; Adam M. Stark; Sacha Krstulovic; Mark D. Plumbley

In the context of the Internet of Things, sound sensing applications must run on embedded platforms where product pricing and form factor impose hard constraints on the available computing power. Whereas Automatic Environmental Sound Recognition (AESR) algorithms are most often developed with limited consideration for computational cost, this paper asks which AESR algorithm can make the most of a limited amount of computing power, by comparing sound classification performance as a function of computational cost. Results suggest that Deep Neural Networks yield the best accuracy across a range of computational costs, while Gaussian Mixture Models offer reasonable accuracy at a consistently small cost, and Support Vector Machines fall between the two in their trade-off between accuracy and computational cost.
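The accuracy-versus-cost comparison above hinges on how many operations each classifier needs per decision. As a rough, illustrative sketch (not the paper's methodology; the layer widths, component counts, and function names below are made-up examples), one can count multiply-accumulates per classification:

```python
def dnn_macs(layer_sizes):
    """Multiply-accumulate operations for one forward pass through a
    fully connected network with the given layer widths (biases and
    activation functions ignored for simplicity)."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

def gmm_macs(n_components, n_features, n_classes):
    """Multiply-accumulates to score one feature frame against a
    diagonal-covariance GMM per class: roughly two operations per
    dimension per component (subtract-square, then scale)."""
    return n_classes * n_components * n_features * 2

# Hypothetical sizes: a 40-dimensional feature frame, two sound classes.
print(dnn_macs([40, 100, 100, 2]))  # small fully connected DNN
print(gmm_macs(8, 40, 2))           # 8-component GMM per class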


IEEE Transactions on Audio, Speech, and Language Processing | 2012

Performance Following: Real-Time Prediction of Musical Sequences Without a Score

Adam M. Stark; Mark D. Plumbley

This paper introduces a technique for predicting harmonic sequences in a musical performance for which no score is available, using real-time audio signals. Recent short-term information is aligned with longer term information, contextualizing the present within the past, allowing predictions about the future of the performance to be made. Using a mid-level representation in the form of beat-synchronous harmonic sequences, we reduce the size of the information needed to represent the performance. This allows the implementation of real-time performance following in live performance situations. We conduct an objective evaluation on a database of rock, pop, and folk music. Our results show that we are able to predict a large majority of repeated harmonic content with no prior knowledge in the form of a score.
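The core idea above (align the recent past against the longer-term history, then read off what followed the best match) can be sketched in a few lines. This is an illustrative toy only, assuming beat-synchronous feature vectors as input; the function name, context length, and distance measure are assumptions, not the paper's implementation.

```python
import numpy as np

def predict_next_beat(features, context=4):
    """Predict the next beat's feature vector by locally aligning the
    most recent `context` beats against earlier parts of the performance
    and returning the beat that followed the best-matching segment."""
    n = len(features)
    if n < 2 * context + 1:
        return features[-1]          # too little history: repeat last beat
    recent = features[n - context:]  # the present, as the last few beats
    best_dist, best_end = np.inf, context
    # Slide the recent context over the earlier history (the past).
    for end in range(context, n - context):
        past = features[end - context:end]
        dist = np.sum((past - recent) ** 2)
        if dist < best_dist:
            best_dist, best_end = dist, end
    return features[best_end]        # the beat that followed the match
```

On music with repeated harmonic content, such as a verse-chorus structure, the recent context tends to match an earlier occurrence of the same section, so the prediction anticipates the next chord rather than merely echoing the last one.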


International Conference on Acoustics, Speech, and Signal Processing | 2010

Performance following: Tracking a performance without a score

Adam M. Stark; Mark D. Plumbley

We present a technique for following a live performance when no score is available. Making use of a local alignment between recent and longer-term musical information, we place the present in the context of the past, allowing the prediction of future performance information. By representing music as sequences of beat-synchronous features, we reduce the size of the information needed to represent the performance and allow performance following to occur in real time.


New Interfaces for Musical Expression | 2007

Real-time beat-synchronous audio effects

Adam M. Stark; Mark D. Plumbley; Matthew E. P. Davies

We present a new group of audio effects that use beat tracking (the detection of beats in an audio signal) to relate effect parameters to the beats in an input signal. Conventional audio effects are augmented so that their operation is related to the output of a beat tracking system. We present a tempo-synchronous delay effect and a set of beat-synchronous low-frequency oscillator effects, including tremolo, vibrato, and auto-wah. All effects are implemented in real time as VST plug-ins to allow for their use in live performance.
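As an illustration of the beat-synchronous LFO idea described above, a tremolo whose modulation rate locks to the detected beat period might look like the following. This is a sketch under assumed interfaces (offline NumPy processing, a precomputed list of beat times), not the authors' real-time VST code.

```python
import numpy as np

def beat_synchronous_tremolo(signal, sample_rate, beat_times, depth=0.5):
    """Amplitude-modulate `signal` with a sinusoidal LFO whose cycle
    length equals the median interval between detected beats.
    `beat_times` are beat positions in seconds, e.g. from a beat
    tracker; this helper and its defaults are illustrative only."""
    beat_period = np.median(np.diff(beat_times))  # seconds per beat
    lfo_rate = 1.0 / beat_period                  # one LFO cycle per beat
    t = np.arange(len(signal)) / sample_rate
    # Gain oscillates between (1 - depth) and 1, once per beat.
    gain = 1.0 - depth * 0.5 * (1.0 + np.sin(2.0 * np.pi * lfo_rate * t))
    return signal * gain
```

The same beat-locked LFO could instead drive a delay time or a filter centre frequency, which is the shape of the vibrato and auto-wah variants the abstract mentions.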


International Computer Music Conference | 2009

Real-Time Chord Recognition for Live Performance

Adam M. Stark; Mark D. Plumbley


Human Factors in Computing Systems | 2013

The space between the notes: adding expressive pitch control to the piano keyboard

Andrew McPherson; Adrian Gierakowski; Adam M. Stark


ICMC | 2011

Real-time Visual Beat Tracking using a Comb Filter Matrix.

Andrew Robertson; Adam M. Stark; Mark D. Plumbley


Journal of The Audio Engineering Society | 2007

Audio Effects for Real-Time Performance Using Beat Tracking

Adam M. Stark; Mark D. Plumbley; Matthew E. P. Davies


International Computer Music Conference | 2008

Rhythmic Analysis for Real-Time Audio Effects

Adam M. Stark; Matthew E. P. Davies; Mark D. Plumbley


New Interfaces for Musical Expression | 2014

Improvasher: A Real-Time Mashup System for Live Musical Input.

Matthew E. P. Davies; Adam M. Stark; Fabien Gouyon; Masataka Goto

Collaboration


Dive into Adam M. Stark's collaborations.

Top Co-Authors

Matthew E. P. Davies

Queen Mary University of London

Adrian Gierakowski

Queen Mary University of London

Andrew McPherson

Queen Mary University of London

Andrew Robertson

Queen Mary University of London

Siddharth Sigtia

Queen Mary University of London

Masataka Goto

National Institute of Advanced Industrial Science and Technology