Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Adrian Weller is active.

Publication


Featured research published by Adrian Weller.


International Conference on Machine Learning and Applications | 2009

Structured Prediction Models for Chord Transcription of Music Audio

Adrian Weller; Daniel P. W. Ellis; Tony Jebara

Chord sequences are a compact and useful description of music, representing each beat or measure in terms of a likely distribution over individual notes without specifying the notes exactly. Transcribing music audio into chord sequences is essential for harmonic analysis, and would be an important component in content-based retrieval and indexing, but accuracy rates remain fairly low. In this paper, the existing 2008 LabROSA Supervised Chord Recognition System is modified by using different machine learning methods for decoding structural information, thereby achieving significantly superior results. Specifically, the hidden Markov model is replaced by a large margin structured prediction approach (SVMstruct) using an enlarged feature space. Performance is significantly improved by incorporating features from future (but not past) frames. The benefit of SVMstruct increases with the size of the training set, as might be expected when comparing discriminative and generative models. Without yet exploring non-linear kernels, these improvements lead to state-of-the-art performance in chord transcription. The techniques could prove useful in other sequential learning tasks which currently employ HMMs.
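Both the HMM baseline and the SVMstruct decoder described above ultimately search for the highest-scoring chord sequence via Viterbi-style dynamic programming. A minimal sketch of that shared inference step, with made-up frame and transition scores (illustrative only, not the paper's features or trained model):

```python
import itertools

import numpy as np

# Toy setup: 5 audio frames, 3 candidate chord labels; random scores
# stand in for the real per-frame and chord-transition features.
rng = np.random.default_rng(0)
n_frames, n_chords = 5, 3
frame_scores = rng.normal(size=(n_frames, n_chords))
transition_scores = rng.normal(size=(n_chords, n_chords))

def viterbi(frame_scores, transition_scores):
    """Highest-scoring label sequence by dynamic programming."""
    T, K = frame_scores.shape
    best = np.empty((T, K))
    back = np.zeros((T, K), dtype=int)
    best[0] = frame_scores[0]
    for t in range(1, T):
        cand = best[t - 1][:, None] + transition_scores  # [prev, cur]
        back[t] = cand.argmax(axis=0)
        best[t] = frame_scores[t] + cand.max(axis=0)
    path = [int(best[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

decoded = viterbi(frame_scores, transition_scores)

# Sanity check against exhaustive search over all 3^5 sequences.
brute = max(
    itertools.product(range(n_chords), repeat=n_frames),
    key=lambda p: sum(frame_scores[t, p[t]] for t in range(n_frames))
    + sum(transition_scores[p[t], p[t + 1]] for t in range(n_frames - 1)),
)
```

The same decoder serves both models; only how the scores are learned (generatively for the HMM, by large-margin training for SVMstruct) differs.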


Uncertainty in Artificial Intelligence | 2014

Approximating the Bethe partition function

Adrian Weller; Tony Jebara

When belief propagation (BP) converges, it does so to a stationary point of the Bethe free energy F, and is often strikingly accurate. However, it may converge only to a local optimum or may not converge at all. An algorithm was recently introduced by Weller and Jebara for attractive binary pairwise MRFs which is guaranteed to return an ε-approximation to the global minimum of F in polynomial time provided the maximum degree Δ = O(log n), where n is the number of variables. Here we extend their approach and derive a new method based on analyzing first derivatives of F, which leads to much better performance and, for attractive models, yields a fully polynomial-time approximation scheme (FPTAS) without any degree restriction. Further, our methods apply to general (non-attractive) models, though with no polynomial time guarantee in this case, demonstrating that approximating the log of the Bethe partition function, log Z_B = −min F, for a general model to additive ε-accuracy may be reduced to a discrete MAP inference problem. This allows the merits of the global Bethe optimum to be tested.
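On a tree-structured model the Bethe optimum is exact, which allows a tiny sanity check of the quantity being approximated. A sketch on a 3-variable chain with made-up parameters (illustrative only, not the paper's FPTAS):

```python
import itertools
import math

# Attractive binary pairwise MRF on a chain x0 - x1 - x2 (made-up numbers).
theta = [0.3, -0.2, 0.5]            # unary log-potentials
w = {(0, 1): 0.4, (1, 2): 0.7}      # edge couplings, w >= 0 (attractive)

def score(x):
    s = sum(theta[i] * x[i] for i in range(3))
    return s + sum(wij * x[i] * x[j] for (i, j), wij in w.items())

# Exact log Z by enumerating all 2^3 configurations.
log_z = math.log(sum(math.exp(score(x))
                     for x in itertools.product([0, 1], repeat=3)))

# Sum-product (forward) recursion along the chain; on a tree the
# stationary point BP finds satisfies log Z_B = -min F = log Z exactly.
alpha = [math.exp(theta[0] * x0) for x0 in (0, 1)]
for i in (1, 2):
    wij = w[(i - 1, i)]
    alpha = [sum(a * math.exp(theta[i] * x + wij * xp * x)
                 for xp, a in enumerate(alpha)) for x in (0, 1)]
log_z_bethe = math.log(sum(alpha))
```

On loopy graphs the two quantities diverge, which is exactly the regime the paper's methods target.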


MIREX 2010 | 2010

The 2010 LabROSA Chord Recognition System

Daniel P. W. Ellis; Adrian Weller

For the MIREX 2010 Audio Chord Extraction task, we submitted a total of four systems. Our base system is a trainable chord recognizer based on two-band chroma representations and using a Structured SVM classifier to replace the more familiar hidden Markov model. We submit two versions of this system, one which transposes all training data through all 12 possible chords to maximize the training data available for each chord (and hence improve generalization to rarely-seen chords and keys), and one which simply trains on the chords in their original transposition, leading to a smaller model and possible learning of key-specific features. We also submit two pre-trained models, based on these two frameworks, trained in-house on the 180 Beatles and 20 Queen tracks for which ground-truth chord labels have been made available.


International Conference on Artificial Intelligence and Statistics | 2013

Bethe Bounds and Approximating the Global Optimum

Adrian Weller; Tony Jebara

Inference in general Markov random fields (MRFs) is NP-hard, though identifying the maximum a posteriori (MAP) configuration of pairwise MRFs with submodular cost functions is efficiently solvable using graph cuts. Marginal inference, however, even for this restricted class, is in #P. We prove new formulations of derivatives of the Bethe free energy, provide bounds on the derivatives and bracket the locations of stationary points, introducing a new technique called Bethe bound propagation. Several results apply to pairwise models whether associative or not. Applying these to discretized pseudo-marginals in the associative case, we present a polynomial time approximation scheme for global optimization provided the maximum degree is O(log n), and discuss several extensions.


International Joint Conference on Artificial Intelligence | 2017

Concrete Problems for Autonomous Vehicle Safety: Advantages of Bayesian Deep Learning

Rowan McAllister; Yarin Gal; Alex Kendall; Mark van der Wilk; Amar Shah; Roberto Cipolla; Adrian Weller



International Conference on Machine Learning | 2016

Train and test tightness of LP relaxations in structured prediction

Ofer Meshi; Mehrdad Mahdavi; Adrian Weller; David Sontag

Structured prediction is used in areas such as computer vision and natural language processing to predict structured outputs such as segmentations or parse trees. In these settings, prediction is performed by MAP inference or, equivalently, by solving an integer linear program. Because of the complex scoring functions required to obtain accurate predictions, both learning and inference typically require the use of approximate solvers. We propose a theoretical explanation to the striking observation that approximations based on linear programming (LP) relaxations are often tight on real-world instances. In particular, we show that learning with LP relaxed inference encourages integrality of training instances, and that tightness generalizes from train to test data.


International Conference on Machine Learning | 2017

Lost Relatives of the Gumbel Trick

Matej Balog; Nilesh Tripuraneni; Zoubin Ghahramani; Adrian Weller

The Gumbel trick is a method to sample from a discrete probability distribution, or to estimate its normalizing partition function. The method relies on repeatedly applying a random perturbation to the distribution in a particular way, each time solving for the most likely configuration. We derive an entire family of related methods, of which the Gumbel trick is one member, and show that the new methods have superior properties in several settings with minimal additional computational cost. In particular, for the Gumbel trick to yield computational benefits for discrete graphical models, Gumbel perturbations on all configurations are typically replaced with so-called low-rank perturbations. We show how a subfamily of our new methods adapts to this setting, proving new upper and lower bounds on the log partition function and deriving a family of sequential samplers for the Gibbs distribution. Finally, we balance the discussion by showing how the simpler analytical form of the Gumbel trick enables additional theoretical results.
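As a concrete illustration of the basic Gumbel trick (the well-known full-rank construction, not the paper's low-rank variants): perturbing each log-potential with independent Gumbel(0, 1) noise makes the maximiser a sample from the Gibbs distribution, and the expected maximum equals log Z plus the Euler-Mascheroni constant. A sketch on a made-up 5-state model:

```python
import math

import numpy as np

# Made-up 5-state model defined by log-potentials theta.
theta = np.array([0.5, -1.0, 2.0, 0.0, 1.2])
log_z = math.log(np.exp(theta).sum())        # exact log partition function

rng = np.random.default_rng(0)
n_samples = 200_000
g = rng.gumbel(size=(n_samples, theta.size))  # i.i.d. Gumbel(0, 1) noise
perturbed_max = (theta + g).max(axis=1)       # argmax state is a Gibbs sample

# max_i (theta_i + G_i) is Gumbel(log Z, 1) distributed, with mean
# log Z + Euler-Mascheroni; averaging and shifting estimates log Z.
estimate = perturbed_max.mean() - np.euler_gamma
```

For graphical models the exponential number of configurations makes this full perturbation impractical, which is where the low-rank perturbations discussed in the abstract come in.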


Archive | 2014

Methods for Inference in Graphical Models

Adrian Weller



International World Wide Web Conferences | 2018

Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction

Nina Grgić-Hlača; Elissa M. Redmiles; Krishna P. Gummadi; Adrian Weller



Knowledge Discovery and Data Mining | 2018

A Unified Approach to Quantifying Algorithmic Unfairness: Measuring Individual & Group Unfairness via Inequality Indices

Till Speicher; Hoda Heidari; Nina Grgić-Hlača; Krishna P. Gummadi; Adish Singla; Adrian Weller; Muhammad Bilal Zafar
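The inequality indices in the title are generalized entropy indices computed over per-individual benefit scores. A minimal sketch with made-up benefit values (for α = 2 the index equals half the squared coefficient of variation):

```python
import numpy as np

def generalized_entropy(benefits, alpha=2.0):
    """Generalized entropy index GE(alpha) of a vector of benefits."""
    b = np.asarray(benefits, dtype=float)
    mu = b.mean()
    return float(((b / mu) ** alpha - 1).sum()
                 / (b.size * alpha * (alpha - 1)))

# Perfectly equal benefits score zero; any spread scores positive.
equal = generalized_entropy([1.0, 1.0, 1.0])    # 0.0
unequal = generalized_entropy([1.0, 1.0, 2.0])  # ≈ 0.0625
```

The subgroup-decomposability of this family is what lets a single measure be split into between-group (group unfairness) and within-group (individual unfairness) components.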


Collaboration


Dive into Adrian Weller's collaborations.

Top Co-Authors


Mark Rowland

University of Cambridge


Aldo Pacchiano

University of California
