Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Max Welling is active.

Publication


Featured research published by Max Welling.


European Conference on Computer Vision | 2000

Unsupervised Learning of Models for Recognition

Markus Weber; Max Welling; Pietro Perona

We present a method to learn object class models from unlabeled and unsegmented cluttered scenes for the purpose of visual object recognition. We focus on a particular type of model where objects are represented as flexible constellations of rigid parts (features). The variability within a class is represented by a joint probability density function (pdf) on the shape of the constellation and the output of part detectors. In a first stage, the method automatically identifies distinctive parts in the training set by applying a clustering algorithm to patterns selected by an interest operator. It then learns the statistical shape model using expectation maximization. The method achieves very good classification results on human faces and rear views of cars.


Knowledge Discovery and Data Mining | 2008

Fast collapsed Gibbs sampling for latent Dirichlet allocation

Ian Porteous; David Newman; Alexander T. Ihler; Arthur U. Asuncion; Padhraic Smyth; Max Welling

In this paper we introduce a novel collapsed Gibbs sampling method for the widely used latent Dirichlet allocation (LDA) model. Our new method results in significant speedups on real-world text corpora. Conventional Gibbs sampling schemes for LDA require O(K) operations per sample where K is the number of topics in the model. Our proposed method draws equivalent samples but requires on average significantly fewer than K operations per sample. On real-world corpora FastLDA can be as much as 8 times faster than the standard collapsed Gibbs sampler for LDA. No approximations are necessary, and we show that our fast sampling scheme produces exactly the same results as the standard (but slower) sampling scheme. Experiments on four real world data sets demonstrate speedups for a wide range of collection sizes. For the PubMed collection of over 8 million documents with a required computation time of 6 CPU months for LDA, our speedup of 5.7 can save 5 CPU months of computation.
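For reference, the standard collapsed Gibbs sampler that FastLDA accelerates can be sketched as below. This is a minimal illustration of the baseline O(K)-per-token sampler, not the paper's FastLDA algorithm; the function name and hyperparameter values are illustrative.

```python
import numpy as np

def collapsed_gibbs_lda(docs, K, V, iters=100, alpha=0.1, beta=0.01, seed=0):
    """Baseline collapsed Gibbs sampler for LDA (illustrative sketch).

    docs: list of documents, each a list of word ids in [0, V).
    Returns topic assignments and the doc-topic / topic-word count tables.
    """
    rng = np.random.default_rng(seed)
    n_dk = np.zeros((len(docs), K))          # doc-topic counts
    n_kw = np.zeros((K, V))                  # topic-word counts
    n_k = np.zeros(K)                        # total tokens per topic
    z = [rng.integers(K, size=len(d)) for d in docs]
    for d, doc in enumerate(docs):           # initialize count tables
        for i, w in enumerate(doc):
            n_dk[d, z[d][i]] += 1; n_kw[z[d][i], w] += 1; n_k[z[d][i]] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                  # remove current assignment
                n_dk[d, k] -= 1; n_kw[k, w] -= 1; n_k[k] -= 1
                # full conditional: p(z=k) ∝ (n_dk+α)(n_kw+β)/(n_k+Vβ)
                p = (n_dk[d] + alpha) * (n_kw[:, w] + beta) / (n_k + V * beta)
                k = rng.choice(K, p=p / p.sum())
                z[d][i] = k                  # add new assignment back
                n_dk[d, k] += 1; n_kw[k, w] += 1; n_k[k] += 1
    return z, n_dk, n_kw
```

The inner probability vector is where the O(K) cost arises; FastLDA's contribution is drawing from the same conditional while touching, on average, far fewer than K entries.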


Pattern Recognition Letters | 2001

Positive tensor factorization

Max Welling; Markus Weber

A novel fixed point algorithm for positive tensor factorization (PTF) is introduced. The update rules efficiently minimize the reconstruction error of a positive tensor over positive factors. Tensors of arbitrary order can be factorized, which extends earlier results in the literature. Experiments show that the factors of PTF are easier to interpret than those produced by methods based on the singular value decomposition, which might contain negative values. We also illustrate the tendency of PTF to generate sparsely distributed codes.
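The multiplicative fixed-point idea is easiest to see in the order-2 (matrix) case, where it coincides with standard nonnegative matrix factorization updates; a sketch with illustrative names (the paper's PTF applies analogous per-factor updates to tensors of arbitrary order):

```python
import numpy as np

def positive_factorization(X, r, iters=500, seed=0, eps=1e-9):
    """Multiplicative-update factorization X ≈ W @ H with W, H >= 0.

    Order-2 illustration of the fixed-point scheme: each update
    multiplies the current factor by a ratio of nonnegative terms,
    so positivity is preserved while reconstruction error decreases.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update right factor
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update left factor
    return W, H
```

Because every update is a componentwise multiplication by a nonnegative ratio, no projection step is needed to keep the factors positive.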


Classical and Quantum Gravity | 1998

Quantum mechanics of a point particle in (2+1)-dimensional gravity

Hans-Juergen Matschull; Max Welling

We study the phase space structure and the quantization of a pointlike particle in (2 + 1)-dimensional gravity. By adding boundary terms to the first-order Einstein-Hilbert action, and removing all redundant gauge degrees of freedom, we arrive at a reduced action for a gravitating particle in 2 + 1 dimensions, which is invariant under Lorentz transformations and a group of generalized translations. The momentum space of the particle turns out to be the group manifold SL(2). Its position coordinates have non-vanishing Poisson brackets, resulting in a non-commutative quantum spacetime. We use the representation theory of SL(2) to investigate its structure. We find a discretization of time, and some semi-discrete structure of space. An uncertainty relation forbids a fully localized particle. The quantum dynamics is described by a discretized Klein-Gordon equation.


Computer Vision and Pattern Recognition | 2008

Unsupervised learning of visual taxonomies

Evgeniy Bart; Ian Porteous; Pietro Perona; Max Welling

As more images and categories become available, organizing them becomes crucial. We present a novel statistical method for organizing a collection of images into a tree-shaped hierarchy. The method employs a non-parametric Bayesian model and is completely unsupervised. Each image is associated with a path through a tree. Similar images share initial segments of their paths and therefore have a smaller distance from each other. Each internal node in the hierarchy represents information that is common to images whose paths pass through that node, thus providing a compact image representation. Our experiments show that a disorganized collection of images will be organized into an intuitive taxonomy. Furthermore, we find that the taxonomy allows good image categorization and, in this respect, is superior to the popular LDA model.


Neural Computation | 2006

Topographic Product Models Applied to Natural Scene Statistics

Simon Osindero; Max Welling; Geoffrey E. Hinton

We present an energy-based model that uses a product of generalized Student-t distributions to capture the statistical structure in data sets. This model is inspired by and particularly applicable to natural data sets such as images. We begin by providing the mathematical framework, where we discuss complete and overcomplete models and provide algorithms for training these models from data. Using patches of natural scenes, we demonstrate that our approach represents a viable alternative to independent component analysis as an interpretive model of biological visual systems. Although the two approaches are similar in flavor, there are also important differences, particularly when the representations are overcomplete. By constraining the interactions within our model, we are also able to study the topographic organization of Gabor-like receptive fields that our model learns. Finally, we discuss the relation of our new approach to previous work, in particular Gaussian scale mixture models and variants of independent component analysis.
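Assuming the standard product-of-Student-t energy E(x) = Σ_i α_i log(1 + ½(wᵢᵀx)²), the unnormalized energy computation can be sketched as follows; the function and variable names are illustrative, not from the paper:

```python
import numpy as np

def pot_energy(x, W, alpha):
    """Energy of a product-of-Student-t (PoT) model, up to the log
    partition function: E(x) = sum_i alpha_i * log(1 + 0.5*(w_i^T x)^2).
    Rows of W are the learned filters whose responses w_i^T x play the
    role of the Gabor-like features found on natural image patches."""
    y = W @ x                                  # filter responses
    return float(np.sum(alpha * np.log1p(0.5 * y ** 2)))
```

The heavy-tailed log(1 + ·) penalty grows slowly for large responses, which is what encourages the sparse, ICA-like codes the abstract refers to.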


Knowledge Discovery and Data Mining | 2013

Stochastic collapsed variational Bayesian inference for latent Dirichlet allocation

James R. Foulds; Levi Boyles; Christopher DuBois; Padhraic Smyth; Max Welling

There has been an explosion in the amount of digital text information available in recent years, leading to challenges of scale for traditional inference algorithms for topic models. Recent advances in stochastic variational inference algorithms for latent Dirichlet allocation (LDA) have made it feasible to learn topic models on very large-scale corpora, but these methods do not currently take full advantage of the collapsed representation of the model. We propose a stochastic algorithm for collapsed variational Bayesian inference for LDA, which is simpler and more efficient than the state of the art method. In experiments on large-scale text corpora, the algorithm was found to converge faster and often to a better solution than previous methods. Human-subject experiments also demonstrated that the method can learn coherent topics in seconds on small corpora, facilitating the use of topic models in interactive document analysis software.


Neural Computation | 2009

Bayesian k-Means as a Maximization-Expectation Algorithm

Kenichi Kurihara; Max Welling

We introduce a new class of maximization-expectation (ME) algorithms where we maximize over hidden variables but marginalize over random parameters. This reverses the roles of expectation and maximization in the classical expectation-maximization algorithm. In the context of clustering, we argue that these hard assignments open the door to very fast implementations based on data structures such as kd-trees and conga lines. The marginalization over parameters ensures that we retain the ability to infer model structure (i.e., number of clusters). As an important example, we discuss a top-down Bayesian k-means algorithm and a bottom-up agglomerative clustering algorithm. In experiments, we compare these algorithms against a number of alternative algorithms that have recently appeared in the literature.
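The "maximize over hidden variables" step can be illustrated with a plain hard-assignment k-means loop; this is a sketch with illustrative names, showing only the hard-assignment idea (the paper's Bayesian version additionally marginalizes the cluster parameters, which is omitted here):

```python
import numpy as np

def hard_kmeans(X, k, iters=100, seed=0):
    """k-means with hard assignments: maximize over the hidden cluster
    labels given centers, then refit centers given labels. The hard
    labels are what make kd-tree-style accelerations possible."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # maximization over hidden labels: nearest center wins
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # refit each center as the mean of its assigned points
        new = np.array([X[labels == j].mean(0) if (labels == j).any()
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers
```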


Artificial Intelligence | 2003

Approximate inference in Boltzmann machines

Max Welling; Yee Whye Teh

Inference in Boltzmann machines is NP-hard in general. As a result, approximations are often necessary. We discuss first order mean field and second order Onsager truncations of the Plefka expansion of the Gibbs free energy. The Bethe free energy is introduced and rewritten as a Gibbs free energy. From there a convergent belief optimization algorithm is derived to minimize the Bethe free energy. An analytic expression for the linear response estimate of the covariances is found which is exact on Boltzmann trees. Finally, a number of theorems are proven concerning the Plefka expansion, relating the first order mean field and the second order Onsager approximation to the Bethe approximation. Experiments compare mean field approximation, Onsager approximation, belief propagation and belief optimization.
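The first order mean field truncation amounts to a self-consistent fixed-point iteration over the unit means; a sketch for ±1 units, with illustrative names (parallel updates are shown for brevity, though sequential updates behave better for strong couplings):

```python
import numpy as np

def mean_field(W, b, iters=200, tol=1e-8):
    """Naive (first order) mean-field approximation for a Boltzmann
    machine with ±1 units: iterate m_i = tanh(sum_j W_ij m_j + b_i)
    to a fixed point. W should be symmetric with zero diagonal."""
    m = np.zeros_like(b)
    for _ in range(iters):
        m_new = np.tanh(W @ m + b)       # parallel mean-field update
        converged = np.max(np.abs(m_new - m)) < tol
        m = m_new
        if converged:
            break
    return m
```

At the fixed point, each mean m_i is consistent with the field produced by its neighbors, which is exactly the stationarity condition of the first order Plefka (mean field) free energy.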


IEEE International Conference on Automatic Face and Gesture Recognition | 2000

Viewpoint-invariant learning and detection of human heads

Markus Weber; Wolfgang Einhäuser; Max Welling; Pietro Perona

We present a method to learn models of human heads for the purpose of detection from different viewing angles. We focus on a model where objects are represented as constellations of rigid features (parts). Variability is represented by a joint probability density function (PDF) on the shape of the constellation. In the first stage, the method automatically identifies distinctive features in the training set using an interest operator followed by vector quantization. The set of model parameters, including the shape PDF, is then learned using expectation maximization. Experiments show good generalization performance to novel viewpoints and unseen faces. Performance is above 90% correct with less than 1 s computation time per image.

Collaboration


Dive into Max Welling's collaborations.

Top Co-Authors

Yutian Chen

University of California


Pietro Perona

California Institute of Technology


Ian Porteous

University of California


Padhraic Smyth

University of California


Markus Weber

California Institute of Technology


Edward Meeds

University of Amsterdam
