Ian Porteous
University of California, Irvine
Publications
Featured research published by Ian Porteous.
knowledge discovery and data mining | 2008
Ian Porteous; David Newman; Alexander T. Ihler; Arthur U. Asuncion; Padhraic Smyth; Max Welling
In this paper we introduce a novel collapsed Gibbs sampling method for the widely used latent Dirichlet allocation (LDA) model. Our new method results in significant speedups on real-world text corpora. Conventional Gibbs sampling schemes for LDA require O(K) operations per sample, where K is the number of topics in the model. Our proposed method draws equivalent samples but requires, on average, significantly fewer than K operations per sample. On real-world corpora FastLDA can be as much as 8 times faster than the standard collapsed Gibbs sampler for LDA. No approximations are necessary, and we show that our fast sampling scheme produces exactly the same results as the standard (but slower) sampling scheme. Experiments on four real-world data sets demonstrate speedups for a wide range of collection sizes. For the PubMed collection of over 8 million documents, with a required computation time of 6 CPU months for LDA, our speedup of 5.7 can save 5 CPU months of computation.
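The O(K)-per-sample cost that FastLDA attacks can be seen in the standard collapsed Gibbs sampler for LDA, which computes a K-term full conditional for every token. The sketch below is a minimal illustration of that baseline sampler (not the FastLDA algorithm itself); the toy corpus, hyperparameter values, and function name are illustrative assumptions.

```python
import random

def collapsed_gibbs_lda(docs, K, V, alpha=0.1, beta=0.01, iters=50, seed=0):
    """Standard collapsed Gibbs sampler for LDA: O(K) work per token."""
    rng = random.Random(seed)
    # z[d][i]: topic of token i in doc d; count tables for the collapsed posterior
    z = [[rng.randrange(K) for _ in doc] for doc in docs]
    n_dk = [[0] * K for _ in docs]          # document-topic counts
    n_kw = [[0] * V for _ in range(K)]      # topic-word counts
    n_k = [0] * K                           # tokens per topic
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            n_dk[d][k] += 1; n_kw[k][w] += 1; n_k[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                 # remove this token's assignment
                n_dk[d][k] -= 1; n_kw[k][w] -= 1; n_k[k] -= 1
                # full conditional over K topics: the O(K) loop FastLDA shortens
                p = [(n_dk[d][t] + alpha) * (n_kw[t][w] + beta) / (n_k[t] + V * beta)
                     for t in range(K)]
                r = rng.random() * sum(p)
                k, acc = 0, p[0]
                while acc < r:              # inverse-CDF draw from p
                    k += 1
                    acc += p[k]
                z[d][i] = k                 # add the token back under new topic
                n_dk[d][k] += 1; n_kw[k][w] += 1; n_k[k] += 1
    return z, n_kw

# toy corpus: documents as lists of word ids over a vocabulary of size 4
docs = [[0, 0, 1, 1], [2, 2, 3, 3], [0, 1, 2, 3]]
z, n_kw = collapsed_gibbs_lda(docs, K=2, V=4)
```

FastLDA draws from exactly the same conditional distribution; its speedup comes from upper-bounding the partial sums so the inner loop can usually terminate after visiting only a few of the K topics.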
computer vision and pattern recognition | 2008
Evgeniy Bart; Ian Porteous; Pietro Perona; Max Welling
As more images and categories become available, organizing them becomes crucial. We present a novel statistical method for organizing a collection of images into a tree-shaped hierarchy. The method employs a non-parametric Bayesian model and is completely unsupervised. Each image is associated with a path through a tree. Similar images share initial segments of their paths and therefore have a smaller distance from each other. Each internal node in the hierarchy represents information that is common to images whose paths pass through that node, thus providing a compact image representation. Our experiments show that a disorganized collection of images will be organized into an intuitive taxonomy. Furthermore, we find that the taxonomy allows good image categorization and, in this respect, is superior to the popular LDA model.
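The "each image is associated with a path through a tree" construction is characteristic of nested Chinese restaurant process priors over hierarchies. As a hedged sketch of that idea (an assumption about the model family, not the paper's exact inference procedure), the following samples a root-to-leaf path for each item, so that items sharing path prefixes naturally form nested groups:

```python
import random

def sample_ncrp_paths(n_items, depth, gamma=1.0, seed=0):
    """Assign each item a root-to-leaf path via a nested Chinese restaurant
    process; items that share path prefixes sit in the same subtree."""
    rng = random.Random(seed)
    child_counts = {}                # node path (tuple) -> count per child
    paths = []
    for _ in range(n_items):
        node = ()                    # start at the root
        for _level in range(depth):
            counts = child_counts.setdefault(node, [])
            m = sum(counts)
            # existing child with prob count/(m+gamma), new child with gamma/(m+gamma)
            r = rng.random() * (m + gamma)
            acc = 0.0
            for c, cnt in enumerate(counts):
                acc += cnt
                if r < acc:
                    counts[c] += 1
                    node = node + (c,)
                    break
            else:
                counts.append(1)     # open a fresh branch of the tree
                node = node + (len(counts) - 1,)
        paths.append(node)
    return paths

paths = sample_ncrp_paths(50, depth=3)
```

Because popular branches attract more items ("rich get richer"), the sampled tree has a few large subtrees and many small ones, matching the intuition of a taxonomy with broad categories refined into specific ones.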
information theory and applications | 2012
Max Welling; Ian Porteous; Kenichi Kurihara
Nonparametric Bayesian methods offer a convenient paradigm to deal with uncertain model structure. However, priors such as the (hierarchical) Dirichlet process prior on partitions and the Indian buffet process prior on binary matrices are not always flexible enough to express our prior beliefs. We propose a much larger family of nonparametric exchangeable priors by relaxing the concept of consistency. We discuss the consequences of this point of view and propose novel ways to specify and learn these priors. In particular, we introduce new flexible priors and inference procedures to extend the DP, HDP and IBP models. An experiment on text data illustrates how flexible priors can be useful to increase our modeling capabilities.
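The Dirichlet process prior on partitions mentioned above is most easily seen through its sequential form, the Chinese restaurant process. The sketch below draws one partition from a CRP; the function name and concentration value are illustrative, and this shows the standard DP prior that the paper proposes to generalize, not the relaxed priors themselves.

```python
import random

def crp_partition(n, alpha, seed=0):
    """Draw one partition of n items from a Chinese restaurant process,
    the exchangeable partition distribution induced by a DP(alpha) prior."""
    rng = random.Random(seed)
    counts = []                      # customers per table (cluster sizes)
    assignment = []                  # table index for each item
    for i in range(n):
        # new table with prob alpha/(i+alpha); existing table proportional to size
        r = rng.random() * (i + alpha)
        acc = 0.0
        for t, c in enumerate(counts):
            acc += c
            if r < acc:
                counts[t] += 1
                assignment.append(t)
                break
        else:
            counts.append(1)
            assignment.append(len(counts) - 1)
    return assignment, counts

assignment, counts = crp_partition(100, alpha=2.0)
```

The key property the paper relaxes is consistency: under the CRP, the partition of the first n items has the same distribution whether or not item n+1 is later observed. Dropping that requirement admits a much larger family of exchangeable priors.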
national conference on artificial intelligence | 2010
Ian Porteous; Arthur U. Asuncion; Max Welling
national conference on artificial intelligence | 2008
Ian Porteous; Evgeniy Bart; Max Welling
uncertainty in artificial intelligence | 2006
Ian Porteous; Alexander T. Ihler; Padhraic Smyth; Max Welling
Archive | 2012
Arthur U. Asuncion; Padhraic Smyth; Max Welling; David Newman; Ian Porteous; Scott Triglia
Archive | 2012
Eldar A. Musayev; Nikunj Bhagat; Ian Porteous; Laramie Leavitt; Matthew Nichols
neural information processing systems | 2007
Max Welling; Ian Porteous; Evgeniy Bart