Network


John Paisley's latest external collaborations at the country level.

Hotspot


Dive into the research topics where John Paisley is active.

Publication


Featured research published by John Paisley.


IEEE Transactions on Image Processing | 2012

Nonparametric Bayesian Dictionary Learning for Analysis of Noisy and Incomplete Images

Mingyuan Zhou; Haojun Chen; John Paisley; Lu Ren; Lingbo Li; Zhengming Xing; David B. Dunson; Guillermo Sapiro; Lawrence Carin

Nonparametric Bayesian methods are considered for recovery of imagery based upon compressive, incomplete, and/or noisy measurements. A truncated beta-Bernoulli process is employed to infer an appropriate dictionary for the data under test and also for image recovery. In the context of compressive sensing, significant improvements in image recovery are manifested using learned dictionaries, relative to using standard orthonormal image expansions. The compressive-measurement projections are also optimized for the learned dictionary. Additionally, we consider simpler (incomplete) measurements, defined by measuring a subset of image pixels, uniformly selected at random. Spatial interrelationships within imagery are exploited through use of the Dirichlet and probit stick-breaking processes. Several example results are presented, with comparisons to other methods in the literature.
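
As a loose illustration of the truncated beta-Bernoulli construction described above (not the authors' inference algorithm), the toy sketch below samples an image patch from a beta-Bernoulli dictionary model and then recovers it from a random subset of observed pixels by least squares on the active atoms. All sizes and hyperparameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (all hypothetical): P pixels per patch, K candidate dictionary atoms.
P, K = 64, 32
a, b, noise_std = 5.0, 1.0, 0.01

# Truncated beta-Bernoulli prior over which atoms a patch uses.
pi = rng.beta(a / K, b * (K - 1) / K, size=K)   # atom inclusion probabilities
z = rng.random(K) < pi                          # binary atom-usage indicators
z[rng.integers(K)] = True                       # keep the toy example non-degenerate
w = rng.normal(size=K) * z                      # sparse atom weights
D = rng.normal(0, 1 / np.sqrt(P), size=(P, K))  # dictionary (columns are atoms)
x = D @ w + noise_std * rng.normal(size=P)      # the full, unobserved noisy patch

# "Incomplete" measurement: observe a uniformly random subset of pixels.
obs = rng.random(P) < 0.5

# Point estimate of the sparse weights from observed pixels only (a crude stand-in
# for the posterior inference over dictionary, weights, and indicators in the paper).
active = np.flatnonzero(z)
w_hat, *_ = np.linalg.lstsq(D[obs][:, active], x[obs], rcond=None)
x_hat = D[:, active] @ w_hat

print("relative error on the missing pixels:",
      np.linalg.norm((x_hat - x)[~obs]) / np.linalg.norm(x[~obs]))
```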


International Conference on Machine Learning | 2009

Nonparametric factor analysis with beta process priors

John Paisley; Lawrence Carin

We propose a nonparametric extension to the factor analysis problem using a beta process prior. This beta process factor analysis (BP-FA) model allows for a dataset to be decomposed into a linear combination of a sparse set of factors, providing information on the underlying structure of the observations. As with the Dirichlet process, the beta process is a fully Bayesian conjugate prior, which allows for analytical posterior calculation and straightforward inference. We derive a variational Bayes inference algorithm and demonstrate the model on the MNIST digits and HGDP-CEPH cell line panel datasets.
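
A convenient property of the finite Beta(a/K, b(K-1)/K) approximation underlying BP-FA is that the expected number of factors used per observation is roughly a/b, independent of the truncation level K, which is what yields the sparse set of factors mentioned above. The toy simulation below (hypothetical hyperparameters, not the paper's code) checks this numerically.

```python
import numpy as np

rng = np.random.default_rng(1)

# In the finite approximation to the beta process, each of K candidate factors
# gets an inclusion probability pi_k ~ Beta(a/K, b(K-1)/K).
a, b, N = 3.0, 1.0, 2000

for K in (50, 200, 1000):
    pi = rng.beta(a / K, b * (K - 1) / K, size=K)
    Z = rng.random((N, K)) < pi          # factor-usage indicators per observation
    print(f"K={K:4d}  mean factors per observation = {Z.sum(axis=1).mean():.2f}"
          f"  (a/b = {a/b:.2f} in the large-K limit)")
```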


IEEE Transactions on Signal Processing | 2010

Compressive Sensing on Manifolds Using a Nonparametric Mixture of Factor Analyzers: Algorithm and Performance Bounds

Minhua Chen; Jorge Silva; John Paisley; Chunping Wang; David B. Dunson; Lawrence Carin

Nonparametric Bayesian methods are employed to constitute a mixture of low-rank Gaussians, for data x ∈ R^N that are of high dimension N but are constrained to reside in a low-dimensional subregion of R^N. The number of mixture components and their rank are inferred automatically from the data. The resulting algorithm can be used for learning manifolds and for reconstructing signals from manifolds, based on compressive sensing (CS) projection measurements. The statistical CS inversion is performed analytically. We derive the required number of CS random measurements needed for successful reconstruction, based on easily-computed quantities, drawing on block-sparsity properties. The proposed methodology is validated on several synthetic and real datasets.
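
To make the analytic CS inversion concrete, the sketch below conditions a single low-rank Gaussian (one factor-analyzer component) on compressive measurements; in the mixture case each component's posterior would additionally be weighted by its marginal likelihood under the measurements. Dimensions and noise levels are made up.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy dimensions (hypothetical): ambient dimension N, latent rank r, M CS measurements.
N, r, M, sigma = 50, 3, 12, 0.01

# One low-rank Gaussian component, x ~ N(mu, L L^T + eps I), as in a factor analyzer.
mu = rng.normal(size=N)
L = rng.normal(size=(N, r))
Sigma = L @ L.T + 1e-4 * np.eye(N)

# Random CS projection: y = A x + noise.
A = rng.normal(size=(M, N)) / np.sqrt(M)
x_true = rng.multivariate_normal(mu, Sigma)
y = A @ x_true + sigma * rng.normal(size=M)

# Analytic (conditionally Gaussian) inversion for this component:
# E[x | y] = mu + Sigma A^T (A Sigma A^T + sigma^2 I)^{-1} (y - A mu).
S = A @ Sigma @ A.T + sigma**2 * np.eye(M)
x_post = mu + Sigma @ A.T @ np.linalg.solve(S, y - A @ mu)

print("relative reconstruction error:",
      np.linalg.norm(x_post - x_true) / np.linalg.norm(x_true))
```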


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2015

Nested Hierarchical Dirichlet Processes

John Paisley; Chong Wang; David M. Blei; Michael I. Jordan

We develop a nested hierarchical Dirichlet process (nHDP) for hierarchical topic modeling. The nHDP generalizes the nested Chinese restaurant process (nCRP) to allow each word to follow its own path to a topic node according to a per-document distribution over the paths on a shared tree. This alleviates the rigid, single-path formulation assumed by the nCRP, allowing documents to easily express complex thematic borrowings. We derive a stochastic variational inference algorithm for the model, which enables efficient inference for massive collections of text documents. We demonstrate our algorithm on 1.8 million documents from The New York Times and 2.7 million documents from Wikipedia.
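
As a minimal toy contrast between the two assumptions (not the paper's algorithm): under the nCRP a document commits to a single root-to-leaf path, whereas under the nHDP each word draws its own path from a per-document distribution over paths on a shared tree. The tree and weights below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# A tiny fixed topic tree (hypothetical), written as root-to-node paths.
paths = [("root",),
         ("root", "sports"), ("root", "sports", "soccer"),
         ("root", "politics"), ("root", "politics", "elections")]

# nHDP-style: the document has its own distribution over paths on the shared
# tree, and every word draws its own path from it (the nCRP would instead tie
# all of the document's words to one path).
doc_path_probs = rng.dirichlet(np.ones(len(paths)))
n_words = 10
word_paths = rng.choice(len(paths), size=n_words, p=doc_path_probs)

for i, p in enumerate(word_paths):
    print(f"word {i}: path = {' -> '.join(paths[p])}")
```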


IEEE Transactions on Signal Processing | 2007

Music Analysis Using Hidden Markov Mixture Models

Yuting Qi; John Paisley; Lawrence Carin

We develop a hidden Markov mixture model based on a Dirichlet process (DP) prior, for representation of the statistics of sequential data for which a single hidden Markov model (HMM) may not be sufficient. The DP prior has an intrinsic clustering property that encourages parameter sharing, and this naturally reveals the proper number of mixture components. The evaluation of posterior distributions for all model parameters is achieved in two ways: 1) via a rigorous Markov chain Monte Carlo method; and 2) approximately and efficiently via a variational Bayes formulation. Using DP HMM mixture models in a Bayesian setting, we propose a novel scheme for music analysis, highlighting the effectiveness of the DP HMM mixture model. Music is treated as a time-series data sequence and each music piece is represented as a mixture of HMMs. We approximate the similarity of two music pieces by computing the distance between the associated HMM mixtures. Experimental results are presented for synthesized sequential data and for classical music clips. Music similarities computed using DP HMM mixture modeling are compared to those computed from Gaussian mixture modeling, for which the mixture modeling is also performed using DP. The results show that the performance of DP HMM mixture modeling exceeds that of the DP Gaussian mixture modeling.
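
One simple way to realize a distance between sequence models, sketched below with plain discrete-observation HMMs rather than full DP mixtures of HMMs, is a symmetrized cross-likelihood distance computed with the scaled forward algorithm. All model parameters here are toy values, not those learned in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def hmm_loglik(obs, pi0, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (scaled forward algorithm). pi0: initial probs, A: transitions, B: emissions."""
    alpha = pi0 * B[:, obs[0]]
    ll = 0.0
    for t in range(1, len(obs)):
        c = alpha.sum()
        ll += np.log(c)
        alpha = (alpha / c) @ A * B[:, obs[t]]
    return ll + np.log(alpha.sum())

def hmm_sample(T, pi0, A, B):
    """Sample an observation sequence of length T from a discrete HMM."""
    s = rng.choice(len(pi0), p=pi0)
    obs = []
    for _ in range(T):
        obs.append(rng.choice(B.shape[1], p=B[s]))
        s = rng.choice(A.shape[1], p=A[s])
    return np.array(obs)

# Two toy 2-state HMMs over 3 discrete symbols (all parameters hypothetical).
pi0 = np.array([0.5, 0.5])
A1 = np.array([[0.9, 0.1], [0.1, 0.9]]); B1 = np.array([[0.8, 0.1, 0.1], [0.1, 0.1, 0.8]])
A2 = np.array([[0.5, 0.5], [0.5, 0.5]]); B2 = np.array([[0.3, 0.4, 0.3], [0.3, 0.3, 0.4]])

T = 400
O1, O2 = hmm_sample(T, pi0, A1, B1), hmm_sample(T, pi0, A2, B2)

# Symmetrized per-symbol cross-likelihood distance between the two models.
d = ((hmm_loglik(O1, pi0, A1, B1) - hmm_loglik(O1, pi0, A2, B2)) +
     (hmm_loglik(O2, pi0, A2, B2) - hmm_loglik(O2, pi0, A1, B1))) / (2 * T)
print("distance(model 1, model 2):", d)
```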


IEEE Transactions on Signal Processing | 2009

Hidden Markov Models With Stick-Breaking Priors

John Paisley; Lawrence Carin

The number of states in a hidden Markov model (HMM) is an important parameter that has a critical impact on the inferred model. Bayesian approaches to addressing this issue include the nonparametric hierarchical Dirichlet process, which does not extend to a variational Bayesian (VB) solution. We present a fully conjugate, Bayesian approach to determining the number of states in an HMM, which does have a variational solution. The infinite-state HMM presented here utilizes a stick-breaking construction for each row of the state transition matrix, which allows for a sparse utilization of the same subset of observation parameters by all states. In addition to our variational solution, we discuss retrospective and collapsed Gibbs sampling methods for MCMC inference. We demonstrate our model on a music recommendation problem containing 2250 pieces of music from the classical, jazz, and rock genres.
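
A minimal sketch of the row-wise stick-breaking construction, assuming Beta(1, α) stick proportions and a finite truncation level; the paper's exact parameterization and the variational/MCMC inference are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)

def stick_breaking_row(alpha, K):
    """One row of a truncated infinite transition matrix via stick-breaking:
    V_k ~ Beta(1, alpha), pi_k = V_k * prod_{l<k}(1 - V_l); leftover mass
    is absorbed by the last (truncation-level) state."""
    V = rng.beta(1.0, alpha, size=K)
    V[-1] = 1.0
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - V[:-1])))
    return V * remaining

alpha, K = 2.0, 15          # concentration and truncation level (assumed toy values)
P = np.vstack([stick_breaking_row(alpha, K) for _ in range(K)])
print("rows sum to one:", np.allclose(P.sum(axis=1), 1.0))
print("mass on the first 5 states, per row:", P[:, :5].sum(axis=1).round(2))
```

Because each row places most of its mass on the earliest states, all rows tend to reuse the same small subset of states, which is the sparse-utilization behavior described in the abstract.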


IEEE Transactions on Image Processing | 2014

Bayesian Nonparametric Dictionary Learning for Compressed Sensing MRI

Yue Huang; John Paisley; Qin Lin; Xinghao Ding; Xueyang Fu; Xiao-Ping Zhang

We develop a Bayesian nonparametric model for reconstructing magnetic resonance images (MRIs) from highly undersampled k-space data. We perform dictionary learning as part of the image reconstruction process. To this end, we use the beta process as a nonparametric dictionary learning prior for representing an image patch as a sparse combination of dictionary elements. The size of the dictionary and patch-specific sparsity pattern are inferred from the data, in addition to other dictionary learning variables. Dictionary learning is performed directly on the compressed image, and so is tailored to the MRI being considered. In addition, we investigate a total variation penalty term in combination with the dictionary learning model, and show how the denoising property of dictionary learning removes dependence on regularization parameters in the noisy setting. We derive a stochastic optimization algorithm based on Markov chain Monte Carlo for the Bayesian model, and use the alternating direction method of multipliers for efficiently performing total variation minimization. We present empirical results on several MR images, which show that the proposed regularization framework can improve reconstruction accuracy over other methods.
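
For orientation, the sketch below sets up the undersampled-k-space measurement model and the data-consistency projection that an iterative CS-MRI reconstruction of this kind alternates with its regularization step; the beta-process dictionary-learning denoiser (and the optional TV/ADMM step) is left as a placeholder. The image, mask, and sizes are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy setup (all values hypothetical): a 64x64 "image" and a random k-space mask.
n = 64
x_true = np.zeros((n, n)); x_true[20:40, 24:44] = 1.0
mask = rng.random((n, n)) < 0.3                 # ~30% of k-space is measured
y = mask * np.fft.fft2(x_true)                  # undersampled (noise-free) k-space data

# Zero-filled baseline reconstruction.
x = np.real(np.fft.ifft2(y))

# Skeleton of one outer iteration of such a reconstruction:
# 1) regularize the current estimate in image space -- in the paper this is the
#    beta-process dictionary-learning model (plus optional TV via ADMM); here it
#    is left as an identity placeholder;
# 2) enforce consistency with the measured k-space samples.
x_reg = x                                       # <- dictionary-learning denoiser goes here
k = np.fft.fft2(x_reg)
k = np.where(mask, y, k)                        # keep measured samples, fill in the rest
x = np.real(np.fft.ifft2(k))

print("fraction of k-space measured:", mask.mean())
print("zero-filled reconstruction error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```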


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2010

Hierarchical Bayesian Modeling of Topics in Time-Stamped Documents

Iulian Pruteanu-Malinici; Lu Ren; John Paisley; Eric Wang; Lawrence Carin

We consider the problem of inferring and modeling topics in a sequence of documents with known publication dates. The documents at a given time are each characterized by a topic and the topics are drawn from a mixture model. The proposed model infers the change in the topic mixture weights as a function of time. The details of this general framework may take different forms, depending on the specifics of the model. For the examples considered here, we examine base measures based on independent multinomial-Dirichlet measures for representation of topic-dependent word counts. The form of the hierarchical model allows efficient variational Bayesian inference, of interest for large-scale problems. We demonstrate results and make comparisons to the model when the dynamic character is removed, and also compare to latent Dirichlet allocation (LDA) and Topics over Time (TOT). We consider a database of Neural Information Processing Systems papers as well as the US Presidential State of the Union addresses from 1790 to 2008.
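
A toy generative sketch in the spirit of the model (not the authors' code): topic mixture weights drift across time epochs, and each document draws a topic from its epoch's weights and then word counts from a Dirichlet-multinomial topic distribution. All sizes, the random-walk drift, and the hyperparameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy sizes (hypothetical): V-word vocabulary, K topics, E time epochs.
V, K, E, docs_per_epoch, words_per_doc = 30, 4, 5, 20, 50

# Topic-specific word distributions drawn from a Dirichlet base measure.
topics = rng.dirichlet(0.5 * np.ones(V), size=K)

# Topic mixture weights that drift smoothly across epochs (a simple stand-in
# for the time-evolving mixture weights inferred by the model).
logits = np.zeros(K)
epoch_weights = []
for _ in range(E):
    logits = logits + 0.5 * rng.normal(size=K)
    w = np.exp(logits)
    epoch_weights.append(w / w.sum())

# Generate documents: each picks a topic from its epoch's weights, then draws
# word counts from that topic.
for e, w in enumerate(epoch_weights):
    z = rng.choice(K, size=docs_per_epoch, p=w)
    counts = np.array([rng.multinomial(words_per_doc, topics[k]) for k in z])
    print(f"epoch {e}: topic usage = {np.bincount(z, minlength=K)}")
```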


Bayesian Analysis | 2012

The Discrete Infinite Logistic Normal Distribution

John Paisley; Chong Wang; David M. Blei

We present the discrete infinite logistic normal distribution (DILN), a Bayesian nonparametric prior for mixed membership models. DILN generalizes the hierarchical Dirichlet process (HDP) to model correlation structure between the weights of the atoms at the group level. We derive a representation of DILN as a normalized collection of gamma-distributed random variables and study its statistical properties. We derive a variational inference algorithm for approximate posterior inference. We apply DILN to topic modeling of documents and study its empirical performance on four corpora, comparing performance with the HDP and the correlated topic model (CTM). To compute with large-scale data, we develop a stochastic variational inference algorithm for DILN and compare with similar algorithms for HDP and latent Dirichlet allocation (LDA) on a collection of 350,000 articles from Nature.
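
A rough sketch of the normalized-gamma representation mentioned above: per document, gamma-distributed atom weights whose scales are coupled through a correlated Gaussian draw, then normalized to give topic proportions. The truncation level, hyperparameters, and covariance below are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy sizes (hypothetical): K truncated topics, D documents.
K, D, beta = 10, 5, 3.0

p = rng.dirichlet(np.ones(K))             # top-level (HDP-like) topic proportions
# A covariance over topics induces correlation between the atom weights
# (the role played by the latent Gaussian component of DILN).
Lchol = rng.normal(size=(K, K)) / np.sqrt(K)
cov = Lchol @ Lchol.T + 0.1 * np.eye(K)

for d in range(D):
    u = rng.multivariate_normal(np.zeros(K), cov)    # correlated log-scales
    Z = rng.gamma(shape=beta * p, scale=np.exp(u))   # gamma-distributed atom weights
    theta = Z / Z.sum()                              # normalized document-level weights
    print(f"doc {d}: top topic = {theta.argmax()}, weight = {theta.max():.2f}")
```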


Sensor Array and Multichannel Signal Processing Workshop | 2010

Nonparametric Bayesian matrix completion

Mingyuan Zhou; Chunping Wang; Minhua Chen; John Paisley; David B. Dunson; Lawrence Carin

The beta-binomial process is considered for inferring missing values in matrices. The model moves beyond the low-rank assumption, modeling the matrix columns as residing in a nonlinear subspace. Large-scale problems are considered via efficient Gibbs sampling, yielding predictions as well as a measure of confidence in each prediction. Algorithm performance is considered for several datasets, with encouraging performance relative to existing approaches.
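
To illustrate the beyond-low-rank setting (not the paper's sampler), the toy below builds a matrix whose columns come from a union of low-rank subspaces and sets up a random observation mask; the Gibbs sampler described above would supply the missing-entry predictions together with per-entry confidences. All sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(9)

# Columns drawn from a mixture of low-rank subspaces, so the matrix is locally
# low-rank (per cluster) without a single global low-rank factorization.
m, n_per_cluster, r, n_clusters = 60, 40, 2, 5
blocks = []
for _ in range(n_clusters):
    U = rng.normal(size=(m, r))
    V = rng.normal(size=(r, n_per_cluster))
    blocks.append(U @ V)
X = np.hstack(blocks)

s = np.linalg.svd(X, compute_uv=False)
print("effective global rank (singular values > 1e-8):", int((s > 1e-8).sum()))
print("rank of each column cluster:", [np.linalg.matrix_rank(B) for B in blocks])

# Missing-data setting: observe a random subset of entries and infer the rest
# (here only the mask is set up).
mask = rng.random(X.shape) < 0.25
print("fraction of observed entries:", mask.mean())
```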

Collaboration


Dive into John Paisley's collaborations.

Top Co-Authors

Chong Wang

Carnegie Mellon University
