Jacquelyn A. Shelton
Max Planck Society
Publications
Featured research published by Jacquelyn A. Shelton.
Pattern Recognition Letters | 2011
Matthew B. Blaschko; Jacquelyn A. Shelton; A. Bartels; Christoph H. Lampert; Arthur Gretton
Kernel canonical correlation analysis (KCCA) is a general technique for subspace learning that includes principal component analysis (PCA) and Fisher linear discriminant analysis (LDA) as special cases. By finding directions that maximize correlation, KCCA learns representations that are more closely tied to the underlying process generating the data and can ignore high-variance noise directions. However, for data where acquisition in one or more modalities is expensive or otherwise limited, KCCA may suffer from small-sample effects. We propose to use semi-supervised Laplacian regularization to exploit data that are present in only one modality. This approach finds highly correlated directions that also lie along the data manifold, resulting in a more robust estimate of correlated subspaces. Functional magnetic resonance imaging (fMRI) data are naturally amenable to subspace techniques because the data are well aligned, and fMRI recordings of the human brain are a particularly interesting candidate. In this study we applied several supervised and semi-supervised versions of KCCA to human fMRI data, with regression to both univariate and multivariate labels (corresponding to the video content subjects viewed during image acquisition). In both label conditions, the semi-supervised variants of KCCA outperformed the supervised variants, including a supervised variant with Laplacian regularization. We additionally analyze the learned regression weights to infer brain regions that are important for different types of visual processing.
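As a rough illustration of the core computation described above, the sketch below solves ridge- and Laplacian-regularized KCCA on paired data as a symmetric generalized eigenproblem. It is a minimal sketch, not the paper's implementation: the RBF kernels, the k-nearest-neighbour graph used for the Laplacian, and all parameter values are illustrative assumptions, and the paper's semi-supervised variant additionally builds the graph Laplacian from samples observed in only one modality.

```python
import numpy as np
from scipy.linalg import eigh

def rbf_kernel(X, gamma=1.0):
    """RBF (Gaussian) kernel matrix from pairwise squared distances."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def graph_laplacian(K, k=5):
    """Unnormalized Laplacian of a k-nearest-neighbour graph built from
    kernel similarities (symmetrized by taking the elementwise max)."""
    n = K.shape[0]
    W = np.zeros_like(K)
    for i in range(n):
        nn = np.argsort(K[i])[::-1][1:k + 1]  # k most similar, skip self
        W[i, nn] = K[i, nn]
    W = np.maximum(W, W.T)
    return np.diag(W.sum(axis=1)) - W

def laplacian_kcca(Kx, Ky, reg=1e-3, gamma_lap=1e-2, n_components=2):
    """KCCA with ridge and graph-Laplacian regularization, solved as a
    symmetric generalized eigenproblem A v = rho * B v (illustrative)."""
    n = Kx.shape[0]
    Z = np.zeros((n, n))
    Lx, Ly = graph_laplacian(Kx), graph_laplacian(Ky)
    A = np.block([[Z, Kx @ Ky], [Ky @ Kx, Z]])
    B = np.block([
        [Kx @ Kx + reg * np.eye(n) + gamma_lap * Kx @ Lx @ Kx, Z],
        [Z, Ky @ Ky + reg * np.eye(n) + gamma_lap * Ky @ Ly @ Ky],
    ])
    vals, vecs = eigh(A, B)  # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_components]
    return vals[order], vecs[:n, order], vecs[n:, order]

# Toy usage: two noisy "modalities" sharing one latent signal.
rng = np.random.default_rng(0)
z = rng.normal(size=(100, 1))
X = np.hstack([z, rng.normal(size=(100, 3))])
Y = np.hstack([-z, rng.normal(size=(100, 3))])
corrs, a, b = laplacian_kcca(rbf_kernel(X, 0.5), rbf_kernel(Y, 0.5))
print("top canonical correlations:", np.round(corrs, 3))
```

The leading eigenvalues estimate the canonical correlations; the Laplacian term penalizes projection directions that vary sharply between neighbouring points, pulling the learned directions toward the data manifold as the abstract describes.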
Neural Computation | 2017
Jacquelyn A. Shelton; Jan Gasthaus; Zhenwen Dai; Jörg Lücke; Arthur Gretton
We propose a nonparametric procedure for fast inference in generative graphical models when the number of latent states is very large. The approach is based on iterative latent variable preselection: we alternate between learning a selection function that reveals the relevant latent variables and using it to obtain a compact approximation of the posterior distribution for EM. This can make inference possible where the number of possible latent states is, for example, exponential in the number of latent variables, so that an exact approach would be computationally infeasible. We learn the selection function entirely from the observed data and the current expectation-maximization state via Gaussian process regression. This is in contrast to earlier approaches, where selection functions were manually designed for each problem setting. We show that our approach performs as well as these bespoke selection functions on a wide variety of inference problems. In particular, for the challenging case of a hierarchical model for object localization with occlusion, we achieve results that match a customized state-of-the-art selection method at a far lower computational cost.
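A minimal sketch of the preselection idea, under strong simplifying assumptions: a toy model with H binary latent variables, relevance targets standing in for posterior marginals from the previous EM iteration, and a fixed truncation to the top K latents per data point. The scikit-learn GP regressor and all names and sizes here are illustrative stand-ins, not the paper's pipeline.

```python
import numpy as np
from itertools import product
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

H, K = 10, 3  # number of binary latents, truncation size (assumptions)

def select_latents(Y, relevance, Y_new):
    """Fit a GP from observations to per-latent relevance scores (e.g.
    posterior marginals from the previous EM iteration) and return, for
    each new point, the indices of the K most relevant latents."""
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
    gp.fit(Y, relevance)      # relevance has shape (N, H)
    pred = gp.predict(Y_new)  # shape (N_new, H)
    return np.argsort(pred, axis=1)[:, ::-1][:, :K]

def truncated_states(selected):
    """Enumerate the 2**K joint states of the selected latents, holding
    all other latents at zero -- the compact posterior support used in
    the E-step instead of all 2**H states."""
    states = []
    for bits in product([0, 1], repeat=len(selected)):
        s = np.zeros(H, dtype=int)
        s[selected] = bits
        states.append(s)
    return np.array(states)

rng = np.random.default_rng(1)
Y = rng.normal(size=(50, 8))          # observed data (toy)
relevance = rng.random(size=(50, H))  # stand-in for E-step marginals
sel = select_latents(Y, relevance, Y[:2])
print("selected latents:", sel[0],
      "->", len(truncated_states(sel[0])), "posterior states")
```

The count printed at the end is the point of the construction: each E-step sums over 2**K states instead of 2**H, which is what makes EM tractable when the number of latents is large.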
Neural Information Processing Systems | 2011
Jacquelyn A. Shelton; Abdul-Saboor Sheikh; Pietro Berkes; Jörg Bornschein; Jörg Lücke
Neural Information Processing Systems | 2009
A. Bartels; Matthew B. Blaschko; Jacquelyn A. Shelton
Neural Information Processing Systems | 2012
Philip Sterne; Jörg Bornschein; Abdul-Saboor Sheikh; Jörg Lücke; Jacquelyn A. Shelton
Archive | 2009
Jacquelyn A. Shelton; Matthew B. Blaschko; A. Bartels
arXiv: Machine Learning | 2012
Abdul-Saboor Sheikh; Jacquelyn A. Shelton; Jörg Lücke
Neural Information Processing Systems | 2010
Jacquelyn A. Shelton; Matthew B. Blaschko; Arthur Gretton; J. Müller; E. Fischer; A. Bartels
Archive | 2010
Jacquelyn A. Shelton
Archive | 2012
Zhenwen Dai; Jacquelyn A. Shelton; Jörg Bornschein; Abdul-Saboor Sheikh; Jörg Lücke