Publication


Featured research published by Arthur Gretton.


Algorithmic Learning Theory | 2005

Measuring statistical dependence with Hilbert-Schmidt norms

Arthur Gretton; Olivier Bousquet; Alexander J. Smola; Bernhard Schölkopf

We propose an independence criterion based on the eigen-spectrum of covariance operators in reproducing kernel Hilbert spaces (RKHSs), consisting of an empirical estimate of the Hilbert-Schmidt norm of the cross-covariance operator (we term this a Hilbert-Schmidt Independence Criterion, or HSIC). This approach has several advantages, compared with previous kernel-based independence criteria. First, the empirical estimate is simpler than any other kernel dependence test, and requires no user-defined regularisation. Second, there is a clearly defined population quantity which the empirical estimate approaches in the large sample limit, with exponential convergence guaranteed between the two: this ensures that independence tests based on HSIC do not suffer from slow learning rates. Finally, we show in the context of independent component analysis (ICA) that the performance of HSIC is competitive with that of previously published kernel-based criteria, and of other recently published ICA methods.
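The biased empirical HSIC estimator described in this abstract can be written in a few lines of numpy. The sketch below is illustrative only: the Gaussian kernel, its bandwidth, and the synthetic data are assumed choices, not taken from the paper.

```python
import numpy as np

def rbf_gram(Z, sigma=1.0):
    # Gaussian kernel Gram matrix on the rows of Z.
    sq = np.sum(Z**2, axis=1)
    return np.exp(-(sq[:, None] + sq[None, :] - 2 * Z @ Z.T) / (2 * sigma**2))

def hsic_biased(X, Y, sigma=1.0):
    # Biased empirical HSIC: trace(K H L H) / (n-1)^2, H the centering matrix.
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    K = rbf_gram(X, sigma)
    L = rbf_gram(Y, sigma)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
y_dep = x + 0.1 * rng.normal(size=(200, 1))   # strongly dependent on x
y_ind = rng.normal(size=(200, 1))             # independent of x
hsic_dep = hsic_biased(x, y_dep)
hsic_ind = hsic_biased(x, y_ind)
```

As the paper notes, no user-defined regularisation appears: the estimate is a simple function of the two centred Gram matrices, and it is larger for the dependent pair than for the independent one.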


Algorithmic Learning Theory | 2007

A Hilbert Space Embedding for Distributions

Alexander J. Smola; Arthur Gretton; Le Song; Bernhard Schölkopf

We describe a technique for comparing distributions without the need for density estimation as an intermediate step. Our approach relies on mapping the distributions into a reproducing kernel Hilbert space. Applications of this technique can be found in two-sample tests, which are used for determining whether two sets of observations arise from the same distribution, covariate shift correction, local learning, measures of independence, and density estimation.
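The two-sample application mentioned above compares the RKHS mean embeddings of the two samples; the squared distance between the embeddings is the (biased, V-statistic) MMD². A minimal numpy sketch, with an assumed Gaussian kernel and synthetic data:

```python
import numpy as np

def rbf(X, Y, sigma=1.0):
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-d2 / (2 * sigma**2))

def mmd2_biased(X, Y, sigma=1.0):
    # Biased MMD^2 = mean(K_xx) + mean(K_yy) - 2 * mean(K_xy),
    # i.e. the squared distance between the empirical mean embeddings.
    return (rbf(X, X, sigma).mean() + rbf(Y, Y, sigma).mean()
            - 2 * rbf(X, Y, sigma).mean())

rng = np.random.default_rng(1)
p = rng.normal(0.0, 1, size=(300, 1))
q_same = rng.normal(0.0, 1, size=(300, 1))   # same distribution as p
q_shift = rng.normal(1.0, 1, size=(300, 1))  # mean-shifted distribution
m_same = mmd2_biased(p, q_same)
m_shift = mmd2_biased(p, q_shift)
```

No density estimate is ever formed: the statistic is near zero when the samples come from the same distribution and clearly positive under the shift.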


The Journal of Neuroscience | 2008

Low-Frequency Local Field Potentials and Spikes in Primary Visual Cortex Convey Independent Visual Information

Andrei Belitski; Arthur Gretton; Cesare Magri; Yusuke Murayama; Marcelo A. Montemurro; N. K. Logothetis; Stefano Panzeri

Local field potentials (LFPs) reflect subthreshold integrative processes that complement spike train measures. However, little is yet known about the differences between how LFPs and spikes encode rich naturalistic sensory stimuli. We addressed this question by recording LFPs and spikes from the primary visual cortex of anesthetized macaques while presenting a color movie. We then determined how the power of LFPs and spikes at different frequencies represents the visual features in the movie. We found that the most informative LFP frequency ranges were 1–8 and 60–100 Hz. LFPs in the range of 12–40 Hz carried little information about the stimulus, and may primarily reflect neuromodulatory inputs. Spike power was informative only at frequencies <12 Hz. We further quantified “signal correlations” (correlations in the trial-averaged power response to different stimuli) and “noise correlations” (trial-by-trial correlations in the fluctuations around the average) of LFPs and spikes recorded from the same electrode. We found positive signal correlation between high-gamma LFPs (60–100 Hz) and spikes, as well as strong positive signal correlation within high-gamma LFPs, suggesting that high-gamma LFPs and spikes are generated within the same network. LFPs <24 Hz shared strong positive noise correlations, indicating that they are influenced by a common source, such as a diffuse neuromodulatory input. LFPs <40 Hz showed very little signal and noise correlations with LFPs >40 Hz and with spikes, suggesting that low-frequency LFPs reflect neural processes that in natural conditions are fully decoupled from those giving rise to spikes and to high-gamma LFPs.


International Conference on Machine Learning | 2007

Supervised feature selection via dependence estimation

Le Song; Alexander J. Smola; Arthur Gretton; Karsten M. Borgwardt; Justin Bedo

We introduce a framework for filtering features that employs the Hilbert-Schmidt Independence Criterion (HSIC) as a measure of dependence between the features and the labels. The key idea is that good features should maximise such dependence. Feature selection for various supervised learning problems (including classification and regression) is unified under this framework, and the solutions can be approximated using a backward-elimination algorithm. We demonstrate the usefulness of our method on both artificial and real world datasets.
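The backward-elimination idea can be sketched directly: at each step, remove the feature whose removal leaves the highest HSIC between the remaining features and the labels. This is a simplified illustration, not the paper's implementation; the kernels, bandwidth, and toy data are assumptions.

```python
import numpy as np

def rbf_gram(Z, sigma=1.0):
    sq = np.sum(Z**2, axis=1)
    return np.exp(-(sq[:, None] + sq[None, :] - 2 * Z @ Z.T) / (2 * sigma**2))

def hsic(K, L):
    # Biased empirical HSIC between two Gram matrices.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def backward_eliminate(X, y, keep):
    # Greedily drop the feature whose removal leaves dependence highest.
    feats = list(range(X.shape[1]))
    L = (y[:, None] == y[None, :]).astype(float)   # delta kernel on labels
    while len(feats) > keep:
        scores = [hsic(rbf_gram(X[:, [f for f in feats if f != j]]), L)
                  for j in feats]
        feats.pop(int(np.argmax(scores)))
    return sorted(feats)

rng = np.random.default_rng(2)
n = 120
y = rng.integers(0, 2, size=n)
informative = 3.0 * y[:, None] + 0.1 * rng.normal(size=(n, 2))  # features 0, 1
noise = rng.normal(size=(n, 2))                                  # features 2, 3
X = np.hstack([informative, noise])
selected = backward_eliminate(X, y, keep=2)
```

On this toy problem the procedure discards the pure-noise features and retains the two label-dependent ones.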


Annals of Statistics | 2013

Equivalence of distance-based and RKHS-based statistics in hypothesis testing

Dino Sejdinovic; Bharath K. Sriperumbudur; Arthur Gretton; Kenji Fukumizu

We provide a unifying framework linking two classes of statistics used in two-sample and independence testing: on the one hand, the energy distances and distance covariances from the statistics literature; on the other, maximum mean discrepancies (MMD), that is, distances between embeddings of distributions to reproducing kernel Hilbert spaces (RKHS), as established in machine learning. In the case where the energy distance is computed with a semimetric of negative type, a positive definite kernel, termed distance kernel, may be defined such that the MMD corresponds exactly to the energy distance. Conversely, for any positive definite kernel, we can interpret the MMD as energy distance with respect to some negative-type semimetric. This equivalence readily extends to distance covariance using kernels on the product space. We determine the class of probability distributions for which the test statistics are consistent against all alternatives. Finally, we investigate the performance of the family of distance kernels in two-sample and independence tests: we show in particular that the energy distance most commonly employed in statistics is just one member of a parametric family of kernels, and that other choices from this family can yield more powerful tests.
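The stated equivalence can be checked numerically: with the distance-induced kernel k(x, y) = (d(x, x0) + d(y, x0) - d(x, y)) / 2, the (V-statistic) MMD² equals half the empirical energy distance, for any choice of anchor x0. The anchor, Euclidean metric, and data below are illustrative assumptions.

```python
import numpy as np

def dist(X, Y):
    # Euclidean distance matrix between the rows of X and Y.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.sqrt(np.maximum(d2, 0.0))

rng = np.random.default_rng(3)
X = rng.normal(0.0, 1, size=(100, 2))
Y = rng.normal(0.5, 1, size=(120, 2))
x0 = np.zeros((1, 2))   # arbitrary anchor point

def dist_kernel(A, B):
    # Distance-induced ("distance kernel") k(a, b) = (d(a,x0) + d(b,x0) - d(a,b)) / 2.
    return 0.5 * (dist(A, x0) + dist(B, x0).T - dist(A, B))

mmd2 = (dist_kernel(X, X).mean() + dist_kernel(Y, Y).mean()
        - 2 * dist_kernel(X, Y).mean())
energy = 2 * dist(X, Y).mean() - dist(X, X).mean() - dist(Y, Y).mean()
```

The two quantities agree term by term (the anchor contributions cancel), so `mmd2` matches `energy / 2` to machine precision.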


Signal Processing | 2006

An online support vector machine for abnormal events detection

Manuel Davy; Frédéric Desobry; Arthur Gretton; Christian Doncarli

The ability to detect online abnormal events in signals is essential in many real-world signal processing applications. Previous algorithms require an explicit signal statistical model, and interpret abnormal events as statistical model abrupt changes. Corresponding implementation relies on maximum likelihood or on Bayes estimation theory with generally excellent performance. However, there are numerous cases where a robust and tractable model cannot be obtained, and model-free approaches need to be considered. In this paper, we investigate a machine learning, descriptor-based approach that does not require an explicit descriptors statistical model, based on support vector novelty detection. A sequential optimization algorithm is introduced. Theoretical considerations as well as simulations on real signals demonstrate its practical efficiency.
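The model-free flavour of this approach can be illustrated with a much simpler kernel novelty score (not the paper's sequential support-vector algorithm): score a new sample by its squared RKHS distance from the kernel mean of a sliding window of recent "normal" samples. Kernel, bandwidth, and data are assumptions.

```python
import numpy as np

def novelty_score(x, window, sigma=1.0):
    # Squared RKHS distance between phi(x) and the kernel mean of the window:
    # k(x, x) - 2 * mean_i k(x, w_i) + mean_ij k(w_i, w_j).
    kxw = np.exp(-np.sum((window - x) ** 2, axis=1) / (2 * sigma**2))
    K = np.exp(-np.sum((window[:, None, :] - window[None, :, :]) ** 2, axis=-1)
               / (2 * sigma**2))
    return 1.0 - 2.0 * kxw.mean() + K.mean()

rng = np.random.default_rng(4)
window = rng.normal(0, 0.5, size=(100, 2))   # recent "normal" observations
s_normal = novelty_score(np.array([0.1, -0.2]), window)
s_abnormal = novelty_score(np.array([4.0, 4.0]), window)
```

No statistical model of the signal is assumed; an abnormal point simply scores far from the window's mean embedding.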


International Conference on Machine Learning | 2007

A dependence maximization view of clustering

Le Song; Alexander J. Smola; Arthur Gretton; Karsten M. Borgwardt

We propose a family of clustering algorithms based on the maximization of dependence between the input variables and their cluster labels, as expressed by the Hilbert-Schmidt Independence Criterion (HSIC). Under this framework, we unify the geometric, spectral, and statistical dependence views of clustering, and subsume many existing algorithms as special cases (e.g. k-means and spectral clustering). Distinctive to our framework is that kernels can also be applied on the labels, which can endow them with particular structures. We also obtain a perturbation bound on the change in k-means clustering.
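The objective behind this family of algorithms is the dependence between a kernel on the inputs and a kernel on the cluster labels. The sketch below evaluates that HSIC-style objective for a candidate labelling (it does not implement the paper's optimisation; kernel choices and data are assumptions):

```python
import numpy as np

def hsic_objective(X, labels, sigma=1.0):
    # Dependence between the data kernel K and a delta kernel L on labels,
    # measured as trace(K H L H) with H the centering matrix.
    n = len(labels)
    sq = np.sum(X**2, axis=1)
    K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / (2 * sigma**2))
    L = (labels[:, None] == labels[None, :]).astype(float)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H)

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(-2, 0.5, size=(60, 2)),    # cluster 0
               rng.normal(+2, 0.5, size=(60, 2))])   # cluster 1
true_labels = np.repeat([0, 1], 60)
rand_labels = rng.integers(0, 2, size=120)
score_true = hsic_objective(X, true_labels)
score_rand = hsic_objective(X, rand_labels)
```

A labelling aligned with the data's cluster structure scores far higher than a random one, which is what maximising the objective over labellings exploits.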


IEEE Signal Processing Workshop on Statistical Signal Processing | 2001

Support vector regression for black-box system identification

Arthur Gretton; Arnaud Doucet; Ralf Herbrich; Peter J. W. Rayner; Bernhard Schölkopf

We demonstrate the use of support vector regression (SVR) techniques for black-box system identification. These methods derive from statistical learning theory, and are of great theoretical and practical interest. We describe the theory underpinning SVR, and compare support vector methods with other approaches using radial basis networks. Finally, we apply SVR to modeling the behaviour of a hydraulic robot arm, and show that SVR improves on previously published results.
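The black-box identification setup can be sketched as follows, using kernel ridge regression as a simpler stand-in for epsilon-SVR (the system, kernel, and regularisation below are illustrative assumptions, not the paper's hydraulic-arm data):

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma**2))

# Simulate an unknown nonlinear system y[t] = f(y[t-1], u[t]).
rng = np.random.default_rng(7)
u = rng.uniform(-1, 1, size=600)
y = np.zeros(600)
for t in range(1, 600):
    y[t] = 0.8 * np.tanh(y[t - 1]) + 0.5 * u[t]

Z = np.column_stack([y[:-1], u[1:]])   # lagged output and current input
target = y[1:]
Ztr, Zte = Z[:400], Z[400:]
ytr, yte = target[:400], target[400:]

# Fit in the RKHS: (K + lambda I) alpha = y, then predict on held-out data.
K = rbf(Ztr, Ztr)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(Ztr)), ytr)
pred = rbf(Zte, Ztr) @ alpha
rmse = np.sqrt(np.mean((pred - yte) ** 2))
```

The regressor never sees the system's equations, only input/output pairs, yet the one-step-ahead predictions are accurate.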


Machine Learning | 2010

Temporal kernel CCA and its application in multimodal neuronal data analysis

Felix Bießmann; Frank C. Meinecke; Arthur Gretton; Alexander Rauch; Gregor Rainer; N. K. Logothetis; Klaus-Robert Müller

Data recorded from multiple sources sometimes exhibit non-instantaneous couplings. For simple data sets, cross-correlograms may reveal the coupling dynamics. But when dealing with high-dimensional multivariate data there is no such measure as the cross-correlogram. We propose a simple algorithm based on Kernel Canonical Correlation Analysis (kCCA) that computes a multivariate temporal filter which links one data modality to another one. The filters can be used to compute a multivariate extension of the cross-correlogram, the canonical correlogram, between data sources that have different dimensionalities and temporal resolutions. The canonical correlogram reflects the coupling dynamics between the two sources. The temporal filter reveals which features in the data give rise to these couplings and when they do so. We present results from simulations and neuroscientific experiments showing that tkCCA yields easily interpretable temporal filters and correlograms. In the experiments, we simultaneously performed electrode recordings and functional magnetic resonance imaging (fMRI) in primary visual cortex of the non-human primate. While electrode recordings reflect brain activity directly, fMRI provides only an indirect view of neural activity via the Blood Oxygen Level Dependent (BOLD) response. Thus it is crucial for our understanding and the interpretation of fMRI signals in general to relate them to direct measures of neural activity acquired with electrodes. The results computed by tkCCA confirm recent models of the hemodynamic response to neural activity and allow for a more detailed analysis of neurovascular coupling dynamics.
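The core mechanism (a multivariate temporal filter linking one modality to a delayed copy of the other) can be illustrated in its linear special case: embed one signal over a range of time lags and run CCA against the other. This sketch uses linear CCA on synthetic signals, not the paper's kernelised method or neural data.

```python
import numpy as np

def lag_embed(x, lags):
    # Stack time-shifted copies of a 1-D signal; column j is x delayed by lags[j].
    T, L = len(x), max(lags)
    return np.column_stack([x[L - l:T - l] for l in lags])

def cca_first(X, Y, reg=1e-6):
    # First canonical correlation via the whitened cross-covariance (linear CCA).
    X = X - X.mean(0); Y = Y - Y.mean(0)
    Cxx = X.T @ X / len(X) + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / len(Y) + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / len(X)
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx))
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy.T)
    return s[0], Wx.T @ U[:, 0]   # correlation, temporal filter over the lags

rng = np.random.default_rng(6)
x = rng.normal(size=2000)
y = np.roll(x, 3) + 0.1 * rng.normal(size=2000)   # y is x delayed by 3 samples
lags = list(range(6))
rho, filt = cca_first(lag_embed(x, lags), y[max(lags):][:, None])
best_lag = int(np.argmax(np.abs(filt)))
```

The recovered temporal filter peaks at the true coupling delay, which is exactly the kind of interpretability the canonical correlogram provides for the neural data.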


Magnetic Resonance Imaging | 2008

Comparison of pattern recognition methods in classifying high-resolution BOLD signals obtained at high magnetic field in monkeys

Shih-Pi Ku; Arthur Gretton; Jakob H. Macke; N. K. Logothetis

Pattern recognition methods have shown that functional magnetic resonance imaging (fMRI) data can reveal significant information about brain activity. For example, in the debate of how object categories are represented in the brain, multivariate analysis has been used to provide evidence of a distributed encoding scheme [Science 293:5539 (2001) 2425-2430]. Many follow-up studies have employed different methods to analyze human fMRI data with varying degrees of success [Nature reviews 7:7 (2006) 523-534]. In this study, we compare four popular pattern recognition methods: correlation analysis, support-vector machines (SVM), linear discriminant analysis (LDA) and Gaussian naïve Bayes (GNB), using data collected at high field (7 Tesla) with higher resolution than usual fMRI studies. We investigate prediction performance on single trials and for averages across varying numbers of stimulus presentations. The performance of the various algorithms depends on the nature of the brain activity being categorized: for several tasks, many of the methods work well, whereas for others, no method performs above chance level. An important factor in overall classification performance is careful preprocessing of the data, including dimensionality reduction, voxel selection and outlier elimination.
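Two of the compared decoders, correlation analysis and Gaussian naïve Bayes, are simple enough to sketch from scratch. The synthetic "voxel patterns" below are an assumption standing in for the fMRI data:

```python
import numpy as np

rng = np.random.default_rng(8)
n, d = 150, 50
mu0 = rng.normal(0, 1, size=d)          # class-0 mean pattern
mu1 = rng.normal(0, 1, size=d)          # class-1 mean pattern
X = np.vstack([mu0 + rng.normal(size=(n, d)), mu1 + rng.normal(size=(n, d))])
y = np.repeat([0, 1], n)
idx = rng.permutation(2 * n)
Xtr, ytr = X[idx[:200]], y[idx[:200]]
Xte, yte = X[idx[200:]], y[idx[200:]]

def gnb_predict(Xtr, ytr, Xte):
    # Gaussian naive Bayes: per-class, per-feature means and variances.
    stats = [(Xtr[ytr == c].mean(0), Xtr[ytr == c].var(0) + 1e-9) for c in (0, 1)]
    ll = np.array([[(-0.5 * np.log(2 * np.pi * v) - (x - m) ** 2 / (2 * v)).sum()
                    for m, v in stats] for x in Xte])
    return np.argmax(ll, axis=1)

def corr_predict(Xtr, ytr, Xte):
    # Correlation classifier: assign to the class whose mean pattern correlates best.
    means = [Xtr[ytr == c].mean(0) for c in (0, 1)]
    corr = np.array([[np.corrcoef(x, m)[0, 1] for m in means] for x in Xte])
    return np.argmax(corr, axis=1)

acc_gnb = (gnb_predict(Xtr, ytr, Xte) == yte).mean()
acc_corr = (corr_predict(Xtr, ytr, Xte) == yte).mean()
```

When the class patterns are well separated, as here, both simple decoders perform far above chance; the paper's point is that on real BOLD data the ranking depends heavily on the task and on preprocessing.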

Collaboration


Dive into Arthur Gretton's collaborations.

Top Co-Authors

Le Song

Georgia Institute of Technology

Zoltán Szabó

Eötvös Loránd University

Barnabás Póczos

Carnegie Mellon University