A Kernel Method for the Two-Sample Problem
Arthur Gretton [email protected]
MPI for Biological Cybernetics, Spemannstrasse 38, 72076 Tübingen, Germany
Karsten M. Borgwardt∗ [email protected]
University of Cambridge, Department of Engineering, Trumpington Street, CB2 1PZ Cambridge, United Kingdom
Malte J. Rasch [email protected]
Graz University of Technology, Inffeldgasse 16b/I, 8010 Graz, Austria
Bernhard Schölkopf [email protected]
MPI for Biological Cybernetics, Spemannstrasse 38, 72076 Tübingen, Germany
Alexander Smola [email protected]
National ICT Australia, Canberra, ACT 0200, Australia
Editor:
TBA
Abstract
We propose a framework for analyzing and comparing distributions, allowing us to design statistical tests to determine if two samples are drawn from different distributions. Our test statistic is the largest difference in expectations over functions in the unit ball of a reproducing kernel Hilbert space (RKHS). We present two tests based on large deviation bounds for the test statistic, while a third is based on the asymptotic distribution of this statistic. The test statistic can be computed in quadratic time, although efficient linear time approximations are available. Several classical metrics on distributions are recovered when the function space used to compute the difference in expectations is allowed to be more general (e.g. a Banach space). We apply our two-sample tests to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where they perform strongly. Excellent performance is also obtained when comparing distributions over graphs, for which these are the first such tests.
Keywords:
Kernel methods, two sample test, uniform convergence bounds, schema matching, asymptotic analysis, hypothesis testing.

∗. This work was carried out while K.M.B. was with the Ludwig-Maximilians-Universität München.
1. Introduction
We address the problem of comparing samples from two probability distributions, by proposing statistical tests of the hypothesis that these distributions are different (this is called the two-sample or homogeneity problem). Such tests have application in a variety of areas. In bioinformatics, it is of interest to compare microarray data from identical tissue types as measured by different laboratories, to detect whether the data may be analysed jointly, or whether differences in experimental procedure have caused systematic differences in the data distributions. Equally of interest are comparisons between microarray data from different tissue types, either to determine whether two subtypes of cancer may be treated as statistically indistinguishable from a diagnosis perspective, or to detect differences in healthy and cancerous tissue. In database attribute matching, it is desirable to merge databases containing multiple fields, where it is not known in advance which fields correspond: the fields are matched by maximising the similarity in the distributions of their entries.

We test whether distributions p and q are different on the basis of samples drawn from each of them, by finding a well behaved (e.g. smooth) function which is large on the points drawn from p, and small (as negative as possible) on the points from q. We use as our test statistic the difference between the mean function values on the two samples; when this is large, the samples are likely from different distributions. We call this statistic the Maximum Mean Discrepancy (MMD).

Clearly the quality of the MMD as a statistic depends on the class F of smooth functions that define it. On one hand, F must be "rich enough" so that the population MMD vanishes if and only if p = q. On the other hand, for the test to be consistent, F needs to be "restrictive" enough for the empirical estimate of MMD to converge quickly to its expectation as the sample size increases. We shall use the unit balls in universal reproducing kernel Hilbert spaces (Steinwart, 2001) as our function classes, since these will be shown to satisfy both of the foregoing properties (we also review classical metrics on distributions, namely the Kolmogorov-Smirnov and Earth-Mover's distances, which are based on different function classes). On a more practical note, the MMD has a reasonable computational cost, when compared with other two-sample tests: given m points sampled from p and n from q, the cost is O((m + n)²) time. We also propose a less statistically efficient algorithm with a computational cost of O(m + n), which can yield superior performance at a given computational cost by looking at a larger volume of data.

We define three non-parametric statistical tests based on the MMD. The first two, which use distribution-independent uniform convergence bounds, provide finite sample guarantees of test performance, at the expense of being conservative in detecting differences between p and q. The third test is based on the asymptotic distribution of the MMD, and is in practice more sensitive to differences in distribution at small sample sizes. The present work synthesizes and expands on results of Gretton et al. (2007a,b), Smola et al. (2007), and Song et al. (2008), who in turn build on the earlier work of Borgwardt et al. (2006). Note that the latter addresses only the third kind of test, and that the approach of Gretton et al. (2007a,b) employs a more accurate approximation to the asymptotic distribution of the test statistic.
1. In particular, most of the proofs here were not provided by Gretton et al. (2007a).
We begin our presentation in Section 2 with a formal definition of the MMD, and a proof that the population MMD is zero if and only if p = q when F is the unit ball of a universal RKHS. We also review alternative function classes for which the MMD defines a metric on probability distributions. In Section 3, we give an overview of hypothesis testing as it applies to the two-sample problem, and review other approaches to this problem. We present our first two hypothesis tests in Section 4, based on two different bounds on the deviation between the population and empirical MMD. We take a different approach in Section 5, where we use the asymptotic distribution of the empirical MMD estimate as the basis for a third test. When large volumes of data are available, the cost of computing the MMD (quadratic in the sample size) may be excessive: we therefore propose in Section 6 a modified version of the MMD statistic that has a linear cost in the number of samples, and an associated asymptotic test. In Section 7, we provide an overview of methods related to the MMD in the statistics and machine learning literature. Finally, in Section 8, we demonstrate the performance of MMD-based two-sample tests on problems from neuroscience, bioinformatics, and attribute matching using the Hungarian marriage method. Our approach performs well on high dimensional data with low sample size; in addition, we are able to successfully distinguish distributions on graph data, for which ours is the first proposed test.
2. The Maximum Mean Discrepancy
In this section, we present the maximum mean discrepancy (MMD), and describe conditions under which it is a metric on the space of probability distributions. The MMD is defined in terms of particular function spaces that witness the difference in distributions: we therefore begin in Section 2.1 by introducing the MMD for some arbitrary function space. In Section 2.2, we compute both the population MMD and two empirical estimates when the associated function space is a reproducing kernel Hilbert space, and we derive the RKHS function that witnesses the MMD for a given pair of distributions in Section 2.3. Finally, we describe the MMD for more general function classes in Section 2.4.
Our goal is to formulate a statistical test that answers the following question:
Problem 1
Let p and q be Borel probability measures defined on a domain X. Given observations X := {x_1, ..., x_m} and Y := {y_1, ..., y_n}, drawn independently and identically distributed (i.i.d.) from p and q, respectively, can we decide whether p ≠ q?

To start with, we wish to determine a criterion that, in the population setting, takes on a unique and distinctive value only when p = q. It will be defined based on Lemma 9.3.2 of Dudley (2002).

Lemma 1
Let (X, d) be a metric space, and let p, q be two Borel probability measures defined on X. Then p = q if and only if E_{x∼p}[f(x)] = E_{y∼q}[f(y)] for all f ∈ C(X), where C(X) is the space of bounded continuous functions on X.

Although C(X) in principle allows us to identify p = q uniquely, it is not practical to work with such a rich function class in the finite sample setting. We thus define a more general class of statistics, for as yet unspecified function classes F, to measure the disparity between p and q (Fortet and Mourier, 1953; Müller, 1997).

Definition 2
Let F be a class of functions f : X → R and let p, q, X, Y be defined as above. We define the maximum mean discrepancy (MMD) as

\mathrm{MMD}[F, p, q] := \sup_{f \in F} \left( \mathbf{E}_{x \sim p}[f(x)] - \mathbf{E}_{y \sim q}[f(y)] \right). \qquad (1)

Müller (1997) calls this an integral probability metric. A biased empirical estimate of the MMD is
\mathrm{MMD}_b[F, X, Y] := \sup_{f \in F} \left( \frac{1}{m} \sum_{i=1}^{m} f(x_i) - \frac{1}{n} \sum_{i=1}^{n} f(y_i) \right). \qquad (2)

The empirical MMD defined above has an upward bias (we will define an unbiased statistic in the following section). We must now identify a function class that is rich enough to uniquely identify whether p = q, yet restrictive enough to provide useful finite sample estimates (the latter property will be established in subsequent sections).

If F is the unit ball in a reproducing kernel Hilbert space H, the empirical MMD can be computed very efficiently. This will be the main approach we pursue in the present study. Other possible function classes F are discussed at the end of this section. We will refer to H as universal whenever H, defined on a compact metric space X and with associated kernel k : X × X → R, is dense in C(X) with respect to the L_∞ norm. It is shown in Steinwart (2001) that Gaussian and Laplace kernels are universal. We have the following result:

Theorem 3
Let F be a unit ball in a universal RKHS H, defined on the compact metric space X, with associated kernel k(·, ·). Then MMD[F, p, q] = 0 if and only if p = q.

Proof
It is clear that MMD[F, p, q] is zero if p = q. We prove the converse by showing that MMD[C(X), p, q] = D > 0 implies MMD[F, p, q] > 0: this is equivalent to MMD[F, p, q] = 0 implying MMD[C(X), p, q] = 0 (where this last result implies p = q by Lemma 1, noting that compactness of the metric space X implies its separability). Let H be the universal RKHS of which F is the unit ball. If MMD[C(X), p, q] = D, then there exists some f̃ ∈ C(X) for which E_p[f̃] − E_q[f̃] ≥ D/2. We know that H is dense in C(X) with respect to the L_∞ norm: this means that for ε = D/8, we can find some f* ∈ H satisfying ‖f* − f̃‖_∞ < ε. Thus, we obtain |E_p[f*] − E_p[f̃]| < ε and consequently

|E_p[f*] − E_q[f*]| > |E_p[f̃] − E_q[f̃]| − 2ε > D/2 − D/4 = D/4 > 0.

Finally, using ‖f*‖_H < ∞, we have

[E_p[f*] − E_q[f*]] / ‖f*‖_H ≥ D / (4 ‖f*‖_H) > 0,

and hence MMD[F, p, q] > 0.
We now review some properties of H that will allow us to express the MMD in a more easily computable form (Schölkopf and Smola, 2002). Since H is an RKHS, the operator of evaluation δ_x mapping f ∈ H to f(x) ∈ R is continuous. Thus, by the Riesz representation theorem, there is a feature mapping φ(x) from X to H such that f(x) = ⟨f, φ(x)⟩_H. Moreover, ⟨φ(x), φ(y)⟩_H = k(x, y), where k(x, y) is a positive definite kernel function. The following lemma is due to Borgwardt et al. (2006).

Lemma 4
Denote the expectation of φ(x) by µ_p := E_p[φ(x)] (assuming its existence). Then
\mathrm{MMD}[F, p, q] = \sup_{\|f\|_H \le 1} \langle \mu_p - \mu_q, f \rangle_H = \| \mu_p - \mu_q \|_H. \qquad (3)

Proof
\mathrm{MMD}[F, p, q] = \sup_{\|f\|_H \le 1} \left( \mathbf{E}_p[f(x)] - \mathbf{E}_q[f(y)] \right)
                     = \sup_{\|f\|_H \le 1} \left( \mathbf{E}_p[\langle \phi(x), f \rangle_H] - \mathbf{E}_q[\langle \phi(y), f \rangle_H] \right)
                     = \sup_{\|f\|_H \le 1} \langle \mu_p - \mu_q, f \rangle_H
                     = \| \mu_p - \mu_q \|_H.

Given we are in an RKHS, the norm ‖µ_p − µ_q‖_H may easily be computed in terms of kernel functions. This leads to a first empirical estimate of the MMD, which is unbiased.

Lemma 5
Given x and x′ independent random variables with distribution p, and y and y′ independent random variables with distribution q, the population MMD² is

\mathrm{MMD}^2[F, p, q] = \mathbf{E}_{x,x' \sim p}\left[ k(x, x') \right] - 2\,\mathbf{E}_{x \sim p,\, y \sim q}\left[ k(x, y) \right] + \mathbf{E}_{y,y' \sim q}\left[ k(y, y') \right]. \qquad (4)

Let Z := (z_1, ..., z_m) be m i.i.d. random variables, where z_i := (x_i, y_i) (i.e. we assume m = n). An unbiased empirical estimate of MMD² is

\mathrm{MMD}^2_u[F, X, Y] = \frac{1}{m(m-1)} \sum_{i \ne j}^{m} h(z_i, z_j), \qquad (5)

which is a one-sample U-statistic with h(z_i, z_j) := k(x_i, x_j) + k(y_i, y_j) − k(x_i, y_j) − k(x_j, y_i) (we define h(z_i, z_j) to be symmetric in its arguments due to requirements that will arise in Section 5).

Proof
Starting from the expression for MMD²[F, p, q] in Lemma 4,

\mathrm{MMD}^2[F, p, q] = \| \mu_p - \mu_q \|_H^2
                       = \langle \mu_p, \mu_p \rangle_H + \langle \mu_q, \mu_q \rangle_H - 2 \langle \mu_p, \mu_q \rangle_H
                       = \mathbf{E}_p \langle \phi(x), \phi(x') \rangle_H + \mathbf{E}_q \langle \phi(y), \phi(y') \rangle_H - 2\,\mathbf{E}_{p,q} \langle \phi(x), \phi(y) \rangle_H,
where, by the reproducing property, ⟨φ(x), φ(x′)⟩_H = k(x, x′); the empirical estimate follows straightforwardly.

2. A sufficient condition for this is ‖µ_p‖²_H < ∞, which is rearranged as E_p[k(x, x′)] < ∞, where x and x′ are independent random variables drawn according to p. In other words, k is a trace class operator with respect to the measure p.

The empirical statistic is an unbiased estimate of MMD², although it does not have minimum variance, since we are ignoring the cross-terms k(x_i, y_i), of which there are only O(n). The minimum variance estimate is almost identical, though (Serfling, 1980, Section 5.1.4). The biased statistic in (2) may also be easily computed following the above reasoning. Substituting the empirical estimates µ[X] := (1/m) Σ_{i=1}^m φ(x_i) and µ[Y] := (1/n) Σ_{i=1}^n φ(y_i) of the feature space means based on respective samples X and Y, we obtain

\mathrm{MMD}_b[F, X, Y] = \left[ \frac{1}{m^2} \sum_{i,j=1}^{m} k(x_i, x_j) - \frac{2}{mn} \sum_{i,j=1}^{m,n} k(x_i, y_j) + \frac{1}{n^2} \sum_{i,j=1}^{n} k(y_i, y_j) \right]^{1/2}. \qquad (6)

Intuitively we expect the empirical test statistic MMD[F, X, Y], whether biased or unbiased, to be small if p = q, and large if the distributions are far apart. It costs O((m + n)²) time to compute both statistics.

Finally, we note that Harchaoui et al. (2008) recently proposed a modification of the kernel MMD statistic in Lemma 4, by scaling the feature space mean distance using the inverse within-sample covariance operator, thus employing the kernel Fisher discriminant as a statistic for testing homogeneity. This statistic is shown to be related to the χ² divergence.

It is also instructive to consider the witness f which is chosen by MMD to exhibit the maximum discrepancy between the two distributions. The population f and its empirical estimate f̂(x) are respectively

f(x) ∝ ⟨φ(x), µ[p] − µ[q]⟩ = E_{x′∼p}[k(x, x′)] − E_{x′∼q}[k(x, x′)],
f̂(x) ∝ ⟨φ(x), µ[X] − µ[Y]⟩ = (1/m) Σ_{i=1}^m k(x_i, x) − (1/n) Σ_{i=1}^n k(y_i, x).

This follows from the fact that the unit vector v maximizing ⟨v, x⟩_H in a Hilbert space is v = x/‖x‖.

We illustrate the behavior of MMD in Figure 1 using a one-dimensional example. The data X and Y were generated from distributions p and q with equal means and variances, with p Gaussian and q Laplacian. We chose F to be the unit ball in an RKHS using the Gaussian kernel. We observe that the function f that witnesses the MMD — in other words, the function maximizing the mean discrepancy in (1) — is smooth, positive where the Laplace density exceeds the Gaussian density (at the center and tails), and negative where the Gaussian density is larger. Moreover, the magnitude of f is a direct reflection of the amount by which one density exceeds the other, insofar as the smoothness constraint permits it.

Figure 1: Illustration of the function maximizing the mean discrepancy in the case where a Gaussian is being compared with a Laplace distribution. Both distributions have zero mean and unit variance. The function f that witnesses the MMD has been scaled for plotting purposes, and was computed empirically using a Gaussian kernel.
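The biased and unbiased estimates (6) and (5), together with the empirical witness function above, reduce to sums of kernel evaluations and are straightforward to compute. The following sketch (our own illustration in NumPy, not the authors' code) shows one possible implementation for a Gaussian kernel with equal sample sizes; the kernel choice, the bandwidth `sigma`, and the median-distance heuristic used to set it in the example (a heuristic discussed in Section 8) are assumptions of this illustration.

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    """Gram matrix with entries k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 sigma^2))."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-np.maximum(sq, 0.0) / (2.0 * sigma**2))

def mmd2_estimates(X, Y, sigma):
    """Biased and unbiased estimates of MMD^2 (equations (6) and (5)); assumes m = n."""
    m = X.shape[0]
    Kxx = gaussian_kernel(X, X, sigma)
    Kyy = gaussian_kernel(Y, Y, sigma)
    Kxy = gaussian_kernel(X, Y, sigma)
    # Biased estimate: square of equation (6), keeping all terms including i = j.
    mmd2_b = Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean()
    # Unbiased estimate, equation (5): drop all i = j terms, including the
    # cross-terms k(x_i, y_i), exactly as in the definition of h(z_i, z_j).
    c = m * (m - 1)
    mmd2_u = ((Kxx.sum() - np.trace(Kxx)) / c
              + (Kyy.sum() - np.trace(Kyy)) / c
              - 2.0 * (Kxy.sum() - np.trace(Kxy)) / c)
    return mmd2_b, mmd2_u

def witness(T, X, Y, sigma):
    """Empirical witness f_hat (up to scaling), evaluated at the rows of T."""
    return gaussian_kernel(T, X, sigma).mean(1) - gaussian_kernel(T, Y, sigma).mean(1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 1))                     # sample from p (Gaussian)
    Y = rng.laplace(scale=2**-0.5, size=(200, 1))     # sample from q (Laplace, unit variance)
    Z = np.vstack([X, Y])
    # Median heuristic for the kernel width (see Section 8); diagonal zeros are ignored in spirit.
    sigma = np.median(np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1))
    print(mmd2_estimates(X, Y, sigma))
```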
The definition of the maximum mean discrepancy is by no means limited to RKHS. In fact, any function class F that comes with uniform convergence guarantees and is sufficiently powerful will enjoy the above properties.

Definition 6
Let F be a subset of some vector space. The star S[F] of a set F is

S[F] := { αx | x ∈ F and α ∈ [0, ∞) }.

Theorem 7
Denote by F the subset of some vector space of functions from X to R for which S[F] ∩ C(X) is dense in C(X) with respect to the L_∞(X) norm. Then MMD[F, p, q] = 0 if and only if p = q. Moreover, under the above conditions MMD[F, p, q] is a metric on the space of probability distributions. Whenever the star of F is not dense, MMD defines a pseudo-metric.

Proof
The first part of the proof is almost identical to that of Theorem 3 and is therefore omitted. To see the second part, we only need to prove the triangle inequality. We have

\sup_{f \in F} |\mathbf{E}_p f - \mathbf{E}_q f| + \sup_{g \in F} |\mathbf{E}_q g - \mathbf{E}_r g| \ge \sup_{f \in F} \left[ |\mathbf{E}_p f - \mathbf{E}_q f| + |\mathbf{E}_q f - \mathbf{E}_r f| \right] \ge \sup_{f \in F} |\mathbf{E}_p f - \mathbf{E}_r f|.

The first part of the theorem establishes that MMD[F, p, q] is a metric, since only for p = q do we have MMD[F, p, q] = 0.

Note that any uniform convergence statements in terms of F allow us immediately to characterize an estimator of MMD(F, p, q) explicitly. The following result shows how (we will refine this reasoning for the RKHS case in Section 4).
3. According to Dudley (2002, p. 26) a metric d(x, y) satisfies the following four properties: symmetry, triangle inequality, d(x, x) = 0, and d(x, y) = 0 ⟹ x = y. A pseudo-metric only satisfies the first three properties.

Theorem 8
Let δ ∈ (0, 1) be a confidence level and assume that for some ε(δ, m, F) the following holds for samples {x_1, ..., x_m} drawn from p:

\Pr\left\{ \sup_{f \in F} \left| \mathbf{E}_p[f] - \frac{1}{m} \sum_{i=1}^{m} f(x_i) \right| > \varepsilon(\delta, m, F) \right\} \le \delta. \qquad (7)

In this case we have that

\Pr\left\{ \left| \mathrm{MMD}[F, p, q] - \mathrm{MMD}_b[F, X, Y] \right| > 2\,\varepsilon(\delta/2, m, F) \right\} \le \delta. \qquad (8)

Proof
The proof works simply by using convexity and suprema as follows:

|\mathrm{MMD}[F, p, q] - \mathrm{MMD}_b[F, X, Y]|
  = \left| \sup_{f \in F} |\mathbf{E}_p[f] - \mathbf{E}_q[f]| - \sup_{f \in F} \left| \frac{1}{m} \sum_{i=1}^{m} f(x_i) - \frac{1}{n} \sum_{i=1}^{n} f(y_i) \right| \right|
  \le \sup_{f \in F} \left| \mathbf{E}_p[f] - \mathbf{E}_q[f] - \frac{1}{m} \sum_{i=1}^{m} f(x_i) + \frac{1}{n} \sum_{i=1}^{n} f(y_i) \right|
  \le \sup_{f \in F} \left| \mathbf{E}_p[f] - \frac{1}{m} \sum_{i=1}^{m} f(x_i) \right| + \sup_{f \in F} \left| \mathbf{E}_q[f] - \frac{1}{n} \sum_{i=1}^{n} f(y_i) \right|.

Bounding each of the two terms via a uniform convergence bound proves the claim.

This shows that MMD_b[F, X, Y] can be used to estimate MMD[F, p, q] and that the quantity is asymptotically unbiased.

Remark 9 (Reduction to Binary Classification)
Any classifier which maps a set of observations {z_i, l_i} with z_i ∈ X on some domain X and labels l_i ∈ {±1}, for which uniform convergence bounds exist on the convergence of the empirical loss to the expected loss, can be used to obtain a similarity measure on distributions — simply assign l_i = 1 if z_i ∈ X and l_i = −1 for z_i ∈ Y, and find a classifier which is able to separate the two sets. In this case maximization of E_p[f] − E_q[f] is achieved by ensuring that as many z ∼ p(z) as possible correspond to f(z) = 1, whereas for as many z ∼ q(z) as possible we have f(z) = −1. Consequently neural networks, decision trees, boosted classifiers and other objects for which uniform convergence bounds can be obtained can be used for the purpose of distribution comparison. For instance, Ben-David et al. (2007, Section 4) use the error of a hyperplane classifier to approximate the A-distance between distributions of Kifer et al. (2004).

Other function spaces F inspired by the statistics literature can also be considered in defining the MMD. Indeed, Lemma 1 defines an MMD with F the space of bounded continuous real-valued functions, which is a Banach space with the supremum norm (Dudley, 2002, p. 158). We now describe two further metrics on the space of probability distributions, the Kolmogorov-Smirnov and Earth Mover's distances, and their associated function classes.
The Kolmogorov-Smirnov (K-S) test is probably one of the most famous two-sample tests in statistics. It works for random variables x ∈ R (or any other set for which we can establish a total order). Denote by F_p(x) the cumulative distribution function of p and let F_X(x) be its empirical counterpart, that is

F_p(z) := \Pr\{ x \le z \text{ for } x \sim p(x) \} \quad \text{and} \quad F_X(z) := \frac{1}{|X|} \sum_{i=1}^{m} \mathbf{1}_{x_i \le z}.

It is clear that F_p captures the properties of p. The Kolmogorov metric is simply the L_∞ distance ‖F_X − F_Y‖_∞ for two sets of observations X and Y. Smirnov (1939) showed that for p = q the limiting distribution of the empirical cumulative distribution functions satisfies

\lim_{m,n \to \infty} \Pr\left\{ \left[ \frac{mn}{m+n} \right]^{1/2} \| F_X - F_Y \|_\infty > x \right\} = 2 \sum_{j=1}^{\infty} (-1)^{j-1} e^{-2 j^2 x^2} \quad \text{for } x \ge 0. \qquad (9)

This allows for an efficient characterization of the distribution under the null hypothesis H_0. Efficient numerical approximations to (9) can be found in numerical analysis handbooks (Press et al., 1994). The distribution under the alternative, p ≠ q, however, is unknown.
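For concreteness, the Kolmogorov statistic and the asymptotic p-value implied by (9) can be computed directly from the two samples. The sketch below is our own illustration in NumPy, not part of the original paper; truncating the series in (9) after a fixed number of terms is an implementation choice made here.

```python
import numpy as np

def ks_two_sample(x, y, terms=100):
    """Kolmogorov statistic ||F_X - F_Y||_inf and the asymptotic p-value from (9)."""
    m, n = len(x), len(y)
    xs, ys = np.sort(x), np.sort(y)
    grid = np.concatenate([xs, ys])
    F_x = np.searchsorted(xs, grid, side="right") / m   # empirical CDF of X on the pooled points
    F_y = np.searchsorted(ys, grid, side="right") / n   # empirical CDF of Y on the pooled points
    d = np.max(np.abs(F_x - F_y))
    t = np.sqrt(m * n / (m + n)) * d
    j = np.arange(1, terms + 1)
    p = 2.0 * np.sum((-1.0) ** (j - 1) * np.exp(-2.0 * j**2 * t**2))
    return d, float(np.clip(p, 0.0, 1.0))
```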
The Kolmogorov metric is, in fact, a special instance of MMD[F, p, q] for a certain Banach space (Müller, 1997, Theorem 5.2).

Proposition 10 Let F be the class of functions X → R of bounded variation 1. Then

MMD[F, p, q] = ‖F_p − F_q‖_∞.

Another class of distance measures on distributions that may be written as an MMD are the Earth-Mover distances. We assume (X, d) is a separable metric space, and define P(X) to be the space of probability measures on X for which ∫ d(x, z) dp(z) < ∞ for all p ∈ P(X) and x ∈ X (these are the probability measures for which E|x| < ∞ when X = R). We then have the following definition (Dudley, 2002, p. 420).

Definition 11 (Monge-Wasserstein metric)
Let p ∈ P(X) and q ∈ P(X). The Monge-Wasserstein distance is defined as

W(p, q) := \inf_{\mu \in M(p,q)} \int d(x, y) \, d\mu(x, y),

where M(p, q) is the set of joint distributions on X × X with marginals p and q.
4. A function f defined on [a, b] is of bounded variation C if the total variation is bounded by C, i.e. the supremum over all sums

\sum_{1 \le i \le n} | f(x_i) - f(x_{i-1}) |, \quad \text{where } a \le x_0 \le x_1 \le \ldots \le x_n \le b,

is bounded by C (Dudley, 2002, p. 184).

We may interpret this as the cost (as represented by the metric d(x, y)) of transferring mass distributed according to p to a distribution in accordance with q, where µ is the movement schedule. In general, a large variety of costs of moving mass from x to y can be used, such as psychooptical similarity measures in image retrieval (Rubner et al., 2000). The following theorem holds (Dudley, 2002, Theorem 11.8.2).

Theorem 12 (Kantorovich-Rubinstein)
Let p ∈ P(X) and q ∈ P(X), where X is separable. Then a metric on P(X) is defined as

W(p, q) = \| p - q \|^*_L = \sup_{\|f\|_L \le 1} \left| \int f \, d(p - q) \right|,

where

\|f\|_L := \sup_{x \ne y \in X} \frac{|f(x) - f(y)|}{d(x, y)}

is the Lipschitz seminorm for real valued f on X.

A simple example of this theorem is as follows (Dudley, 2002, Exercise 1, p. 425).
Example 1
Let X = R with associated d(x, y) = |x − y|. Then given f such that ‖f‖_L ≤ 1, we use integration by parts to obtain

\left| \int f \, d(p - q) \right| = \left| \int (F_p - F_q)(x) f'(x) \, dx \right| \le \int |F_p - F_q|(x) \, dx,

where the maximum is attained for the function g with derivative g' = 2 \cdot \mathbf{1}_{F_p > F_q} - 1 (and for which ‖g‖_L = 1). We recover the L_1 distance between distribution functions,

W(p, q) = \int |F_p - F_q|(x) \, dx.

One may further generalize Theorem 12 to the set of all laws P(X) on arbitrary metric spaces X (Dudley, 2002, Proposition 11.3.2).

Definition 13 (Bounded Lipschitz metric)
Let p and q be laws on a metric space X. Then

β(p, q) := \sup_{\|f\|_{BL} \le 1} \left| \int f \, d(p - q) \right|

is a metric on P(X), where f belongs to the space of bounded Lipschitz functions with norm ‖f‖_{BL} := ‖f‖_L + ‖f‖_∞.
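For X = R, the Monge-Wasserstein distance of Example 1 can be estimated from samples as the integral of |F_X − F_Y|. A minimal sketch follows (our own illustration; the use of SciPy's one-dimensional `wasserstein_distance` and the particular sample distributions are assumptions of the example, not part of the paper):

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 1000)         # sample from p
y = rng.laplace(0.0, 2**-0.5, 1000)    # sample from q
# On the real line this equals the integral of |F_X - F_Y|, as in Example 1.
print(wasserstein_distance(x, y))
```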
3. Background Material
We now present three background results. First, we introduce the terminology used in statistical hypothesis testing. Second, we demonstrate via an example that even for tests which have asymptotically no error, one cannot guarantee performance at any fixed sample size without making assumptions about the distributions. Finally, we briefly review some earlier approaches to the two-sample problem.
5. A seminorm satisfies the requirements of a norm besides ‖x‖ = 0 only for x = 0 (Dudley, 2002, p. 156).
Having described a metric on probability distributions (the MMD) based on distances between their Hilbert space embeddings, and empirical estimates (biased and unbiased) of this metric, we now address the problem of determining whether the empirical MMD shows a statistically significant difference between distributions. To this end, we briefly describe the framework of statistical hypothesis testing as it applies in the present context, following Casella and Berger (2002, Chapter 8). Given i.i.d. samples X ∼ p of size m and Y ∼ q of size n, the statistical test, T(X, Y) : X^m × X^n → {0, 1}, is used to distinguish between the null hypothesis H_0 : p = q and the alternative hypothesis H_1 : p ≠ q. This is achieved by comparing the test statistic MMD[F, X, Y] with a particular threshold: if the threshold is exceeded, then the test rejects the null hypothesis (bearing in mind that a zero population MMD indicates p = q). The acceptance region of the test is thus defined as the set of real numbers below the threshold. Since the test is based on finite samples, it is possible that an incorrect answer will be returned: we define the Type I error as the probability of rejecting p = q based on the observed sample, despite the null hypothesis having generated the data. Conversely, the Type II error is the probability of accepting p = q despite the underlying distributions being different. The level α of a test is an upper bound on the Type I error: this is a design parameter of the test, and is used to set the threshold to which we compare the test statistic (finding the test threshold for a given α is the topic of Sections 4 and 5). A consistent test achieves a level α, and a Type II error of zero, in the large sample limit. We will see that the tests proposed in this paper are consistent.

Even if a test is consistent, it is not possible to distinguish distributions with high probability at a given, fixed sample size (i.e., to provide guarantees on the Type II error), without prior assumptions as to the nature of the difference between p and q. This is true regardless of the two-sample test used. There are several ways to illustrate this, which each give different insight into the kinds of differences that might be undetectable for a given number of samples. The following example is one such illustration.

Example 2
Assume that we have a distribution p from which we draw m i.i.d. observations. Moreover, we construct a distribution q by drawing m² i.i.d. observations from p and subsequently defining a discrete distribution over these m² instances with probability m^{-2} each. It is easy to check that if we now draw m observations from q, there is at least a

\binom{m^2}{m} \frac{m!}{m^{2m}} > 1 - e^{-1} > 0.63

probability that we thereby will have effectively obtained an m sample from p. Hence no test will be able to distinguish samples from p and q in this case. We could make the probability of detection arbitrarily small by increasing the size of the sample from which we construct q.
6. This may be biased or unbiased.
7. This is a variation of a construction for independence tests, which was suggested in a private communication by John Langford.

We next give a brief overview of some earlier approaches to the two sample problem for multivariate data. Since our later experimental comparison is with respect to certain of these methods, we give abbreviated algorithm names in italics where appropriate: these should be used as a key to the tables in Section 8. A generalisation of the Wald-Wolfowitz runs test to the multivariate domain was proposed and analysed by Friedman and Rafsky (1979); Henze and Penrose (1999) (FR Wolf), and involves counting the number of edges in the minimum spanning tree over the aggregated data that connect points in X to points in Y. The resulting test relies on the asymptotic normality of the test statistic, and this quantity is not distribution-free under the null hypothesis for finite samples (it depends on p and q). The computational cost of this method using Kruskal's algorithm is O((m + n)² log(m + n)), although more modern methods improve on the log(m + n) term. See Chazelle (2000) for details. Friedman and Rafsky (1979) claim that calculating the matrix of distances, which costs O((m + n)²), dominates their computing time; we return to this point in our experiments (Section 8). Two possible generalisations of the Kolmogorov-Smirnov test to the multivariate case were studied in (Bickel, 1969; Friedman and Rafsky, 1979). The approach of Friedman and Rafsky (FR Smirnov) in this case again requires a minimal spanning tree, and has a similar cost to their multivariate runs test.

A more recent multivariate test was introduced by Rosenbaum (2005). This entails computing the minimum distance non-bipartite matching over the aggregate data, and using the number of pairs containing a sample from both X and Y as a test statistic. The resulting statistic is distribution-free under the null hypothesis at finite sample sizes, in which respect it is superior to the Friedman-Rafsky test; on the other hand, it costs O((m + n)³) to compute. Another distribution-free test (Hall) was proposed by Hall and Tajvidi (2002): for each point from p, it requires computing the closest points in the aggregated data, and counting how many of these are from q (the procedure is repeated for each point from q with respect to points from p). As we shall see in our experimental comparisons, the test statistic is costly to compute; Hall and Tajvidi (2002) consider only tens of points in their experiments.

Yet another approach is to use some distance (e.g. L_1 or L_2) between Parzen window estimates of the densities as a test statistic (Anderson et al., 1994; Biau and Gyorfi, 2005), based on the asymptotic distribution of this distance given p = q. When the L_2 norm is used, the test statistic is related to those we present here, although it is arrived at from a different perspective. Briefly, the test of Anderson et al. (1994) is obtained in a more restricted setting where the RKHS kernel is an inner product between Parzen windows. Since we are not doing density estimation, however, we need not decrease the kernel width as the sample grows. In fact, decreasing the kernel width reduces the convergence rate of the associated two-sample test, compared with the (m + n)^{-1/2} rate for fixed kernels. We provide more detail in Section 7.1.
The L_1 approach of Biau and Gyorfi (2005) (Biau) requires the space to be partitioned into a grid of bins, which becomes difficult or impossible for high dimensional problems. Hence we use this test only for low-dimensional problems in our experiments.
4. Tests Based on Uniform Convergence Bounds
In this section, we introduce two statistical tests for the two-sample problem which have exact performance guarantees at finite sample sizes, based on uniform convergence bounds. The first, in Section 4.1, uses the McDiarmid (1989) bound on the biased MMD statistic, and the second, in Section 4.2, uses a Hoeffding (1963) bound for the unbiased statistic.
We establish two properties of the MMD, from which we derive a hypothesis test. First, we show that regardless of whether or not p = q, the empirical MMD converges in probability at rate O((m + n)^{-1/2}) to its population value. This shows the consistency of statistical tests based on the MMD. Second, we give probabilistic bounds for large deviations of the empirical MMD in the case p = q. These bounds lead directly to a threshold for our first hypothesis test. We begin our discussion with the convergence of MMD_b[F, X, Y] to MMD[F, p, q].
Let p, q, X, Y be defined as in Problem 1, and assume 0 ≤ k(x, y) ≤ K. Then

\Pr\left\{ \left| \mathrm{MMD}_b[F, X, Y] - \mathrm{MMD}[F, p, q] \right| > 2\left( (K/m)^{1/2} + (K/n)^{1/2} \right) + \varepsilon \right\} \le 2 \exp\left( \frac{-\varepsilon^2 m n}{2 K (m + n)} \right).

See Appendix A.2 for the proof. Our next goal is to refine this result in a way that allows us to define a test threshold under the null hypothesis p = q. Under this circumstance, the constants in the exponent are slightly improved.

Theorem 15
Under the conditions of Theorem 14 where additionally p = q and m = n,

\mathrm{MMD}_b[F, X, Y] \le \underbrace{ m^{-1/2} \sqrt{ 2\,\mathbf{E}_p\left[ k(x, x) - k(x, x') \right] } }_{B_1(F, p)} + \varepsilon \le \underbrace{ (2K/m)^{1/2} }_{B_2(F, p)} + \varepsilon,

both with probability at least 1 − exp(−ε²m/(4K)) (see Appendix A.3 for the proof). In this theorem, we illustrate two possible bounds B_1(F, p) and B_2(F, p) on the bias in the empirical estimate (6). The first inequality is interesting inasmuch as it provides a link between the bias bound B_1(F, p) and kernel size (for instance, if we were to use a Gaussian kernel with large σ, then k(x, x) and k(x, x') would likely be close, and the bias small). In the context of testing, however, we would need to provide an additional bound to show convergence of an empirical estimate of B_1(F, p) to its population equivalent. Thus, in the following test for p = q based on Theorem 15, we use B_2(F, p) to bound the bias.

Corollary 16
A hypothesis test of level α for the null hypothesis p = q, that is, for MMD[F, p, q] = 0, has the acceptance region

\mathrm{MMD}_b[F, X, Y] < \sqrt{2K/m} \left( 1 + \sqrt{2 \log \alpha^{-1}} \right).

We emphasise that Theorem 14 guarantees the consistency of the test, and that the Type II error probability decreases to zero at rate O(m^{-1/2}), assuming m = n. To put this convergence rate in perspective, consider a test of whether two normal distributions have equal means, given they have unknown but equal variance (Casella and Berger, 2002, Exercise 8.41). In this case, the test statistic has a Student-t distribution with n + m − 2 degrees of freedom.

8. Note that we use a tighter bias bound than Gretton et al. (2007a).

Bounds may also be obtained for the deviation between the expectation µ[p] and the empirical means µ[X] in a completely analogous fashion. The proof requires symmetrization by means of a ghost sample, i.e. a second set of observations drawn from the same distribution. While not the key focus of the present paper, such bounds can be used in the design of inference principles based on moment matching (Altun and Smola, 2006; Dudík and Schapire, 2006; Dudík et al., 2004).

While the previous bounds are of interest since the proof strategy can be used for general function classes with well behaved Rademacher averages, a much easier approach may be used directly on the unbiased statistic MMD²_u in Lemma 5. We base our test on the following theorem, which is a straightforward application of the large deviation bound on U-statistics of Hoeffding (1963, p. 25).

Theorem 17
Assume 0 ≤ k(x_i, x_j) ≤ K, from which it follows −2K ≤ h(z_i, z_j) ≤ 2K. Then

\Pr\left\{ \mathrm{MMD}^2_u(F, X, Y) - \mathrm{MMD}^2(F, p, q) > t \right\} \le \exp\left( \frac{-t^2 m_2}{8 K^2} \right),

where m_2 := ⌊m/2⌋ (the same bound applies for deviations of −t and below). A consistent statistical test for p = q using MMD²_u is then obtained.

Corollary 18
A hypothesis test of level α for the null hypothesis p = q has the acceptance region

\mathrm{MMD}^2_u < (4K/\sqrt{m}) \sqrt{\log \alpha^{-1}}.

We now compare the thresholds of the two tests. We note first that the threshold for the biased statistic applies to an estimate of MMD, whereas that for the unbiased statistic is for an estimate of MMD². Squaring the former threshold to make the two quantities comparable, the squared threshold in Corollary 16 decreases as m^{-1}, whereas the threshold in Corollary 18 decreases as m^{-1/2}. Thus for sufficiently large m, the McDiarmid-based threshold will be lower (and the associated test statistic is in any case biased upwards), and its Type II error will be better for a given Type I bound. This is confirmed in our Section 8 experiments. Note, however, that the rate of convergence of the squared, biased MMD estimate to its population value remains at 1/√m (bearing in mind we take the square of a biased estimate, where the bias term decays as 1/√m).

9. In the case of α = 0.05, this is m ≥ 12.

Finally, we note that the bounds we obtained here are rather conservative for a number of reasons: first, they do not take the actual distributions into account. In fact, they are finite sample size, distribution free bounds that hold even in the worst case scenario. The bounds could be tightened using localization, moments of the distribution, etc. Any such improvements could be plugged straight into Theorem 8 for a tighter bound. See e.g. Bousquet et al. (2005) for a detailed discussion of recent uniform convergence bounding methods. Second, in computing bounds rather than trying to characterize the distribution of MMD(F, X, Y) explicitly, we force our test to be conservative by design. In the following we aim for an exact characterization of the asymptotic distribution of MMD(F, X, Y) instead of a bound. While this will not satisfy the uniform convergence requirements, it leads to superior tests in practice.
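As a concrete illustration of the two distribution-free tests, the sketch below (ours, not from the paper) evaluates the acceptance thresholds of Corollaries 16 and 18 and compares them with the squared biased and unbiased statistics of Section 2; the kernel bound K = 1, which holds for Gaussian RBF kernels, is an assumption of the example.

```python
import numpy as np

def threshold_biased(m, K, alpha):
    """Corollary 16: acceptance threshold for the biased statistic MMD_b."""
    return np.sqrt(2.0 * K / m) * (1.0 + np.sqrt(2.0 * np.log(1.0 / alpha)))

def threshold_unbiased(m, K, alpha):
    """Corollary 18: acceptance threshold for the unbiased statistic MMD_u^2."""
    return (4.0 * K / np.sqrt(m)) * np.sqrt(np.log(1.0 / alpha))

def bound_based_tests(mmd2_b, mmd2_u, m, K=1.0, alpha=0.05):
    """Return (reject_b, reject_u); True means H0: p = q is rejected.

    mmd2_b and mmd2_u are the squared biased and unbiased statistics of
    Section 2; K is an upper bound on the kernel (K = 1 for a Gaussian RBF).
    """
    reject_b = np.sqrt(max(mmd2_b, 0.0)) > threshold_biased(m, K, alpha)
    reject_u = mmd2_u > threshold_unbiased(m, K, alpha)
    return reject_b, reject_u
```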
5. Test Based on the Asymptotic Distribution of the Unbiased Statistic
We now propose a third test, which is based on the asymptotic distribution of the unbiased estimate of MMD² in Lemma 5.
We assume E[h²] < ∞. Under H_1, MMD²_u converges in distribution (see e.g. Grimmet and Stirzaker, 2001, Section 7.2) to a Gaussian according to

m^{1/2} \left( \mathrm{MMD}^2_u - \mathrm{MMD}^2[F, p, q] \right) \xrightarrow{D} \mathcal{N}\left( 0, \sigma^2_u \right),

where σ²_u = 4 ( E_z[(E_{z'} h(z, z'))²] − [E_{z,z'} h(z, z')]² ), uniformly at rate 1/√m (Serfling, 1980, Theorem B, p. 193). Under H_0, the U-statistic is degenerate, meaning E_{z'} h(z, z') = 0. In this case, MMD²_u converges in distribution according to

m \, \mathrm{MMD}^2_u \xrightarrow{D} \sum_{l=1}^{\infty} \lambda_l \left[ z_l^2 - 2 \right], \qquad (10)

where z_l ∼ N(0, 2) i.i.d., the λ_i are the solutions to the eigenvalue equation

\int_X \tilde{k}(x, x') \psi_i(x) \, dp(x) = \lambda_i \psi_i(x'),

and \tilde{k}(x_i, x_j) := k(x_i, x_j) − E_x k(x_i, x) − E_x k(x, x_j) + E_{x,x'} k(x, x') is the centred RKHS kernel.

The asymptotic distribution of the test statistic under H_1 is given by Serfling (1980, Section 5.5.1), and the distribution under H_0 follows Serfling (1980, Section 5.5.2) and Anderson et al. (1994, Appendix); see Appendix B.1 for details. We illustrate the MMD density under both the null and alternative hypotheses by approximating it empirically for both p = q and p ≠ q. Results are plotted in Figure 2.

Our goal is to determine whether the empirical test statistic MMD²_u is so large as to be outside the 1 − α quantile of the null distribution in (10) (consistency of the resulting test is guaranteed by the form of the distribution under H_1). One way to estimate this quantile is using the bootstrap on the aggregated data, following Arcones and Giné (1992). Alternatively, we may approximate the null distribution by fitting Pearson curves to its first four moments (Johnson et al., 1994, Section 18.8). Taking advantage of the degeneracy of the U-statistic, we obtain (see Appendix B.2)

\mathbf{E}\left( \left[ \mathrm{MMD}^2_u \right]^2 \right) = \frac{2}{m(m-1)} \mathbf{E}_{z,z'}\left[ h^2(z, z') \right] \quad \text{and} \qquad (11)

\mathbf{E}\left( \left[ \mathrm{MMD}^2_u \right]^3 \right) = \frac{8(m-2)}{m^2 (m-1)^2} \mathbf{E}_{z,z'}\left[ h(z, z') \, \mathbf{E}_{z''}\left( h(z, z'') h(z', z'') \right) \right] + O(m^{-4}). \qquad (12)

Figure 2:
Left:
Empirical distribution of the MMD under H_0, with p and q both Gaussians with unit standard deviation, using 50 samples from each. Right:
Empirical distribution of the MMD under H_1, with p a Laplace distribution with unit standard deviation, and q a Laplace distribution with standard deviation 3√2.

The fourth moment E([MMD²_u]⁴) is not computed, since it is both very small, O(m^{-4}), and expensive to calculate, O(m⁴). Instead, we replace the kurtosis with a lower bound due to Wilkins (1944),

kurt(MMD²_u) ≥ (skew(MMD²_u))² + 1.

Note that MMD²_u may be negative, since it is an unbiased estimator of (MMD[F, p, q])². However, the only terms missing to ensure nonnegativity are the terms h(z_i, z_i), which were removed to avoid spurious correlations between observations. Consequently we have the bound

\mathrm{MMD}^2_u + \frac{1}{m(m-1)} \sum_{i=1}^{m} \left[ k(x_i, x_i) + k(y_i, y_i) - 2 k(x_i, y_i) \right] \ge 0. \qquad (13)
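A minimal sketch of the resampling approach to estimating the 1 − α quantile of the null distribution is given below (our own illustration, not the authors' code): the pooled sample is repeatedly re-split at random into two groups of the original sizes and the unbiased statistic recomputed on each split. Random re-splitting is an implementation choice made here; the sketch reuses the hypothetical `mmd2_estimates` helper from the earlier sketch and assumes equal sample sizes.

```python
import numpy as np

def null_quantile_resampling(X, Y, sigma, alpha=0.05, n_iter=500, seed=0):
    """Estimate the 1 - alpha quantile of MMD_u^2 under H0 by re-splitting the pooled sample."""
    rng = np.random.default_rng(seed)
    Z = np.vstack([X, Y])
    m = X.shape[0]
    stats = np.empty(n_iter)
    for b in range(n_iter):
        idx = rng.permutation(Z.shape[0])
        # Unbiased statistic on a random re-split of the aggregate data.
        _, stats[b] = mmd2_estimates(Z[idx[:m]], Z[idx[m:]], sigma)
    return np.quantile(stats, 1.0 - alpha)

# Reject H0 at level alpha if the observed MMD_u^2 exceeds the estimated quantile:
#   _, stat = mmd2_estimates(X, Y, sigma)
#   reject = stat > null_quantile_resampling(X, Y, sigma)
```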
6. A Linear Time Statistic and Test
While the above tests are already more efficient than the O(m² log m) and O(m³) tests described earlier, it is still desirable to obtain O(m) tests which do not sacrifice too much statistical power. Moreover, we would like to obtain tests which have O(1) storage requirements for computing the test statistic, in order to apply them to data streams. We now describe how to achieve this by computing the test statistic based on a subsampling of the terms in the sum. The empirical estimate in this case is obtained by drawing pairs from X and Y respectively without replacement.
10. The kurtosis is defined in terms of the fourth and second moments as

kurt(MMD²_u) = \frac{ \mathbf{E}\left( [\mathrm{MMD}^2_u]^4 \right) }{ \left[ \mathbf{E}\left( [\mathrm{MMD}^2_u]^2 \right) \right]^2 } - 3.
Lemma 20
Recall m_2 := ⌊m/2⌋. The estimator

\mathrm{MMD}^2_l[F, X, Y] := \frac{1}{m_2} \sum_{i=1}^{m_2} h\left( (x_{2i-1}, y_{2i-1}), (x_{2i}, y_{2i}) \right)

can be computed in linear time. Moreover, it is an unbiased estimate of MMD²[F, p, q].

While it is expected (as we will see explicitly later) that MMD²_l has higher variance than MMD²_u, it is computationally much more appealing. In particular, the statistic can be used in stream computations needing only O(1) memory, whereas MMD²_u requires O(m) storage and O(m²) time to compute the kernel h on all interacting pairs.

Since MMD²_l is just the average over a set of random variables, Hoeffding's bound and the central limit theorem readily allow us to provide both uniform convergence and asymptotic statements for it with little effort. The first follows directly from Hoeffding (1963, Theorem 2).

Theorem 21
Assume 0 ≤ k(x_i, x_j) ≤ K. Then

\Pr\left\{ \mathrm{MMD}^2_l(F, X, Y) - \mathrm{MMD}^2(F, p, q) > t \right\} \le \exp\left( \frac{-t^2 m_2}{8 K^2} \right),

where m_2 := ⌊m/2⌋ (the same bound applies for deviations of −t and below).

Note that the bound of Theorem 17 is identical to that of Theorem 21, which shows the former is rather loose. Next we invoke the central limit theorem.
Corollary 22
Assume 0 < E[h²] < ∞. Then MMD²_l converges in distribution to a Gaussian according to

m^{1/2} \left( \mathrm{MMD}^2_l - \mathrm{MMD}^2[F, p, q] \right) \xrightarrow{D} \mathcal{N}\left( 0, \sigma^2_l \right),

where σ²_l = 2 [ E_{z,z'} h²(z, z') − (E_{z,z'} h(z, z'))² ], uniformly at rate 1/√m.

The factor of 2 arises since we are averaging over only ⌊m/2⌋ observations. Note the difference in the variance between Theorem 19 and Corollary 22: in the former case we are interested in the average conditional variance E_z Var_{z'}[h(z, z') | z], whereas in the latter case we compute the full variance Var_{z,z'}[h(z, z')].

We end by noting another potential approach to reducing the computational cost of the MMD, namely computing a low rank approximation to the Gram matrix (Fine and Scheinberg, 2001; Williams and Seeger, 2001; Smola and Schölkopf, 2000). An incremental computation of the MMD based on such a low rank approximation would require O(md) storage and O(md) computation (where d is the rank of the approximate Gram matrix which is used to factorize both matrices), rather than O(m²) storage and O(m²) operations. That said, it remains to be determined what effect this approximation would have on the distribution of the test statistic under H_0, and hence on the test threshold.
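The linear-time statistic of Lemma 20, combined with the Gaussian approximation of Corollary 22 and a plug-in estimate of σ²_l, yields a particularly simple streaming test. The sketch below is our own illustration; the Gaussian kernel, the empirical plug-in variance estimate, and the one-sided threshold are assumptions of the example, not prescriptions of the paper.

```python
import numpy as np
from scipy.stats import norm

def rbf_pairs(A, B, sigma=1.0):
    """Gaussian kernel values k(a_i, b_i) for aligned rows of A and B."""
    return np.exp(-np.sum((A - B) ** 2, axis=1) / (2.0 * sigma ** 2))

def mmd2_linear(X, Y, sigma=1.0):
    """Linear-time statistic of Lemma 20 and the sample variance of its terms; assumes m = n."""
    m2 = X.shape[0] // 2
    x1, x2 = X[0:2 * m2:2], X[1:2 * m2:2]   # (x_{2i-1}, x_{2i}) pairs
    y1, y2 = Y[0:2 * m2:2], Y[1:2 * m2:2]   # (y_{2i-1}, y_{2i}) pairs
    h = (rbf_pairs(x1, x2, sigma) + rbf_pairs(y1, y2, sigma)
         - rbf_pairs(x1, y2, sigma) - rbf_pairs(x2, y1, sigma))
    return h.mean(), h.var(ddof=1), m2

def linear_time_test(X, Y, sigma=1.0, alpha=0.05):
    """One-sided test of H0: p = q based on the Gaussian approximation of Corollary 22."""
    stat, var_h, m2 = mmd2_linear(X, Y, sigma)
    threshold = norm.ppf(1.0 - alpha) * np.sqrt(var_h / m2)  # plug-in variance estimate
    return stat > threshold
```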
7. Similarity Measures Related to MMD
Our main point is to propose a new kernel statistic to test whether two distributions are the same. However, it is reassuring to observe links to other measures of similarity between distributions.

7.1 L_2 Distance between Parzen Window Estimates
In this section, we demonstrate the connection between our test statistic and the Parzen window-based statistic of Anderson et al. (1994). We show that a two-sample test based on Parzen windows converges more slowly than an RKHS-based test, also following Anderson et al. (1994). Before proceeding, we motivate this discussion with a short overview of the Parzen window estimate and its properties (Silverman, 1986). We assume a distribution p on R^d, which has an associated density function also written p to minimise notation. The Parzen window estimate of this density from an i.i.d. sample X of size m is

\hat{p}(x) = \frac{1}{m} \sum_{l=1}^{m} \kappa(x_l - x), \quad \text{where } \kappa \text{ satisfies } \int_X \kappa(x)\, dx = 1 \text{ and } \kappa(x) \ge 0.

We may rescale κ according to κ_h(x) = h_m^{-d} κ(x / h_m). Consistency of the Parzen window estimate requires

\lim_{m \to \infty} h_m^d = 0 \quad \text{and} \quad \lim_{m \to \infty} m h_m^d = \infty. \qquad (14)

We now show that the L_2 distance between Parzen window density estimates (Anderson et al., 1994) is a special case of the biased MMD in equation (6). Denote by D_r(p, q) := ‖p − q‖_r the L_r distance. For r = 1 the distance D_r(p, q) is known as the Levy distance (Feller, 1971), and for r = 2 we encounter distance measures derived from the Renyi entropy (Gokcay and Principe, 2002).

Assume that p̂ and q̂ are given as kernel density estimates with kernel κ(x − x'), that is, p̂(x) = m^{-1} Σ_i κ(x_i − x) and q̂(y) is defined by analogy. In this case

D_2^2(\hat{p}, \hat{q}) = \int \left[ \frac{1}{m} \sum_i \kappa(x_i - z) - \frac{1}{n} \sum_i \kappa(y_i - z) \right]^2 dz \qquad (15)

= \frac{1}{m^2} \sum_{i,j=1}^{m} k(x_i - x_j) + \frac{1}{n^2} \sum_{i,j=1}^{n} k(y_i - y_j) - \frac{2}{mn} \sum_{i,j=1}^{m,n} k(x_i - y_j), \qquad (16)

where k(x − y) = ∫ κ(x − z) κ(y − z) dz. By its definition k(x − y) is a Mercer kernel (Mercer, 1909), as it can be viewed as an inner product between κ(x − z) and κ(y − z) on the domain X.

A disadvantage of the Parzen window interpretation is that when the Parzen window estimates are consistent (which requires the kernel size to decrease with increasing sample size), the resulting two-sample test converges more slowly than using fixed kernels. According to Anderson et al. (1994, p. 43), the Type II error of the two-sample test converges as m^{-1/2} h_m^{-d/2}. Thus, given the schedule for the Parzen window size decrease in (14), the convergence rate will lie in the open interval (0, 1/2): the upper limit corresponds to h_m decreasing more slowly, and the lower limit corresponds to h_m decreasing near the upper bound of 1/m^{1/d}. In other words, by avoiding density estimation, we obtain a better convergence rate (namely m^{-1/2}) than using a Parzen window estimate with any permissible bandwidth decrease schedule. In addition, the Parzen window interpretation cannot explain the excellent performance of MMD based tests in experimental settings where the dimensionality greatly exceeds the sample size (for instance the Gaussian toy example in Figure 4B, for which performance actually improves when the dimensionality increases; and the microarray datasets in Table 1). Finally, our tests are able to employ universal kernels that cannot be written as inner products between Parzen windows, normalized or otherwise: several examples are given by Steinwart (2001, Section 3) and Micchelli et al. (2006, Section 3). We may further generalize to kernels on structured objects such as strings and graphs (Schölkopf et al., 2004): see also our experiments in Section 8.

Gärtner et al. (2002) propose kernels to deal with sets of observations. These are then used in the context of Multi-Instance Classification (MIC).
The problem MIC attempts to solve is to find estimators which are able to infer, from the fact that some elements in the set satisfy a certain property, that the set of observations has this property, too. For instance, a dish of mushrooms is poisonous if it contains poisonous mushrooms. Likewise a keyring will open a door if it contains a suitable key. One is only given the ensemble, however, rather than information about which instance of the set satisfies the property.

The solution proposed by Gärtner et al. (2002) is to map the ensembles X_i := {x_{i1}, ..., x_{i m_i}}, where i is the ensemble index and m_i the number of elements in the i-th ensemble, jointly into feature space via

\phi(X_i) := \frac{1}{m_i} \sum_{j=1}^{m_i} \phi(x_{ij}), \qquad (17)

and use the latter as the basis for a kernel method. This simple approach affords rather good performance. With the benefit of hindsight, it is now understandable why the kernel

k(X_i, X_j) = \frac{1}{m_i m_j} \sum_{u,v}^{m_i, m_j} k(x_{iu}, x_{jv}) \qquad (18)

produces useful results: it is simply the kernel between the empirical means in feature space ⟨µ(X_i), µ(X_j)⟩ (Hein et al., 2004, Eq. 4). Jebara and Kondor (2003) later extended this setting by smoothing the empirical densities before computing inner products.

Note, however, that property testing for distributions is probably not optimal when using the mean µ[p] (or µ[X] respectively): we are only interested in determining whether some instances in the domain have the desired property, rather than making a statement regarding the distribution of those instances. Taking this into account leads to an improved algorithm (Andrews et al., 2003).

We next demonstrate the application of MMD in determining whether two random variables x and y are independent. In other words, assume that pairs of random variables (x_i, y_i) are jointly drawn from some distribution p := Pr_{x,y}. We wish to determine whether this distribution factorizes, i.e. whether q := Pr_x Pr_y is the same as p. One application of such an independence measure is in independent component analysis (Comon, 1994), where the goal is to find a linear mapping of the observations x_i to obtain mutually independent outputs. Kernel methods were employed to solve this problem by Bach and Jordan (2002); Gretton et al. (2005a,b). In the following we re-derive one of the above kernel independence measures using mean operators instead.

We begin by defining

µ[Pr_{xy}] := \mathbf{E}_{x,y}[v((x, y), \cdot)] \quad \text{and} \quad µ[Pr_x × Pr_y] := \mathbf{E}_x \mathbf{E}_y[v((x, y), \cdot)].

Here we assumed that V is an RKHS over X × Y with kernel v((x, y), (x', y')). If x and y are dependent, the equality µ[Pr_{xy}] = µ[Pr_x × Pr_y] will not hold. Hence we may use

∆ := ‖µ[Pr_{xy}] − µ[Pr_x × Pr_y]‖

as a measure of dependence.

Now assume that v((x, y), (x', y')) = k(x, x') l(y, y'), i.e. that the RKHS V is a direct product H ⊗ G of the RKHSs on X and Y. In this case it is easy to see that

∆² = ‖E_{xy}[k(x, ·) l(y, ·)] − E_x[k(x, ·)] E_y[l(y, ·)]‖²
   = \mathbf{E}_{xy} \mathbf{E}_{x'y'}\left[ k(x, x') l(y, y') \right] - 2\, \mathbf{E}_x \mathbf{E}_y \mathbf{E}_{x'y'}\left[ k(x, x') l(y, y') \right] + \mathbf{E}_x \mathbf{E}_y \mathbf{E}_{x'} \mathbf{E}_{y'}\left[ k(x, x') l(y, y') \right].

The latter, however, is exactly what Gretton et al. (2005a) show to be the Hilbert-Schmidt norm of the cross-covariance operator between RKHSs: this is zero if and only if x and y are independent, for universal kernels. We have the following theorem:

Theorem 23
Denote by C_{xy} the covariance operator between random variables x and y, drawn jointly from Pr_{xy}, where the functions on X and Y are the reproducing kernel Hilbert spaces F and G respectively. Then the Hilbert-Schmidt norm ‖C_{xy}‖_{HS} equals ∆.
Theorem 24
Denote by K and L the kernel matrices on X and Y respectively, and by H = I − (1/m) \mathbf{1}\mathbf{1}^\top the projection matrix onto the subspace orthogonal to the vector with all entries set to 1. Then m^{-2} \operatorname{tr}(HKHL) is an estimate of ∆² with bias O(m^{-1}). With high probability the deviation from ∆² is O(m^{-1/2}).

Gretton et al. (2005a) provide explicit constants. In certain circumstances, including in the case of RKHSs with Gaussian kernels, the empirical ∆² may also be interpreted in terms of a smoothed difference between the joint empirical characteristic function (ECF) and the product of the marginal ECFs (Feuerverger, 1993; Kankainen, 1995). This interpretation does not hold in all cases, however, e.g. for kernels on strings, graphs, and other structured spaces. An illustration of the witness function f ∈ F from Definition 2 is provided in Figure 3. This is a smooth function which has large magnitude where the joint density is most different from the product of the marginals.

We remark that a hypothesis test based on the above kernel statistic is more complicated than for the two-sample problem, since the product of the marginal distributions is in effect simulated by permuting the variables of the original sample. Further details are provided by Gretton et al. (2008).
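The estimate in Theorem 24 is a one-line computation once the kernel matrices are available; the sketch below is our own illustration, with Gaussian kernel matrices (e.g. from the earlier hypothetical `gaussian_kernel` helper) assumed as input.

```python
import numpy as np

def delta2_estimate(K, L):
    """Estimate Delta^2 as m^{-2} tr(HKHL), with H the centering matrix of Theorem 24."""
    m = K.shape[0]
    H = np.eye(m) - np.ones((m, m)) / m
    return float(np.trace(H @ K @ H @ L)) / m**2

# Example usage with the Gaussian kernel sketch from Section 2:
#   K = gaussian_kernel(X, X, sigma_x); L = gaussian_kernel(Y, Y, sigma_y)
#   print(delta2_estimate(K, L))
```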
Figure 3: Illustration of the function maximizing the mean discrepancy when MMD is used as a measure of independence. A sample from dependent random variables x and y is shown in black, and the associated function f that witnesses the MMD is plotted as a contour. The latter was computed empirically on the basis of 200 samples, using a Gaussian kernel.

Shawe-Taylor and Dolia (2007) define a distance between distributions as follows: let H be a set of functions on X and r be a probability distribution over H. Then the distance between two distributions p and q is given by

D(p, q) := \mathbf{E}_{f \sim r(f)} \left| \mathbf{E}_{x \sim p}[f(x)] - \mathbf{E}_{x \sim q}[f(x)] \right|. \qquad (19)

That is, we compute the average distance between p and q with respect to a distribution over test functions.

Lemma 25
Let H be a reproducing kernel Hilbert space, f ∈ H, and assume r(f) = r(‖f‖_H) with finite E_{f∼r}[‖f‖_H]. Then D(p, q) = C ‖µ[p] − µ[q]‖_H for some constant C which depends only on H and r.
By definition E_p[f(x)] = ⟨µ[p], f⟩_H. Using linearity of the inner product, Equation (19) equals

\int \left| \langle µ[p] - µ[q], f \rangle_H \right| dr(f) = \| µ[p] - µ[q] \|_H \int \left| \left\langle \frac{µ[p] - µ[q]}{\| µ[p] - µ[q] \|_H}, f \right\rangle_H \right| dr(f),

where the integral is independent of p, q. To see this, note that for any p, q, the vector (µ[p] − µ[q]) / ‖µ[p] − µ[q]‖_H is a unit vector which can be turned into, say, the first canonical basis vector by a rotation which leaves the integral invariant, bearing in mind that r is rotation invariant.

An application related to the two sample problem is that of outlier detection: this is the question of whether a novel point is generated from the same distribution as a particular i.i.d. sample. In a way, this is a special case of a two sample test, where the second sample contains only one observation. Several methods essentially rely on the distance between a novel point and the sample mean in feature space to detect outliers.

For instance, Davy et al. (2002) use a related method to deal with nonstationary time series. Likewise Shawe-Taylor and Cristianini (2004, p. 117) discuss how to detect novel observations by using the following reasoning: the probability of being an outlier is bounded both as a function of the spread of the points in feature space and the uncertainty in the empirical feature space mean (as bounded using symmetrisation and McDiarmid's tail bound).

Instead of using the sample mean and variance, Tax and Duin (1999) estimate the center and radius of a minimal enclosing sphere for the data, the advantage being that such bounds can potentially lead to more reliable tests for single observations. Schölkopf et al. (2001) show that the minimal enclosing sphere problem is equivalent to novelty detection by means of finding a hyperplane separating the data from the origin, at least in the case of radial basis function kernels.
8. Experiments
We conducted distribution comparisons using our MMD-based tests on datasets from three real-world domains: database applications, bioinformatics, and neurobiology. We investigated both uniform convergence approaches (MMD b with the Corollary 16 threshold, and MMD u H with the Corollary 18 threshold); the asymptotic approaches with bootstrap (MMD u B) and moment matching to Pearson curves (MMD u M), both described in Section 5; and the asymptotic approach using the linear time statistic (MMD l) from Section 6. We also compared against several alternatives from the literature (where applicable): the multivariate t-test, the Friedman-Rafsky Kolmogorov-Smirnov generalisation (Smir), the Friedman-Rafsky Wald-Wolfowitz generalisation (Wolf), the Biau-Györfi test (Biau), and the Hall-Tajvidi test (Hall). See Section 3.3 for details regarding these tests. Note that we do not apply the Biau-Györfi test to high-dimensional problems (since the required space partitioning is no longer possible), and that MMD is the only method applicable to structured data such as graphs.

An important issue in the practical application of the MMD-based tests is the selection of the kernel parameters. We illustrate this with a Gaussian RBF kernel, where we must choose the kernel width $\sigma$ (we use this kernel for univariate and multivariate data, but not for graphs). The empirical MMD is zero both for kernel size $\sigma = 0$ (where the aggregate Gram matrix over $X$ and $Y$ is a unit matrix), and also approaches zero as $\sigma \to \infty$ (where the aggregate Gram matrix becomes uniformly constant). We set $\sigma$ to be the median distance between points in the aggregate sample, as a compromise between these two extremes: this remains a heuristic, similar to those described in Takeuchi et al. (2006) and Schölkopf (1997), and the optimum choice of kernel size is an ongoing area of research.
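A sketch of this median heuristic follows; the function name is ours, and it simply returns the median Euclidean distance over the pooled sample, which is then used as the Gaussian kernel width $\sigma$.

```python
import numpy as np

def median_heuristic_sigma(X, Y):
    """Gaussian kernel width sigma = median pairwise distance in the aggregate sample."""
    Z = np.vstack([X, Y])
    sq = np.sum(Z**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T
    d2 = d2[np.triu_indices_from(d2, k=1)]            # distinct pairs only
    return np.sqrt(np.median(np.maximum(d2, 0.0)))    # median distance, not squared distance
```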
In our first experiment, we investigated the scaling performance of the various tests as a function of the dimensionality $d$ of the space $\mathcal{X} \subset \mathbb{R}^d$, when both $p$ and $q$ were Gaussian. We considered values of $d$ up to 2500: the performance of the MMD-based tests cannot therefore be explained in the context of density estimation (as in Section 7.1), since the associated density estimates are necessarily meaningless here. The levels for all tests were set at $\alpha = 0.05$, $m = 250$ samples were used, and results were averaged over 100 repetitions. In the first case, the distributions had different means and unit variance. The percentage of times the null hypothesis was correctly rejected over a set of Euclidean distances between the distribution means (20 values logarithmically spaced from 0.05 to 50) was computed as a function of the dimensionality of the normal distributions. In the case of the t-test, a ridge was added to the covariance estimate to avoid singularity (the ratio of largest to smallest eigenvalue was ensured to be at most 2). In the second case, samples were drawn from distributions $\mathcal{N}(0, I)$ and $\mathcal{N}(0, \sigma^2 I)$ with different variance. The percentage of null rejections was averaged over 20 $\sigma$ values logarithmically spaced from 10 . to 10. The t-test was not compared in this case, since its output would have been irrelevant. Results are plotted in Figure 4.

Figure 4: Type II performance of the various tests when separating two Gaussians, with test level $\alpha = 0.05$. A Gaussians have same variance and different means. B Gaussians have same mean and different variances. The vertical axes show the percentage of times the null hypothesis was correctly rejected; the tests shown are MMD b, MMD u M, MMD u H, MMD l, the t-test, FR Wolf, FR Smirnov, and Hall.

In the case of Gaussians with differing means, we observe the t-test performs best in low dimensions; however, its performance is severely weakened when the number of dimensions exceeds the number of samples. The performance of MMD u M is comparable to the t-test for low sample sizes, and outperforms all other methods for larger sample sizes. The worst performance is obtained for MMD u H, though MMD b also does relatively poorly: this is unsurprising given that these tests derive from distribution-free large deviation bounds, whereas the sample size is relatively small. Remarkably, MMD l performs quite well compared with classical tests in high dimensions. In the case of Gaussians of differing variance, the Hall test performs best, followed closely by MMD u. FR Wolf and (to a much greater extent) FR Smirnov both have difficulties in high dimensions, failing completely once the dimensionality becomes too great. The linear test MMD l again performs surprisingly well, almost matching the MMD u performance in the highest dimensionality. Both MMD u H and MMD b perform poorly, the former failing completely: this is one of several illustrations we will encounter of the much greater tightness of the Corollary 16 threshold over that in Corollary 18.

In our next application of MMD, we performed distribution testing for data integration: the objective being to aggregate two datasets into a single sample, with the understanding that both original samples were generated from the same distribution. Clearly, it is important to check this last condition before proceeding, or an analysis could detect patterns in the new dataset that are caused by combining the two different source distributions, and not by real-world phenomena. We chose several real-world settings to perform this task: we compared microarray data from normal and tumor tissues (Health status), microarray data from different subtypes of cancer (Subtype), and local field potential (LFP) electrode recordings from the Macaque primary visual cortex (V1) with and without spike events (Neural Data I and II, as described in more detail by Rasch et al., 2008). In all cases, the two data sets have different statistical properties, but the detection of these differences is made difficult by the high data dimensionality (indeed, for the microarray data, density estimation is impossible given the sample size and data dimensionality, and no successful test can rely on accurate density estimates as an intermediate step).

We applied our tests to these datasets in the following fashion. Given two datasets A and B, we either chose one sample from A and the other from B (attributes = different); or both samples from either A or B (attributes = same). We then repeated this process up to 1200 times. Results are reported in Table 1. Our asymptotic tests perform better than all competitors besides
Wolf: in the latter case, we have greater Type II error for one neural dataset, lower Type II error on the Health Status data (which has very high dimension and low sample size), and identical (error-free) performance on the remaining examples. We note that the Type I error of the bootstrap test on the Subtype dataset is far from its design value of 0.05, indicating that the Pearson curves provide a better threshold estimate for these low sample sizes. For the remaining datasets, the Type I errors of the Pearson and bootstrap approximations are close. Thus, for larger datasets, the bootstrap is to be preferred, since it costs $O(m^2)$, compared with a cost of $O(m^3)$ for Pearson (due to the cost of computing (12)). Finally, the uniform convergence-based tests are too conservative, with MMD b finding differences in distribution only for the data with largest sample size, and MMD u H never finding differences.
Dataset         Attr.       MMD b   MMD u H   MMD u B   MMD u M   t-test   Wolf   Smir   Hall
Neural Data I   Same        100.0   100.0     96.5      96.5      100.0    97.0   95.0   96.0
Neural Data I   Different    38.0   100.0
Table 1: Distribution testing for data integration on multivariate data. Numbers indicate the percentage of repetitions for which the null hypothesis (p = q) was accepted, given $\alpha = 0.05$. Sample size (dimension; repetitions of experiment): Neural I 4000 (63; 100); Neural II 1000 (100; 1200); Health Status 25 (12,600; 1000); Subtype 25 (2,118; 1000).
We next investigate the tradeoff between computational cost and performance of the various tests, with particular attention to how the quadratic time MMD tests from Sections 4 and 5 compare with the linear time MMD-based asymptotic test from Section 6. We consider two 1-D datasets (CNUM and FOREST) and two higher-dimensional datasets (FOREST10D and NEUROII). Results are plotted in Figure 5. If cost is not a factor, then MMD u B shows the best overall performance as a function of sample size, with a Type II error dropping to zero as fast or faster than competing approaches in three of four cases, and narrowly trailing FR Wolf in the fourth (FOREST10D). That said, for datasets CNUM, FOREST, and FOREST10D, the linear time MMD achieves results comparable to MMD u B at a far smaller computational cost, albeit by looking at a great deal more data. In the CNUM case, however, the linear test is not able to achieve zero error even for the largest data set size. For the NEUROII data, attaining zero Type II error has about the same cost for both approaches. The difference in cost of MMD u B and MMD b is due to the bootstrapping required for the former, which produces a constant offset in cost between the two (here 150 resamplings were used).

The t-test also performs well in three of the four problems, and in fact represents the best cost-performance tradeoff in these three datasets (i.e. while it requires much more data than MMD u B for a given level of performance, it costs far less to compute). The t-test assumes that only the difference in means is important in distinguishing the distributions, and it requires an accurate estimate of the within-sample covariance; the test fails completely on the NEUROII data. We emphasise that the Kolmogorov-Smirnov results in 1-D were obtained using the classical statistic, and not the Friedman-Rafsky statistic, hence the low computational cost. The cost of both Friedman-Rafsky statistics is therefore given by the FR Wolf cost in this case. The latter scales similarly with sample size to the quadratic time MMD tests, confirming Friedman and Rafsky's observation that obtaining the pairwise distances between sample points is the dominant cost of their tests. We also remark on the unusual behaviour of the Type II error of the FR Wolf test in the FOREST dataset, which worsens for increasing sample size.

We conclude that the approach to be recommended when testing homogeneity will depend on the data available: for small amounts of data, the best results are obtained using every observation to maximum effect, and employing the quadratic time MMD u B test. When large volumes of data are available, a better option is to look at each point only once, which can yield greater accuracy for a given computational cost. It may also be worth doing a t-test first in this case, and only running more sophisticated non-parametric tests if the t-test accepts the null hypothesis, to verify the distributions are identical in more than just the mean.
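For concreteness, the following sketch computes the quadratic-time unbiased MMD$^2$ statistic from the pooled Gram matrix and obtains a bootstrap threshold by permuting the pooled sample, in the spirit of MMD u B above. The function names, the default of 150 resamplings (taken from the experiment described above), and the use of the empirical $(1-\alpha)$ quantile are our assumptions rather than a prescription from the text.

```python
import numpy as np

def mmd2_unbiased(K, m, n):
    """Unbiased MMD^2 estimate from the Gram matrix K of the pooled sample [X; Y]."""
    Kxx, Kyy, Kxy = K[:m, :m], K[m:, m:], K[:m, m:]
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * Kxy.mean()

def bootstrap_threshold(K, m, n, alpha=0.05, n_resamples=150, seed=None):
    """(1 - alpha) quantile of the statistic over permutations of the pooled sample."""
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_resamples):
        idx = rng.permutation(m + n)
        Kp = K[np.ix_(idx, idx)]
        stats.append(mmd2_unbiased(Kp, m, n))
    return np.quantile(stats, 1.0 - alpha)

# Reject H_0 (p = q) when mmd2_unbiased(K, m, n) exceeds bootstrap_threshold(K, m, n).
```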
Our final series of experiments addresses automatic attribute matching. Given two databases, we want to detect corresponding attributes in the schemas of these databases, based on their data content (as a simple example, two databases might have respective fields Wage and Salary, which are assumed to be observed via a subsampling of a particular population, and we wish to automatically determine that both Wage and Salary denote the same underlying attribute). We use a two-sample test on pairs of attributes from two databases to find corresponding pairs. This procedure is also called table matching for tables from different databases. We performed attribute matching as follows: first, the dataset D was split into two halves A and B. Each of the n attributes in A (and B, resp.) was then represented by its instances in A (resp. B). We then tested all pairs of attributes from A and from B against each other, to find the optimal assignment of attributes $A_1, \ldots, A_n$ from A to attributes $B_1, \ldots, B_n$ from B. We assumed that A and B contain the same number of attributes.

11. Note that corresponding attributes may have different distributions in real-world databases. Hence, schema matching cannot solely rely on distribution testing. Advanced approaches to schema matching using MMD as one key statistical test are a topic of current research.

Figure 5: Linear vs quadratic MMD. First column is performance, second is runtime; the panels show Type II error and time per test [sec] against sample size for MMD b, MMD u B, MMD l, the t-test, FR Wolf, and FR Smir. The dashed grey horizontal line indicates zero Type II error (required due to the log y-axis).

As a naive approach, one could assume that any possible pair of attributes might correspond, and thus that every attribute of A needs to be tested against all the attributes of B to find the optimal match. We report results for this naive approach, aggregated over all pairs of possible attribute matches, in Table 2. We used three datasets: the census income dataset from the UCI KDD archive (CNUM), the protein homology dataset from the 2004 KDD Cup (BIO) (Caruana and Joachims, 2004), and the forest dataset from the UCI ML archive (Blake and Merz, 1998). For the final dataset, we performed univariate matching of attributes (FOREST) and multivariate matching of tables (FOREST10D) from two different databases, where each table represents one type of forest. Both our asymptotic MMD u -based tests perform as well as or better than the alternatives, notably for CNUM, where the advantage of MMD u is large. Unlike in Table 1, the next best alternatives are not consistently the same across all data: e.g. in BIO they are Wolf or Hall, whereas in FOREST they are
Smir, Biau, or the t-test. Thus, MMD u appears to perform more consistently across the multiple datasets. The Friedman-Rafsky tests do not always return a Type I error close to the design parameter: for instance, Wolf has a Type I error of 9.7% on the BIO dataset (on these data, MMD u has the joint best Type II error without compromising the designed Type I performance). Finally, MMD b performs much better than in Table 1, although surprisingly it fails to reliably detect differences in FOREST10D. The results of MMD u H are also improved, although it remains among the worst performing methods.

A more principled approach to attribute matching is also possible. Assume that $\phi(A) = (\phi_1(A_1), \phi_2(A_2), \ldots, \phi_n(A_n))$: in other words, the kernel decomposes into kernels on the individual attributes of A (and also decomposes this way on the attributes of B). In this case, $\mathrm{MMD}^2$ can be written $\sum_{i=1}^n \| \mu_i(A_i) - \mu_i(B_i) \|^2$, where we sum over the MMD terms on each of the attributes. Our goal of optimally assigning attributes from B to attributes of A via MMD is equivalent to finding the optimal permutation $\pi$ of attributes of B that minimizes $\sum_{i=1}^n \| \mu_i(A_i) - \mu_i(B_{\pi(i)}) \|^2$. If we define $C_{ij} = \| \mu_i(A_i) - \mu_i(B_j) \|^2$, then this is the same as minimizing the sum over $C_{i,\pi(i)}$. This is the linear assignment problem, which costs $O(n^3)$ time using the Hungarian method (Kuhn, 1955); a code sketch of this assignment step is given at the end of this section.

While this may appear to be a crude heuristic, it nonetheless defines a semi-metric on the sample spaces $\mathcal{X}$ and $\mathcal{Y}$ and the corresponding distributions $p$ and $q$. This follows from the fact that matching distances are proper metrics if the matching cost functions are metrics. We formalize this as follows:

Theorem 26
Let $p, q$ be distributions on $\mathbb{R}^d$ and denote by $p_i, q_i$ the marginal distributions on the $i$-th variable. Moreover, denote by $\Pi$ the symmetric group on $\{1, \ldots, d\}$. The following distance, obtained by optimal coordinate matching, is a semi-metric:
$$\Delta[\mathcal{F}, p, q] := \min_{\pi \in \Pi} \sum_{i=1}^{d} \mathrm{MMD}[\mathcal{F}, p_i, q_{\pi(i)}].$$

Proof
Clearly $\Delta[\mathcal{F}, p, q]$ is nonnegative, since all of its summands are. Next we show the triangle inequality. Denote by $r$ a third distribution on $\mathbb{R}^d$ and let $\pi_{p,q}$, $\pi_{q,r}$ and $\pi_{p,r}$ be the distance-minimizing permutations between $p, q$ and $r$ respectively. It then follows that
$$\Delta[\mathcal{F}, p, q] + \Delta[\mathcal{F}, q, r] = \sum_{i=1}^{d} \mathrm{MMD}[\mathcal{F}, p_i, q_{\pi_{p,q}(i)}] + \sum_{i=1}^{d} \mathrm{MMD}[\mathcal{F}, q_i, r_{\pi_{q,r}(i)}] \geq \sum_{i=1}^{d} \mathrm{MMD}[\mathcal{F}, p_i, r_{[\pi_{p,q} \circ \pi_{q,r}](i)}] \geq \Delta[\mathcal{F}, p, r].$$
Here the first inequality follows from the triangle inequality on MMD, that is
$$\mathrm{MMD}[\mathcal{F}, p_i, q_{\pi_{p,q}(i)}] + \mathrm{MMD}[\mathcal{F}, q_{\pi_{p,q}(i)}, r_{[\pi_{p,q} \circ \pi_{q,r}](i)}] \geq \mathrm{MMD}[\mathcal{F}, p_i, r_{[\pi_{p,q} \circ \pi_{q,r}](i)}].$$
The second inequality is a result of minimization over $\pi$.

Dataset   Attr.       MMD b   MMD u H   MMD u B   MMD u M   t-test   Wolf   Smir   Hall   Biau
BIO       Same        100.0   100.0     93.8      94.8      95.2     90.3   95.8   95.3   99.3
BIO       Different    20.0    52.6
CNUM      Same        100.0   100.0     94.5      93.8      94.0     98.4   97.5   91.2   98.5
CNUM      Different    14.9    52.7      2.7
Table 2: Naive attribute matching on univariate (BIO, FOREST, CNUM) and multivariate data (FOREST10D). Numbers indicate the percentage of accepted null hypotheses (p = q) pooled over attributes, with $\alpha = 0.05$. Sample size (dimension; attributes; repetitions of experiment): BIO 377 (1; 6; 100); FOREST 538 (1; 10; 100); CNUM 386 (1; 13; 100); FOREST10D 1000 (10; 2; 100).

We tested this 'Hungarian approach' to attribute matching via MMD u B on three univariate datasets (BIO, CNUM, FOREST) and for table matching on a fourth (FOREST10D). To study MMD u B on structured data, we obtained two datasets of protein graphs (PROTEINS and ENZYMES) and used the graph kernel for proteins from Borgwardt et al. (2005) for table matching via the Hungarian method (the other tests were not applicable to this graph data). The challenge here is to match tables representing one functional class of proteins (or enzymes) from dataset A to the corresponding tables (functional classes) in B. Results are shown in Table 3. Apart from the BIO and CNUM datasets, MMD u B made no errors.

Dataset     Data type      No. attributes   Sample size   Repetitions   % correct matches
BIO         univariate     6                377           100           90.0
CNUM        univariate     13               386           100           99.8
FOREST      univariate     10               538           100           100.0
FOREST10D   multivariate   2                1000          100           100.0
ENZYME      structured     6                50            50            100.0
PROTEINS    structured     2                200           50            100.0

Table 3: Hungarian Method for attribute matching via MMD u B on univariate (BIO, CNUM, FOREST), multivariate (FOREST10D), and structured data (ENZYMES, PROTEINS) ($\alpha = 0.05$; '% correct matches' is the percentage of the correct attribute matches detected over all repetitions).
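The following sketch illustrates the assignment step referred to above: it forms the cost matrix $C_{ij}$ from (biased plug-in) MMD$^2$ estimates between attribute samples and solves the linear assignment problem. The names A_cols, B_cols, and kernel are hypothetical, and scipy's linear_sum_assignment is used in place of a hand-written Hungarian solver.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_attributes(A_cols, B_cols, kernel):
    """Match attributes of A to attributes of B by minimising the summed MMD^2 costs C_ij."""
    n = len(A_cols)
    C = np.zeros((n, n))
    for i, a in enumerate(A_cols):          # a: samples of attribute A_i, shape (m_i, d)
        for j, b in enumerate(B_cols):
            # biased plug-in estimate of ||mu_i(A_i) - mu_j(B_j)||^2 in the RKHS
            C[i, j] = kernel(a, a).mean() + kernel(b, b).mean() - 2.0 * kernel(a, b).mean()
    rows, cols = linear_sum_assignment(C)   # linear assignment (Hungarian method), O(n^3)
    return cols                             # cols[i] is the index of the B attribute matched to A_i
```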
9. Conclusion
We have established three simple multivariate tests for comparing two distributions $p$ and $q$, based on samples of size $m$ and $n$ from these respective distributions. Our test statistic is the maximum mean discrepancy (MMD), defined as the maximum deviation in the expectation of a function evaluated on each of the random variables, taken over a sufficiently rich function class: in our case, a universal reproducing kernel Hilbert space (RKHS). Equivalently, the statistic can be written as the norm of the difference between distribution feature means in the RKHS. We do not require density estimates as an intermediate step. Two of our tests provide Type I error bounds that are exact and distribution-free for finite sample sizes. We also give a third test based on quantiles of the asymptotic distribution of the associated test statistic. All three tests can be computed in $O((m+n)^2)$ time; however, when sufficient data are available, a linear time statistic can be used, which employs more data to get better results at smaller computational cost. In addition, a number of metrics on distributions (Kolmogorov-Smirnov, Earth Mover's, $L_2$ distance between Parzen window density estimates), as well as certain kernel similarity measures on distributions, are included within our framework.

While our result establishes that statistical tests based on the MMD are consistent for universal kernels on compact domains, we draw attention to the recent introduction of characteristic kernels by Fukumizu et al. (2008), these being kernels for which the mean map is injective. Fukumizu et al. establish that Gaussian and Laplace kernels are characteristic on $\mathbb{R}^d$, and thus the MMD is a consistent test for this domain. Sriperumbudur et al. (2008) further explore the properties of characteristic kernels, providing a simple condition to determine whether convolution kernels are characteristic, and describing characteristic kernels which are not universal on compact domains. We also note (following Section 7.2) that the MMD for RKHSs is associated with a particular kernel between probability distributions. Hein et al. (2004) describe several further such kernels, which induce corresponding distances between feature space distribution mappings: these may in turn lead to new and powerful two-sample tests.

Two recent studies have shown that additional divergence measures between distributions can be obtained empirically through optimization in a reproducing kernel Hilbert space. Harchaoui et al. (2008) build on the work of Gretton et al. (2007a), considering a homogeneity statistic arising from the kernel Fisher discriminant, rather than the difference of RKHS means; and Nguyen et al. (2008) obtain a KL divergence estimate by approximating the ratio of densities (or its log) with a function in an RKHS. By design, both these kernel-based statistics prioritise different features of $p$ and $q$ when measuring the divergence between them, and the resulting effects on distinguishability of distributions are therefore of interest.

Finally, we have seen in Section 2 that several classical metrics on probability distributions can be written as maximum mean discrepancies with function classes that are not Hilbert spaces, but rather Banach, metric, or semi-metric spaces. It would be of particular interest to establish under what conditions one could write these discrepancies in terms of norms of differences of mean elements. In particular, Der and Lee (2007) consider Banach spaces endowed with a semi-inner product, for which a general Riesz representation exists for elements in the dual.
Appendix A. Large Deviation Bounds for Tests with Finite Sample Guarantees
A.1 Preliminary Definitions and Theorems
We need the following theorem, due to McDiarmid (1989).
Theorem 27 (McDiarmid’s inequality)
Let $f : \mathcal{X}^m \to \mathbb{R}$ be a function such that for all $i \in \{1, \ldots, m\}$, there exist $c_i < \infty$ for which
$$\sup_{X \in \mathcal{X}^m, \, \tilde{x} \in \mathcal{X}} \left| f(x_1, \ldots, x_m) - f(x_1, \ldots, x_{i-1}, \tilde{x}, x_{i+1}, \ldots, x_m) \right| \leq c_i.$$
Then for all probability measures $p$ and every $\epsilon > 0$,
$$\Pr_{x^m} \left( f(x) - \mathbb{E}_{x^m}(f(x)) > \epsilon \right) < \exp\left( \frac{-2\epsilon^2}{\sum_{i=1}^m c_i^2} \right).$$

We also define the Rademacher average of the function class $\mathcal{F}$ with respect to the $m$-sample $X$.

Definition 28 (Rademacher average of $\mathcal{F}$ on $X$) Let $\mathcal{F}$ be the unit ball in a universal RKHS on the compact domain $\mathcal{X}$, with kernel bounded according to $0 \leq k(x,y) \leq K$. Let $X$ be an i.i.d. sample of size $m$ drawn according to a probability measure $p$ on $\mathcal{X}$, and let $\sigma_i$ be i.i.d. and take values in $\{-1, 1\}$ with equal probability. We define the Rademacher average
$$R_m(\mathcal{F}, X) := \mathbb{E}_\sigma \sup_{f \in \mathcal{F}} \left| \frac{1}{m} \sum_{i=1}^m \sigma_i f(x_i) \right| \leq (K/m)^{1/2},$$
where the upper bound is due to Bartlett and Mendelson (2002, Lemma 22). Similarly, we define
$$R_m(\mathcal{F}, p) := \mathbb{E}_{p,\sigma} \sup_{f \in \mathcal{F}} \left| \frac{1}{m} \sum_{i=1}^m \sigma_i f(x_i) \right|.$$
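For the RKHS unit ball, the supremum inside Definition 28 is available in closed form, $\sup_{f \in \mathcal{F}} \left| \frac{1}{m}\sum_i \sigma_i f(x_i) \right| = \frac{1}{m}\sqrt{\sigma^\top K \sigma}$ with $K$ the Gram matrix, so the Rademacher average can be estimated by Monte Carlo over $\sigma$. A minimal sketch follows; the function name and number of draws are illustrative.

```python
import numpy as np

def rademacher_average(K, n_draws=1000, seed=None):
    """Monte Carlo estimate of R_m(F, X) for the RKHS unit ball: E_sigma (1/m) sqrt(sigma' K sigma)."""
    rng = np.random.default_rng(seed)
    m = K.shape[0]
    vals = np.empty(n_draws)
    for t in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=m)
        vals[t] = np.sqrt(max(sigma @ K @ sigma, 0.0)) / m
    return vals.mean()   # bounded above by (max kernel value / m) ** 0.5 for a bounded kernel
```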
A.2 Bound when p and q May Differ

We want to show that the absolute difference between $\mathrm{MMD}(\mathcal{F},p,q)$ and $\mathrm{MMD}_b(\mathcal{F},X,Y)$ is close to its expected value, independent of the distributions $p$ and $q$. To this end, we prove three intermediate results, which we then combine. The first result we need is an upper bound on the absolute difference between $\mathrm{MMD}(\mathcal{F},p,q)$ and $\mathrm{MMD}_b(\mathcal{F},X,Y)$. We have
$$\left| \mathrm{MMD}(\mathcal{F},p,q) - \mathrm{MMD}_b(\mathcal{F},X,Y) \right| = \left| \sup_{f \in \mathcal{F}} \left( \mathbb{E}_p(f) - \mathbb{E}_q(f) \right) - \sup_{f \in \mathcal{F}} \left( \frac{1}{m} \sum_{i=1}^m f(x_i) - \frac{1}{n} \sum_{i=1}^n f(y_i) \right) \right|$$
$$\leq \underbrace{\sup_{f \in \mathcal{F}} \left| \mathbb{E}_p(f) - \mathbb{E}_q(f) - \frac{1}{m} \sum_{i=1}^m f(x_i) + \frac{1}{n} \sum_{i=1}^n f(y_i) \right|}_{\Delta(p,q,X,Y)}. \qquad (20)$$
Second, we provide an upper bound on the difference between $\Delta(p,q,X,Y)$ and its expectation. Changing either of $x_i$ or $y_i$ in $\Delta(p,q,X,Y)$ results in changes in magnitude of at most $2K^{1/2}/m$ or $2K^{1/2}/n$, respectively. We can then apply McDiarmid's theorem, given a denominator in the exponent of
$$m \left( 2K^{1/2}/m \right)^2 + n \left( 2K^{1/2}/n \right)^2 = 4K \left( \frac{1}{m} + \frac{1}{n} \right) = 4K \, \frac{m+n}{mn},$$
to obtain
$$\Pr \left( \Delta(p,q,X,Y) - \mathbb{E}_{X,Y}[\Delta(p,q,X,Y)] > \epsilon \right) \leq \exp\left( \frac{-\epsilon^2 mn}{2K(m+n)} \right). \qquad (21)$$
For our final result, we exploit symmetrisation, following e.g. van der Vaart and Wellner (1996, p. 108), to upper bound the expectation of $\Delta(p,q,X,Y)$. Denoting by $X'$ an i.i.d. sample of size $m$ drawn independently of $X$ (and likewise for $Y'$), we have
$$\mathbb{E}_{X,Y}[\Delta(p,q,X,Y)] = \mathbb{E}_{X,Y} \sup_{f \in \mathcal{F}} \left| \mathbb{E}_p(f) - \frac{1}{m} \sum_{i=1}^m f(x_i) - \mathbb{E}_q(f) + \frac{1}{n} \sum_{j=1}^n f(y_j) \right|$$
$$= \mathbb{E}_{X,Y} \sup_{f \in \mathcal{F}} \left| \mathbb{E}_{X'} \left( \frac{1}{m} \sum_{i=1}^m f(x'_i) \right) - \frac{1}{m} \sum_{i=1}^m f(x_i) - \mathbb{E}_{Y'} \left( \frac{1}{n} \sum_{j=1}^n f(y'_j) \right) + \frac{1}{n} \sum_{j=1}^n f(y_j) \right|$$
$$\stackrel{(a)}{\leq} \mathbb{E}_{X,Y,X',Y'} \sup_{f \in \mathcal{F}} \left| \frac{1}{m} \sum_{i=1}^m \left( f(x'_i) - f(x_i) \right) - \frac{1}{n} \sum_{j=1}^n \left( f(y'_j) - f(y_j) \right) \right|$$
$$= \mathbb{E}_{X,Y,X',Y',\sigma,\sigma'} \sup_{f \in \mathcal{F}} \left| \frac{1}{m} \sum_{i=1}^m \sigma_i \left( f(x'_i) - f(x_i) \right) + \frac{1}{n} \sum_{j=1}^n \sigma'_j \left( f(y'_j) - f(y_j) \right) \right|$$
$$\stackrel{(b)}{\leq} \mathbb{E}_{X,X',\sigma} \sup_{f \in \mathcal{F}} \left| \frac{1}{m} \sum_{i=1}^m \sigma_i \left( f(x'_i) - f(x_i) \right) \right| + \mathbb{E}_{Y,Y',\sigma} \sup_{f \in \mathcal{F}} \left| \frac{1}{n} \sum_{j=1}^n \sigma_j \left( f(y'_j) - f(y_j) \right) \right|$$
$$\stackrel{(c)}{\leq} 2 \left[ R_m(\mathcal{F},p) + R_n(\mathcal{F},q) \right] \stackrel{(d)}{\leq} 2 \left[ (K/m)^{1/2} + (K/n)^{1/2} \right], \qquad (22)$$
where (a) uses Jensen's inequality, (b) uses the triangle inequality, (c) substitutes Definition 28 (the Rademacher average), and (d) bounds the Rademacher averages, also via Definition 28.

Having established our preliminary results, we proceed to the proof of Theorem 14.

Proof [Theorem 14] Combining equations (21) and (22) gives
$$\Pr \left( \Delta(p,q,X,Y) - 2 \left[ (K/m)^{1/2} + (K/n)^{1/2} \right] > \epsilon \right) \leq \exp\left( \frac{-\epsilon^2 mn}{2K(m+n)} \right).$$
Substituting equation (20) yields the result.
A.3 Bound when p = q and m = n

In this section, we derive the Theorem 15 result, namely the large deviation bound on the MMD when $p = q$ and $m = n$. Note also that we consider only positive deviations of $\mathrm{MMD}_b(\mathcal{F},X,Y)$ from $\mathrm{MMD}(\mathcal{F},p,q)$, since negative deviations are irrelevant to our hypothesis test. The proof follows the same three steps as in the previous section. The first step in (20) becomes
$$\mathrm{MMD}_b(\mathcal{F},X,Y) - \mathrm{MMD}(\mathcal{F},p,q) = \mathrm{MMD}_b(\mathcal{F},X,X') - 0 = \sup_{f \in \mathcal{F}} \left( \frac{1}{m} \sum_{i=1}^m \left( f(x_i) - f(x'_i) \right) \right). \qquad (23)$$
The McDiarmid bound on the difference between (23) and its expectation is now a function of $2m$ observations in (23), and has a denominator in the exponent of $2m \left( 2K^{1/2}/m \right)^2 = 8K/m$. We use a different strategy in obtaining an upper bound on the expected (23), however: this is now
$$\mathbb{E}_{X,X'} \left[ \sup_{f \in \mathcal{F}} \frac{1}{m} \sum_{i=1}^m \left( f(x_i) - f(x'_i) \right) \right] = \frac{1}{m} \, \mathbb{E}_{X,X'} \left\| \sum_{i=1}^m \left( \phi(x_i) - \phi(x'_i) \right) \right\|$$
$$= \frac{1}{m} \, \mathbb{E}_{X,X'} \left[ \sum_{i=1}^m \sum_{j=1}^m \left( k(x_i,x_j) + k(x'_i,x'_j) - k(x_i,x'_j) - k(x'_i,x_j) \right) \right]^{1/2}$$
$$\leq \frac{1}{m} \left[ 2m \, \mathbb{E}_x k(x,x) + 2m(m-1) \, \mathbb{E}_{x,x'} k(x,x') - 2m^2 \, \mathbb{E}_{x,x'} k(x,x') \right]^{1/2}$$
$$= \left[ \frac{2}{m} \, \mathbb{E}_{x,x'} \left( k(x,x) - k(x,x') \right) \right]^{1/2} \qquad (24)$$
$$\leq (2K/m)^{1/2}. \qquad (25)$$
We remark that both (24) and (25) bound the amount by which our biased estimate of the population MMD exceeds zero under $H_0$. Combining the three results, we find that under $H_0$,
$$\Pr_X \left( \mathrm{MMD}_b(\mathcal{F},X,X') - \left[ \frac{2}{m} \, \mathbb{E}_{x,x'} \left( k(x,x) - k(x,x') \right) \right]^{1/2} > \epsilon \right) < \exp\left( \frac{-\epsilon^2 m}{4K} \right)$$
and
$$\Pr_X \left( \mathrm{MMD}_b(\mathcal{F},X,X') - (2K/m)^{1/2} > \epsilon \right) < \exp\left( \frac{-\epsilon^2 m}{4K} \right).$$
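Setting the right-hand side of the last bound equal to $\alpha$ and solving for $\epsilon$ gives an acceptance threshold $\sqrt{2K/m}\,\bigl(1 + \sqrt{2\ln \alpha^{-1}}\bigr)$ for the biased statistic under $H_0$, which corresponds to the Corollary 16 threshold used for MMD b in the experiments. A minimal sketch follows; the function names are ours, and K_max here denotes the kernel bound $K$, not a Gram matrix.

```python
import numpy as np

def mmd_biased(K, m):
    """Biased MMD estimate for two samples of equal size m, from the pooled Gram matrix K."""
    Kxx, Kyy, Kxy = K[:m, :m], K[m:, m:], K[:m, m:]
    return np.sqrt(max(Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean(), 0.0))

def distribution_free_threshold(K_max, m, alpha=0.05):
    """Acceptance threshold sqrt(2 K_max / m) * (1 + sqrt(2 ln(1/alpha))) implied by the bound above."""
    return np.sqrt(2.0 * K_max / m) * (1.0 + np.sqrt(2.0 * np.log(1.0 / alpha)))
```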
Appendix B. Proofs for Asymptotic Tests

We derive results needed in the asymptotic test of Section 5. Appendix B.1 describes the distribution of the empirical MMD under $H_0$ (both distributions identical). Appendix B.2 contains derivations of the second and third moments of the empirical MMD, also under $H_0$.

B.1 Convergence of the Empirical MMD under $H_0$

We describe the distribution of the unbiased estimator $\mathrm{MMD}^2_u[\mathcal{F},X,Y]$ under the null hypothesis. In this circumstance, we denote it by $\mathrm{MMD}^2_u[\mathcal{F},X,X']$, to emphasise that the second sample $X'$ is drawn independently from the same distribution as $X$. We thus obtain the U-statistic
$$\mathrm{MMD}^2_u[\mathcal{F},X,X'] = \frac{1}{m(m-1)} \sum_{i \neq j} \left( k(x_i,x_j) + k(x'_i,x'_j) - k(x_i,x'_j) - k(x_j,x'_i) \right) \qquad (26)$$
$$= \frac{1}{m(m-1)} \sum_{i \neq j} h(z_i,z_j), \qquad (27)$$
where $z_i = (x_i, x'_i)$. Under the null hypothesis, this U-statistic is degenerate, meaning
$$\mathbb{E}_{z_j} h(z_i,z_j) = \mathbb{E}_{x_j} k(x_i,x_j) + \mathbb{E}_{x'_j} k(x'_i,x'_j) - \mathbb{E}_{x'_j} k(x_i,x'_j) - \mathbb{E}_{x_j} k(x_j,x'_i) = 0.$$
The following theorem from Serfling (1980, Section 5.5.2) then applies.
Theorem 29
Assume
$\mathrm{MMD}^2_u[\mathcal{F},X,X']$ is as defined in (27), with $\mathbb{E}_{z'} h(z,z') = 0$, and furthermore assume $0 < \mathbb{E}_{z,z'} h^2(z,z') < \infty$. Then $\mathrm{MMD}^2_u[\mathcal{F},X,X']$ converges in distribution according to
$$m \, \mathrm{MMD}^2_u[\mathcal{F},X,X'] \xrightarrow{D} \sum_{l=1}^{\infty} \gamma_l \left( \chi^2_{1l} - 1 \right),$$
where $\chi^2_{1l}$ are independent chi-squared random variables of degree one, and $\gamma_l$ are the solutions to the eigenvalue equation
$$\gamma_l \psi_l(u) = \int h(u,v) \, \psi_l(v) \, d\Pr_v.$$

While this result is adequate for our purposes (since we do not explicitly use the quantities $\gamma_l$ in our subsequent reasoning), it does not make clear the dependence of the null distribution on the kernel choice. For this reason, we provide an alternative expression based on the reasoning of Anderson et al. (1994, Appendix), bearing in mind the following changes:

• we do not need to deal with the bias terms $S_j$ seen by Anderson et al. (1994, Appendix) that vanish for large sample sizes, since our statistic is unbiased (although these bias terms drop faster than the variance);

• we require greater generality, since we deal with distributions on compact metric spaces, and not densities on $\mathbb{R}^d$; correspondingly, our kernels are not necessarily inner products in $L_2$ between probability density functions (although this is a special case).

Our first step is to express the kernel $h(z_i,z_j)$ of the U-statistic in terms of an RKHS kernel $\tilde{k}(x_i,x_j)$ between feature space mappings from which the mean has been subtracted,
$$\tilde{k}(x_i,x_j) := \langle \phi(x_i) - \mu[p], \, \phi(x_j) - \mu[p] \rangle = k(x_i,x_j) - \mathbb{E}_x k(x_i,x) - \mathbb{E}_x k(x,x_j) + \mathbb{E}_{x,x'} k(x,x').$$
The centering terms cancel (the distance between the two points is unaffected by an identical global shift in both the points), meaning
$$h(z_i,z_j) = \tilde{k}(x_i,x_j) + \tilde{k}(y_i,y_j) - \tilde{k}(x_i,y_j) - \tilde{k}(x_j,y_i).$$
Next, we write the kernel $\tilde{k}(x_i,x_j)$ in terms of eigenfunctions $\psi_i(x)$ with respect to the probability measure $\Pr_x$,
$$\tilde{k}(x,x') = \sum_{l=1}^{\infty} \lambda_l \psi_l(x) \psi_l(x'),$$
where
$$\int_{\mathcal{X}} \tilde{k}(x,x') \psi_i(x) \, d\Pr_x(x) = \lambda_i \psi_i(x') \quad \text{and} \quad \int_{\mathcal{X}} \psi_i(x) \psi_j(x) \, d\Pr_x(x) = \delta_{ij}. \qquad (28)$$
Since
$$\mathbb{E}_x \tilde{k}(x,v) = \mathbb{E}_x k(x,v) - \mathbb{E}_{x,x'} k(x,x') - \mathbb{E}_x k(x,v) + \mathbb{E}_{x,x'} k(x,x') = 0,$$
then when $\lambda_i \neq 0$, we have
$$\lambda_i \mathbb{E}_{x'} \psi_i(x') = \int_{\mathcal{X}} \mathbb{E}_{x'} \tilde{k}(x,x') \psi_i(x) \, d\Pr_x(x) = 0,$$
and hence
$$\mathbb{E}_x \psi_i(x) = 0. \qquad (29)$$
We now use these results to transform the expression in (26). First,
$$\frac{1}{m} \sum_{i \neq j} \tilde{k}(x_i,x_j) = \frac{1}{m} \sum_{i \neq j} \sum_{l=1}^{\infty} \lambda_l \psi_l(x_i) \psi_l(x_j) = \frac{1}{m} \sum_{l=1}^{\infty} \lambda_l \left[ \left( \sum_i \psi_l(x_i) \right)^2 - \sum_i \psi_l^2(x_i) \right] \xrightarrow{D} \sum_{l=1}^{\infty} \lambda_l \left( y_l^2 - 1 \right),$$
where $y_l \sim \mathcal{N}(0,1)$ are i.i.d., and the final relation denotes convergence in distribution, using (28) and (29), following Serfling (1980, Section 5.5.2). Likewise
$$\frac{1}{m} \sum_{i \neq j} \tilde{k}(x'_i,x'_j) \xrightarrow{D} \sum_{l=1}^{\infty} \lambda_l \left( z_l^2 - 1 \right),$$
where $z_l \sim \mathcal{N}(0,1)$ are i.i.d., and
$$\frac{1}{m} \sum_{i \neq j} \left( \tilde{k}(x_i,y_j) + \tilde{k}(x_j,y_i) \right) \xrightarrow{D} 2 \sum_{l=1}^{\infty} \lambda_l y_l z_l.$$
Combining these results, we get
$$m \, \mathrm{MMD}^2_u(\mathcal{F},X,X') \xrightarrow{D} \sum_{l=1}^{\infty} \lambda_l \left( y_l^2 + z_l^2 - 2 - 2 y_l z_l \right) = \sum_{l=1}^{\infty} \lambda_l \left[ (y_l - z_l)^2 - 2 \right].$$
Note that $y_l - z_l$, being the difference of two independent Gaussian variables, has a normal distribution with mean zero and variance 2. This is therefore a quadratic form in a Gaussian random variable minus an offset $2 \sum_{l=1}^{\infty} \lambda_l$.

B.2 Moments of the Empirical MMD Under $H_0$

In this section, we compute the moments of the U-statistic in Section 5, under the null hypothesis conditions
$$\mathbb{E}_{z,z'} h(z,z') = 0, \qquad (30)$$
and, importantly,
$$\mathbb{E}_{z'} h(z,z') = 0. \qquad (31)$$
Note that the latter implies the former.

Variance/2nd moment:
This was derived by Hoeffding (1948, p. 299), and is also described by Serfling (1980, Lemma A, p. 183). Applying these results,
$$\mathbb{E}\left( \left[ \mathrm{MMD}^2_u \right]^2 \right) = \left( \frac{2}{n(n-1)} \right)^2 \left[ \frac{n(n-1)(n-2)}{2} \, \mathbb{E}_z \left[ \left( \mathbb{E}_{z'} h(z,z') \right)^2 \right] + \frac{n(n-1)}{2} \, \mathbb{E}_{z,z'} \left[ h^2(z,z') \right] \right]$$
$$= \frac{2(n-2)}{n(n-1)} \, \mathbb{E}_z \left[ \left( \mathbb{E}_{z'} h(z,z') \right)^2 \right] + \frac{2}{n(n-1)} \, \mathbb{E}_{z,z'} \left[ h^2(z,z') \right] = \frac{2}{n(n-1)} \, \mathbb{E}_{z,z'} \left[ h^2(z,z') \right],$$
where the first term in the penultimate line is zero due to (31). Note that the variance and the 2nd moment are the same under the zero mean assumption.

We consider the terms that appear in the expansion of $\mathbb{E}\left( \left[ \mathrm{MMD}^2_u \right]^3 \right)$. These are all of the form
$$\left( \frac{2}{n(n-1)} \right)^3 \mathbb{E}\left( h_{ab} h_{cd} h_{ef} \right),$$
where we shorten $h_{ab} = h(z_a, z_b)$, and we know $z_a$ and $z_b$ are always independent. Most of the terms vanish due to (30) and (31). The first terms that remain take the form
$$\left( \frac{2}{n(n-1)} \right)^3 \mathbb{E}\left( h_{ab} h_{bc} h_{ca} \right),$$
and there are $n(n-1)(n-2)$ of them, which gives
$$\left( \frac{2}{n(n-1)} \right)^3 n(n-1)(n-2) \, \mathbb{E}_{z,z'} \left[ h(z,z') \, \mathbb{E}_{z''} \left( h(z,z'') h(z',z'') \right) \right] = \frac{8(n-2)}{n^2(n-1)^2} \, \mathbb{E}_{z,z'} \left[ h(z,z') \, \mathbb{E}_{z''} \left( h(z,z'') h(z',z'') \right) \right]. \qquad (32)$$
Note the scaling $\frac{n-2}{n^2(n-1)^2} \sim n^{-3}$. The remaining non-zero terms, for which $a = c = e$ and $b = d = f$, take the form
$$\left( \frac{2}{n(n-1)} \right)^3 \mathbb{E}_{z,z'} \left[ h^3(z,z') \right],$$
and there are $n(n-1)$ of them, which gives
$$\frac{8}{n^2(n-1)^2} \, \mathbb{E}_{z,z'} \left[ h^3(z,z') \right]. \qquad (33)$$
However $\frac{1}{n^2(n-1)^2} \sim n^{-4}$, so this term is negligible compared with (32). Thus, a reasonable approximation to the third moment is
$$\mathbb{E}\left( \left[ \mathrm{MMD}^2_u \right]^3 \right) \approx \frac{8(n-2)}{n^2(n-1)^2} \, \mathbb{E}_{z,z'} \left[ h(z,z') \, \mathbb{E}_{z''} \left( h(z,z'') h(z',z'') \right) \right].$$
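These two moments are what the Pearson-curve approximation of Section 5 (MMD u M) matches. A rough plug-in sketch follows, estimating them from the matrix $H_{ij} = h(z_i,z_j)$; approximating the inner expectation $\mathbb{E}_{z''}[h(z,z'')h(z',z'')]$ by $\frac{1}{n}(HH)_{ij}$ is our own simplification (it ignores the exclusion of diagonal terms), so this is a sketch rather than the exact estimator used in the paper.

```python
import numpy as np

def h_matrix(Kxx, Kyy, Kxy):
    """h(z_i, z_j) for z_i = (x_i, y_i), following (27); the diagonal is not used."""
    return Kxx + Kyy - Kxy - Kxy.T

def null_moments(Kxx, Kyy, Kxy):
    """Rough plug-in estimates of the 2nd and 3rd moments of MMD_u^2 under H_0."""
    H = h_matrix(Kxx, Kyy, Kxy)
    n = H.shape[0]
    off = ~np.eye(n, dtype=bool)                       # restrict to i != j
    second = 2.0 / (n * (n - 1)) * np.mean(H[off] ** 2)
    inner = (H @ H) / n                                # crude estimate of E_{z''}[h(z,z'')h(z',z'')]
    third = 8.0 * (n - 2) / (n**2 * (n - 1)**2) * np.mean((H * inner)[off])
    return second, third
```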
Acknowledgements: We thank Philipp Berens, Olivier Bousquet, John Langford, Omri Guttman, Matthias Hein, Novi Quadrianto, Le Song, and Vishy Vishwanathan for constructive discussions; Patrick Warnat (DKFZ, Heidelberg) for providing the microarray datasets; and Nikos Logothetis for providing the neural datasets. National ICT Australia is funded through the Australian Government's Backing Australia's Ability initiative, in part through the Australian Research Council. This work was supported in part by the IST Programme of the European Community, under the PASCAL Network of Excellence, IST-2002-506778, and by the Austrian Science Fund (FWF), project
References
Y. Altun and A.J. Smola. Unifying divergence minimization and statistical inference viaconvex duality. In H.U. Simon and G. Lugosi, editors,
Proc. Annual Conf. ComputationalLearning Theory , LNCS, pages 139–153. Springer, 2006.N. Anderson, P. Hall, and D. Titterington. Two-sample test statistics for measuring discrep-ancies between two multivariate probability density functions using kernel-based densityestimates.
Journal of Multivariate Analysis , 50:41–54, 1994.S. Andrews, I. Tsochantaridis, and T. Hofmann. Support vector machines for multiple-instance learning. In S. Becker, S. Thrun, and K. Obermayer, editors,
Advances in Neural Information Processing Systems 15. MIT Press, 2003.
M. Arcones and E. Gin´e. On the bootstrap of u and v statistics. The Annals of Statistics ,20(2):655–674, 1992.F. R. Bach and M. I. Jordan. Kernel independent component analysis.
J. Mach. Learn.Res. , 3:1–48, 2002.P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds andstructural results.
J. Mach. Learn. Res. , 3:463–482, 2002.S. Ben-David, J. Blitzer, K. Crammer, and F. Pereira. Analysis of representations fordomain adaptation. In
NIPS 19, pages 137–144. MIT Press, 2007.G. Biau and L. Gyorfi. On the asymptotic properties of a nonparametric $L_1$-test statistic of homogeneity. IEEE Transactions on Information Theory, 51(11):3965–3973, 2005.P. Bickel. A distribution free version of the Smirnov two sample test in the p-variate case.
The Annals of Mathematical Statistics , 40(1):1–23, 1969.C. L. Blake and C. J. Merz. UCI repository of machine learning databases, 1998. URL .K. M. Borgwardt, C. S. Ong, S. Schonauer, S. V. N. Vishwanathan, A. J. Smola, andH. P. Kriegel. Protein function prediction via graph kernels.
Bioinformatics , 21(Suppl1):i47–i56, Jun 2005.K. M. Borgwardt, A. Gretton, M. J. Rasch, H.-P. Kriegel, B. Sch¨olkopf, and A. J. Smola.Integrating structured biological data by kernel maximum mean discrepancy.
Bioinfor-matics (ISMB) , 22(14):e49–e57, 2006.O. Bousquet, S. Boucheron, and G. Lugosi. Theory of classification: a survey of recentadvances.
ESAIM: Probab. Stat. , 9:323– 375, 2005.R. Caruana and T. Joachims. Kdd cup. http://kodiak.cs.cornell.edu/kddcup/index.html,2004.G. Casella and R. Berger.
Statistical Inference . Duxbury, Pacific Grove, CA, 2nd edition,2002.B. Chazelle. A minimum spanning tree algorithm with inverse-ackermann type complexity.
Journal of the ACM , 47, 2000.P. Comon. Independent component analysis, a new concept?
Signal Processing , 36:287–314,1994.M. Davy, A. Gretton, A. Doucet, and P. J. W. Rayner. Optimized support vector machinesfor nonstationary signal classification.
IEEE Signal Processing Letters , 9(12):442–445,2002.R. Der and D. Lee. Large-margin classification in banach spaces. In
AISTATS 11, 2007.M. Dudík and R. E. Schapire. Maximum entropy distribution estimation with generalized regularization. In Gábor Lugosi and Hans U. Simon, editors,
Proc. Annual Conf.Computational Learning Theory . Springer Verlag, June 2006.M. Dud´ık, S. Phillips, and R.E. Schapire. Performance guarantees for regularized maxi-mum entropy density estimation. In
Proc. Annual Conf. Computational Learning Theory .Springer Verlag, 2004.R. M. Dudley.
Real analysis and probability . Cambridge University Press, Cambridge, UK,2002.W. Feller.
An Introduction to Probability Theory and its Applications . John Wiley andSons, New York, 2 edition, 1971.Andrey Feuerverger. A consistent test for bivariate dependence.
International StatisticalReview , 61(3):419–433, 1993.S. Fine and K. Scheinberg. Efficient SVM training using low-rank kernel representations.
Journal of Machine Learning Research , 2:243–264, Dec 2001.R. Fortet and E. Mourier. Convergence de la r´eparation empirique vers la r´eparationth´eorique.
Ann. Scient. ´Ecole Norm. Sup. , 70:266–285, 1953.J. Friedman and L. Rafsky. Multivariate generalizations of the Wald-Wolfowitz and Smirnovtwo-sample tests.
The Annals of Statistics , 7(4):697–717, 1979.K. Fukumizu, A. Gretton, X. Sun, and B. Sch¨olkopf. Kernel measures of conditional de-pendence. In
Advances in Neural Information Processing Systems 20 , 2008.T. G¨artner, P. A. Flach, A. Kowalczyk, and A. J. Smola. Multi-instance kernels. In
Proc.Intl. Conf. Machine Learning , 2002.E. Gokcay and J.C. Principe. Information theoretic clustering.
IEEE Transactions onPattern Analysis and Machine Intelligence , 24(2):158–171, 2002.A. Gretton, O. Bousquet, A.J. Smola, and B. Sch¨olkopf. Measuring statistical dependencewith Hilbert-Schmidt norms. In S. Jain, H. U. Simon, and E. Tomita, editors,
ProceedingsAlgorithmic Learning Theory , pages 63–77, Berlin, Germany, 2005a. Springer-Verlag.A. Gretton, R. Herbrich, A. Smola, O. Bousquet, and B. Sch¨olkopf. Kernel methods formeasuring independence.
J. Mach. Learn. Res. , 6:2075–2129, 2005b.A. Gretton, K. Borgwardt, M. Rasch, B. Sch¨olkopf, and A. Smola. A kernel method for thetwo-sample-problem. In
Advances in Neural Information Processing Systems 19 , pages513–520, Cambridge, MA, 2007a. MIT Press.A. Gretton, K. Borgwardt, M. Rasch, B. Schlkopf, and A. Smola. A kernel approach tocomparing distributions.
Proceedings of the 22nd Conference on Artificial Intelligence (AAAI-07), pages 1637–1641, 2007b.
A. Gretton, K. Fukumizu, C. H. Teo, L. Song, B. Sch¨olkopf, and A. Smola. A kernelstatistical test of independence. In
Neural Information Processing Systems , 2008.G. R. Grimmet and D. R. Stirzaker.
Probability and Random Processes . Oxford UniversityPress, Oxford, third edition, 2001.P. Hall and N. Tajvidi. Permutation tests for equality of distributions in high-dimensionalsettings.
Biometrika , 89(2):359–374, 2002.Z. Harchaoui, F. Bach, and E. Moulines. Testing for homogeneity with kernel fisher dis-criminant analysis. In
NIPS 20 . MIT Press, 2008.M. Hein, T.N. Lal, and O. Bousquet. Hilbertian metrics on probability measures and theirapplication in svm’s. In
Proceedings of the 26th DAGM Symposium , pages 270–277,Berlin, 2004. Springer.N. Henze and M. Penrose. On the multivariate runs test.
The Annals of Statistics , 27(1):290–298, 1999.W. Hoeffding. Probability inequalities for sums of bounded random variables.
Journal ofthe American Statistical Association , 58:13–30, 1963.Wassily Hoeffding. A class of statistics with asymptotically normal distribution.
The Annalsof Mathematical Statistics , 19(3):293–325, 1948.T. Jebara and I. Kondor. Bhattacharyya and expected likelihood kernels. In B. Sch¨olkopfand M. Warmuth, editors,
Proceedings of the Sixteenth Annual Conference on Computa-tional Learning Theory , number 2777 in Lecture Notes in Computer Science, pages 57–71,Heidelberg, Germany, 2003. Springer-Verlag.N. L. Johnson, S. Kotz, and N. Balakrishnan.
Continuous Univariate Distributions. Volume1 (Second Edition) . John Wiley and Sons, 1994.A. Kankainen.
Consistent Testing of Total Independence Based on the Empirical Charac-teristic Function . PhD thesis, University of Jyv¨askyl¨a, 1995.D. Kifer, S. Ben-David, and J. Gehrke. Detecting change in data streams. In
Very LargeDatabases (VLDB) , 2004.H.W. Kuhn. The Hungarian method for the assignment problem.
Naval Research LogisticsQuarterly , 2:83–97, 1955.C. McDiarmid. On the method of bounded differences. In
Survey in Combinatorics , pages148–188. Cambridge University Press, 1989.J. Mercer. Functions of positive and negative type and their connection with the theory ofintegral equations.
Philos. Trans. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. , A 209:415–446, 1909.C. Micchelli, Y. Xu, and H. Zhang. Universal kernels.
Journal of Machine Learning Research, 7:2651–2667, 2006.A. Müller. Integral probability metrics and their generating classes of functions.
Adv. Appl.Prob. , 29:429–443, 1997.XuanLong Nguyen, Martin Wainwright, and Michael Jordan. Estimating divergence func-tionals and the likelihood ratio by penalized convex risk minimization. In
NIPS 20 . MITPress, 2008.W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery.
Numerical Recipesin C. The Art of Scientific Computation . Cambridge University Press, Cambridge, UK,1994.M. Rasch, A. Gretton, Y. Murayama, W. Maass, and N. Logothetis. Inferring spike trainsfrom local field potentials.
Journal of Neurophysiology , 99:1461–1476, 2008.P. Rosenbaum. An exact distribution-free test comparing two multivariate distributionsbased on adjacency.
Journal of the Royal Statistical Society B , 67(4):515–530, 2005.Y. Rubner, C. Tomasi, and L.J. Guibas. The earth mover’s distance as a metric for imageretrieval.
Int. J. Comput. Vision , 40(2):99–121, 2000. doi: http://dx.doi.org/10.1023/A:1026543900054.B. Sch¨olkopf.
Support Vector Learning. R. Oldenbourg Verlag, Munich, 1997.B. Schölkopf and A. J. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002.B. Schölkopf, J. Platt, J. Shawe-Taylor, A. J. Smola, and R. C. Williamson. Estimating the support of a high-dimensional distribution.
Neural Comput. , 13(7):1443–1471, 2001.B. Sch¨olkopf, K. Tsuda, and J.-P. Vert.
Kernel Methods in Computational Biology . MITPress, Cambridge, MA, 2004.R. Serfling.
Approximation Theorems of Mathematical Statistics . Wiley, New York, 1980.J. Shawe-Taylor and N. Cristianini.
Kernel Methods for Pattern Analysis . CambridgeUniversity Press, Cambridge, UK, 2004.J. Shawe-Taylor and A. Dolia. A framework for probability density estimation. In M. Meilaand X. Shen, editors,
Proceedings of International Workshop on Artificial Intelligenceand Statistics , 2007.B. W. Silverman.
Density estimation for statistical and data analysis . Monographs onstatistics and applied probability. Chapman and Hall, London, 1986.N.V. Smirnov. On the estimation of the discrepancy between empirical curves of distributionfor two independent samples.
Bulletin Mathematics , 2:3–26, 1939. University of Moscow.A. J. Smola and B. Sch¨olkopf. Sparse greedy matrix approximation for machine learning.In P. Langley, editor,
Proc. Intl. Conf. Machine Learning, pages 911–918, San Francisco, 2000. Morgan Kaufmann Publishers.
A.J. Smola, A. Gretton, L. Song, and B. Sch¨olkopf. A hilbert space embedding for distribu-tions. In E. Takimoto, editor,
Algorithmic Learning Theory , Lecture Notes on ComputerScience. Springer, 2007.L. Song, X. Zhang, A. Smola, A. Gretton, and B. Sch¨olkopf. Tailoring density estimationvia reproducing kernel moment matching. In
ICML , 2008. to appear.B. Sriperumbudur, A. Gretton, K. Fukumizu, G. Lanckriet, and B. Sch¨olkopf. Injectivehilbert space embeddings of probability measures. In
COLT , 2008. to appear.I. Steinwart. On the influence of the kernel on the consistency of support vector machines.
J. Mach. Learn. Res. , 2:67–93, 2001.I. Takeuchi, Q.V. Le, T. Sears, and A.J. Smola. Nonparametric quantile estimation.
J.Mach. Learn. Res. , 2006.D. M. J. Tax and R. P. W. Duin. Data domain description by support vectors. In M. Ver-leysen, editor,
Proceedings ESANN , pages 251–256, Brussels, 1999. D Facto.A. W. van der Vaart and J. A. Wellner.
Weak Convergence and Empirical Processes .Springer, 1996.J. E. Wilkins. A note on skewness and kurtosis.
The Annals of Mathematical Statistics , 15(3):333–335, 1944.Christoper K. I. Williams and Matthias Seeger. Using the Nystrom method to speed upkernel machines. In T. K. Leen, T. G. Dietterich, and V. Tresp, editors,
Advances in Neural Information Processing Systems 13, pages 682–688, Cambridge, MA, 2001. MIT Press.