Parameter identifiability in a class of random graph mixture models
Elizabeth S. Allman^a, Catherine Matias^b,∗, John A. Rhodes^a

a Department of Mathematics and Statistics, University of Alaska Fairbanks, PO Box 756660, Fairbanks, AK 99775, U.S.A.
b CNRS UMR 8071, Laboratoire Statistique et Génome, 523, place des Terrasses de l'Agora, 91000 Évry, France
Abstract
We prove identifiability of parameters for a broad class of random graph mixture models. These models are characterized by a partition of the set of graph nodes into latent (unobservable) groups. The connectivities between nodes are independent random variables when conditioned on the groups of the nodes being connected. In the binary random graph case, in which edges are either present or absent, these models are known as stochastic blockmodels and have been widely used in the social sciences and, more recently, in biology. Their generalizations to weighted random graphs, either in parametric or non-parametric form, are also of interest in many areas. Despite a broad range of applications, the parameter identifiability issue for such models is involved, and previously has only been touched upon in the literature. We give here a thorough investigation of this problem. Our work also has consequences for parameter estimation. In particular, the estimation procedure proposed by Frank and Harary for binary affiliation models is revisited in this article.
Keywords:
Identifiability, mixture model, random graph, stochastic blockmodel
1. Introduction
In modern statistical analyses, data is often structured using networks. Complex networks appear across many fields of science, including biology (metabolic networks, transcriptional regulatory networks, protein-protein interaction networks), sociology (social networks of acquaintance, or other connections between individuals), communications (the Internet), and others.

The literature contains many random graph models which incorporate a variety of characteristics of real-world graphs (such as scale-free or small-world properties). We refer to Newman (2003) and the references therein for an interesting introduction to

∗ Corresponding author
Email addresses: [email protected] (Elizabeth S. Allman), [email protected] (Catherine Matias), [email protected] (John A. Rhodes)
Preprint submitted to Elsevier, November 8, 2018

networks. One of the earliest and most studied random graph models was formulated by Erdős and Rényi (1959). In this setup, binary random graphs are modeled as a set of independent and identically distributed Bernoulli edge variables over a fixed set of nodes. The homogeneity of this model led to the introduction of mixture versions to better capture heterogeneity in data. Stochastic blockmodels (Daudin et al., 2008; Frank and Harary, 1982; Holland et al., 1983; Snijders and Nowicki, 1997) were introduced in various forms, primarily in the social sciences (White et al., 1976) to study relational data, and more recently in biology (Picard et al., 2009). In this context, the nodes are partitioned into latent groups (blocks) characterizing the relations between nodes. Blockmodelling thus refers to the particular structure of the adjacency matrix of the graph (i.e., the matrix containing edge indicators). By ordering the nodes by the groups to which they belong, this matrix exhibits a block pattern. Diagonal and off-diagonal blocks, respectively, represent intra-group and inter-group connections. In the special case where blocks exhibit the same behavior within their type (diagonal or off-diagonal), we obtain a model with an affiliation structure (Frank and Harary, 1982).

Although the literature from the social sciences has focused mostly on binary relations, there is a growing interest in weighted graphs (Barrat et al., 2004; Newman, 2004). Mixture models have also been considered in the case of a finite number of possible relations (Nowicki and Snijders, 2001), and more recently with continuous edge variables (Ambroise and Matias, 2010; Mariadassou and Robin, 2010).
Some variations that we shall not discuss here include models with covariates (Tallberg, 2005), mixed membership models (Airoldi et al., 2008; Latouche et al., 2009), and models with continuous latent variables (Daudin et al., 2010; Handcock et al., 2007). We also note that Newman and Leicht (2007) proposed another version of a binary mixture model, slightly different from the stochastic blockmodel considered here.

Many different parameter estimation procedures have been proposed for these models, such as Bayesian methods (Nowicki and Snijders, 2001; Snijders and Nowicki, 1997), variational Expectation-Maximization (EM) procedures (Daudin et al., 2008; Picard et al., 2009), online classification EM methods (Zanghi et al., 2008, 2010) and, more recently, direct mixture model based approaches (Ambroise and Matias, 2010). Consistency of all these procedures relies strongly on the identifiability of the model parameters. However, the literature on these models has not addressed this question in any depth. The trivial label-swapping problem is often mentioned: it is well known that the parameters may be recovered only up to permutations on the latent class labels. Whether this is the only issue preventing unique identification of parameters from the distribution, however, has never been investigated. Given the complex form of the model parameterization, this is not surprising, as any such analysis seems likely to be very involved.

In earlier work (Allman et al., 2009, Theorem 7), the authors made a first step towards an understanding of the parameter identifiability issue in binary random graph mixture models. While that article addressed a variety of models with latent variables, the present one focuses more specifically on random graph mixtures, giving parameter identifiability results for a broad range of such models. Moreover, part of our work sheds some new light on parameter estimation procedures.

Allman et al.
(2009) emphasized the usefulness of an algebraic theorem due to Kruskal (1976, 1977) (see also Rhodes, 2010) to establish identifiability results in various models whose common feature is the presence of latent groups and at least three conditionally independent variables. Here, we rather focus on the family of random graph mixture models and explore various techniques to establish their parameters' identifiability. Thus while the method developed by Allman et al. (2009) is presented in Section 5.1 and finds further use in several arguments, it is only one of several techniques we use. The issue at the core of Kruskal's result is the decomposition of a 3-way array as a sum of rank one tensors. While there exist approximate methods of performing this decomposition (see, e.g., Tomasi and Bro, 2006), we mention that this approach seems poorly-suited to explicitly recover the parameters from the distribution, and thus to construct estimation procedures.

Some of our results focus on moment equations, as did those of Frank and Harary (1982), in one of the earliest works on binary affiliation models. In particular, we revisit some of their claims. The method consists in looking at the distribution of K_n, a complete set of edge variables over a set of n nodes. A natural question is then: What is the minimal value of n such that the complete distribution over all edge variables (a potentially infinite set) is characterized by the distribution of K_n? Despite this question's simplicity, we are far from having a complete answer to it. When looking at finite state distributions (e.g., for binary random graphs), the knowledge of the distribution of K_n is equivalent to the knowledge of a certain set of moments of the distribution. Expressing the moments in terms of parameters gives a nonlinear polynomial system of equations, which one uses to identify parameters.
The uniqueness of solutions to those systems, up to label swapping on parameters, is the issue at stake for identifiability.

For random graphs with continuous edge weights given by a parametric family of distributions, we shall see that the information contained in the model might be recovered from the distribution of K_n for very small values of n. In this case, we rely on classical results on the identifiability of the parameters of a multivariate mixture due to Teicher (1967). Note that the main difference between classical mixtures and random graph mixtures is the non-independence of the variates.

In contrast to the approach based on Kruskal's Theorem, both the method utilizing moment equations and the one relying on multivariate mixtures lead to practical estimation procedures. These are further developed by Ambroise and Matias (2010).

In Allman et al. (2009), a large role was played by the notion of generic identifiability, by which every parameter, except those lying on a proper algebraic subvariety, is identifiable. In other words, in a parametric setting, the non-identifiable parameters are included in a subset whose dimension is strictly smaller than the dimension of the full parameter space. Thus with probability one with respect to the Lebesgue measure, every parameter is identifiable. This notion of generic identifiability is important for finite mixtures of multivariate Bernoulli distributions (Allman et al., 2009; Carreira-Perpiñán and Renals, 2000; Gyllenberg et al., 1994) and also for hidden Markov models (Allman et al., 2009; Petrie, 1969). Here, we stress that some of our identifiability results are generic, while others are strict.

Finally, we note that our focus throughout will be on undirected graph models. While many of our results may be generalized to directed graphs, one must pay careful attention to the models' parametrization in doing so.
For instance, some of the results would become simpler if the connectivities from group q to group l differed from those from group l to group q, as symmetry in a model can have a strong impact on identifiability questions. However, such asymmetric models require an increase in the number of parameters which may be excessive for data analysis.

This paper is organized as follows. Section 2 presents the various random graph mixture models: with either binary or, more generally, finite-state edges; both parametric and non-parametric models for edges with continuous weights; and the particular affiliation variant of these models. Section 3 gives parameter identifiability results for binary random graphs. Note that the affiliation model has to be handled separately. Section 4 takes up weighted random graphs, in both parametric and non-parametric variants. All the proofs are postponed to Section 5. In particular, Section 5.1 is devoted to a brief presentation of Kruskal's result and our use of it in the proofs of Theorems 2 and 14.
2. Notation and models
We consider a probabilistic model on undirected and possibly weighted graphs as follows. Let n be a fixed number of nodes, with Z_1, ..., Z_n independent identically distributed (i.i.d.) random variables, taking values in Z = {1, ..., Q} for some Q ≥ 2. The latent classes represent the Q groups the nodes are partitioned among, and are used to introduce heterogeneity in the model. With π_q = P(Z_i = q) ∈ (0, 1) and Σ_q π_q = 1, the vector π = (π_q) thus gives the priors on the groups.

Let {X_ij}_{1 ≤ i < j ≤ n} denote the edge variables, taking values in a state space X. Conditional on the groups, the edge variables are independent, and the conditional distribution of X_ij depends only on the groups of nodes i and j. In the binary case, X = {0, 1} and, conditional on {Z_i = q, Z_j = l}, the variable X_ij is Bernoulli with parameter p_ql = P(X_ij = 1 | Z_i = q, Z_j = l). In the weighted case, the conditional distribution of X_ij given {Z_i = q, Z_j = l} is

µ_ql = (1 − p_ql) δ_0 + p_ql F_ql,

where p_ql ∈ (0, 1] is a sparsity parameter, δ_0 is the Dirac mass at zero and F_ql is a probability measure on X with density f_ql with respect to either the counting measure on N or the Lebesgue measure on R or R^s. We also implicitly assume µ_ql = µ_lq, for all 1 ≤ q, l ≤ Q. In the parametric case, we assume moreover that F_ql = F(·, θ_ql) and f_ql = f(·, θ_ql), where the parameter θ_ql belongs to Θ ⊂ R^p. In the non-parametric case we assume F_ql is absolutely continuous.

We shall always assume that F_ql has no point mass at zero, otherwise the sparsity parameter p_ql cannot be identified from the mixture µ_ql. For instance, when considering Poisson weights, f_ql is the Poisson density truncated at zero,

f_ql(k) = θ_ql^k / (k! (e^{θ_ql} − 1)), for k ≥ 1.

A particular instance of these models is the affiliation one, which assumes additionally only two distributions of connections between the nodes, one for intra-group connections and another for inter-group connections. Thus the binary state case of the affiliation model assumes

p_ql = α if q = l, and p_ql = β if q ≠ l,

for all q, l ∈ {1, ..., Q}. The affiliation model in the continuous observations case is described similarly with µ_ql = µ_in 1{q = l} + µ_out 1{q ≠ l}, for all 1 ≤ q, l ≤ Q. More precisely, in the continuous parametric case, for all q, l ∈ {1, ..., Q} we set

p_ql = α if q = l, β if q ≠ l,   and   θ_ql = θ_in if q = l, θ_out if q ≠ l.
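As a concrete illustration of the model just defined, the following sketch (ours, not the authors'; the function and variable names are hypothetical) simulates one realization of a binary stochastic blockmodel, including the affiliation special case:

```python
# Sketch: simulating one adjacency matrix from the binary stochastic
# blockmodel of Section 2 (our illustration, not code from the paper).
import numpy as np

def sample_blockmodel(n, pi, P, rng):
    """n nodes, group priors pi (length Q), symmetric connectivities P (Q x Q)."""
    Q = len(pi)
    z = rng.choice(Q, size=n, p=pi)          # latent groups Z_1, ..., Z_n
    X = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            # edge indicator X_ij, Bernoulli(p_{z_i z_j})
            X[i, j] = X[j, i] = rng.random() < P[z[i], z[j]]
    return z, X

rng = np.random.default_rng(0)
# affiliation case: alpha on the diagonal, beta off the diagonal
alpha, beta = 0.8, 0.2
P = np.full((3, 3), beta)
np.fill_diagonal(P, alpha)
z, X = sample_blockmodel(50, [1/3, 1/3, 1/3], P, rng)
```

Ordering the rows and columns of X by the entries of z exhibits the block pattern of the adjacency matrix described in the introduction.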
For all these models, we consider restrictions of the model distribution by focusing on a subset of the nodes. We denote by K_n the complete set of (n choose 2) edge variables associated to a subset of n nodes. Note that the distribution of these variables is independent of the choice of which n nodes one considers. Also, while this notation is motivated by that used in graph theory, where K_n denotes the complete graph on n nodes, we emphasize that here K_n is a set of random variables, and we are making no statement as to whether these edges are present or absent in any realization of our model.

3. Binary random graphs

We first focus on models with binary edge states, considering the more general case with arbitrary connectivity parameters, followed by affiliation models.

3.1. The non-affiliation case

When X = {0, 1}, a first result on identifiability of parameters was obtained by Allman et al. (2009) for the special case of Q = 2 groups. For completeness, we recall the statement here.

Theorem 1. (Allman et al., 2009, Theorem 7). The parameters π_1, π_2 = 1 − π_1, p_11, p_12, p_22 of the random graph mixture model with binary edge state variables and Q = 2 groups are identifiable, up to label swapping, from the distribution of K_16, provided that the connectivity parameters {p_11, p_12, p_22} are distinct. In particular, the result remains valid when the group proportions π_q are fixed.

Note the assumption that p_11 ≠ p_22 limits this theorem to the strict non-affiliation case. The proof of this theorem is based on a clever application of an algebraic result, due to Kruskal (1976, 1977) (see also Rhodes, 2010), that deals with decompositions of 3-way arrays. While generalizing the proof to more than two groups requires substantially more effort, the basic method still applies. Here we prove the following theorem.

Theorem 2.
The parameters π_q, 1 ≤ q ≤ Q, and p_ql = P(X_ij = 1 | Z_i = q, Z_j = l), 1 ≤ q ≤ l ≤ Q, of the random graph mixture model with binary edge state variables and Q ≥ 2 groups are generically identifiable, up to label swapping, from the distribution of K_m, when

m ≥ 3Q²(Q + 2)/2 + 1 if Q is even,
m ≥ 3Q(Q + 1)(Q + 3)/2 + 1 if Q is odd.

Moreover, the result remains valid when the group proportions π_q are fixed.

Note that the stated number of nodes ensuring that parameters are generically identifiable from the distribution of the edges may not be optimal. In particular, when Q = 2, the proof of this theorem is still valid, yet it gives a minimal number of m = 25 nodes. This is larger than the bound 16 obtained in Theorem 1, and that number may itself not be optimal.

Also, while Theorem 1 gives exact restrictions on parameters producing identifiability, Theorem 2 is not explicit about the generic conditions. However, for any fixed Q the argument in our proof does yield a straightforward, though perhaps lengthy, means of checking whether a particular choice of parameters meets the conditions. Among these is a requirement that the p_ql be distinct, so the theorem does not apply to the affiliation model.

Moreover, a careful reading of the proof of the theorem shows that its generic aspect concerns only the part of the parameter space with the connectivities p_ql. This enables us to conclude that even when considering subsets defined by restriction of the group proportions π_q (for instance assuming the group proportions are fixed, or equal), the result remains valid.

3.2. The affiliation case

In the particular case of the affiliation model, we can obtain results from arguments based on moments of the distribution. For a small number of nodes, one may obtain explicit formulas for the moments in terms of model parameters.
By analyzing the solutions to this nonlinear multivariate polynomial system of equations, one can address the question of parameter identifiability, as well as develop estimation procedures.

3.2.1. Relying on the distribution of K_3.

Frank and Harary (1982) presented a method for estimation of the parameters of the binary affiliation model based only on the distribution of triplet cycles (X_ij, X_jk, X_ki), 1 ≤ i < j < k ≤ n, of edge variables. From an identifiability perspective, this corresponds to identifying the parameters from the distribution of K_3. They suggest estimation of the parameters by solving the empirical moment equations. However, they omit discussing uniqueness of the solutions to these equations, even though this issue is a delicate one for nonlinear equations.

In the following, we first explore the use of the distribution of only K_3 to identify model parameters. As a consequence, we exhibit a new estimation procedure for the parameters.

The distribution of a triplet (X_ij, X_jk, X_ki) is expressible in terms of the indeterminates α, β and the π_q's. Let us denote by s_2 and s_3 the sums of the squares and cubes of the π_q's and, more generally, let

s_k = Σ_{q=1}^Q π_q^k.

Then one easily computes (see also Frank and Harary, 1982) the moment formulas

m_1 = E(X_ij) = s_2 α + (1 − s_2) β,   (1)
m_2 = E(X_ij X_ik) = s_3 α² + 2(s_2 − s_3) αβ + (1 − 2s_2 + s_3) β²,   (2)
m_3 = E(X_ij X_ik X_jk) = s_3 α³ + 3(s_2 − s_3) αβ² + (1 − 3s_2 + 2s_3) β³,   (3)

which completely characterize the distribution of (X_ij, X_jk, X_ki).

Note that in the important case of a uniform node distribution, where π_q = 1/Q for all q, we have s_k = Q^{1−k}. This implies s_3 = s_2², and hence m_2 = m_1², so these equations reduce to two independent ones.
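Formulas (1)-(3) are easy to check numerically. The sketch below (our illustration, not code from the paper) compares them with a brute-force expectation over all group assignments of three nodes:

```python
# Sketch: the moment formulas (1)-(3) for the binary affiliation model,
# checked against a direct sum over all latent group assignments.
from itertools import product

def moments_formula(pi, alpha, beta):
    s2 = sum(p**2 for p in pi)
    s3 = sum(p**3 for p in pi)
    m1 = s2*alpha + (1 - s2)*beta
    m2 = s3*alpha**2 + 2*(s2 - s3)*alpha*beta + (1 - 2*s2 + s3)*beta**2
    m3 = s3*alpha**3 + 3*(s2 - s3)*alpha*beta**2 + (1 - 3*s2 + 2*s3)*beta**3
    return m1, m2, m3

def moments_bruteforce(pi, alpha, beta):
    connect = lambda q, l: alpha if q == l else beta    # p_ql of the model
    m1 = m2 = m3 = 0.0
    for zi, zj, zk in product(range(len(pi)), repeat=3):
        w = pi[zi]*pi[zj]*pi[zk]
        m1 += w * connect(zi, zj)
        m2 += w * connect(zi, zj)*connect(zi, zk)       # two edges sharing node i
        m3 += w * connect(zi, zj)*connect(zi, zk)*connect(zj, zk)  # triangle
    return m1, m2, m3
```

In the uniform case the computed moments indeed satisfy m_2 = m_1², illustrating the reduction to two independent equations.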
As a consequence, the claim by Frank and Harary (1982) that it is then possible to estimate the three unknowns Q, α, β relying only on these moment equations is not correct. Still, there are indeed several situations in which parameters are identifiable from these moments, as we next discuss.

With Q = 2 latent groups and a possibly non-uniform group distribution, there are 3 independent parameters in the affiliation model. In this case, the three moments above are enough to identify parameters. To show this, we first construct certain polynomials with roots at the connectivity parameters. Since the construction easily extends to larger Q, we give it more generally.

Proposition 3. Consider the random graph affiliation mixture model with Q ≥ 2 groups and binary edge state variables, on Q + 1 nodes. Then the parameter α is a real root of the degree (Q+1 choose 2) univariate polynomial

U_Q(X) = E[ Π_{1 ≤ i < j ≤ Q+1} (X − X_ij) ].

We do not know whether α is always the unique real root of this polynomial when Q ≥ 3. In the case of Q = 2 groups, however, we prove that these polynomials uniquely identify the parameters.

Theorem 4. In the random graph affiliation mixture model with Q = 2 groups and binary edge state variables, the parameter α is the unique real root of the polynomial

U_2(X) = X³ − 3m_1 X² + 3m_2 X − m_3.

Moreover, as soon as α ≠ β, the parameter β is the unique real root of the polynomial V(α, Y), where

V(X, Y) = X² + XY − 3m_1 X − m_1 Y + 2m_2.

Once α and β are uniquely identified, we may determine from equation (1) the value of s_2 (again using that α ≠ β), and hence π_1, π_2, up to permutation. This proves the following corollary.

Corollary 5. The parameters {π_1, π_2 = 1 − π_1}, up to label swapping, and α, β of the random graph affiliation mixture model with Q = 2 groups and binary edge state variables are strictly identifiable from the distribution of K_3, provided α ≠ β.

Identifiability of α and β when Q and the π_q's are known.
When the π_q's are known, Frank and Harary (1982) suggested solving any two of the three empirical counterparts of equations (1), (2) and (3), leading to three different methods of estimating α and β. However, numerical experiments convinced us that two equations are in general not sufficient to uniquely determine the parameters. In fact, it is not immediately clear that even with the three moment equations (either the theoretical ones for the question of identification, or their empirical counterparts for estimation) a unique solution is determined. Below we give explicit formulas for the solution to the system, which in most cases are even rational, involving no extraction of roots. These can thus be easily used to construct estimators.

Theorem 6. If m_2 ≠ m_1², then π is non-uniform and we can recover the parameters β and α via the rational formulas

β = [ (s_2³ − s_2 s_3) m_3 + (s_3 − s_2³) m_1 m_2 + (s_2 s_3 − s_3) m_1³ ] / [ (m_2 − m_1²)(2s_2³ − 3s_2 s_3 + s_3) ],
α = ( m_1 + (s_2 − 1) β ) / s_2.

If m_2 = m_1², then π is uniform and we have

β = m_1 + ( (m_1³ − m_3)/(Q − 1) )^{1/3}   and   α = Q m_1 + (1 − Q) β.

Implicit in this statement is the fact that denominators in the above formulas are non-zero. Note that the uniform group prior case formula is used for estimation by Ambroise and Matias (2010). We immediately obtain the following corollary.

Corollary 7. For any fixed and known values of π_q ∈ (0, 1), 1 ≤ q ≤ Q, both parameters α, β of the random graph affiliation model with binary edge state variables are identifiable from the distribution of K_3.

The proofs of the previous statements lead to an interesting polynomial in the moments, whose vanishing detects the Erdős-Rényi model, corresponding to a single node group.

Proposition 8. The moments of a random graph affiliation model with binary edge state variables, Q node states, and α ≠ β satisfy 2m_1³ − 3m_1 m_2 + m_3 = 0 if, and only if, Q = 1.
This proposition follows from expressing the moments in terms of parameters to see that

2m_1³ − 3m_1 m_2 + m_3 = (α − β)³ (2s_2³ − 3s_2 s_3 + s_3),

together with the determination in the proof of Lemma 19 in Section 5.3 that 2s_2³ − 3s_2 s_3 + s_3 ≠ 0 when π_q > 0 for all q.

3.2.2. Relying on the distribution of K_4.

We next investigate parameter identifiability from the distribution of the edge variables over more than 3 nodes, paying particular attention to the case of n = 4 nodes.

Necessary conditions for identifiability of the π_q's, when Q is known. First, we establish that for an affiliation model, if the π_q's are unknown and are to be recovered from the distribution of K_n, then one must look at at least n = Q nodes. Note that this applies not only to the binary edge state model, but to more general weighted edge models as well.

Proposition 9. In order to identify, up to label swapping, the parameters {π_q}_{1 ≤ q ≤ Q} from an affiliation random graph mixture distribution on K_n (either binary or weighted), it is necessary that n ≥ Q.

The condition in this proposition is in general not sufficient to identify the π_q. Indeed, the binary edge state affiliation model with Q = 3 has 4 parameters.
However, the set of distributions over K_3 has dimension at most 3 (according to equations (1), (2) and (3)), which is not sufficient to identify the 4 parameters.

m_1  E(X_12)                         s_2 α + (1 − s_2) β
m_2  E(X_12 X_13)                    s_3 α² + 2(s_2 − s_3) αβ + (1 − 2s_2 + s_3) β²
m_3  E(X_12 X_13 X_23)               s_3 α³ + 3(s_2 − s_3) αβ² + (1 − 3s_2 + 2s_3) β³
m_4  E(X_12 X_13 X_14)               s_4 α³ + 3(s_3 − s_4) α²β + 3(s_2 − 2s_3 + s_4) αβ² + (1 − 3s_2 + 3s_3 − s_4) β³
m_5  E(X_12 X_23 X_34)               s_4 α³ + (s_2² + 2s_3 − 3s_4) α²β + (3s_2 − 2s_2² − 4s_3 + 3s_4) αβ² + (1 − 3s_2 + s_2² + 2s_3 − s_4) β³
m_6  E(X_12 X_23 X_34 X_14)          s_4 α⁴ + 2(s_2² + 2s_3 − 3s_4) α²β² + 4(s_2 − s_2² − 2s_3 + 2s_4) αβ³ + (1 − 4s_2 + 2s_2² + 4s_3 − 3s_4) β⁴
m_7  E(X_12 X_13 X_23 X_14)          s_4 α⁴ + (s_3 − s_4) α³β + (s_2² + 2s_3 − 3s_4) α²β² + (4s_2 − 2s_2² − 7s_3 + 5s_4) αβ³ + (1 − 4s_2 + s_2² + 4s_3 − 2s_4) β⁴
m_8  E(X_12 X_13 X_23 X_14 X_24)     s_4 α⁵ + 2(s_3 − s_4) α³β² + (2s_2² + 2s_3 − 4s_4) α²β³ + (5s_2 − 4s_2² − 10s_3 + 9s_4) αβ⁴ + (1 − 5s_2 + 2s_2² + 6s_3 − 4s_4) β⁵
m_9  E(X_12 X_13 X_14 X_23 X_24 X_34)  s_4 α⁶ + 4(s_3 − s_4) α³β³ + 3(s_2² − s_4) α²β⁴ + 6(s_2 − s_2² − 2s_3 + 2s_4) αβ⁵ + (1 − 6s_2 + 3s_2² + 8s_3 − 6s_4) β⁶

Table 1: Moment formulas describing the distribution of K_4, the complete graph on 4 nodes, for the binary affiliation model.

Distribution on K_4. The moment formulas describing the distribution of the affiliation random graph mixture model on K_4 are given in Table 1. Note that m_1, m_2 and m_3 are the same as in the last subsection, and that we omit E(X_12 X_34) = (E(X_12))², since edge variables with no endpoints in common are independent. To facilitate understanding of the moments in the table, their corresponding induced motifs are shown in Figure 1.

[Figure 1: Correspondence between moments m_1, ..., m_9 and motifs for K_4.]

With Q arbitrary, but a uniform prior on the nodes (π_q = 1/Q, so s_i = Q^{1−i}), there are algebraic relationships between the moments on K_4, including

m_2 = m_1²,   m_4 = m_5 = m_1³,   m_7 = m_1 m_3,

and more complicated ones that can be computed using Gröbner basis methods to eliminate α, β, and 1/Q from the equations. (Cox et al., 1997, provide an excellent grounding on this computational algebra.) However, the 3 parameters α, β, Q of this affiliation model are, in fact, identifiable.
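The moment-based recovery results above translate directly into explicit procedures. The sketch below (ours, not the paper's code) collects Theorem 4 and Corollary 5 (unknown priors, Q = 2), Theorem 6 (known priors), and, anticipating Proposition 10, an explicit rational expression for Q under uniform priors using m_1, m_3 and the 4-cycle moment m_6; the latter expression is our own algebra, stated in a form equivalent to the proposition:

```python
# Sketch: explicit moment-based recovery for the binary affiliation model.
import numpy as np

def recover_Q2(m1, m2, m3):
    """Theorem 4 / Corollary 5: alpha is the unique real root of U_2,
    beta solves V(alpha, .), and pi_1 follows from equation (1)."""
    roots = np.roots([1.0, -3*m1, 3*m2, -m3])
    alpha = roots[np.argmin(np.abs(roots.imag))].real
    beta = (3*m1*alpha - alpha**2 - 2*m2) / (alpha - m1)  # V is linear in Y
    s2 = (m1 - beta) / (alpha - beta)
    pi1 = (1 + np.sqrt(2*s2 - 1)) / 2        # solves pi^2 + (1 - pi)^2 = s2
    return alpha, beta, pi1

def solve_known_priors(m1, m2, m3, s2, s3, Q=None, tol=1e-12):
    """Theorem 6: rational formulas (non-uniform priors) or cube root (uniform)."""
    if abs(m2 - m1**2) > tol:                # non-uniform group proportions
        num = (s2**3 - s2*s3)*m3 + (s3 - s2**3)*m1*m2 + (s2*s3 - s3)*m1**3
        den = (m2 - m1**2) * (2*s2**3 - 3*s2*s3 + s3)
        beta = num / den
        alpha = (m1 + (s2 - 1)*beta) / s2
    else:                                    # uniform priors, pi_q = 1/Q
        beta = m1 + np.cbrt((m1**3 - m3) / (Q - 1))   # real cube root
        alpha = Q*m1 + (1 - Q)*beta
    return alpha, beta

def recover_num_groups(m1, m3, m6):
    """Uniform priors: Q from m1, m3 and the 4-cycle moment m6 (our algebra,
    equivalent to the rational formula of Proposition 10)."""
    return 1 + (m3 - m1**3)**4 / (m6 - m1**4)**3
```

For instance, with π = (0.3, 0.7), α = 0.8, β = 0.2, the exact moments are m_1 = 0.548, m_2 = 0.3124, m_3 = 0.2096, and recover_Q2 returns the parameters (π_1 being recovered only up to the swap π_1 ↔ 1 − π_1).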
Indeed such calculations show that the formulas for m_1, m_3, and m_6 alone imply the following.

Proposition 10. The number of node groups, Q, in a random graph affiliation model with binary edge state variables and uniform group priors can be identified from the moments m_1, m_3, and m_6 by

Q = 1 + (m_3 − m_1³)⁴ / (m_6 − m_1⁴)³.

Note that, replacing the moments with empirical estimators, this formula could be used for estimation of Q. Of course once the formula in Proposition 10 is given, it can be most easily verified by expressing the moments in terms of parameters, and simplifying. Note that the denominator here does not vanish, as may be seen in two different ways: either by Lemma 20 in Section 5.3, or by checking that

m_6 − m_1⁴ = (α − β)⁴ (Q − 1)/Q⁴ ≠ 0.

Once Q is identified by this formula, since we are assuming π_q = 1/Q, Corollary 7 applies so that α and β are identifiable as well. Thus we have shown the following.

Corollary 11. The parameters α, β, and Q of the random graph affiliation mixture model with binary edge state variables and uniform group priors (π_q = 1/Q) are identifiable from the distribution of K_4.

4. Weighted random graphs

In the parametric case, where F_ql has parametric form F(·, θ_ql), we can uniquely identify the connectivity parameters under very general conditions by considering the distribution of K_3 only. Indeed, each triplet (X_ij, X_ik, X_jk) follows a mixture of Q³ distributions, each with three variates, comprising

• Q terms of the form µ_qq(X_ij) µ_qq(X_ik) µ_qq(X_jk), each with prior π_q³, where 1 ≤ q ≤ Q,
• 3Q(Q − 1) terms of the form µ_qq(X_ij) µ_ql(X_ik) µ_ql(X_jk) (permuting i, j and k), each with prior π_q² π_l, with distinct q, l ∈ {1, 2, ..., Q},
• Q(Q − 1)(Q − 2) terms of the form µ_ql(X_ij) µ_qm(X_ik) µ_lm(X_jk), each with prior π_q π_l π_m, with distinct q, l, m ∈ {1, 2, ...
, Q}.

By an old result due to Teicher (1967), the identifiability of finite mixtures of some family of distributions is equivalent to identifiability of finite mixtures of (multivariate) product distributions from this same family. In addition, identifiability of continuous univariate parametric mixtures is generally well understood (Teicher, 1961, 1963). Thus, we introduce the following assumptions.

Assumption 1. The Q(Q + 1)/2 parameter values θ_ql, 1 ≤ q ≤ l ≤ Q, are distinct.

Assumption 2. The family of measures M = {F(·, θ) | θ ∈ Θ} satisfies

i) all elements F(·, θ) have no point mass at 0,
ii) the parameters of finite mixtures of measures in M are identifiable, up to label swapping. In other words, for any integer m ≥ 2, if

Σ_{i=1}^m α_i F(·, θ_i) = Σ_{i=1}^m α'_i F(·, θ'_i)

then Σ_{i=1}^m α_i δ_{θ_i} = Σ_{i=1}^m α'_i δ_{θ'_i}, where δ_θ denotes the Dirac mass at θ.

Remark. Note that most of the classical parametric families satisfy this assumption. In particular, the truncated Poisson, Gaussian and Laplace families {f(·, θ), θ ∈ R^p} satisfy Assumption 2 (see e.g., McLachlan and Peel, 2000; Teicher, 1961, 1963).

Theorem 12. Under Assumptions 1 and 2, the parameters π, θ_ql, p_ql, 1 ≤ q ≤ l ≤ Q, of the parametric random graph mixture model with weighted edge variables are identifiable, up to label swapping, from the distribution of K_3.

The previous result is not applicable to the parametric affiliation model, for which the set {θ_ql, 1 ≤ q ≤ l ≤ Q} reduces to {θ_in, θ_out}, so Assumption 1 is violated. However, in this case a similar argument again yields a full identifiability result. As suggested by Proposition 9, we use Q nodes to identify the group priors.

Theorem 13.
Under Assumption 2, the parameters α, β, θ_in, θ_out of the parametric affiliation random graph mixture model with weighted edge variables are strictly identifiable from the distribution of K_3, provided θ_in ≠ θ_out. Once these have been identified, the group priors π can further be identified, up to label swapping, from the distribution of K_Q.

A similar approach to that of this theorem has been successfully used by Ambroise and Matias (2010) to estimate the parameters of these models. They first estimated the sparsity parameters from the induced binary edge state model, but a procedure based on the preceding theorems would not require that these be distinct.

We turn next to models with a finite number, κ, of edge weights. Our primary reason for investigating such models is the role they play in our analysis of models with non-parametric conditional distributions of edge weights, in Section 4.2. Thus we limit our investigation to the single result we need there.

Theorem 14. The parameters of the random graph mixture model, with κ-state edge variables and Q ≥ 2 latent groups, are identifiable, up to label swapping, from the distribution of K_3, provided κ ≥ (Q+1 choose 2) and the κ-entry vectors {p_ql}_{1 ≤ q ≤ l ≤ Q} are linearly independent.

Note that the condition given here on the number of edge states is likely far from optimal. In case Q = 2 the condition requires at least κ = 3 edge states, whereas we know from Theorem 1 that the parameters are identifiable for this Q with only κ = 2 edge states.

In the most general case of non-parametric distributions, our arguments for identifiability depend on binning the values of the edge variables into a finite set. We then apply Theorem 14 to this discretization, to obtain the following.

Theorem 15.
The parameters {π_q, µ_ql = (1 − p_ql) δ_0 + p_ql F_ql : 1 ≤ q, l ≤ Q} of the random graph weighted non-parametric mixture model are identifiable, up to label swapping, from the distribution of K_3, provided the measures µ_ql, 1 ≤ q ≤ l ≤ Q, are linearly independent.

5. Proofs

In this section we review Kruskal's theorem and describe our technique for employing it in the proofs of Theorems 2 and 14.

Kruskal's result. We first present Kruskal's result in a statistical context. Consider a latent random variable V with state space {1, ..., r} and distribution given by the column vector v = (v_1, ..., v_r). Assume that there are three observable random variables U_j for j = 1, 2, 3, each with finite state space {1, ..., κ_j}. The U_j's are moreover assumed to be independent conditional on V. Let M_j, j = 1, 2, 3, be the matrix of size r × κ_j whose ith row is m^j_i = P(U_j = · | V = i). Then consider the κ_1 × κ_2 × κ_3 tensor [v; M_1, M_2, M_3] defined by

[v; M_1, M_2, M_3] = Σ_{i=1}^r v_i m¹_i ⊗ m²_i ⊗ m³_i.

Thus [v; M_1, M_2, M_3] is a 3-dimensional array whose (s, t, u) element is

[v; M_1, M_2, M_3]_{s,t,u} = Σ_{i=1}^r v_i m¹_i(s) m²_i(t) m³_i(u) = P(U_1 = s, U_2 = t, U_3 = u),

for any 1 ≤ s ≤ κ_1, 1 ≤ t ≤ κ_2, 1 ≤ u ≤ κ_3. Note that [v; M_1, M_2, M_3] is left unchanged by simultaneously permuting the rows of all the M_j and the entries of v, as this corresponds to permuting the labels of the latent classes. Knowledge of the distribution of (U_1, U_2, U_3) is equivalent to knowledge of the tensor [v; M_1, M_2, M_3].

To state Kruskal's result, we need some algebraic terminology. For a matrix M, the Kruskal rank of M will mean the largest number I such that every set of I rows of M is independent. Note that this concept would change if we replaced "row" by "column," but we only use the row version in this article. With the Kruskal rank of M denoted by rank_K M, we have

rank_K M ≤ rank M,

and equality of rank and Kruskal rank does not hold in general.
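For small matrices the Kruskal rank just defined can be computed by brute force. The sketch below (ours, for illustration only, with cost exponential in the number of rows) also checks the rank condition appearing in Kruskal's theorem, stated as Theorem 16 below:

```python
# Sketch: brute-force (row) Kruskal rank, and the condition
# rank_K M1 + rank_K M2 + rank_K M3 >= 2r + 2 of Kruskal's theorem.
from itertools import combinations
import numpy as np

def kruskal_rank(M, tol=1e-10):
    """Largest I such that every set of I rows of M is linearly independent."""
    r = M.shape[0]
    k = 0
    for I in range(1, r + 1):
        if all(np.linalg.matrix_rank(M[list(rows)], tol=tol) == I
               for rows in combinations(range(r), I)):
            k = I
        else:
            break
    return k

def kruskal_condition(M1, M2, M3):
    r = M1.shape[0]          # common number of latent classes
    return kruskal_rank(M1) + kruskal_rank(M2) + kruskal_rank(M3) >= 2*r + 2
```

For instance, a matrix with full row rank has Kruskal rank equal to its number of rows, while a matrix with two equal nonzero rows has Kruskal rank 1.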
However, in the particular case when a matrix $M$ of size $p \times q$ has rank $p$, it also has Kruskal rank $p$.

The fundamental algebraic result of Kruskal is the following.

Theorem 16 (Kruskal, 1976, 1977; see also Rhodes, 2010). Let $I_j = \operatorname{rank}_K M_j$. If

$$I_1 + I_2 + I_3 \ge 2r + 2, \qquad (4)$$

then $[\mathbf{v}; M_1, M_2, M_3]$ uniquely determines $\mathbf{v}$ and the $M_j$, up to simultaneous permutation of the rows. In other words, the set of parameters $\{(v_i, P(U_j = \cdot \mid V = i))\}$ is uniquely identified, up to label swapping, from the distribution of the random variables $(U_1, U_2, U_3)$.

In our applications, condition (4) will hold for generic choices of the $M_j$, provided the $\kappa_j$ are large enough to allow it. More precisely, Kruskal's condition on the sum of Kruskal ranks can be expressed through a Boolean combination of polynomial inequalities ($\neq$) involving matrix minors in the parameters. If we show there is even a single choice of parameters for which Kruskal's condition is satisfied, then the algebraic variety of parameters for which it does not hold is a proper subvariety (defined by negating the polynomial condition above, and so by a Boolean combination of equalities) of parameter space. As proper subvarieties are necessarily of Lebesgue measure zero, it follows that the Kruskal condition holds generically.

Our proof strategy for showing identifiability of certain random graph mixture models is to embed them in the model we just described. Applying Kruskal's result to the embedded model, we derive partial identifiability results on the embedded model, and then, using details of the embedding, relate these to the original model.

Embedding the random graph mixture model into Kruskal's context. Let $\kappa$ denote the cardinality of $\mathcal{X}$, in either the binary state case or the general finite state case. To place the random graph mixture model in the context of Theorem 16, we define a composite hidden variable and three composite observed variables that reflect the conditional independence structure integral to Kruskal's theorem. For some $n$ (to be determined), let
$V = (Z_1, Z_2, \ldots, Z_n)$ be the latent random variable, with state space $\{1, \ldots, Q\}^n$, which describes the states of all $n$ nodes collectively, and denote by $\mathbf{v}$ the corresponding vector of its probability distribution. Note that the entries of $\mathbf{v}$ are of the form $\pi_1^{n_1} \cdots \pi_Q^{n_Q}$ with $n_q \ge 0$ and $\sum n_q = n$.

The observed variables will correspond to three pairwise disjoint subsets $G_1, G_2, G_3$ of the complete set of edges $K_n$. By choosing the $G_i$ to have no edges in common, we ensure their conditional independence.

The construction of the sets of edges $G_i$ proceeds in two steps. We begin by considering a small complete graph, and an associated matrix: For a subset of $m$ nodes, we define a $Q^m \times \kappa^{\binom{m}{2}}$ matrix $A$, with rows indexed by assignments $I \in \{1, \ldots, Q\}^m$ of states to these $m$ nodes, columns indexed by the state space of the complete set of $\binom{m}{2}$ edges between them, and entries giving the probability of observing the specified states on all edges, conditioned on the specified node states. In the case $\kappa = 2$, it is helpful to note that each column index corresponds to a different graph on the $m$ nodes, composed of those edges assigned state 1. For larger $\kappa$ one may similarly associate to a column index a $\kappa$-coloring of the edges of the complete graph. We therefore refer to a column index as a configuration.

In the step we call the base case, we exhibit a value of $m$ such that this matrix $A$ generically has full row rank.

Then, an extension step builds on the base case, in order to construct a larger set of $n$ nodes which will be used in the application of Theorem 16. This is accomplished by means of (Allman et al., 2009, Lemma 16, and subsequent remark), which we paraphrase as follows.

Lemma 17. Suppose for the $Q$-node-state model, the number of nodes $m$ is such that the $Q^m \times \kappa^{\binom{m}{2}}$ matrix $A$ of probabilities of observing configurations of $K_m$, conditioned on node state assignments, has rank $Q^m$.
Then with $n = m^2$ there exist pairwise disjoint subsets $G_1, G_2, G_3$ of the complete set of edges $K_n$ such that for each $G_i$ the $Q^n \times \kappa^{|G_i|}$ matrix $M_i$ of probabilities of observing configurations of $G_i$, conditioned on node state assignments, has rank $Q^n$.

In our applications here, we only determine that $A$ has full row rank generically. Hence the Lemma only allows us to conclude that the $M_i$ have full row rank generically, and hence have Kruskal rank $Q^n$ generically.

We also note (for use in the proofs of Theorems 2 and 14) that in the construction of the Lemma, each subset $G_j$ is the union of $m$ complete sets of edges, each over $m$ different nodes, and thus contains $m\binom{m}{2}$ edges. In particular, if $m \ge 3$, then $G_i$ contains a complete graph on 3 nodes.

Application of Kruskal's theorem to the embedded model and conclusion. Next, with $\mathbf{v}, M_1, M_2, M_3$ defined by the embedding given in the previous paragraphs, we apply Kruskal's Theorem (Theorem 16) to the table $[\mathbf{v}; M_1, M_2, M_3]$. Knowledge of the distribution of the random graph mixture model over $n$ nodes implies knowledge of this 3-dimensional table. By our construction of the $M_i$, condition (4) is satisfied, since $3Q^n \ge 2Q^n + 2$. Thus the vector $\mathbf{v}$ and the matrices $M_1, M_2, M_3$ are uniquely determined, up to simultaneous permutation of the rows.

With these embedded parameters in hand, it is still necessary to recover the initial parameters of the random graph mixture model: the group proportions and the connectivity vectors. As this requires a rather detailed argument, we leave its exposition for a specific application.

Finally, we note that by discretizing continuous variables, this approach to establishing identifiability may also be used in the case of continuous connectivity distributions.

5.2. Proof of Theorem 2

This proof follows the strategy described in the previous section. We use the notation $p_{ql} = P(X_{ij} = 1 \mid Z_i = q, Z_j = l) = 1 - \bar p_{ql}$.

Base case.
The initial step consists in finding a value of $m$ such that the matrix $A$ of size $Q^m \times 2^{\binom{m}{2}}$, containing the probabilities of the configurations over these $m$ nodes, conditional on the hidden node states, generically has full row rank.

The condition of having full row rank can be expressed as the non-vanishing of at least one $Q^m \times Q^m$ minor of $A$. Composing the map sending $\{p_{ql}\} \to A$ with this collection of minors gives polynomials in the parameters of the model. To see that these polynomials are not identically zero, and thus are non-zero for generic parameters, it is enough to exhibit a single choice of the $\{p_{ql}\}$ for which the corresponding matrix $A$ has full row rank.

With this in mind, we choose to consider $\{p_{ql}\}$ of the form $p_{ql} = s_q s_l/(s_q s_l + t_q t_l)$, so $\bar p_{ql} = t_q t_l/(s_q s_l + t_q t_l)$, with $s_i, t_j > 0$. Since the denominators are constant across each row of $A$, and all entries of $A$ are monomials with total degree $\binom{m}{2}$ in $\{p_{ql}, \bar p_{ql}\}$, we may simplify the entries of $A$ by removing denominators, and consider the matrix (also called $A$) with entries in terms of $p_{ql} = s_q s_l$ and $\bar p_{ql} = t_q t_l$.

The rows of $A$ are indexed by the composite node states $I \in \{1, \ldots, Q\}^m$, while its columns are indexed by the edge configurations $\{0,1\}^{\binom{m}{2}}$. For any composite hidden state $I \in \{1, \ldots, Q\}^m$ and any vertex $v \in \{1, \ldots, m\}$, let $I(v) \in \{1, \ldots, Q\}$ denote the state of vertex $v$ in the composite state $I$. With our particular choice of the parameters $p_{ql}$, the $(I, (x_{ij})_{1 \le i < j \le m})$-entry of $A$ is

$$\prod_{1 \le i < j \le m} (s_{I(i)} s_{I(j)})^{x_{ij}} (t_{I(i)} t_{I(j)})^{1 - x_{ij}} = \prod_{1 \le v \le m} s_{I(v)}^{d_v}\, t_{I(v)}^{m-1-d_v},$$

where $d_v = \sum_{j \neq v} x_{vj}$ is the degree of vertex $v$ in the graph determined by the configuration. In particular, the entries of $A$ depend on a configuration only through its degree sequence. For example, the configuration with edges $(1,2)$ and $(3,4)$ in state 1, and that with edges $(1,3)$ and $(2,4)$ in state 1, both have degree sequence $(1,1,1,1)$. Thus if $m > 3$, there will be several identical columns in $A$. For any degree sequence $d = (d_v)_{1 \le v \le m}$ arising from an $m$-node graph, let $A_d$ denote a corresponding column of $A$.

Now, for each vertex $v \in \{1, \ldots, m\}$ and each $q \in \{1, \ldots, Q\}$, introduce an indeterminate $U_{v,q}$, and define the $Q^m$-entry row vector $U = \big(\prod_{1 \le v \le m} U_{v, I(v)}\big)_{I \in \{1, \ldots, Q\}^m}$.
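For small cases the genericity claim can be probed numerically. The sketch below (arbitrary positive values of ours for the $s_q, t_q$) builds the matrix $A$ of conditional configuration probabilities for $Q = 2$ groups and $m = 4$ nodes, and checks full row rank $Q^m = 16$:

```python
from itertools import combinations, product
import numpy as np

Q, m = 2, 4
rng = np.random.default_rng(1)
s, t = rng.random(Q) + 0.5, rng.random(Q) + 0.5
# the special parameter choice used in the base case
p = np.array([[s[q] * s[l] / (s[q] * s[l] + t[q] * t[l]) for l in range(Q)]
              for q in range(Q)])

edges = list(combinations(range(m), 2))
rows = list(product(range(Q), repeat=m))            # composite node states I
cols = list(product((0, 1), repeat=len(edges)))     # edge configurations

# entry: probability of the configuration given the node states
A = np.array([[np.prod([p[I[i], I[j]] if x else 1.0 - p[I[i], I[j]]
                        for (i, j), x in zip(edges, cfg)])
               for cfg in cols]
              for I in rows])

print(A.shape, np.linalg.matrix_rank(A))   # (16, 64); full row rank 16
```

Note that only 11 distinct degree sequences are graphical on 4 nodes, so many columns coincide; the rank is nevertheless full, as the degree-sequence argument below shows.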
For each degree sequence $d$, we have

$$U A_d = \sum_{I \in \{1,\ldots,Q\}^m}\ \prod_{1 \le v \le m} s_{I(v)}^{d_v}\, t_{I(v)}^{m-1-d_v}\, U_{v, I(v)} = \prod_{1 \le v \le m} \Big( s_1^{d_v} t_1^{m-1-d_v} U_{v,1} + \cdots + s_Q^{d_v} t_Q^{m-1-d_v} U_{v,Q} \Big).$$

To verify this, notice that each monomial $(s_{i_1}^{d_1} t_{i_1}^{m-1-d_1} U_{1,i_1}) \cdots (s_{i_m}^{d_m} t_{i_m}^{m-1-d_m} U_{m,i_m})$ obtained from multiplying out the product on the right corresponds to a choice of node states $i_v$ for the nodes $v$, and hence a vector $I = (i_1, \ldots, i_m)$. Moreover, we obtain one such summand for each $I$.

In order to prove that the matrix $A$ has full row rank, it is enough to exhibit $Q^m$ independent columns of $A$. Note, however, that independence of a set of columns $\{A_d\}$ is equivalent to the independence of the corresponding set of polynomial functions $\{U A_d\}$ in the indeterminates $\{U_{v,q}\}$.

Now for a set $\mathcal{D}$ of degree sequences, to prove that the polynomials $\{U A_d\}_{d \in \mathcal{D}}$ are independent, we assume that there exist scalars $a_d$ such that

$$\sum_{d \in \mathcal{D}} a_d\, U A_d \equiv 0, \qquad (5)$$

and show that necessarily all $a_d = 0$. To this aim, we prove the following lemma.

Lemma 18. Suppose $Q \le m$. Let $\mathcal{D}$ be a set of degree sequences such that for each node $v \in \{1, \ldots, m\}$, the set of degrees $\{d_v \mid d \in \mathcal{D}\}$ has cardinality at most $Q$. Then for generic values of $s_i, t_j$, for each $v$ and each $d^\star \in \{d_v \mid d \in \mathcal{D}\}$, there exist values of the indeterminates $\{U_{v,q}\}_{1 \le q \le Q}$ that annihilate all the polynomials $U A_d$ for $d \in \mathcal{D}$, except those for which $d_v = d^\star$.

Proof. Fix a node $v$ and let $\{d^1, \ldots, d^Q\}$ be any set of $Q$ distinct integers with $\{d_v \mid d \in \mathcal{D}\} \subseteq \{d^1, \ldots, d^Q\} \subseteq \{0, 1, \ldots, m-1\}$. Let $M$ be the $Q \times Q$ matrix with $i$th row $(s_1^{d^i} t_1^{m-1-d^i}, \ldots, s_Q^{d^i} t_Q^{m-1-d^i})$.
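Generic nonsingularity of this matrix $M$, argued via a Vandermonde specialization in the next paragraph, is easy to check numerically for arbitrary choices (ours) of distinct exponents and generic positive $s_q, t_q$:

```python
import numpy as np

rng = np.random.default_rng(2)
Q, m = 3, 7
d = np.array([0, 2, 5])          # Q distinct exponents in {0, ..., m - 1}
s = rng.random(Q) + 0.5          # generic positive values
t = rng.random(Q) + 0.5

# M[i, q] = s_q^{d^i} * t_q^{m-1-d^i}; a scaled generalized Vandermonde matrix
M = (s[None, :] ** d[:, None]) * (t[None, :] ** (m - 1 - d[:, None]))
print(np.linalg.matrix_rank(M))  # 3: M is invertible, so M x = e_k is solvable
```

Factoring $t_q^{m-1}$ out of column $q$ leaves entries $(s_q/t_q)^{d^i}$, a generalized Vandermonde matrix in the distinct ratios $s_q/t_q$, which is nonsingular.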
Since all the integers $d^i$ are different, the matrix $M$ has full row rank for generic choices of $s_i, t_j$. (One way to see this is to consider an $m \times m$ Vandermonde matrix, with $(k,l)$-entry $(u_l)^k$. Choosing distinct values of $u_l$, this has full rank, and thus the $Q \times m$ submatrix composed of rows with indices $\{d^i\}$ has rank $Q$. But then $Q$ of the columns can be chosen so that the $Q \times Q$ submatrix has full rank. Letting the $s_i$ be the values of $u_l$ in these columns, and $t_j = 1$, gives one choice for which the matrix $M$ has full rank.)

Note $d^\star = d^k$ for some $k$, and let $\mathbf{e}_k$ be the $Q$-entry vector of all zeros, except for a 1 in the $k$th position. Then for generic $s_i, t_j$, the equation

$$M\, (U_{v,1}, \ldots, U_{v,Q})^T = \mathbf{e}_k$$

admits a unique solution, one that corresponds to the above-mentioned choice of indeterminates $\{U_{v,q}\}_{1 \le q \le Q}$.

Now consider the following collection

$$\mathcal{D} = \Big\{ (d_1, \ldots, d_m) \ \Big|\ d_v \in \{1, 2, \ldots, Q\} \text{ for } v \le m-1; \text{ if } \sum_{v=1}^{m-1} d_v \text{ is even, then } d_m \in \{0, 2, 4, \ldots, 2Q-2\}, \text{ otherwise } d_m \in \{1, 3, 5, \ldots, 2Q-1\} \Big\}.$$

Note that $\mathcal{D}$ has $Q^m$ elements and satisfies the assumption of Lemma 18 on the number of different values per coordinate. Moreover, if we establish, as we do below, that its elements are realizable as degree sequences of graphs over $m$ nodes, then by choosing one column of $A$ associated to each degree sequence in $\mathcal{D}$, we obtain a collection of $Q^m$ different columns of $A$. These columns are independent, since for each sequence $d^\star \in \mathcal{D}$, by Lemma 18 we can choose values of the indeterminates $\{U_{v,q}\}_{1 \le v \le m,\ 1 \le q \le Q}$ such that all polynomials $U A_d$ vanish except $U A_{d^\star}$, leading to $a_{d^\star} = 0$ in equation (5).

That each sequence $d \in \mathcal{D}$ is realizable as a degree sequence of a graph over $m$ nodes follows from a result of Erdős and Gallai (1961) (see also Berge, 1976, Chapter 6, Theorem 6). Reordering the entries of $d$ so that
$d_1 \ge d_2 \ge \cdots \ge d_m$, a necessary and sufficient condition for a sequence to be realizable by such a graph is that for $1 \le k \le m-1$,

$$\sum_{v=1}^{k} d_v \le k(k-1) + \sum_{v=k+1}^{m} \min\{k, d_v\}. \qquad (6)$$

From the definition of $d \in \mathcal{D}$, with coordinates reordered, it is easy to see that for any $1 \le k \le m-1$, we have

$$\sum_{v=1}^{k} d_v \le (k-1)Q + (2Q-1) \quad \text{and} \quad \sum_{v=k+1}^{m} \min\{k, d_v\} \ge m - k - 1.$$

Thus inequality (6) holds for all $1 \le k \le m-1$, provided $-k^2 + (Q+2)k + Q \le m$. But for $m$ sufficiently large,

$$\max_{1 \le k \le m-1} \{-k^2 + (Q+2)k\} = \begin{cases} \left(\frac{Q+2}{2}\right)^2 & \text{if } Q \text{ is even}, \\[2pt] \frac{(Q+1)(Q+3)}{4} & \text{if } Q \text{ is odd}. \end{cases}$$

Thus, inequality (6) is satisfied as soon as

$$\begin{cases} m \ge Q + \left(\frac{Q+2}{2}\right)^2 & \text{if } Q \text{ is even}, \\[2pt] m \ge Q + \frac{(Q+1)(Q+3)}{4} & \text{if } Q \text{ is odd}. \end{cases}$$

This concludes the proof of the base case.

The extension step explained in Section 5.1 then applies, so that with $n = m^2$, Kruskal's Theorem may be applied to identify, up to simultaneous row permutation, $\mathbf{v}$, $M_1$, $M_2$, and $M_3$ as defined in that section.

Conclusion. The entries of $\mathbf{v}$ obtained via Kruskal's theorem applied to the embedded model are of the form $\pi_1^{n_1} \cdots \pi_Q^{n_Q}$ with $\sum n_q = n$, while the entries of the $M_i$ contain information on the $p_{ql}$. Although the ordering of the rows of the $M_i$ is arbitrary, crucially we do know how the rows of $M_i$ are paired with the entries of $\mathbf{v}$.

By focusing on one of the matrices, say $M_1$, and adding appropriate columns to marginalize to a single edge variable (e.g., all columns for configurations with $x_{12} = 1$), we recover the set of values $\{p_{ql}\}_{1 \le q \le l \le Q}$, but without order. However, if row $k$ of $M_1$ corresponds to the unknown node states $I$, then performing such marginalizations for each of the 3 edges of a complete graph $C$ on 3 nodes contained in $G_1$ recovers the set

$$R_k = \{\, p_{ql} \mid \text{for some edge } (v,w) \in C,\ \{I(v), I(w)\} = \{q, l\} \,\}.$$

By considering the cardinalities of the sets $R_k$ in the generic case of all $p_{ql}$ distinct, we can now determine individual parameters.

Consider first those $k$ for which $R_k$ has one element.
There are exactly $Q$ of these, arising from all 3 nodes being in the same group. Thus for such $k$, $R_k = \{p_{qq}\}$ and $v_k = \pi_q^n$. Choosing an arbitrary labeling, we have determined all $\pi_q$ and $p_{qq}$.

Next consider those $k$ for which $R_k$ has two elements. These arise from 2 nodes being in the same group, with the other node in a different group, so $R_k = \{p_{qq}, p_{ql}\}$ for some $l \neq q$. However, having already determined the $p_{qq}$, and since generically the $p_{ql}$ are distinct, we can find exactly two such $k_1$ and $k_2$ of the form $R_{k_1} = \{p_{qq}, p_{ql}\}$ and $R_{k_2} = \{p_{ll}, p_{ql}\}$. Thus, we can also determine $p_{ql}$ for $q \neq l$.

Finally, note that all generic aspects of this argument, in the base case and the requirement that the parameters $p_{ql}$ be distinct, concern only the $p_{ql}$. Thus if the group proportions $\pi_q$ are fixed to any specific values, the theorem remains valid.

5.3. Proofs relying on moment equations

Proof of Proposition 3. Focusing on $Q+1$ nodes, let $Z = (Z_1, \ldots, Z_{Q+1})$ denote the composite node random variable, and $z = (z_1, \ldots, z_{Q+1})$ any realization of $Z$. Note that

$$U_Q(X) = \sum_{z \in \{1,\ldots,Q\}^{Q+1}}\ \prod_{1 \le k \le Q+1} \pi_{z_k}\, E \prod_{1 \le i < j \le Q+1} \cdots$$

Since $\alpha$ is a real root of the cubic polynomial $U(X)$, to show $\alpha$ is uniquely identifiable, it is enough to show that $\frac{d}{dX} U(X) \ge 0$. But

$$\frac{d}{dX} U(X) = 3X^2 - 6 m_1 X + 3 m_2 = 3\big( (X - m_1)^2 + (m_2 - m_1^2) \big).$$

But $m_2 - m_1^2 \ge 0$, since

$$m_2 = E(X_{ij} X_{ik}) = E[E(X_{ij} \mid Z_i)\, E(X_{ik} \mid Z_i)] = E[E(X_{ij} \mid Z_i)^2] \ge [E(E(X_{ij} \mid Z_i))]^2 = m_1^2.$$

With $\alpha$ identified, since $\alpha \neq \beta$, we may uniquely recover $\beta$ as the root of the linear polynomial $V(\alpha, Y)$ with nonzero leading coefficient.

Proof of Theorem 6. Using equation (1) to eliminate $\alpha$ from equations (3) and (2), respectively, gives two equations

$$R(\beta) = a\beta^3 + b\beta^2 + c\beta + d = 0, \qquad S(\beta) = A\beta^2 + B\beta + C = 0,$$

where

$$a = -s_2 + 3 s_1 s_2 - 2 s_1^3, \qquad b = 3 m_1 (s_2 - 2 s_1 s_2 + s_1^3), \qquad c = 3 m_1^2 s_2 (s_1 - 1), \qquad d = m_1^3 s_2 - m_3 s_1^3,$$

and

$$A = s_2 - s_1^2, \qquad B = -2 m_1 (s_2 - s_1^2), \qquad C = m_1^2 s_2 - m_2 s_1^2.$$
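These coefficient formulas can be sanity-checked numerically. The sketch below uses the affiliation-model moments $m_1 = s_1\alpha + (1-s_1)\beta$, $m_2 = s_2\alpha^2 + 2(s_1-s_2)\alpha\beta + (1-2s_1+s_2)\beta^2$ and $m_3 = s_2\alpha^3 + 3(s_1-s_2)\alpha\beta^2 + (1-3s_1+2s_2)\beta^3$, with $s_1 = \sum_q \pi_q^2$ and $s_2 = \sum_q \pi_q^3$ (our reading of equations (1)-(3)); the parameter values are arbitrary:

```python
import numpy as np

pi = np.array([0.3, 0.7])        # arbitrary non-uniform group priors
alpha, beta = 0.8, 0.3
s1, s2 = (pi**2).sum(), (pi**3).sum()

# moments of one edge, two adjacent edges, and a triangle
m1 = s1 * alpha + (1 - s1) * beta
m2 = s2 * alpha**2 + 2 * (s1 - s2) * alpha * beta + (1 - 2 * s1 + s2) * beta**2
m3 = (s2 * alpha**3 + 3 * (s1 - s2) * alpha * beta**2
      + (1 - 3 * s1 + 2 * s2) * beta**3)

a = -s2 + 3 * s1 * s2 - 2 * s1**3
b = 3 * m1 * (s2 - 2 * s1 * s2 + s1**3)
c = 3 * m1**2 * s2 * (s1 - 1)
d = m1**3 * s2 - m3 * s1**3
A = s2 - s1**2
B = -2 * m1 * A
C = m1**2 * s2 - m2 * s1**2

R = a * beta**3 + b * beta**2 + c * beta + d
S = A * beta**2 + B * beta + C
print(abs(R) < 1e-12, abs(S) < 1e-12)   # True True: beta is a common root
```

As expected, the true $\beta$ is a common root of both polynomials.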
To understand the degrees of these polynomials, we need the following.

Lemma 19. Suppose $\pi \in [0,1]^Q$ with $\sum_{q=1}^{Q} \pi_q = 1$.
i) If $\pi_q > 0$ for at least two values of $q$, then $a \neq 0$.
ii) $A = 0$ if, and only if, $\pi$ is uniform on its support.

Proof. To establish claim i), first observe that $0 < s_1 < 1$. Moreover, since $s_2^2 \le s_1 s_3$ by the Cauchy-Schwarz inequality (with $s_3 = \sum_q \pi_q^4$), and $s_3 < s_1^2$ by comparing terms (since at least two $\pi_q > 0$), we have $s_2 < s_1^{3/2}$. If $-s_2 + 3 s_1 s_2 - 2 s_1^3 = 0$, then

$$s_1^{3/2} > s_2 = \frac{2 s_1^3}{3 s_1 - 1},$$

where the denominator must be positive. Thus $1 > 2 s_1^{3/2}/(3 s_1 - 1)$, so $0 > 2 s_1^{3/2} - 3 s_1 + 1$. However, the function $x \mapsto 2 x^{3/2} - 3x + 1$ is positive on $(0,1)$, a contradiction.

As for claim ii), we have $A = s_2 - s_1^2$, and by the Cauchy-Schwarz inequality, $s_1^2 = \big(\sum_q \pi_q^{1/2} \pi_q^{3/2}\big)^2 \le s_2$, with equality if, and only if, $(\pi_1^{1/2}, \ldots, \pi_Q^{1/2}) = \lambda (\pi_1^{3/2}, \ldots, \pi_Q^{3/2})$ for some value $\lambda \in \mathbb{R}$. This can only occur if, on its support, $\pi$ is uniform.

Returning to the proof of Theorem 6, if $\pi$ is not uniform, we thus have $A \neq 0$, and dividing the polynomial $R(\beta)$ by $S(\beta)$ produces a linear remainder $T(\beta)$, which is calculated to be

$$T(\beta) = \frac{s_1^2}{s_1^2 - s_2} \Big[ (m_2 - m_1^2)(2 s_1^3 - 3 s_1 s_2 + s_2)\, \beta + (s_1 s_2 - s_1^3)\, m_3 + (s_1^3 - s_2)\, m_1 m_2 + (s_2 - s_1 s_2)\, m_1^3 \Big].$$

Since any common zero of $R(\beta)$ and $S(\beta)$ must also be a zero of $T(\beta)$, we can recover the parameters $\beta$ and $\alpha$ via the rational formulas

$$\beta = \frac{(s_1^3 - s_1 s_2)\, m_3 + (s_2 - s_1^3)\, m_1 m_2 + (s_1 s_2 - s_2)\, m_1^3}{(m_2 - m_1^2)(2 s_1^3 - 3 s_1 s_2 + s_2)}, \qquad (7)$$

$$\alpha = \frac{m_1 + (s_1 - 1)\beta}{s_1}. \qquad (8)$$

Note that a calculation shows

$$m_2 - m_1^2 = (\alpha - \beta)^2 (s_2 - s_1^2), \qquad (9)$$

which, since $A \neq 0$, is only zero in the trivial case of $\alpha = \beta$. Otherwise, since $2 s_1^3 - 3 s_1 s_2 + s_2 = -a \neq 0$ by part i) of Lemma 19, the formulas (7) and (8) are valid.

Equation (9), together with part ii) of Lemma 19, further shows that if $m_2 \neq m_1^2$, then $\pi$ is not uniform. If $m_2 = m_1^2$, then $\pi$ is uniform, and $S(\beta)$ is identically zero.
However, in this case the coefficients of $\tilde R(\beta) = \frac{Q^3}{1-Q} R(\beta) = \beta^3 + \tilde b \beta^2 + \tilde c \beta + \tilde d$ simplify to

$$\tilde b = -3 m_1, \qquad \tilde c = 3 m_1^2, \qquad \tilde d = \frac{m_3 - Q m_1^3}{Q - 1} = -m_1^3 + \frac{m_3 - m_1^3}{Q - 1}.$$

Thus

$$\tilde R(\beta) = (\beta - m_1)^3 + \frac{m_3 - m_1^3}{Q - 1},$$

which has a unique real root

$$\beta = m_1 + \left( \frac{m_1^3 - m_3}{Q - 1} \right)^{1/3}.$$

The parameter $\alpha$ can then be found by formula (8).

Proof of Proposition 9. First, note that the distribution of $K_n$ may be parameterized using the elementary symmetric polynomials $\sigma_i$ evaluated at the $\{\pi_q\}_{1 \le q \le Q}$, instead of the values $\{\pi_q\}_{1 \le q \le Q}$ themselves. Indeed, the affiliation model distribution only involves the $\pi_q$ through the symmetric expressions

$$\sum_{q_1, \ldots, q_s,\ q_i \neq q_j} \pi_{q_1}^{i_1} \cdots \pi_{q_s}^{i_s},$$

with $s \le Q$ and $\sum_{k \le s} i_k = n$, and these sums may be expressed as polynomials in the $\{\sigma_i(\pi_1, \ldots, \pi_Q)\}_{1 \le i \le n}$. Thus for identifiability of the $\{\pi_q\}$ from the distribution of $K_n$, it is necessary that the $\{\pi_q\}$ be identifiable from the $\{\sigma_i(\pi_1, \ldots, \pi_Q)\}_{1 \le i \le n}$. Note also that $\sigma_1(\pi_1, \ldots, \pi_Q) = \sum_{q=1}^{Q} \pi_q = 1$ carries no information on the $\pi_q$ that is not already known. Now if $n < Q$, identifying the $Q - 1$ free parameters among the $\pi_q$ from the $n - 1$ values $\sigma_2, \ldots, \sigma_n$ is impossible.

Lemma 20. For the random graph affiliation model with $Q$ groups, binary edge state variables, uniform group priors, and connectivities $\alpha \neq \beta$, the moment inequality $m_4 > m_2^2$ holds, where $m_4 = E(X_{12} X_{13} X_{42} X_{43})$.

Proof. Note

$$m_4 = E[E(X_{12} X_{13} \mid Z_2, Z_3)\, E(X_{42} X_{43} \mid Z_2, Z_3)] = E[E(X_{12} X_{13} \mid Z_2, Z_3)^2] \ge (E[E(X_{12} X_{13} \mid Z_2, Z_3)])^2 = m_2^2.$$

However, equality occurs above only if $E(X_{12} X_{13} \mid Z_2, Z_3)$ is constant. But

$$E(X_{12} X_{13} \mid Z_2 = i = Z_3) = \frac{1}{Q} \alpha^2 + \frac{Q-1}{Q} \beta^2, \qquad E(X_{12} X_{13} \mid Z_2 = i \neq j = Z_3) = \frac{2}{Q} \alpha \beta + \frac{Q-2}{Q} \beta^2,$$

so the difference of these expectations is $(\alpha - \beta)^2 / Q \neq 0$. Thus $m_4 > m_2^2$.

A similar argument that $m_2 \ge m_1^2$ was given in the proof of Theorem 4, so the claim is established.
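In the uniform-prior case the closed-form recovery above is easy to check numerically. The sketch below uses our reading of the moment equations (1) and (3), namely $m_1 = s_1\alpha + (1-s_1)\beta$ and $m_3 = s_2\alpha^3 + 3(s_1-s_2)\alpha\beta^2 + (1-3s_1+2s_2)\beta^3$ with $s_1 = 1/Q$, $s_2 = 1/Q^2$, and arbitrary test values:

```python
import numpy as np

Q, alpha, beta = 2, 0.8, 0.3            # arbitrary test values
s1, s2 = 1 / Q, 1 / Q**2                # sums of pi_q^2 and pi_q^3, uniform pi

m1 = s1 * alpha + (1 - s1) * beta                        # E[X_ij]
m3 = (s2 * alpha**3 + 3 * (s1 - s2) * alpha * beta**2
      + (1 - 3 * s1 + 2 * s2) * beta**3)                 # E[X_ij X_ik X_jk]

# unique real root of (beta - m1)^3 + (m3 - m1^3)/(Q - 1)
beta_hat = m1 + np.cbrt((m1**3 - m3) / (Q - 1))
alpha_hat = (m1 + (s1 - 1) * beta_hat) / s1              # formula (8)
print(round(beta_hat, 10), round(alpha_hat, 10))         # 0.3 0.8
```

Both connectivity parameters are recovered exactly (up to rounding) from the two moments $m_1$ and $m_3$ alone.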
With $\bar p_{q\ell} = 1 - p_{q\ell}$, the distribution of $(X_{ij}, X_{ik}, X_{jk})$ is given by the mixture

$$\sum_{1 \le q, \ell, m \le Q} \pi_q \pi_\ell \pi_m\, [\bar p_{q\ell}\, \delta_0(X_{ij}) + p_{q\ell} F(X_{ij}, \theta_{q\ell})] \times [\bar p_{qm}\, \delta_0(X_{ik}) + p_{qm} F(X_{ik}, \theta_{qm})] \times [\bar p_{\ell m}\, \delta_0(X_{jk}) + p_{\ell m} F(X_{jk}, \theta_{\ell m})]. \qquad (10)$$

Since the distributions $F(\cdot, \theta)$ have no point masses at 0 by Assumption 2, the family $\mathcal{M} \cup \{\delta_0\}$ has identifiable parameters for finite mixtures, so Theorem 1 of Teicher (1967) applies to it. Thus, multiplying out the terms of the mixture in (10) to view it as a mixture of products from $\mathcal{M} \cup \{\delta_0\}$, and noting that by Assumption 1 certain of the components arise from unique choices of $q, \ell, m$, we can identify the terms of the form

$$\pi_q \pi_\ell \pi_m\, p_{q\ell}\, p_{qm}\, p_{\ell m}\, F(X_{ij}, \theta_{q\ell})\, F(X_{ik}, \theta_{qm})\, F(X_{jk}, \theta_{\ell m}),$$

and the vectors in

$$\mathcal{C} = \{ (\pi_q \pi_\ell \pi_m\, p_{q\ell}\, p_{qm}\, p_{\ell m};\ \theta_{q\ell}, \theta_{qm}, \theta_{\ell m}) \mid 1 \le q, \ell, m \le Q \},$$

but only as an unordered set. But by Assumption 1, there are only $Q$ vectors in this set for which the last entries $(\theta_{q\ell}, \theta_{qm}, \theta_{\ell m})$ are all equal. Indeed, these entries are of the form $(\theta_{qq}, \theta_{qq}, \theta_{qq})$ for some $1 \le q \le Q$, since the case where these entries would be of the form $(\theta_{q\ell}, \theta_{q\ell}, \theta_{q\ell})$ for some $q \neq \ell$ is not possible. Thus the $\theta_{qq}$ for $1 \le q \le Q$ may be identified, as well as the corresponding weights $(\pi_q p_{qq})^3$, or equivalently the values $\pi_q p_{qq}$.

Now, among the vectors in $\mathcal{C}$, exactly $3Q(Q-1)$ of them have exactly two of the last three entries equal. These entries are, up to order, of the form $(\theta_{qq}, \theta_{q\ell}, \theta_{q\ell})$, for any $q \neq \ell$.
Thus we obtain the set $\{ (\pi_q^2 \pi_\ell\, p_{q\ell}^2\, p_{qq};\ \theta_{qq}, \theta_{q\ell}, \theta_{q\ell}) \}_{1 \le q \neq \ell \le Q}$, without regard to order. Since we already identified the pairs $(\pi_q p_{qq}, \theta_{qq})$, we may take the ratio between the weights $\pi_q^2 \pi_\ell\, p_{q\ell}^2\, p_{qq}$ and $\pi_q p_{qq}$ to recover the values $\pi_q \pi_\ell\, p_{q\ell}^2$. Thus we identify the set $\{ (\pi_q \pi_\ell\, p_{q\ell}^2;\ \theta_{qq}, \theta_{q\ell}, \theta_{q\ell}) \}_{1 \le q \neq \ell \le Q}$.

Among these vectors, we can match the ones whose two last entries are equal, namely those of the form $(\pi_q \pi_\ell\, p_{q\ell}^2;\ \theta_{qq}, \theta_{q\ell}, \theta_{q\ell})$ with $(\pi_q \pi_\ell\, p_{q\ell}^2;\ \theta_{\ell\ell}, \theta_{q\ell}, \theta_{q\ell})$. This enables us to recover the values $\theta_{q\ell}$, for $1 \le q, \ell \le Q$.

By marginalizing the distribution of $(X_{ij}, X_{ik}, X_{jk})$, we also have the distribution of a single edge variable $X_{ij}$,

$$\sum_{1 \le q, \ell \le Q} \pi_q \pi_\ell\, [\bar p_{q\ell}\, \delta_0(X_{ij}) + p_{q\ell} F(X_{ij}, \theta_{q\ell})], \qquad (11)$$

and thus by our hypotheses can also identify $\{ (\pi_q \pi_\ell\, p_{q\ell},\ \theta_{q\ell}) \}_{1 \le q \le \ell \le Q}$, without order. But as the $\theta_{q\ell}$ have already been identified, we may use this to match $\pi_q \pi_\ell\, p_{q\ell}$ with $\pi_q \pi_\ell\, p_{q\ell}^2$, and thus recover $p_{q\ell}$ from the ratio. From $\pi_q p_{qq}$ and $p_{qq}$ we can then recover $\pi_q$. Thus, all parameters of the model are identified, up to permutation of the group labels.

Proof of Theorem 13. From the distribution of $K_3$, we can distinguish $(\alpha, \theta_{\mathrm{in}})$ from $(\beta, \theta_{\mathrm{out}})$ as follows: The distribution of $K_3$ is the mixture of either 4 (when $Q = 2$) or 5 (when $Q \ge 3$) different 3-dimensional components.
Since the distributions $F(\cdot, \theta)$ do not have point masses at 0 by Assumption 2, we can identify from this mixture the part with no such Dirac masses in it, which is the mixture

$$\alpha^3 \Big( \sum_{q=1}^{Q} \pi_q^3 \Big) F(\cdot, \theta_{\mathrm{in}}) \otimes F(\cdot, \theta_{\mathrm{in}}) \otimes F(\cdot, \theta_{\mathrm{in}}) + \alpha \beta^2 \Big( \sum_{1 \le q \neq \ell \le Q} \pi_q^2 \pi_\ell \Big) F(\cdot, \theta_{\mathrm{in}}) \otimes F(\cdot, \theta_{\mathrm{out}}) \otimes F(\cdot, \theta_{\mathrm{out}})$$
$$+\ \alpha \beta^2 \Big( \sum_{1 \le q \neq \ell \le Q} \pi_q^2 \pi_\ell \Big) F(\cdot, \theta_{\mathrm{out}}) \otimes F(\cdot, \theta_{\mathrm{in}}) \otimes F(\cdot, \theta_{\mathrm{out}}) + \alpha \beta^2 \Big( \sum_{1 \le q \neq \ell \le Q} \pi_q^2 \pi_\ell \Big) F(\cdot, \theta_{\mathrm{out}}) \otimes F(\cdot, \theta_{\mathrm{out}}) \otimes F(\cdot, \theta_{\mathrm{in}})$$
$$+\ \beta^3 \Big( \sum_{q, \ell, m\ \mathrm{distinct}} \pi_q \pi_\ell \pi_m \Big) F(\cdot, \theta_{\mathrm{out}}) \otimes F(\cdot, \theta_{\mathrm{out}}) \otimes F(\cdot, \theta_{\mathrm{out}}),$$

where the last term appears only when $Q \ge 3$. The first and last terms are the only ones with the same distribution $F$ in each coordinate. The three remaining terms have two coordinates which are equal, involving $\theta_{\mathrm{out}}$, and one different, involving $\theta_{\mathrm{in}}$. Thus we can distinguish between $\theta_{\mathrm{in}}$ and $\theta_{\mathrm{out}}$.

We may also determine $\alpha^3 \big( \sum_q \pi_q^3 \big)$ as the weight of $F(\cdot, \theta_{\mathrm{in}}) \otimes F(\cdot, \theta_{\mathrm{in}}) \otimes F(\cdot, \theta_{\mathrm{in}})$. Similarly, from the $\delta_0 \otimes F(\cdot, \theta_{\mathrm{in}}) \otimes F(\cdot, \theta_{\mathrm{in}})$ term in the full mixture, we may recover the weight $(1 - \alpha)\, \alpha^2 \big( \sum_q \pi_q^3 \big)$. Summing these two weights yields $\alpha^2 \big( \sum_q \pi_q^3 \big)$, and then dividing the first by this, we recover $\alpha$.

The parameter $\beta$ is similarly recovered from the weights of $F(\cdot, \theta_{\mathrm{out}}) \otimes F(\cdot, \theta_{\mathrm{out}}) \otimes F(\cdot, \theta_{\mathrm{in}})$ and $\delta_0 \otimes F(\cdot, \theta_{\mathrm{out}}) \otimes F(\cdot, \theta_{\mathrm{in}})$.

Next we consider the distribution of $K_n$ for various $n$. This is a mixture of many different $\binom{n}{2}$-dimensional components. As above, we can identify, up to label swapping, the components with no $\delta_0$ factors in this mixture. But as we already know the value of $\theta_{\mathrm{in}}$, we can identify the term $\bigotimes_{1 \le i < j \le n} F(\cdot, \theta_{\mathrm{in}})$ …

Base case. We consider a subset $E$ of the set of all edges over $m$ vertices, with $m$ and $E$ to be chosen later.
Let $A$ be the $Q^m \times \kappa^{|E|}$ matrix containing the probabilities of the clumped random variable $Y = (X_e)_{e \in E}$, with state space $\{1, \ldots, \kappa\}^{|E|}$, conditional on the hidden states of the $m$ vertices. Let $I \in \{1, \ldots, Q\}^m$ be a vector specifying particular states of all the node variables. For each edge $e \in E$, the endpoints are in some set of hidden states $\{q, l\}$, which we denote by $I(e)$. The $(I, (x_e)_{e \in E})$-entry of the matrix $A$ is then given by

$$\prod_{e \in E}\ \prod_{k=1}^{\kappa} \big( p_{I(e)}(k) \big)^{\mathbf{1}_{x_e = k}},$$

where $\mathbf{1}_A$ is the indicator function for a set $A$.

For each edge $e$ in the graph, we introduce $\kappa$ indeterminates $t_{e,1}, \ldots, t_{e,\kappa}$. We create a $\kappa^{|E|}$-element column vector $\mathbf{t}$, indexed by the states of the clumped variable $Y$, whose $(x_e)_{e \in E}$-th entry is given by

$$\prod_{e \in E}\ \prod_{k=1}^{\kappa} t_{e,k}^{\mathbf{1}_{x_e = k}}.$$

Then the $I$th entry of the $Q^m$-entry vector $A\mathbf{t}$ is the polynomial function

$$f_I = \sum_{(x_e)_{e \in E}}\ \prod_{e \in E}\ \prod_{k=1}^{\kappa} \{ p_{I(e)}(k)\, t_{e,k} \}^{\mathbf{1}_{x_e = k}} = \prod_{e \in E} \big( p_{I(e)}(1)\, t_{e,1} + \cdots + p_{I(e)}(\kappa)\, t_{e,\kappa} \big).$$

Independence of the rows of $A$ is equivalent to the independence of the polynomials $\{f_I\}_{I \in \{1, \ldots, Q\}^m}$. Thus, suppose that we have

$$\sum_{I} a_I f_I \equiv 0, \qquad (12)$$

and let us show then that every $a_I$ must be 0.

For a specific $e \in E$, and any choice $\{q, l\}$ with $1 \le q \le l \le Q$, one can choose a point $\mathbf{t}_{e, \{q,l\}} = (t_{e,1}, \ldots, t_{e,\kappa}) \in \mathbb{R}^{\kappa}$ in the zero set of all the polynomial functions $f_I$ in (12), except those with $I(e) = \{q, l\}$. To see this, let $M$ be the $\binom{Q+1}{2} \times \kappa$ matrix whose $\{q,l\}$th row is given by the vector $\mathbf{p}_{ql} = (p_{ql}(1), \ldots, p_{ql}(\kappa))$. $M$ has full row rank, since its rows are independent by assumption. Thus there is a solution $\mathbf{t}_{e, \{q,l\}}$ to

$$M \mathbf{t}_{e, \{q,l\}} = \mathbf{e}_{\{q,l\}},$$

where $\mathbf{e}_{\{q,l\}}$ is the vector of size $\binom{Q+1}{2}$ with zero entries, except the $\{q,l\}$th, which is equal to 1.
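Finding such a selector point is just a linear solve. A sketch (with $Q = 2$, $\kappa = 3$, and generic random vectors $\mathbf{p}_{ql}$ of our choosing):

```python
import numpy as np

Q, kappa = 2, 3
pairs = [(1, 1), (1, 2), (2, 2)]       # row indices {q, l} with q <= l
rng = np.random.default_rng(3)
P = rng.random((len(pairs), kappa))    # rows p_ql: generic, hence independent

for k, pair in enumerate(pairs):
    e = np.zeros(len(pairs))
    e[k] = 1.0
    t, *_ = np.linalg.lstsq(P, e, rcond=None)
    # t zeroes every linear form <p_ql, .> except the {q, l}th one
    assert np.allclose(P @ t, e)
print("selector points t_{e,{q,l}} found for all pairs")
```

Each solution $\mathbf{t}$ makes the edge factor $p_{I(e)}(1) t_{e,1} + \cdots + p_{I(e)}(\kappa) t_{e,\kappa}$ vanish for every pair of endpoint states except the chosen one.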
The independence assumption also implies $\kappa \ge \binom{Q+1}{2}$. Note that in this construction we have only specified group assignments to two nodes, up to node permutation. Thus if the $\{q,l\}$ row of $M$ is related to an edge $e = (i,j)$ because $I(e) = \{q,l\}$, we may have that either $i$ is in state $q$ and $j$ is in state $l$, or $i$ is in state $l$ and $j$ is in state $q$.

By evaluating the $f_I$ at $\mathbf{t}_{e, \{q,l\}}$ for many edges $e$ and choices of node states $\{q,l\}$, we can annihilate all the polynomials $f_I$ except those satisfying specific constraints on the node states. More precisely, we can make vanish all the $f_I$ except those for which $I$ satisfies the condition that for some subset of edges $E' \subseteq E$ and some sequence of unordered node assignments $(\{q_e, l_e\})_{e \in E'}$ we have

$$I \in \bigcap_{e \in E'} S(e; \{q_e, l_e\}), \qquad (13)$$

where $S(e; \{q_e, l_e\}) = \{ I \in \{1, \ldots, Q\}^m \mid I(e) = \{q_e, l_e\} \}$.

To conclude that each $a_I = 0$ in equation (12), it is enough to construct for every $I \in \{1, \ldots, Q\}^m$ a set as in (13) containing only $I$. In fact, this can be achieved with only $m = 3$ vertices and the full set of edges $E = \{(1,2), (1,3), (2,3)\}$. Indeed, up to permutation of the nodes and of the labels of the groups, $I$ can take only three different values, namely $(1,1,1)$, $(1,1,2)$ and $(1,2,3)$. Choosing $E' = \{(1,2), (2,3)\}$, we get

$$\{(1,1,1)\} = S((1,2); \{1,1\}) \cap S((2,3); \{1,1\}),$$
$$\{(1,1,2)\} = S((1,2); \{1,1\}) \cap S((2,3); \{1,2\}),$$
$$\{(1,2,3)\} = S((1,2); \{1,2\}) \cap S((2,3); \{2,3\}).$$

Thus, we have proved the following lemma.

Lemma 21. With $E$ the complete set of edges over $m = 3$ vertices, the $Q^3 \times \kappa^3$ matrix $A$ containing the probabilities of the clumped variable $Y = (X_e)_{e \in E}$, conditional on the hidden states $Z = (Z_1, Z_2, Z_3) \in \{1, \ldots, Q\}^3$, has full row rank $Q^3$, provided the $\kappa$-entry vectors $\{\mathbf{p}_{ql}\}_{1 \le q \le l \le Q}$ are linearly independent.

Conclusion of the proof. The Lemma provides the base case, with the extension step of Section 5.1 then applying.
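The three intersections above can be checked by direct enumeration (a small sketch of ours, with 1-based node and group labels):

```python
from itertools import product

Q = 3   # any Q >= 3 exhibits all three representative cases

def S(edge, pair):
    """Composite states I whose unordered endpoint states on `edge` equal `pair`."""
    i, j = edge
    return {I for I in product(range(1, Q + 1), repeat=3)
            if {I[i - 1], I[j - 1]} == set(pair)}

assert S((1, 2), (1, 1)) & S((2, 3), (1, 1)) == {(1, 1, 1)}
assert S((1, 2), (1, 1)) & S((2, 3), (1, 2)) == {(1, 1, 2)}
assert S((1, 2), (1, 2)) & S((2, 3), (2, 3)) == {(1, 2, 3)}
print("each representative composite state is pinned down by two edges")
```

Only two of the three edges are needed: constraints on edges $(1,2)$ and $(2,3)$ already force the state of every node.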
Thus with $n = m^2 = 9$ nodes, Kruskal's Theorem may be applied to identify, up to simultaneous row permutation, $\mathbf{v}$, $M_1$, $M_2$, and $M_3$ as defined in that section. The rest of the proof follows the same lines as the conclusion of the proof of Theorem 2, replacing the numbers $p_{ql}$ by the vectors $\mathbf{p}_{ql}$, and noting that these vectors are assumed to be linearly independent.

Proof of Theorem 15. For convenience, we present the argument assuming the state space of the $\mu_{ql}$ is a subset of $\mathbb{R}$. The more general situation of a multidimensional state space can be handled similarly, along the lines of the proof of Theorem 9 of Allman et al. (2009).

Let $M_{ql}$ denote the c.d.f. of $\mu_{ql} = (1 - p_{ql}) \delta_0 + p_{ql} F_{ql}$. Since the measures $\{\mu_{ql} \mid 1 \le q \le l \le Q\}$ are assumed to be linearly independent, so are the functions $\{M_{ql} \mid 1 \le q \le l \le Q\}$. Applying Lemma 17 of Allman et al. (2009) to this set of functions, there exist some $\kappa \in \mathbb{N}$ and cutpoints $u_1 < u_2 < \cdots < u_{\kappa - 1}$ such that the vectors

$$\{ (M_{ql}(u_1), M_{ql}(u_2), \ldots, M_{ql}(u_{\kappa-1}), 1) \mid 1 \le q \le l \le Q \}$$

are linearly independent; in particular, $\kappa \ge \binom{Q+1}{2}$. Also, by adding additional cutpoints if necessary, and thereby increasing $\kappa$, we may assume that among the $u_i$ are any specific real numbers we like.

The independence of the above vectors is equivalent to the independence of the vectors $\{\bar M_{ql} \mid 1 \le q \le l \le Q\}$, where

$$\bar M_{ql} = \big( M_{ql}(u_1),\ M_{ql}(u_2) - M_{ql}(u_1),\ \ldots,\ M_{ql}(u_{\kappa-1}) - M_{ql}(u_{\kappa-2}),\ 1 - M_{ql}(u_{\kappa-1}) \big).$$

Note that the $k$th entry of $\bar M_{ql}$ is simply the probability that a variable with distribution $\mu_{ql}$ takes values in the interval $I_k = (u_{k-1}, u_k]$ (with the convention that $u_0 = -\infty$, $u_\kappa = \infty$). To formalize this, let

$$Y_{ij} = \sum_{k=1}^{\kappa} k\, \mathbf{1}_{I_k}(X_{ij})$$

be the random variable with state space $\{1, 2, \ldots, \kappa\}$ indicating the interval in which the value of $X_{ij}$ lies.
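The passage from $M_{ql}$ to $\bar M_{ql}$ is just a binning of the edge distribution. A sketch, using a hypothetical zero-inflated exponential edge law of ours:

```python
import numpy as np

# toy zero-inflated edge law: mu = (1 - p) * delta_0 + p * Exp(theta)
p, theta = 0.7, 2.0

def M(u):
    """c.d.f. of mu at u."""
    return 0.0 if u < 0 else (1 - p) + p * (1 - np.exp(-theta * u))

cuts = [-1.0, 0.0, 0.5, 1.5]                 # cutpoints u_1 < ... < u_{kappa-1}
vals = [M(u) for u in cuts]
M_bar = np.diff([0.0] + vals + [1.0])        # interval probabilities, kappa = 5

assert np.isclose(M_bar.sum(), 1.0)
assert np.isclose(M_bar[1], 1 - p)           # interval (-1, 0] captures the atom at 0
print(M_bar)
```

Including 0 among the cutpoints isolates the Dirac mass; adding any further real number $t$ as a cutpoint reveals $M_{ql}(t)$, which is how the full c.d.f. is pinned down in the argument below.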
Thus, conditional on $Z_i = q$, $Z_j = l$, the random variables $X_{ij}$ and $Y_{ij}$ have respective distributions given by $M_{ql}$ and $\bar M_{ql}$.

Now from the distribution of the continuous random graph mixture model on $K_9$, with edge variables $(X_{ij})_{1 \le i < j \le 9}$, we obtain the distribution of the discretized model with edge variables $(Y_{ij})_{1 \le i < j \le 9}$. Applying Theorem 14, we may identify the group proportions $\pi_q$ and the vectors $\bar M_{ql}$, up to label swapping, and hence the values $M_{ql}(u_k)$. Since we may additionally determine $M_{ql}(t)$ for any real number $t$ by including it as a cutpoint, $M_{ql}$, and hence $\mu_{ql}$, is uniquely determined.

6. Acknowledgements

The authors thank the Statistical and Applied Mathematical Sciences Institute for their support during residencies in which some of this work was undertaken. ESA and JAR also thank the Laboratoire Statistique et Génome for its hospitality. JAR additionally thanks Université d'Évry Val d'Essonne for a Visiting Professorship during which this work was completed. ESA and JAR received support from the National Science Foundation, grant DMS 0714830, while CM has been supported by the French Agence Nationale de la Recherche under grant NeMo ANR-08-BLAN-0304-01.

References

Airoldi, E., Blei, D., Fienberg, S., Xing, E., 2008. Mixed-membership stochastic blockmodels. Journal of Machine Learning Research 9, 1981–2014.
Allman, E., Matias, C., Rhodes, J., 2009. Identifiability of parameters in latent structure models with many observed variables. Ann. Statist. 37 (6A), 3099–3132.
Ambroise, C., Matias, C., 2010. New consistent and asymptotically normal estimators for random graph mixture models. Tech. rep., arXiv:1003.5165.
Barrat, A., Barthélemy, M., Pastor-Satorras, R., Vespignani, A., 2004. The architecture of complex weighted networks. PNAS 101 (11), 3747–3752.
Berge, C., 1976. Graphs and hypergraphs. Translated by Edward Minieka. 2nd rev. ed. North-Holland Mathematical Library, Vol. 6. Amsterdam - Oxford: North-Holland Publishing Company; New York: American Elsevier Publishing.
Carreira-Perpiñán, M., Renals, S., 2000. Practical identifiability of finite mixtures of multivariate Bernoulli distributions. Neural Comp. 12 (1), 141–152.
Cox, D., Little, J., O'Shea, D., 1997.
Ideals, varieties, and algorithms, 2nd Edition. Springer-Verlag,New York.Daudin, J.-J., Picard, F., Robin, S., 2008. A mixture model for random graphs. Statist. Comput. 18 (2),173–183.Daudin, J.-J., Pierre, L., Vacher, C., 2010. Model for heterogeneous random networks using continuouslatent variables and an application to a tree-fungus network. Biometrics, to appear.Erd˝os, P., Gallai, T., 1961. Graphs with points of prescribed degree. (Graphen mit Punktenvorgeschriebenen Grades.). Mat. Lapok 11, 264–274.Erd˝os, P., R´enyi, A., 1959. On random graphs. I. Publ. Math. Debrecen 6, 290–297.Frank, O., Harary, F., 1982. Cluster inference by using transitivity indices in empirical graphs. J. Amer.Statist. Assoc. 77 (380), 835–840.Gyllenberg, M., Koski, T., Reilink, E., Verlaan, M., 1994. Nonuniqueness in probabilistic numericalidentification of bacteria. J. Appl. Probab. 31 (2), 542–548.Handcock, M., Raftery, A., Tantrum, J., 2007. Model-based clustering for social networks. J. Roy. Statist.Soc. Ser. A 170 (2), 301–354.Holland, P., Laskey, K., Leinhardt, S., 1983. Stochastic blockmodels: some first steps. Social networks5, 109–137.Kruskal, J., 1976. More factors than subjects, tests and treatments: an indeterminacy theorem forcanonical decomposition and individual differences scaling. Psychometrika 41 (3), 281–293.Kruskal, J., 1977. Three-way arrays: rank and uniqueness of trilinear decompositions, with applicationto arithmetic complexity and statistics. Linear Algebra and Appl. 18 (2), 95–138.Latouche, P., Birmel´e, E., Ambroise, C., 2009. Overlapping stochastic block models. Tech. rep.,arXiv:0910.2098.Mariadassou, M., Robin, S., 2010. Uncovering latent structure in valued graphs: a variational approach.Annals of Applied Statistics, to appear.McLachlan, G., Peel, D., 2000. Finite mixture models. Wiley Series in Probability and Statistics: AppliedProbability and Statistics. Wiley-Interscience, New York.Newman, M. E. J., 2003. 
The structure and function of complex networks. SIAM Rev. 45 (2), 167–256(electronic).Newman, M. E. J., 2004. Analysis of weighted networks. Phys. Rev. E 70, 056131.Newman, M. E. J., Leicht, E. A., 2007. Mixture models and exploratory analysis in networks. PNAS104 (23), 9564–9569.Nowicki, K., Snijders, T., 2001. Estimation and prediction for stochastic blockstructures. J. Amer.Statist. Assoc. 96 (455), 1077–1087.Petrie, T., 1969. Probabilistic functions of finite state Markov chains. Ann. Math. Statist 40, 97–115.Picard, F., Miele, V., Daudin, J.-J., Cottret, L., Robin, S., 2009. Deciphering the connectivity structureof biological networks using MixNet. BMC Bioinformatics 10, 1–11.Rhodes, J., 2010. A concise proof of Kruskal’s theorem on tensor decomposition. Linear Algebra and itsApplications 432 (7), 1818–1824.Snijders, T., Nowicki, K., 1997. Estimation and prediction for stochastic blockmodels for graphs withlatent block structure. J. Classification 14 (1), 75–100.Tallberg, C., 2005. A Bayesian approach to modeling stochastic blockstructures with covariates. Journalof Mathematical Sociology 29 (1), 1–23.Teicher, H., 1961. Identifiability of mixtures. Ann. Math. Statist. 32, 244–248.Teicher, H., 1963. Identifiability of finite mixtures. Ann. Math. Statist. 34, 1265–1269.Teicher, H., 1967. Identifiability of mixtures of product measures. Ann. Math. Statist 38, 1300–1302.Tomasi, G., Bro, R., 2006. A comparison of algorithms for fitting the PARAFAC model. Comput. Statist.Data Anal. 50 (7), 1700–1734.White, H., Boorman, S., Breiger, R., 1976. Social structure from multiple networks i: Blockmodels ofroles and positions. American Journal of Sociology 81, 730–779.Zanghi, H., Ambroise, C., Miele, V., 2008. Fast online graph clustering via Erd˝os R´enyi mixture. PatternRecognition 41 (12), 3592–3599.Zanghi, H., Picard, F., Miele, V., Ambroise, C., 2010. Strategies for online inference of network mixture.Annals of Applied Statistics, to appear.erge, C., 1976. 