Computer Science > Data Structures and Algorithms

Approximately counting independent sets of a given size in bounded-degree graphs

Ewan Davies,  Will Perkins

Abstract
We determine the computational complexity of approximately counting and sampling independent sets of a given size in bounded-degree graphs. That is, we identify a critical density \alpha_c(\Delta) and provide (i) for \alpha < \alpha_c(\Delta) randomized polynomial-time algorithms for approximately sampling and counting independent sets of given size at most \alpha n in n-vertex graphs of maximum degree \Delta; and (ii) a proof that unless NP=RP, no such algorithms exist for \alpha > \alpha_c(\Delta). The critical density is the occupancy fraction of the hard-core model on the clique K_{\Delta+1} at the uniqueness threshold on the infinite \Delta-regular tree, giving \alpha_c(\Delta)\sim\frac{e}{1+e}\frac{1}{\Delta} as \Delta\to\infty.

1. Introduction

Counting and sampling independent sets in graphs are fundamental computational problems arising in several fields including algorithms, statistical physics, and combinatorics. Given a graph G, let I(G) denote the set of independent sets of G. The independence polynomial of G is

Z_G(λ) = Σ_{I ∈ I(G)} λ^{|I|} = Σ_{k ≥ 0} i_k(G) λ^k,

where i_k(G) is the number of independent sets of size k in G. The independence polynomial also arises as the partition function of the hard-core model from statistical physics.

With G and λ as inputs, exact computation of Z_G(λ) is #P-hard, and approximating Z_G(λ) has been a major topic in recent theoretical computer science research. There is a detailed understanding of the complexity of approximating Z_G(λ) for the class of graphs of maximum degree Δ, in particular showing that there is a computational threshold which coincides with a certain probabilistic phase transition as one varies the value of λ.

The hard-core model on G at fugacity λ is the probability distribution on I(G) defined by

μ_{G,λ}(I) = λ^{|I|} / Z_G(λ).

Date: February 10, 2021. WP supported in part by NSF grants DMS-1847451 and CCF-1934915.
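To make these definitions concrete, the independence polynomial of a small graph can be computed by brute-force enumeration. This is our own toy sketch (the graph, function names, and example are illustrations, not from the paper):

```python
from itertools import combinations

def Z(n, edges, lam):
    """Independence polynomial Z_G(lam) = sum_k i_k(G) * lam^k, by enumeration."""
    total = 0.0
    for r in range(n + 1):
        for c in combinations(range(n), r):
            s = set(c)
            # keep the subset only if no edge has both endpoints inside it
            if all(not (u in s and v in s) for u, v in edges):
                total += lam ** r
    return total

# The 4-cycle C_4 has one empty set, four singletons and two "diagonal" pairs,
# so Z_{C_4}(lam) = 1 + 4*lam + 2*lam^2.
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(Z(4, c4, 1.0))  # -> 7.0 (the 7 independent sets of C_4)
```

At λ = 1 the partition function simply counts all independent sets; the hard-core measure μ_{G,λ} then weights a set I proportionally to λ^{|I|}.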

Defined on a lattice like Z^d (through an appropriate limiting procedure), this is a simple model of a gas (the hard-core lattice gas) and it exhibits an order/disorder phase transition as λ changes. The hard-core model can also be defined on the infinite Δ-regular tree (the Bethe lattice). Kelly [20] determined the critical threshold for uniqueness of the infinite-volume measure on the tree, namely

(1)  λ_c(Δ) = (Δ−1)^{Δ−1} / (Δ−2)^Δ.

This value of λ also marks a computational threshold for the complexity of approximating Z_G(λ) on graphs of maximum degree Δ. One can approximate Z_G(λ) up to a relative error of ε in time polynomial in n and 1/ε with several different methods, provided G is of maximum degree Δ and λ < λ_c(Δ). The first such algorithm is based on correlation decay on trees and is due to Weitz [30], but recently alternative algorithms based on polynomial interpolation [3, 24, 25] and Markov chains [2, 7, 6] for this problem have also been given. Conversely, for λ > λ_c(Δ) a result of Sly and Sun [28] and Galanis, Štefankovič, and Vigoda [16] (following Sly [27]) states that unless NP=RP there is no polynomial-time algorithm for approximating Z_G(λ) on graphs of maximum degree Δ. Counting and sampling are closely related, and by standard reduction techniques the same computational threshold holds for the problem of approximately sampling independent sets from the hard-core distribution.

The hard-core model is an example of the grand canonical ensemble from statistical physics, where one studies physical systems that can freely exchange particles and energy with a reservoir. Closely related is the canonical ensemble, where one removes the reservoir and considers a system with a fixed number of particles. In the context of independent sets in graphs, this corresponds to the uniform distribution on independent sets of some fixed size k. Here the number i_k(G) of independent sets of size k in G plays the role of the partition function.
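Kelly's threshold (1) is easy to evaluate numerically; the following sketch (our own illustration) also checks the asymptotic behaviour λ_c(Δ) ~ e/Δ:

```python
import math

def lambda_c(D):
    """Kelly's uniqueness threshold on the infinite D-regular tree, eq. (1)."""
    return (D - 1) ** (D - 1) / (D - 2) ** D

print(lambda_c(3))  # -> 4.0
# Rewriting (1) as (1/(D-2)) * (1 + 1/(D-2))^(D-1) shows lambda_c(D) ~ e/D:
for D in (10, 100, 1000):
    print(D, lambda_c(D), math.e / D)
```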
In this paper we answer affirmatively the natural question of whether there is a similar complexity phase transition for the problem of approximating i_k(G), and the related problem of sampling independent sets of size k approximately uniformly. Analogous to the critical fugacity in the hard-core model, we identify a critical density α_c(Δ), and for α < α_c(Δ) we give a fully polynomial-time randomized approximation scheme (FPRAS, defined below) for counting independent sets of size k in n-vertex graphs of maximum degree Δ, where 0 ≤ k ≤ αn. We also show that unless NP=RP there is no such algorithm for α > α_c(Δ).

In statistical physics the grand canonical ensemble and the canonical ensemble are known to be equivalent in some respects under certain conditions, and the present authors, Jenssen, and Roberts [12] used this idea to give a tight upper bound on i_k(G) for large k in large Δ-regular graphs G (see also [10] for the case of small k). Here, the main idea in our proofs is also to exploit the equivalence of ensembles. For algorithms at subcritical densities we approximately sample independent sets from the hard-core model and show that with sufficiently high probability we get an independent set of the desired size, distributed approximately uniformly. For hardness at supercritical densities we construct an auxiliary graph G′ such that i_k(G′) is approximately proportional to Z_G(λ) for some λ > λ_c(Δ), and hence is hard to approximate. Our algorithm for subcritical densities is new, and in the sense of permitting higher densities it outperforms previous algorithms for this problem based on Markov chains [5, 1], and an algorithm implicit in [10] based on the cluster expansion.

A pleasant feature of our methods is the incorporation of several advances from recent research on related topics.
From the geometry of polynomials we use a state-of-the-art zero-free region for Z_G(λ) due to Peters and Regts [25] and a central limit theorem of Michelen and Sahasrabudhe [23, 22] (though an older result of Lebowitz, Pittel, Ruelle and Speer [21] would also suffice), and we also apply the very recent development that a natural Markov chain for sampling from the hard-core model at subcritical fugacities (the Glauber dynamics) mixes rapidly [1, 6]. Finally, our results also show a connection between these algorithmic and complexity-theoretic problems and extremal combinatorics problems for bounded-degree graphs [9, 12, 10]; see also the survey [31].

1.1. Preliminaries.

Given an error parameter ε and real numbers z, ẑ, we say that ẑ is a relative ε-approximation to z if e^{−ε} ≤ ẑ/z ≤ e^{ε}. A fully polynomial-time randomized approximation scheme or FPRAS for a counting problem is a randomized algorithm that with probability at least 3/4 outputs a relative ε-approximation to the solution of the problem in time polynomial in the size of the input and 1/ε. If the algorithm is deterministic (i.e. succeeds with probability 1) then it is a fully polynomial-time approximation scheme (FPTAS). An ε-approximate sampling algorithm for a probability distribution μ outputs a random sample from a distribution μ̂ such that the total variation distance satisfies ‖μ − μ̂‖_TV ≤ ε, and an efficient sampling scheme is, for all ε > 0, an ε-approximate sampling algorithm which runs in time polynomial in the size of the input and log(1/ε). Note that approximate sampling schemes whose running times are polynomial in 1/ε or in log(1/ε) are common in the literature, but we adopt the stronger definition for this paper. The inputs to our algorithms are graphs, and input size corresponds to the number of vertices of the graph.

An independent set in a graph G = (V, E) is a subset I ⊂ V such that no edge of E is contained in I. The density of such an independent set I is |I|/|V|, and it will be convenient for us to parametrize independent sets by their density instead of their size. We write I(G) for the set of all independent sets in G, I_k(G) for the set of independent sets of size k in G, and i_k(G) = |I_k(G)| for the number of such sets. Recall the hard-core distribution μ_{G,λ} on I(G) is given by μ_{G,λ}(I) = λ^{|I|}/Z_G(λ). We also define the occupancy fraction α_G(λ) of the hard-core model on G at fugacity λ to be the expected density of a random independent set drawn according to μ_{G,λ}.
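The occupancy fraction just defined can be computed by brute force on small graphs; for the clique K_{Δ+1} it has a simple closed form, which evaluated at λ_c(Δ) gives the critical density α_c(Δ) from the abstract. The code and names below are our own illustration:

```python
from itertools import combinations

def occupancy_fraction(n, edges, lam):
    """alpha_G(lam): expected density of a hard-core random independent set,
    computed by brute-force enumeration."""
    Z = 0.0
    weighted_size = 0.0
    for r in range(n + 1):
        for c in combinations(range(n), r):
            s = set(c)
            if all(not (u in s and v in s) for u, v in edges):
                Z += lam ** r
                weighted_size += r * lam ** r
    return weighted_size / (Z * n)

# K_{D+1}: the only independent sets are the empty set and the D+1 singletons,
# so alpha_{K_{D+1}}(lam) = lam / (1 + (D+1)*lam).
D, lam = 3, 0.8
clique = [(u, v) for u in range(D + 1) for v in range(u + 1, D + 1)]
assert abs(occupancy_fraction(D + 1, clique, lam) - lam / (1 + (D + 1) * lam)) < 1e-12

# The critical density is this occupancy fraction evaluated at lambda_c(D).
lambda_c = (D - 1) ** (D - 1) / (D - 2) ** D   # = 4 for D = 3
alpha_c = lambda_c / (1 + (D + 1) * lambda_c)  # = 4/17 for D = 3
print(alpha_c)  # -> 0.23529411764705882, i.e. 4/17
```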
Let G_Δ be the set of graphs of maximum degree Δ. The critical density that we show constitutes a computational threshold for the problems of counting and sampling independent sets of a given size in graphs of maximum degree Δ is

α_c(Δ) = λ_c(Δ) / (1 + (Δ+1) λ_c(Δ)) = (Δ−1)^{Δ−1} / ((Δ−2)^Δ + (Δ+1)(Δ−1)^{Δ−1}),

with λ_c the critical fugacity as in (1). This may seem unexpected at first sight, but has a natural interpretation. The threshold is in fact the quantity α_{K_{Δ+1}}(λ_c(Δ)), the occupancy fraction of the clique on Δ+1 vertices at the critical fugacity λ_c(Δ). This is a natural threshold because the occupancy fraction is a monotone increasing function of λ, and the clique on Δ+1 vertices has the minimum occupancy fraction over all graphs of maximum degree Δ. Thus, for any G ∈ G_Δ, the value of λ which makes α_G(λ) > α_c(Δ) must be greater than λ_c(Δ). Conversely, if α < α_c(Δ) then for every graph G ∈ G_Δ there is some λ < λ_c(Δ) such that α_G(λ) = α.

1.2. Our results.

We are now ready to state our main result.

Theorem 1.

(a) For every α < α_c(Δ) there is an FPRAS for i_{⌊αn⌋}(G) and an efficient sampling scheme for the uniform distribution on I_{⌊αn⌋}(G) for n-vertex graphs G of maximum degree Δ.

(b) Unless NP=RP, for every α ∈ (α_c(Δ), 1/2) there is no FPRAS for i_{⌊αn⌋}(G) for n-vertex, Δ-regular graphs G.

The assumption NP ≠ RP, which is that polynomial-time algorithms using randomness cannot solve all problems in NP, is standard in computational complexity theory. Indeed, this assumption is used in [27, 28, 16] to show hardness of approximation for Z_G(λ) on regular graphs at supercritical fugacities, which we apply directly. The upper bound of 1/2 on α in (b) is required since in a regular graph (of degree ≥

1) there are no independent sets of density greater than 1/2.

A natural Markov chain for sampling independent sets of size k in n-vertex graphs of maximum degree Δ was previously shown to mix rapidly when k < n/(2Δ+2) [5], and recently this was slightly improved to k < n/(2Δ) via the method of high-dimensional expanders by Alev and Lau [1] (who also gave an improved bound in terms of the smallest eigenvalue of the adjacency matrix of G). The fast mixing of this Markov chain provides a randomized algorithm for approximate sampling and an FPRAS for approximate counting for this range of k. Implicit in the work of the authors and Jenssen [10] is an alternative method based on the cluster expansion that yields an FPTAS for i_k(G) when k < e^{−1} n/(Δ+1), and although we did not try to optimize the constant it seems unlikely that without significant extension the cluster expansion approach could yield a sharp result. Considering asymptotics as Δ → ∞, these previous algorithms work for densities up to (c + o(1))/Δ with the constant c being 1/2 and 1/e ≈ 0.368 respectively. Here, our algorithms work for densities α satisfying

α < α_c(Δ) = (1 + o(1)) e/((1+e)Δ),

where the constant e/(1+e) is approximately 0.731.

1.3. Triangle-free graphs.

As an additional application of our techniques we find an approximate computational threshold for the class of triangle-free graphs.

Theorem 2.

For every δ > 0 there is Δ₀ large enough so that the following is true.

(a) For Δ ≥ Δ₀ and α < (1 − δ)/Δ there is an FPRAS and efficient sampling scheme for i_{⌊αn⌋}(G) for the class of triangle-free graphs of maximum degree Δ.

(b) Unless NP=RP, for Δ ≥ Δ₀ and α ∈ ((1 + δ)/Δ, 1/2) there is no FPRAS for i_{⌊αn⌋}(G) for the class of triangle-free graphs of maximum degree Δ.

The proof of this theorem uses a result on the occupancy fraction of triangle-free graphs from [11].

1.4. Related work.

Counting independent sets of a specified size has arisen in various places as a natural fixed-parameter version of counting independent sets, and is equivalent to counting cliques of a specified size in the complement graph. Exact computation of i_k(G) in an n-vertex graph G is trivially possible in time O(k² n^k), though improvements can be made via fast matrix multiplication algorithms (see e.g. [15]). Another branch of research concerns the complexity (in both time and number of queries to the graph data structure) of counting and approximately counting cliques. For example, in [14] the authors gave a randomized approximation algorithm for approximating the number of cliques of size k. Results of this kind perform poorly in our setting, which is equivalent to counting cliques in the complement of bounded-degree graphs, because such graphs are very dense. In particular, the main result of [14] has expected running time Ω((nk/e)^k) in our setting.

With a focus on bounded-degree graphs and connections to statistical physics, our work is closer in spirit to that of Curticapean, Dell, Fomin, Goldberg, and Lapinskas [8]. There, the authors consider the problem of counting independent sets of size k in bipartite graphs from the perspective of parametrized complexity. They give algorithms for exact computation and approximation of i_k(G) in bipartite graphs (of bounded degree and otherwise), including a fixed-parameter tractable randomized approximation scheme, though their running times are exponential in k. We note that the complexity of approximately counting the total number of independent sets in bipartite graphs (a problem known as #BIS) remains a major open problem in this area.

1.5. Questions and future directions.

For the hard-core model, the algorithm of Weitz [30] gives a deterministic approximation algorithm (FPTAS) for Z_G(λ) for λ < λ_c(Δ). The approach of Barvinok, along with results of Patel and Regts and of Peters and Regts, gives another FPTAS for the same range of parameters [3, 24, 25]. Our algorithm for approximating the number of independent sets of a given size uses randomness, but we conjecture that there is a deterministic algorithm that works for the same range of parameters. (The cluster expansion approach of [10] gives an FPTAS but only for smaller values of α.)

Conjecture 1.

There is an FPTAS for i_{⌊αn⌋}(G) for G ∈ G_Δ and all α < α_c(Δ).

The Markov chain analyzed in [5, 1] is the 'down/up' Markov chain: starting from an independent set I_t ∈ I_k(G) at step t, pick a uniformly random vertex v ∈ I_t and a uniformly random vertex w ∈ V. Let I′ = (I_t \ {v}) ∪ {w}. If I′ ∈ I_k(G), let I_{t+1} = I′; if not, let I_{t+1} = I_t.
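The down/up step just described can be transcribed directly. This is our own toy sketch (`adj` is an adjacency-set list; names are illustrative):

```python
import random

def down_up_step(I, n, adj, k):
    """One step of the down/up chain on I_k(G): swap a uniform v in I for a
    uniform w in V, and accept only if the result is again an independent
    set of size k."""
    v = random.choice(sorted(I))
    w = random.randrange(n)
    J = (I - {v}) | {w}
    # accept only if |J| = k (w was outside I \ {v}) and J is independent
    if len(J) == k and all(u not in adj[x] for x in J for u in J):
        return J
    return I

# On the 4-cycle, I_2(C_4) consists of the two diagonals {0,2} and {1,3}.
adj = [{1, 3}, {0, 2}, {1, 3}, {0, 2}]
random.seed(0)
I = {0, 2}
for _ in range(100):
    I = down_up_step(I, 4, adj, 2)
print(sorted(I))  # -> [0, 2]
```

Note that on C₄ at k = 2 the two diagonals are not connected by single swaps, so the chain stays at its starting state; this maximal density is far above the regime where rapid mixing is known or conjectured.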

Conjecture 2.

The down/up Markov chain for sampling from I_{⌊αn⌋}(G) mixes rapidly for α < α_c(Δ) and all G ∈ G_Δ.

One of the steps of our proof leads to a natural probabilistic conjecture concerning the hard-core model in bounded-degree graphs.

Conjecture 3.

Suppose G is a graph on n vertices of maximum degree Δ. Then if λ < λ_c(Δ) and k = ⌊E_{G,λ}|I|⌋, we have

P_{G,λ}[|I| = k] = Ω(n^{−1/2}),

where the implied constant only depends on Δ and λ, and the expectation and probability are with respect to the hard-core model on G at fugacity λ.

Lemma 5 below gives the weaker bound Ω(1/(n log n)). A stronger conjecture would be that a local central limit theorem for |I| holds whenever λ < λ_c(Δ).

Finally, our proofs of Theorems 1 and 2 show a close connection between the computational threshold for sampling independent sets of a given size in bounded-degree graphs and the extremal combinatorics problem of minimizing the occupancy fraction in the hard-core model over a class of bounded-degree graphs. We expect that a rigorous connection between the two problems can be proved.

2. Algorithms

In this section, we fix Δ ≥ 3 and α < α_c(Δ). We will give an algorithm that, for G ∈ G_Δ on n vertices and k ≤ αn, returns an ε-approximate uniform sample from I_k(G) and runs in time polynomial in n and log(1/ε); this proves the sampling part of Theorem 1(a). We then use this algorithm to approximate i_k(G) using a standard simulated annealing process to prove the approximate counting part of Theorem 1(a).

Given λ ≥

0, let I be a random independent set from the hard-core model on G at fugacity λ. We will write P_{G,λ} for probabilities over the hard-core measure μ_{G,λ}, so e.g. P_{G,λ}(|I| = k) is the probability that I is of size exactly k. Often we will suppress the dependence on G.

A key tool that we use for probabilistic analysis and to approximately sample from μ_{G,λ} is the

Glauber dynamics. This is a Markov chain with state space I(G) and stationary distribution μ_{G,λ}. Though the algorithm of Weitz [30] was the first to give an efficient approximate sampling algorithm for μ_{G,λ} for λ < λ_c(Δ) and all G ∈ G_Δ, a randomized algorithm with better running time now follows from recent results showing that the Glauber dynamics mix rapidly for this range of parameters [2, 7, 6]. The mixing time T_mix(M, ε) of a Markov chain M is the number of steps from the worst-case initial state for the resulting state to have a distribution within total variation distance ε of the stationary distribution. We will use the following result of Chen, Liu, and Vigoda [6], and the sampling algorithm that it implies.

Theorem 3 ([6]). Given Δ ≥ 3 and ξ ∈ (0, λ_c(Δ)), there exists C > 0 such that the following holds. For all 0 ≤ λ ≤ λ_c(Δ) − ξ and graphs G ∈ G_Δ on n vertices, the mixing time T_mix(M, ε) of the Glauber dynamics M for the hard-core model on G with fugacity λ is at most Cn log(n/ε). This implies an ε-approximate sampling algorithm for μ_{G,λ} for G ∈ G_Δ that runs in time O(n log n log(n/ε)).

The sampling algorithm follows from the mixing time bound; the extra factor log n is the cost of implementing one step of the Glauber dynamics (which requires reading O(log n) random bits to sample a vertex uniformly). Note that the implicit constant in the running time depends on how close λ is to λ_c(Δ), but in applications of this theorem we will have λ ≤ λ_c(Δ) − ξ for some fixed ξ >

0, so that the implicit constant depends only on ξ, which in turn depends on α.

2.1. Approximate sampling.

The following algorithm uses Theorem 3 and a binary search on values of λ to generate samples from I_k(G). The main results in this section are a proof that the samples are distributed approximately uniformly and a bound on the running time.

Algorithm: Sample-k

• INPUT: α < α_c(Δ); ε > 0; a graph G ∈ G_Δ on n vertices; an integer k ≤ αn.
• OUTPUT: I ∈ I_k(G) with distribution within ε total variation distance of the uniform distribution on I_k(G).

(1) Let λ* = α/(1 − α(Δ+1)).
(2) For t = 0, …, ⌊2λ*n⌋, let λ_t = t/(2n).
(3) Let Λ_0 = {λ_t : t = 0, …, ⌊2λ*n⌋}.
(4) FOR i = 1, …, C log n:
(a) Let λ be a median of the set Λ_{i−1}.
(b) With N = C′n² log(log n/ε), take N independent samples I_1, …, I_N from a distribution μ̂_λ on I(G).
(c) Let κ = (1/N) Σ_{j=1}^N |I_j|.
(d) If |κ − k| ≤ 3/4 and there is some j ∈ {1, …, N} so that |I_j| = k, then output I_j for the smallest such j and HALT.
(e) If κ ≤ k, let Λ_i = {λ′ ∈ Λ_{i−1} : λ′ > λ}. If instead κ > k, let Λ_i = {λ′ ∈ Λ_{i−1} : λ′ < λ}.
(5) If no independent set of size k has been obtained by the end of the FOR loop (or if Λ_i = ∅ at any step), use a greedy algorithm and output an arbitrary I ∈ I_k(G).
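To illustrate the control flow of Sample-k, here is a miniature version of our own devising: exact sampling by enumeration stands in for the Glauber sampler, the constants are arbitrary, and the greedy fallback of step (5) is omitted.

```python
import random
from itertools import combinations

def independent_sets(n, edges):
    """All independent sets of ([n], edges), as frozensets."""
    out = []
    for r in range(n + 1):
        for c in combinations(range(n), r):
            s = frozenset(c)
            if all(not (u in s and v in s) for u, v in edges):
                out.append(s)
    return out

def sample_k(n, edges, k, alpha, Delta, trials=2000):
    """Toy Sample-k: binary-search a grid of fugacities for one whose mean
    independent-set size is near k, then return a sampled set of size k."""
    sets = independent_sets(n, edges)
    lam_star = alpha / (1 - alpha * (Delta + 1))  # alpha_{K_{Delta+1}}(lam*) = alpha
    grid = [t / (2 * n) for t in range(int(2 * lam_star * n) + 1)]
    lo, hi = 0, len(grid) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        lam = grid[mid]
        weights = [lam ** len(I) for I in sets]          # hard-core weights
        samples = random.choices(sets, weights=weights, k=trials)
        kappa = sum(len(I) for I in samples) / trials    # empirical mean size
        if abs(kappa - k) <= 3 / 4:
            for I in samples:
                if len(I) == k:
                    return I
        if kappa <= k:
            lo = mid + 1   # mean too small: search larger fugacities
        else:
            hi = mid - 1   # mean too large: search smaller fugacities
    return None  # the real algorithm falls back to a greedy construction

random.seed(0)
I = sample_k(4, [(0, 1), (1, 2), (2, 3), (3, 0)], k=1, alpha=0.25, Delta=2)
assert I is not None and len(I) == 1
```

Conditioned on the returned set having size k, the exact hard-core sample is uniform on I_k(G), which is the key correctness observation behind Theorem 4 below.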

Theorem 4.

Let C be the constant in line (4) of the algorithm. If the distributions μ̂_λ are each within total variation distance ε/(2CN log n) of μ_{G,λ}, the output distribution of Sample-k is within total variation distance ε of the uniform distribution on I_k(G). The running time of Sample-k is O(N log n · T(n, ε)), where T(n, ε) is the running time required to produce a sample from μ̂_λ satisfying the above guarantee.

The sampling part of Theorem 1 follows immediately from Theorem 4 since by Theorem 3 we can obtain ε/(2CN log n)-approximate samples from μ_{G,λ} in time O(n log n log(n log n · N/ε)). Thus, the total running time of Sample-k with this guarantee on μ̂_λ is O(N · n log n · log(nN/ε)), which is n³ up to factors polylogarithmic in n and 1/ε.

Before we prove Theorem 4, we collect a number of preliminary results that we will use. The first is a bound on the probability of getting an independent set of size close to the mean from the hard-core model when λ < λ_c(Δ). We use the notation nα_G(λ) for the expected size of an independent set from the hard-core model on G at fugacity λ to avoid ambiguities.

Lemma 5.

For Δ ≥ 3 and α < α_c(Δ), there is a unique λ* < λ_c(Δ) so that α_{K_{Δ+1}}(λ*) = α, and the following holds. For any G ∈ G_Δ on n vertices and any 1 ≤ k ≤ αn, there exists an integer t ∈ {0, …, ⌊2λ*n⌋} so that

(2)  |nα_G(t/(2n)) − k| ≤ 1/2.

Moreover, if t satisfies (2) then

μ_{G,t/(2n)}(I_k(G)) = Ω(1/(n log n)).

To prove this lemma we need several more results. The first is an extremal bound on α_G(λ) for G ∈ G_Δ. The statement of the theorem follows from a stronger property proved by Cutler and Radcliffe in [9]; see [12] for discussion.

Theorem 6 ([9]). For all G ∈ G_Δ and all λ ≥ 0,

α_G(λ) ≥ α_{K_{Δ+1}}(λ) = λ/(1 + λ(Δ+1)).

We next rely on a zero-free region for Z_G(λ) due to Peters and Regts [25], so that we can apply the subsequent central limit theorem.

Theorem 7 ([25]). Let Δ ≥ 3 and ξ ∈ (0, λ_c(Δ)). Then there exists δ > 0 such that for every G ∈ G_Δ the polynomial Z_G has no roots in the complex plane that lie within distance δ of the real interval [0, λ_c(Δ) − ξ).

The probability generating function of a discrete random variable X distributed on the non-negative integers is the polynomial in z given by f(z) = Σ_{j≥0} P(X = j) z^j, and the above result shows that at subcritical fugacity the probability generating function of |I| has no zeros close to 1 in C. This lets us use the following result of Michelen and Sahasrabudhe [22].

Theorem 8 ([22]). For n ≥ 1 let X_n be a random variable taking values in {0, …, n} with mean μ_n, standard deviation σ_n, and probability generating function f_n. If f_n has no roots within distance δ_n of 1 in C, and σ_n δ_n / log n → ∞, then (X_n − μ_n)/σ_n tends to a standard normal in distribution.

The final tools we need are simple bounds on the variance of the size of an independent set from the hard-core model.

Lemma 9.

Let G be a graph on n vertices of maximum degree Δ and let I be a random independent set drawn from the hard-core model on G at fugacity λ. Then, if M is the size of a largest independent set in G, we have

λ(1+λ)^{−Δ−2} M ≤ var(|I|) ≤ n² λ/(1+λ).

In particular this applies with M = n/(Δ+1).

Proof. For the upper bound note that |I| is the sum of the indicator random variables X_v that the vertex v ∈ V(G) is in I. Then because P(X_v = 1) ≤ λ/(1+λ) for all v, from the Cauchy–Schwarz inequality in the form cov(X_u, X_v) ≤ √(var(X_u) var(X_v)) we obtain

var(|I|) = Σ_{u ∈ V(G)} Σ_{v ∈ V(G)} cov(X_u, X_v) ≤ n² λ/(1+λ).

For the lower bound, let J be some fixed independent set in G of maximum size M. Now write X = |I|, and let K = I \ J. By the law of total variance,

var(X) = E[var(X | K)] + var(E[X | K]) ≥ E[var(X | K)].

But we have X = |K| + |I ∩ J|, and conditioned on K the set I ∩ J is distributed according to the hard-core model on J \ N_G(K), the subset of J uncovered by K. Since J is independent, this is a sum of at most |J| independent, identically distributed Bernoulli random variables with probability λ/(1+λ).

Now, writing U = |J \ N_G(K)| for the number of variables in the sum we have

var(X) ≥ E[var(X | K)] = λ/(1+λ)² · E U.

A vertex u ∈ J is uncovered by K precisely when N(u) ∩ K = ∅. Then by successive conditioning and the maximum degree condition, the probability that u is uncovered by K is at least (1+λ)^{−Δ}. This means E U ≥ |J|(1+λ)^{−Δ} and hence var(X) ≥ λ(1+λ)^{−Δ−2} M.

The final assertion follows from the fact that any n-vertex graph of maximum degree Δ contains an independent set of size at least n/(Δ+1), which is easy to prove by analyzing a greedy algorithm. □
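The bounds of Lemma 9 (with the exponents as reconstructed here, λ(1+λ)^{−Δ−2}M ≤ var(|I|) ≤ n²λ/(1+λ)) can be sanity-checked by exact enumeration on a small example; the code is our own toy illustration:

```python
from itertools import combinations

def size_dist(n, edges, lam):
    """Exact distribution of |I| under the hard-core model, by enumeration."""
    w = {}
    for r in range(n + 1):
        for c in combinations(range(n), r):
            s = set(c)
            if all(not (u in s and v in s) for u, v in edges):
                w[r] = w.get(r, 0.0) + lam ** r
    Z = sum(w.values())
    return {r: p / Z for r, p in w.items()}

def variance(dist):
    m = sum(r * p for r, p in dist.items())
    return sum((r - m) ** 2 * p for r, p in dist.items())

# C_4: n = 4, Delta = 2, largest independent set M = 2, fugacity 1.
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
n, Delta, M, lam = 4, 2, 2, 1.0
v = variance(size_dist(n, c4, lam))
assert lam * (1 + lam) ** (-Delta - 2) * M <= v <= n ** 2 * lam / (1 + lam)
```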

A standard calculation gives ∂∂λ α G ( λ ) = 1 n ∂∂λ λZ ′ G ( λ ) Z G ( λ ) = 1 nλ var( | I | ) , and so Lemma 9 gives that 0 < α ′ G ( λ ) ≤ n for all λ > λ ∗ < λ c (∆) be the solution to the equation α K ∆+1 ( λ ∗ ) = α .This means λ ∗ = α − α (∆ + 1) , as defined in Sample- k . The fact that λ ∗ < λ c (∆) follows from the factthat α < α c (∆) = α K ∆+1 ( λ c (∆)), and that occupancy fractions are strictlyincreasing. Then using Theorem 6 we have that(3) α G ( λ ∗ ) ≥ α K ∆+1 ( λ ∗ ) = α , and so there exists λ ∈ (0 , λ ∗ ] such that nα G ( λ ) = k . Using the upper boundon α ′ G ( λ ), we see that as λ increases over an interval of length 1 / (2 n ), thefunction nα G ( λ ) can increase by at most 1 /

2. Hence, there is at least one integer t ∈ {0, …, ⌊2λ*n⌋} such that |nα_G(t/(2n)) − k| ≤ 1/2, which proves the first assertion of the lemma. To prove the second assertion we use a central limit theorem for |I| and rapid mixing of the Glauber dynamics.

There is a close connection between zeros of the probability generating function of |I| and the zeros of the partition function itself. The probability generating function of |I| is

f(z) = Σ_{j≥0} P_λ(|I| = j) z^j = (Σ_{j≥0} i_j(G) λ^j z^j)/Z_G(λ) = Z_G(λz)/Z_G(λ).

Then for λ such that Z_G(λ) ≠ 0, z is a root of f if and only if λz is a root of Z_G. By our assumptions on t, when λ = t/(2n) Theorem 7 gives the existence of δ > 0 such that for G ∈ G_Δ there are no complex zeros of f within distance δ/λ of 1. This is because Theorem 7 means that Z_G(zλ) = 0 implies |zλ − λ| ≥ δ. The condition of Theorem 8 that σ_n δ_n / log n → ∞ is met because λ < λ_c(Δ) ≤ λ_c(3) = 4 and, by Lemma 9 with M = n/(Δ+1),

σ_n δ_n ≥ √(λ(1+λ)^{−Δ−2} n/(Δ+1)) · (δ/λ) = Ω(√(n/λ)) = ω(log n).

Now, given λ = t/(2n) such that (2) holds, we have nλ/(1+λ) ≥ nα_G(λ) ≥ k − 1/2 ≥ 1/2, and so λ ≥ Ω(1/n). Together with the lower bound of Ω(√(λn)) from Lemma 9 on the standard deviation of the size of an independent set drawn according to μ_{G,λ}, condition (2) thus implies that k is within some constant number r > 0 of standard deviations of the mean nα_G(λ). The central limit theorem and standard properties of the normal distribution mean that there are constants ρ > 0 (depending on r) and n_0 such that for all n ≥ n_0, with probability at least ρ, |I| is at least r standard deviations below the mean, and similarly with probability at least ρ it is at least r standard deviations above the mean. So we have P_{G,λ}(|I| ≥ k) ≥ ρ and P_{G,λ}(|I| ≤ k) ≥ ρ.

The transition probabilities when we are at state I in the Glauber dynamics are given by the following random experiment.
Choose a vertex v ∈ V(G) uniformly at random and let

I′ = I ∪ {v} with probability λ/(1+λ), and I′ = I \ {v} with probability 1/(1+λ).

Now if I′ is independent in G move to state I′, otherwise stay in state I. This means that the sequence of sizes of the sets visited must take consecutive integer values. By Theorem 3, there is a constant C″ such that from an arbitrary starting state, in C″n log n steps the distribution of the current state is within total variation distance ρ/2 of μ_{G,λ}. This yields the following two facts.

(i) Starting from an independent set of size at most k, with probability at least ρ/2 the state after C″n log n steps is an independent set of size at least k.

(ii) Starting from an independent set of size at least k, with probability at least ρ/2 the state after C″n log n steps is an independent set of size at most k.

Consider starting from an initial state distributed according to μ_{G,λ}. Then every subsequent state is also distributed according to μ_{G,λ}, and the above facts mean that for any sequence of C″n log n consecutive steps, with probability at least ρ²/2 the chain passes through an independent set of size exactly k. Recalling that λ = t/(2n), this immediately implies that

μ_{G,t/(2n)}(I_k(G)) ≥ ρ²/(2C″ n log n),

as required. □
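The Glauber transition rule described above can be transcribed directly; this is our own toy sketch, not an efficient implementation:

```python
import random

def glauber_step(I, n, adj, lam):
    """One step of Glauber dynamics for the hard-core model: pick a uniform
    vertex v; propose inserting it with probability lam/(1+lam) and deleting
    it otherwise; insertions that violate independence are rejected."""
    v = random.randrange(n)
    if random.random() < lam / (1 + lam):
        if all(u not in I for u in adj[v]):  # v has no neighbour in I
            return I | {v}
        return I  # proposed insertion rejected: stay at I
    return I - {v}

# Run the chain on C_4 at fugacity 1; every visited state stays independent,
# and successive states differ in size by at most one.
adj = [{1, 3}, {0, 2}, {1, 3}, {0, 2}]
random.seed(1)
I = set()
for _ in range(500):
    I = glauber_step(I, 4, adj, 1.0)
    assert all(u not in adj[v] for v in I for u in I)
```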

Now we prove Theorem 4.

Proof.

We first prove the theorem under the assumption that each μ̂_λ is exactly the hard-core measure μ_{G,λ}, taking note of how many times we sample from any μ̂_λ. We say a failure occurs at step i in the FOR loop if either of the following occur:

(1) |nα_G(λ) − κ| > 1/4; or
(2) |nα_G(λ) − k| ≤ 1/2 but no sample of size k is obtained in step i.

We show that the probability that a failure occurs at any time during the algorithm is at most ε/

2. By a union bound, it is enough to show that the probability of either type of failure at a given step i is at most ε/(2C log n).

Consider an arbitrary step i with its value of λ. To bound the quantity P(|nα_G(λ) − κ| > 1/4), note that κ is the mean of N independent samples from μ̂_λ, which we currently assume to be μ_{G,λ}. Then we have Eκ = nα_G(λ) and Hoeffding's inequality gives

P(|nα_G(λ) − κ| > 1/4) ≤ 2e^{−N/(8n²)},

so for this to be at most ε/(4C log n) we need only N ≥ Ω(n² log(log n/ε)). To bound the probability that the current step involves λ such that |nα_G(λ) − k| ≤ 1/

2, but we fail to get a set of size k in the N samples, observe that we have N independent trials for getting a set of size k, and each trial succeeds with probability p ≥ c/(n log n) by Lemma 5. Then the probability that we see no successful trials is

(1 − c/(n log n))^N,

which is at most ε/(4C log n) for N ≥ Ω(n log n · log(log n/ε)).

Thus, we can take N = Θ(n² log(log n/ε)), as in the description of Sample-k.

Next we show that in the event that no failure occurs during the running of the algorithm, the algorithm outputs an independent set I with distribution within ε/2 total variation distance of the uniform distribution on I_k(G). We first observe that if no failure occurs, the algorithm at some point reaches a value of λ so that |nα_G(λ) − k| ≤ 1/

2. This is a simple consequence of Lemma 5, which means there exists some t with this property, and of the binary search structure of the algorithm. In particular, in each iteration of the FOR loop, at line (e) the size of the set Λ_i being searched goes down by (at least) half. Conditioned on no failures, the search also proceeds in the correct half of Λ_{i−1}: if the algorithm reaches line (e) without halting then |κ − k| > 1/4 (if |κ − k| ≤ 1/4 then, with no failures, |nα_G(λ) − k| ≤ 1/2 and a sample of size k was obtained, so the algorithm halted). We search the upper half only when κ < k − 1/4, in which case nα_G(λ) ≤ κ + 1/4 < k and hence using a larger value of λ must bring nα_G(λ) closer to k. The case κ > k + 1/4 is similar. Hence, conditioned on no failures, the algorithm reaches a value of λ such that |nα_G(λ) − k| ≤ 1/2 and outputs a sample of size k. Note that the distribution μ_{G,λ} conditioned on getting a set of size exactly k is precisely the uniform distribution on I_k(G); hence if the algorithm outputs an independent set of size k during the FOR loop, its distribution is exactly the uniform distribution on I_k(G). Thus, under the assumption that each μ̂_λ is precisely μ_{G,λ}, we have shown that with probability at least 1 − ε/2 a uniform sample from I_k(G) is output during the FOR loop.

We do not have access to an efficient exact sampler for μ_{G,λ}, so we make do with the approximate sampler from Theorem 3. One interpretation of total variation distance is that when each μ̂_λ has total variation distance at most ξ from μ_{G,λ}, there is a coupling between μ̂_λ and μ_{G,λ} such that the probability they disagree is at most ξ. Then to prove Theorem 4 we consider a third failure condition: that during any of the calls to a sampling algorithm for any μ̂_λ the output differs from what would have been given by μ_{G,λ} under this coupling. Since we make at most CN log n calls to such sampling algorithms, provided ξ ≤ ε/(2CN log n) the probability of any failure of this kind is at most ε/

2. Together with the above proof for samples distributed exactlyaccording to µ G,λ which successfully returns uniform samples from I k ( G )with probability 1 − ε/

2, we have now shown the existence of a sampler thatwith probability 1 − ε returns uniform samples from I k ( G ), and makes atmost CN log n calls to a ε/ (2 CN log n )-approximate sampler for µ G,λ (atvarious values of λ ). Interpreting this in terms of total variation distance,this means we have an ε -approximate sampler for the uniform distributionon I k ( G ) with running time O ( N log n · T ( n, ε )). (cid:3) Approximate counting via sampling.

Given a graph G = (V, E) on n vertices and j ≥ 0, let f_j(G) = (j + 1) i_{j+1}(G)/i_j(G). This f_j(G) has an interpretation as the expected free volume over a uniform random independent set J ∈ I_j(G), that is, f_j = E|V \ (J ∪ N(J))|. This holds because each vertex in V \ (J ∪ N(J)) can be added to J to make an independent set of size j + 1, and each such set is counted j + 1 times in this way. Then by a simple telescoping product we have

(4) i_k(G) = ∏_{j=0}^{k−1} f_j(G)/(j + 1),

and hence if for 0 ≤ j ≤ k − 1 we can compute a relative ε/k-approximation to f_j in time polynomial in n and 1/ε, then we can obtain a relative ε-approximation to i_k(G) in time polynomial in n and 1/ε. By the definition of f_j as an expectation over a uniform random independent set of size j, we can use an efficient sampling scheme for this distribution to approximate f_j, which is provided by Theorem 4. That is, by repeatedly sampling independent sets of size j approximately uniformly and recording the free volume we can approximate the expected free volume f_j(G), and hence the corresponding term of the product in (4). Doing this for all 0 ≤ j ≤ k − 1 yields an approximation to i_k(G). This scheme is an example of simulated annealing, which can be used as a general technique for obtaining approximation algorithms from approximate sampling algorithms. For more details, see e.g. [19, 26]. Here the integer j is playing the role of inverse temperature, and we approximate i_k(G) by estimating f_j(G) (by sampling from I_j(G)) with the cooling schedule j = 0, 1, . . . , k − 1. Suppose that for each 0 ≤ j ≤ k − 1 a hypothetical algorithm with probability at least 1 − δ′ returns a relative ε/k-approximation t̂_j to f_j(G)/(j + 1) in time T′. Then (4) implies that with probability at least 1 − kδ′, the product ı̂_k = ∏_{j=0}^{k−1} t̂_j is a relative ε-approximation to i_k(G), and this takes time kT′ to compute.
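The free-volume identity and the telescoping product (4) can be checked by brute force on a tiny example. The following sketch is our own illustration, not from the paper; the graph C_5 and the helper names are ours. It verifies that the average free volume over I_j(G) equals (j + 1) i_{j+1}(G)/i_j(G), and that the product recovers i_k(G).

```python
from itertools import combinations

def independent_sets_of_size(adj, j):
    """All independent sets of size j in a graph given as an adjacency dict."""
    return [set(s) for s in combinations(range(len(adj)), j)
            if all(v not in adj[u] for u, v in combinations(s, 2))]

def free_volume(adj, J):
    """|V \\ (J ∪ N(J))|: the number of vertices that can be added to J."""
    blocked = set(J)
    for u in J:
        blocked |= adj[u]
    return len(adj) - len(blocked)

# The 5-cycle C_5 as an adjacency dict.
adj = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}

# f_j(G) = (j+1) i_{j+1}(G)/i_j(G) equals the average free volume over I_j(G).
for j in range(2):
    Ij = independent_sets_of_size(adj, j)
    avg_fv = sum(free_volume(adj, J) for J in Ij) / len(Ij)
    ratio = (j + 1) * len(independent_sets_of_size(adj, j + 1)) / len(Ij)
    assert abs(avg_fv - ratio) < 1e-9

# The telescoping product (4): i_k(G) = prod_{j<k} f_j(G)/(j+1).
k = 2
prod = 1.0
for j in range(k):
    Ij = independent_sets_of_size(adj, j)
    prod *= sum(free_volume(adj, J) for J in Ij) / len(Ij) / (j + 1)
assert round(prod) == len(independent_sets_of_size(adj, k))
```

In the actual algorithm the averages are of course estimated by sampling from I_j(G) rather than by enumeration.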
For the FPRAS in Theorem 1, it therefore suffices to design the hypothetical algorithm with δ′ = 1/(4k) and T′ polynomial in n and 1/ε. First, suppose that we have access to an exactly uniform sampler for I_j(G) for 0 ≤ j ≤ k − 1, but impose the smaller failure probability bound of δ′/2. Then, for each j, let t̂_j be the sample mean of m computations of |V \ (J ∪ N(J))|/(j + 1) where J is a uniform random independent set of size j. We note that as a random variable |V \ (J ∪ N(J))|/(j + 1) has a range of at most j∆/(j + 1) in a graph of maximum degree ∆ because 0 ≤ |N(J)| ≤ j∆, and for j ≤ k − 1 and

k ≤ αn < α_c(∆)n < (e/(1 + e)) · (n/∆),

we have

|V \ (J ∪ N(J))|/(j + 1) ≥ (n − j(∆ + 1))/(j + 1) ≥ ∆/e − 1.

Let S_j be the mean of m samples of |V \ (J ∪ N(J))|/(j + 1). Then, using that for ε′ ≤ 1 the bound |S_j − µ| ≤ ε′µ/2 suffices for S_j to be a relative ε′-approximation to µ, by Hoeffding's inequality, m ≥ Ω(ε^{−2} k² log(1/δ′)) = Ω(ε^{−2} k² log k) samples are sufficient to obtain the required approximation accuracy ε′ = ε/k with the required success probability 1 − δ′/2. Since we do not have an exact sampler, we use the approximate sampler obtained in this section with total variation distance δ′/(2m). Using the coupling between the exact and the approximate sampler that we used in the proof of Theorem 4, this suffices to obtain the required sampling accuracy with failure probability at most δ′. Recalling that k ≤ n, it is now simple to check that the running time of the entire annealing scheme is polynomial in n and 1/ε. This completes the proof of Theorem 1 (a).

2.3. Triangle-free graphs.

Here we prove Theorem 2. We use the following lower bound on the occupancy fraction of triangle-free graphs.

Theorem 10 ([11]). For every δ > 0, there is ∆_0 large enough so that for every ∆ ≥ ∆_0, and every triangle-free G ∈ G_∆,

α_G(λ_c(∆) − 1/∆²) ≥ (1 − δ)/∆.

This statement follows from [11, Theorem 3] and some asymptotic analysis of the bound for λ = λ_c(∆) − 1/∆² as ∆ → ∞. Now the algorithm for Theorem 2 is essentially the same as for Theorem 1, but since we assume the graph G is triangle-free we can use a stronger lower bound on the occupancy fraction than Theorem 6. Let δ > 0 and α < (1 − δ)/∆ as in Theorem 2. Then Theorem 10 means that for ∆ ≥ ∆_0 and any triangle-free graph G ∈ G_∆ we have

α_G(λ_c(∆) − 1/∆²) ≥ (1 − δ)/∆ > α.

But occupancy fractions are continuous and strictly increasing, so with λ* = λ_c(∆) − 1/∆² there exists λ ∈ (0, λ*] such that k = nα_G(λ), as in the proof of Lemma 5 but permitting larger α. The analysis of the algorithm can then proceed exactly as in the proofs of Lemma 5 and Theorem 1.

3. Hardness

To prove hardness we will use the notion of an 'approximation-preserving reduction' from [13]. We reduce the problem of approximating the hard-core partition function Z_G(λ) on a ∆-regular graph G, which we recall is hard for λ > λ_c(∆) (see [16, 28]), to the problem of approximating i_k(G′) for a ∆-regular graph G′ that can be constructed in time polynomial in the size of G. In particular, we show that it suffices to find an ε/2-approximation to i_k(G′) in order to obtain an ε-approximation to Z_G(λ). Let IS(α, ∆) be the problem of computing i_{⌊αn⌋}(G) for a ∆-regular graph G on n vertices. Let HC(λ, ∆) be the problem of computing Z_G(λ) for a ∆-regular graph G.

Theorem 11.

For every ∆ ≥ 3 and α ∈ (α_c(∆), 1/2), there exists λ > λ_c(∆) so that there is an approximation-preserving reduction from HC(λ, ∆) to IS(α, ∆).

Theorem 11 immediately implies the hardness part of Theorem 1, as the results of [16, 28] show that there is no FPRAS for HC(λ, ∆) for any λ > λ_c(∆) unless NP=RP.

Proof of Theorem 11.

Fix ∆ ≥ 3, and let α ∈ (α_c(∆), 1/2) be given. We will construct a ∆-regular graph H on n_H vertices such that for some value λ ∈ (λ_c(∆), ∞) we have

(5) α_H(λ) = α.

Our reduction is then as follows: given a ∆-regular graph G on n vertices and ε > 0, let G′ be the disjoint union of G with H^{(r)}, the graph consisting of r disjoint copies of H, with r = ⌈C∆n²/ε⌉ for some absolute constant C. Let N = |V(G′)| = n + r n_H. We will prove that

(6) e^{−ε/2} · i_k(G′)/i_k(H^{(r)}) ≤ Z_G(λ) ≤ e^{ε/2} · i_k(G′)/i_k(H^{(r)}),

where k = ⌊αN⌋. Since G′ can be constructed and i_k(H^{(r)}) computed in time polynomial in n, this provides the desired approximation-preserving reduction. What remains is to construct the graph H satisfying (5) and then to prove (6).

Constructing H. The graph H = H_{a,b} will consist of the union of a copies of the complete bipartite graph K_{∆,∆} and b copies of the clique K_{∆+1}. Clearly H is ∆-regular. We can compute

α_{H_{a,b}}(λ) = [2a∆ · λ(1 + λ)^{∆−1}/(2(1 + λ)^∆ − 1) + b(∆ + 1) · λ/(1 + (∆ + 1)λ)] / (2a∆ + b(∆ + 1)).

Since the occupancy fraction of any graph is a strictly increasing function of λ, α_{K_{∆+1}}(λ_c(∆)) = α_c(∆), and lim_{λ→∞} α_{K_{∆,∆}}(λ) = 1/2, we see that there exist integers a, b ≥ 0 and λ > λ_c(∆) so that α_{H_{a,b}}(λ) = α. A given pair (a, b) provides a suitable H_{a,b} when

α_{H_{a,b}}(λ_c(∆)) < α < lim_{λ→∞} α_{H_{a,b}}(λ) = (a∆ + b)/(2a∆ + b(∆ + 1)),

and hence it can be shown that for all ∆ ≥ 3 one of a bounded number of pairs suffices for (a, b), and a suitable pair is easy to find efficiently. This provides us with the desired graph H. From here on, fix these values a, b, λ and let n_H = 2a∆ + b(∆ + 1).
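The choice of λ in (5) can be made numerically: the occupancy fraction of H_{a,b} is an explicit increasing function of λ, so a binary search suffices. The following sketch is our own illustration; the formulas follow the expression for α_{H_{a,b}}(λ) above, and the target value in the example is an arbitrary choice of ours.

```python
def occ_biclique(lam, D):
    """Occupancy fraction of K_{D,D}: independent sets lie in one side,
    so Z = 2(1+lam)^D - 1 and E|I| = 2*D*lam*(1+lam)^(D-1) / Z."""
    Z = 2 * (1 + lam) ** D - 1
    return 2 * D * lam * (1 + lam) ** (D - 1) / Z / (2 * D)

def occ_clique(lam, D):
    """Occupancy fraction of K_{D+1}: E|I|/(D+1) = lam / (1 + (D+1)*lam)."""
    return lam / (1 + (D + 1) * lam)

def occ_H(lam, D, a, b):
    """Occupancy fraction of a copies of K_{D,D} plus b copies of K_{D+1}."""
    nH = 2 * a * D + b * (D + 1)
    return (2 * a * D * occ_biclique(lam, D)
            + b * (D + 1) * occ_clique(lam, D)) / nH

def solve_lambda(alpha, D, a, b, lo=0.0, hi=1e6, iters=100):
    """Binary search for lam with occ_H(lam) = alpha, using monotonicity
    of the occupancy fraction in lam. Assumes alpha lies in the range
    (occ_H(lo), occ_H(hi))."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if occ_H(mid, D, a, b) < alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Example: D = 3, one copy of each gadget; the limiting occupancy is
# (a*D + b)/(2*a*D + b*(D+1)) = 4/10, so alpha = 0.3 is attainable.
lam = solve_lambda(0.3, 3, 1, 1)
assert abs(occ_H(lam, 3, 1, 1) - 0.3) < 1e-9
```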

Proving (6). We now form G′ by taking the union of G (a ∆-regular graph on n vertices) and r copies of H. Let N = n + r n_H be the number of vertices of G′, and write k = ⌊αN⌋. Let H^{(r)} be the union of r copies of H. We can write:

i_k(G′) = ∑_{j=0}^{n} i_j(G) i_{k−j}(H^{(r)}) = i_k(H^{(r)}) ∑_{j=0}^{n} i_j(G) · i_{k−j}(H^{(r)})/i_k(H^{(r)}).

Now to prove (6) it suffices to show that for r ≥ C∆n²/ε and 0 ≤ j ≤ n, we have

(7) e^{−ε/2} λ^j ≤ i_{k−j}(H^{(r)})/i_k(H^{(r)}) ≤ e^{ε/2} λ^j.

We have the exact formula (for any 0 ≤ j ≤ k)

i_{k−j}(H^{(r)}) = Z_{H^{(r)}}(λ) λ^{−(k−j)} P_{H^{(r)},λ}(|I| = k − j),

and so

i_{k−j}(H^{(r)})/i_k(H^{(r)}) = λ^j · P_{H^{(r)},λ}(|I| = k − j)/P_{H^{(r)},λ}(|I| = k),

where P_{H^{(r)},λ} denotes probabilities with respect to an independent set I drawn according to the hard-core model on H^{(r)} at fugacity λ. It is then enough to show

e^{−ε/2} ≤ P_{H^{(r)},λ}(|I| = k − j)/P_{H^{(r)},λ}(|I| = k) ≤ e^{ε/2}.

This will follow from a local central limit theorem (e.g. [17]) since |I| is the sum of r i.i.d. random variables and the fact that E_{H^{(r)},λ}|I| is close to both k and k − j. The following theorem gives us what we need.

Theorem 12 (Gnedenko [17]). Let X_1, . . . , X_r be i.i.d. integer-valued random variables with mean µ and variance σ², and suppose that the support of X_1 includes two consecutive integers. Let S_r = X_1 + · · · + X_r. Then

P(S_r = k) = (1/√(2πrσ²)) exp[−(k − rµ)²/(2rσ²)] + o(r^{−1/2}),

with the error term o(r^{−1/2}) uniform in k.

This immediately implies that with µ and σ² the mean and variance of the size of an independent set drawn from the hard-core model on H at fugacity λ,

P_{H^{(r)},λ}(|I| = k − j)/P_{H^{(r)},λ}(|I| = k) = (e^{−[j² − 2(k − rµ)j]/(2rσ²)} + o(e^{(k − rµ)²/(2rσ²)}))/(1 + o(e^{(k − rµ)²/(2rσ²)})).
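As a sanity check on how Theorem 12 is used, the following sketch (our own illustration, with a toy two-valued single-copy distribution standing in for the hard-core model on H) computes the exact distribution of |I| over r independent copies by convolution and compares point-probability ratios with the Gaussian prediction e^{−[j² − 2(k − rµ)j]/(2rσ²)}.

```python
import math

# Toy single-copy distribution of |I| standing in for the hard-core model on H:
# sizes 0 and 1 with unnormalized weights 1 and 3*lam (e.g. a single triangle).
lam = 0.5
Z = 1 + 3 * lam
p = [1 / Z, 3 * lam / Z]          # P(|I| = 0), P(|I| = 1) for one copy
mu = p[1]                          # single-copy mean
sigma2 = p[1] * (1 - p[1])         # single-copy variance

def convolve(p, q):
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# Exact distribution of |I| over r independent copies.
r = 400
dist = [1.0]
for _ in range(r):
    dist = convolve(dist, p)

# Compare point-probability ratios with the Gaussian prediction of Theorem 12.
k = round(r * mu) + 10
for j in range(1, 5):
    ratio = dist[k - j] / dist[k]
    gauss = math.exp(-(j * j - 2 * (k - r * mu) * j) / (2 * r * sigma2))
    assert abs(ratio / gauss - 1) < 0.05
```

Here the error of the Gaussian prediction is well below a percent; the proof needs only that it is o(1) as r grows, uniformly over the relevant k and j.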
It therefore suffices to show that for large enough r, namely r ≥ C∆n²/ε, we can make [j² − 2(k − rµ)j]/(2rσ²) small compared to ε and show that (k − rµ)²/(2rσ²) is bounded above by some absolute constant. Note that µ = αn_H, and by Lemma 9 we have for all ∆ ≥ 3 (and λ, a, b made according to our conditions) that σ² ≥ c/∆ for an absolute constant c > 0. Since k = ⌊αN⌋ = ⌊αn + rαn_H⌋, we then have (k − rµ)² ≤ α²n² < n², and hence

(k − rµ)²/(rσ²) ≤ C′∆n²/r,

where C′ is an absolute constant. Now since 0 ≤ j ≤ n we also have

|[j² − 2(k − rµ)j]/(2rσ²)| ≤ C′∆n²/r.

This means that provided we take C to be a large enough absolute constant and r ≥ C∆n²/ε, we have (6) as required. □

3.1. Triangle-free graphs.

The proof of hardness for triangle-free graphs is the same, but we replace K_{∆+1} with a (constant-sized) triangle-free random regular graph in the construction. Bhatnagar, Sly, and Tetali [4] showed that the local distribution of the hard-core model on the random regular graph converges to that of the unique translation-invariant hard-core measure on the infinite regular tree for a range of λ including λ = λ_c(∆). This means that if K is a random ∆-regular graph on n_0 vertices and α_{T_∆} denotes the occupancy fraction of the unique translation-invariant hard-core measure on the infinite ∆-regular tree (see [4, 11]), we have with probability 1 − o_{n_0}(1),

α_K(λ_c(∆)) = α_{T_∆}(λ_c(∆)) + o_{n_0}(1) = (1 + o_{n_0,∆}(1))/∆,

where o_{n_0}(1) → 0 as n_0 → ∞ and o_{n_0,∆}(1) → 0 as n_0 and ∆ tend to infinity. Thus, for fixed δ ∈ (0, 1) there exist n_0 = n_0(δ) and ∆_0 = ∆_0(δ) such that with probability at least 1 − δ a random ∆-regular graph K on n_0 vertices has α_K(λ_c(∆)) ≤ (1 + δ)/∆. This means that in time bounded by a function of δ an exhaustive search over ∆-regular graphs on n_0 vertices must yield a triangle-free K with the property α_K(λ_c(∆)) ≤ (1 + δ)/∆. Now we replace K_{∆+1} with this ∆-regular graph K in the proof above, which for ∆ ≥ ∆_0 allows us to work with any α ∈ ((1 + δ)/∆, 1/2) by the above argument. To finish the proof, we require that approximating Z_G(λ) is hard for ∆-regular triangle-free graphs G when λ > λ_c(∆). This follows directly from the proof of Sly and Sun [28], as their gadget which shows hardness for ∆-regular graphs contains no triangles. Thus, we have the following analogue of Theorem 11, where we let IS′(α, ∆) be the problem of computing i_{⌊αn⌋}(G) for a ∆-regular triangle-free graph G on n vertices.

Theorem 13.

Given δ > 0 there exists ∆_0 such that the following holds for all ∆ ≥ ∆_0. For every α ∈ ((1 + δ)/∆, 1/2), there exists λ > λ_c(∆) so that there is an approximation-preserving reduction from HC(λ, ∆) to IS′(α, ∆).

This implies Theorem 2 (b).
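The identity i_k(G′) = ∑_j i_j(G) i_{k−j}(H^{(r)}) behind both reductions simply says that an independent set in a disjoint union splits across the parts. A brute-force check on toy graphs (our own example, with a path for G and a triangle standing in for the gadget):

```python
from itertools import combinations

def ind_counts(adj):
    """Coefficients i_0, i_1, ... of the independence polynomial, brute force."""
    n = len(adj)
    counts = [0] * (n + 1)
    for size in range(n + 1):
        for s in combinations(range(n), size):
            if all(v not in adj[u] for u, v in combinations(s, 2)):
                counts[size] += 1
    return counts

def disjoint_union(adj1, adj2):
    """Disjoint union of two graphs given as adjacency dicts on 0..n-1."""
    n1 = len(adj1)
    out = {u: set(adj1[u]) for u in adj1}
    for u in adj2:
        out[u + n1] = {v + n1 for v in adj2[u]}
    return out

# G: a path on 3 vertices; H: a triangle.
G = {0: {1}, 1: {0, 2}, 2: {1}}
H = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
iG, iH = ind_counts(G), ind_counts(H)
iGH = ind_counts(disjoint_union(G, H))

# i_k(G ∪ H) = sum_j i_j(G) i_{k-j}(H): independent sets split across parts.
for k in range(len(iGH)):
    conv = sum(iG[j] * iH[k - j]
               for j in range(len(iG)) if 0 <= k - j < len(iH))
    assert iGH[k] == conv
```

The reduction exploits exactly this convolution structure, with the known coefficients of H^{(r)} concentrated sharply enough (via Theorem 12) that the ratio i_{k−j}(H^{(r)})/i_k(H^{(r)}) behaves like λ^j.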

References

[1] V. L. Alev and L. C. Lau. Improved analysis of higher order random walks and applications. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, pages 1198–1211, Chicago, IL, USA, June 2020. ACM. doi:10.1145/3357713.3384317.
[2] N. Anari, K. Liu, and S. Oveis Gharan. Spectral independence in high-dimensional expanders and applications to the hardcore model. In 2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS), pages 1319–1330, Nov. 2020. doi:10.1109/FOCS46700.2020.00125.
[3] A. Barvinok. Combinatorics and Complexity of Partition Functions, volume 30 of Algorithms and Combinatorics. Springer International Publishing, Cham, 2016. doi:10.1007/978-3-319-51829-9.
[4] N. Bhatnagar, A. Sly, and P. Tetali. Decay of correlations for the hardcore model on the d-regular random graph. Electronic Journal of Probability, 21, 2016. doi:10.1214/16-EJP3552.
[5] R. Bubley and M. Dyer. Path coupling: A technique for proving rapid mixing in Markov chains. In Proceedings 38th Annual Symposium on Foundations of Computer Science, pages 223–231, Miami Beach, FL, USA, 1997. IEEE Comput. Soc. doi:10.1109/SFCS.1997.646111.
[6] Z. Chen, K. Liu, and E. Vigoda. Optimal mixing of Glauber dynamics: Entropy factorization via high-dimensional expansion. arXiv preprint, Nov. 2020, arXiv:2011.02075.
[7] Z. Chen, K. Liu, and E. Vigoda. Rapid mixing of Glauber dynamics up to uniqueness via contraction. In 2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS), pages 1307–1318, Nov. 2020. doi:10.1109/FOCS46700.2020.00124.
[8] R. Curticapean, H. Dell, F. Fomin, L. A. Goldberg, and J. Lapinskas. A fixed-parameter perspective on #BIS. Algorithmica, 81(10):3844–3864, 2019.
[9] J. Cutler and A. Radcliffe. The maximum number of complete subgraphs in a graph with given maximum degree. Journal of Combinatorial Theory, Series B, 104:60–71, Jan. 2014. doi:10.1016/j.jctb.2013.10.003.
[10] E. Davies, M. Jenssen, and W. Perkins. A proof of the Upper Matching Conjecture for large graphs. arXiv preprint, Apr. 2020, arXiv:2004.06695.
[11] E. Davies, M. Jenssen, W. Perkins, and B. Roberts. On the average size of independent sets in triangle-free graphs. Proc. Amer. Math. Soc., 146(1):111–124, July 2017. doi:10.1090/proc/13728.
[12] E. Davies, M. Jenssen, W. Perkins, and B. Roberts. Tight bounds on the coefficients of partition functions via stability. Journal of Combinatorial Theory, Series A, 160:1–30, Nov. 2018. doi:10.1016/j.jcta.2018.06.005.
[13] M. Dyer, L. A. Goldberg, C. Greenhill, and M. Jerrum. The relative complexity of approximate counting problems. Algorithmica, 38(3):471–500, 2004.
[14] T. Eden, D. Ron, and C. Seshadhri. On approximating the number of k-cliques in sublinear time. SIAM Journal on Computing, 49(4):747–771, Jan. 2020. doi:10.1137/18M1176701.
[15] F. Eisenbrand and F. Grandoni. On the complexity of fixed parameter clique and dominating set. Theoretical Computer Science, 326(1):57–67, Oct. 2004. doi:10.1016/j.tcs.2004.05.009.
[16] A. Galanis, D. Štefankovič, and E. Vigoda. Inapproximability of the partition function for the antiferromagnetic Ising and hard-core models. Combinatorics, Probability and Computing, 25(4):500–559, July 2016. doi:10.1017/S0963548315000401.
[17] B. V. Gnedenko. On a local limit theorem of the theory of probability. Uspehi Matem. Nauk (N. S.), 3(3(25)):187–194, 1948.
[18] C. Greenhill. The complexity of counting colourings and independent sets in sparse graphs and hypergraphs. Computational Complexity, 9(1):52–72, Nov. 2000. doi:10.1007/PL00001601.
[19] M. Jerrum and A. Sinclair. The Markov chain Monte Carlo method: An approach to approximate counting and integration. In Approximation Algorithms for NP-Hard Problems. PWS Pub. Co., 1997.
[20] F. P. Kelly. Stochastic models of computer communication systems. Journal of the Royal Statistical Society. Series B (Methodological), 47(3):379–395, 1985.
[21] J. Lebowitz, B. Pittel, D. Ruelle, and E. Speer. Central limit theorems, Lee–Yang zeros, and graph-counting polynomials. Journal of Combinatorial Theory, Series A, 141:147–183, July 2016. doi:10.1016/j.jcta.2016.02.009.
[22] M. Michelen and J. Sahasrabudhe. Central limit theorems and the geometry of polynomials. arXiv preprint, Aug. 2019, arXiv:1908.09020.
[23] M. Michelen and J. Sahasrabudhe. Central limit theorems from the roots of probability generating functions. Advances in Mathematics, 358:106840, Dec. 2019. doi:10.1016/j.aim.2019.106840.
[24] V. Patel and G. Regts. Deterministic polynomial-time approximation algorithms for partition functions and graph polynomials. SIAM J. Comput., 46(6):1893–1919, Jan. 2017. doi:10.1137/16M1101003.
[25] H. Peters and G. Regts. On a conjecture of Sokal concerning roots of the independence polynomial. Michigan Math. J., 68(1):33–55, Apr. 2019. doi:10.1307/mmj/1541667626.
[26] A. Sinclair and M. Jerrum. Approximate counting, uniform generation and rapidly mixing Markov chains. Information and Computation, 82(1):93–133, July 1989. doi:10.1016/0890-5401(89)90067-9.
[27] A. Sly. Computational transition at the uniqueness threshold. In 2010 IEEE 51st Annual Symposium on Foundations of Computer Science, pages 287–296, Las Vegas, NV, USA, Oct. 2010. IEEE. doi:10.1109/FOCS.2010.34.
[28] A. Sly and N. Sun. Counting in two-spin models on d-regular graphs. Ann. Probab., 42(6):2383–2416, Nov. 2014. doi:10.1214/13-AOP888.
[29] L. G. Valiant. The complexity of enumeration and reliability problems. SIAM J. Comput., 8(3):410–421, Aug. 1979. doi:10.1137/0208032.
[30] D. Weitz. Counting independent sets up to the tree threshold. In Proceedings of the Thirty-Eighth Annual ACM Symposium on Theory of Computing, STOC '06, pages 140–149, Seattle, WA, USA, 2006. ACM Press. doi:10.1145/1132516.1132538.
[31] Y. Zhao. Extremal regular graphs: independent sets and graph homomorphisms. The American Mathematical Monthly, 124(9):827–843, 2017. doi:10.4169/amer.math.monthly.124.9.827.

Department of Computer Science, University of Colorado Boulder, USA

Email address: [email protected]

Department of Mathematics, Statistics, and Computer Science, University of Illinois Chicago, USA

Email address:

Submitted on 9 Feb 2021.
