Dynamic Complexity of Expansion
Samir Datta [email protected]
Anuj Tawari [email protected]
Yadu Vasudev [email protected]
August 14, 2020
Abstract
Dynamic Complexity was introduced by Patnaik and Immerman [PI97] (see also [DST95]). It has seen a resurgence of interest in the recent past; see [DHK14, ZS15, MVZ16, BJ17, Zeu17, DKM+20] for some representative examples. The use of linear algebra has been a notable feature of some of these papers. We extend this theme to show that the gap version of spectral expansion in bounded degree graphs can be maintained in the class
DynAC⁰ (also known as DynFO, for domain independent queries) under batch changes (insertions and deletions) of O(log n / log log n) many edges. The spectral graph theoretic material of this work is based on the paper by Kale and Seshadhri [KS11]. Our primary technical contribution is to maintain up to logarithmic powers of the transition matrix of a bounded degree undirected graph in DynAC⁰.

1 Introduction

Computational complexity conventionally deals with problems in which the entire input is given to begin with and does not change with time. However, in practice, the input is not always static and may undergo frequent changes with time. For instance, one may want to efficiently update the result of a query under insertion or deletion of tuples into a database. In such a scenario, recomputing the solution from scratch after every update may be unnecessarily computation intensive. In this work, we deal with problems whose solution can be maintained by one of the simplest possible models of computation: polynomial size Boolean circuits of bounded depth. The resulting complexity class
DynAC⁰ is equivalent to Pure SQL in computational power when we think of graphs (and other structures) encoded as a relational database. It is also surprisingly powerful, as witnessed by the result showing that DynAC⁰ is strong enough to maintain transitive closure in directed graphs [DKM+18].

In the dynamic (graph) model we start with an empty graph on a fixed set of vertices. The graph evolves by the insertion/deletion of a single edge in every time step, and some property, which can be periodically queried, has to be maintained by an algorithm. The dynamic complexity of the algorithm is the static complexity for each step. If the updates and the queries can be executed in a static class C, the dynamic problem is said to belong to DynC. In this paper, C is often a complexity class defined in terms of bounded depth circuits such as AC⁰ or TC⁰, where AC⁰ is the class of polynomial size constant depth circuits with AND and OR gates of unbounded fan-in; TC⁰ circuits may additionally have MAJORITY gates. We encourage the reader to refer to any textbook (e.g. Vollmer [Vol99]) for precise definitions of the standard circuit complexity classes. The model was first introduced by Patnaik and Immerman [PI97] (see also Dong, Su, and Topor [DST95]), who defined the complexity class DynFO, which is essentially equivalent to the uniform version of DynAC⁰. The circuit versions DynAC⁰ and DynTC⁰ were also investigated by Hesse and others [Hes03, DHK14].

The archetypal example of a dynamic problem is maintaining reachability ("is there a directed path from s to t?") in a digraph. This problem has recently [DKM+18] been shown to be maintainable in the class DynAC⁰, a class where edge insertions, deletions and reachability queries can be maintained using AC⁰ circuits. This answers an open question from [PI97].

We will have occasion to refer to the (dlogtime-)uniform versions of these circuit classes, and we adopt the convention that, whenever unspecified, we mean the uniform version. More precisely, the two classes are equivalent for all domain independent queries, i.e. queries for which the answer is independent of the size of the domain of the structure under question. We will actually conflate
DynFO with
DynFO(<, +, ×), the class in which a linear order (<) and the corresponding addition (+) and multiplication (×) relations are built in. We do so because we need to deal with multiple updates, where the presence of these relations is particularly helpful; see the discussion in [DMVZ18]. We do not need to care about these subtler distinctions when we deal with DynAC⁰ in any case.

Reachability can, in fact, be maintained even under batch changes of size O(log n / log log n) (see [DMVZ18]). In this work, we study expansion testing under batch changes of similar size.

In this paper we study the dynamic complexity of checking the expansion of a bounded-degree graph under edge updates. A bounded-degree graph G is an expander if its second-largest eigenvalue λ_G is bounded away from 1. Expanders are a very useful class of graphs with a variety of applications in algorithms and computational complexity, for instance in derandomization. This is due to the many useful properties of an expander, such as the fact that an expander has no small cuts and that random walks on an expander mix well.

Our aim is to dynamically maintain an approximation of the second largest eigenvalue of a dynamically changing graph in DynAC⁰. We show that for a graph G, we can answer if the second largest eigenvalue of the graph is less than a parameter α (meaning that G is a good expander) or if λ_G > α′, where α′ is polynomially related to α. The study of a related promise problem of testing expansion was initiated in the sparse model of property testing by Goldreich and Ron [GR11], and testers for spectral expansion by Kale and Seshadhri [KS11] and vertex expansion by Czumaj and Sohler [CS10] are known.

Our algorithm is borrowed from the property testing algorithm of [KS11], where it is shown that if λ_G ≤ α, then random walks of logarithmic length from every vertex in G will converge to the uniform distribution.
On the contrary, if λ_G ≥ α′ then this is not the case for at least one vertex in G. The key technical contribution of the paper is a method to maintain the logarithmic powers of the normalized adjacency matrix of a dynamic graph when there are few edge modifications.

The Kale-Seshadhri algorithm [KS11] estimates the collision probability of several logarithmically long random walks, using the lazy transition matrix, from a small set of randomly chosen vertices. It uses these to give a probabilistically robust test for the gap version of conductance. We would like to extend this test to the dynamic setting where the graph evolves slowly by insertion/deletion of a small number of edges. Moreover, in our dynamic complexity setting the metric by which to measure the algorithm is not the sequential time but the parallel time using polynomially many processors. Thus it suffices to maintain the collision probabilities in constant parallel time with polynomially many processors to be able to solve the gap version of conductance in
DynAC⁰. This brings us to our main result:

Theorem 1. (Dynamic Expansion test)
Given the promise that the graph remains bounded degree (degree at most d) after every round of updates, Expansion testing can be maintained in DynAC⁰ under O(log n / log log n) changes.

In other words, we need to maintain the generating function of at most logarithmic length walks of a transition matrix when the matrix is changed in almost logarithmically many edges in one step. The algorithm is based on a series of reductions, from the above problem ultimately to two problems: the integer determinant of an almost logarithmic sized matrix modulo a small prime, and the interpolation of a rational polynomial of polylogarithmic degree. Each reduction is in the class AC⁰. Moreover, if there are errors in the original data, the errors do not increase after a step. On the other hand, the entries themselves lengthen in terms of the number of bits. To keep them in control we have to truncate the entries at every step, increasing the error. We can continue to use these values for a number of steps before the error grows too large. Then we use a matrix that contains the required generating function computed from scratch. Unfortunately, this "from scratch" computation takes logarithmically many steps, and by this time O(log n / log log n) changes have accumulated. Since we deal with almost logarithmically many changes in logarithmically many steps, we work at twice the speed: the AC⁰ circuits constructed will clear off two batches in one step and thus are of twice the height. Using this, we catch up with the current change in logarithmically many steps. Hence, we spawn a new circuit at every time step which will become useful logarithmically many steps later.

The crucial reductions are as follows:

• (Lemma 21) Dynamically maintaining the aforesaid generating function reduces to powering an almost logarithmic sized matrix of univariate polynomials to logarithmic powers, by adapting (the proof of) a method by Hesse [Hes03]. See Section 3 for a formal definition.
(O(log n / log log n) to be precise; for us the term "almost logarithmic" is a shorthand for this.)

• (Lemma 24) Powering an almost logarithmic sized matrix to a logarithmic power reduces to powering a collection of similar sized matrices, but only to an almost logarithmic power, using the Cayley-Hamilton theorem. This further requires the computation of the characteristic polynomial, via an almost logarithmic sized determinant and interpolation.

• (Lemma 25) To compute M^i for i smaller than the size of M, we consider the power series (I − zM)^{−1} and show that we can use interpolation and small determinants (of triangular matrices) to read off the small powers of M from it.

• (Lemma 28) We reduce the rational determinant to the integer determinant modulo p. We invoke a known result from [DMVZ18] to place this in AC⁰.

Since interpolation of polylogarithmic degree polynomials is in AC⁰, this rounds off the reductions and the outline of the proof of:

Theorem 2. (Main technical result: informal)
Let T be an n × n dynamic transition matrix in which there are at most O(log n / log log n) changes in a step. Then we can maintain in DynAC⁰ a matrix T̃ such that |T̃ − T^{log n}| < n^{−ω(1)}.

The conductance of a graph, also referred to as the uniform sparsest cut in many works, is an important metric of the graph. Many algorithms have been designed for approximating the uniform sparsest cut in the static setting [ACL07, She09, ST13, Mad10, KRV09]. This naturally raises the question of maintaining an approximate value of the conductance in a dynamic graph subject to frequent edge changes.

In the Ph.D. thesis of Goranci [Gor19], a sequential dynamic incremental algorithm (only edge insertions allowed) with polylogarithmic approximation and sublinear worst-case update time is given. In [GRST20], the authors give a fully dynamic algorithm (both edge insertions and deletions allowed) with slightly sublinear approximation and polylogarithmic amortized update time. Our work, on the other hand, gives a fully dynamic algorithm in a parallel setting. Another difference is that the algorithms in [Gor19, GRST20] output an approximate value of the conductance, while our algorithm only solves the gap version.

There has also been significant related work investigating the dynamic complexity of problems like rank, reachability and matching under single edge changes [Hes03, DHK14, DKM+18] and under batch changes [DMVZ18, DKM+20].

2 Preliminaries

We start by putting down a convention we have already been using: we refer by almost logarithmic (in n) to a function that grows like O(log n / log log n).

The primary circuit complexity class we will deal with is AC⁰, consisting of languages recognisable by a Boolean circuit family with ∧, ∨-gates of unbounded fan-in along with ¬-gates of fan-in one, where the size of the circuit is a polynomial in the length of the input and, crucially, the depth of the circuit is a constant (independent of the input). Since we are more interested in providing AC⁰ upper bounds, our circuits will be Dlogtime-uniform. There is a close connection between uniform AC⁰ and the first order logic class FO, to the extent that [BIS90] show that the version FO(≤, +, ×) is essentially identical to uniform AC⁰. We will henceforth not distinguish between the two.

The goal of a dynamic program is to answer a given query on an input graph under changes that insert or delete edges. We assume that the number of vertices in the graph is fixed and that initially the number of edges in the graph is zero. Of course, we assume an encoding of the graph as a string; any natural encoding works.

The complexity of a dynamic program is measured by a complexity class C (such as AC⁰), and those queries which can be answered constitute the class DynC. In other words, the (circuit) class C can handle each update given some polynomially many stored bits. Traditionally the number of changes per step was fixed to one, but recently [DMVZ18, DKM+20] this has been extended to batch changes. In this work we will allow nonconstantly many batch changes (of cardinality O(log n / log log n)).

One technique which has proved important for dealing with batch changes is a form of pipelining suited for circuit/logic classes called "muddling" [DMS+
19, SVZ18]. Suppose we have a static parallel circuit A of nonconstant depth that can process the input to a form from which the query is easily answerable, and in addition we have a dynamic program P, consisting of constant depth circuits, that can maintain the query but whose results are guaranteed to be correct for only a small number of batches. This situation may arise if, e.g., the dynamic program uses an approximation in computing the result and the ensuing errors add up across several steps, making the results useless after a while. The static circuit, on the other hand, always does the precise computation but takes too much depth.

We need to construct a circuit that is of constant depth per batch of changes but is guaranteed to work for arbitrarily many batches. The idea is to use a copy of the circuit A to process the current input to a form where queries can be answered easily. However, by the time this happens the input is stale, in that d(A) (the depth of the circuit) times the batch size many changes are not included. Now we use the dynamic program P to handle a leftover batch and the currently arriving batch in one unit of time over the next d(A) time steps. This allows the program to catch up with the backlog and deliver the correct result at the 2d(A)-th time step. Since the total depth of the circuit involved is O(d(A)), the average depth remains constant. By starting a new static circuit at every time step, each of which will deliver the then-correct result after 2d(A) steps, we are done. To summarise, in our particular case (an adapted and modified version of the "muddling" lemmas from [DMS+
19, DMVZ18]) we have the following:
Lemma 3.
Let M be a matrix with b = O(log n)-bit rational entries. Suppose we have two routines available:

• An algorithm A that can compute M^{log n} by an AC circuit of depth O(log n).

• A dynamic program P, specified by an AC⁰ circuit, that can approximately maintain M^{log n} under batch changes of size l = O(log n / log log n) for Ω(log n) batches.

Then, we have an AC⁰ circuit (per batch) that will approximately maintain M^{log n} under batch changes of size l for arbitrarily many batches.

Proof. Suppose the circuit for A has depth c_A log n and the circuit for P has depth c_P. We will show how to construct a circuit C_t at time t, of depth d = (c_A + 2c_P) log n = c log n, that computes the value of M^{log n} which is correct at time t + log n. At any time there are log n circuits extant, viz. C_{t−log n+1}, C_{t−log n+2}, …, C_t, which will deliver the correct value of M^{log n} at times t+1, t+2, …, t+log n respectively. Since the size of each circuit C_i is polynomial in n, so is the size s_c(n) of c layers of C_i. Thus we can think of each layer of the overall circuit as consisting of c layers of each of C_{t−log n+1}, …, C_t, of total size s_c(n) log n per layer; i.e. it is an AC⁰ circuit.

Next we describe the algorithm A. Notice that two n × n matrices, with entries that are rationals of at most polynomially in n many bits each, can be multiplied in TC⁰ (see e.g. [HAB02, Vol99]). Hence raising a matrix A to the (log n)-th power can be done by TC⁰ circuits stacked to depth O(log log n) (by repeated squaring). We also know that TC⁰ is a subset of NC¹ [Vol99], which in turn has AC circuits of depth O(log n / log log n): just cut up the circuit into NC¹ subcircuits of depth log log n and expand each subcircuit into a DNF formula of size at most 2^{2^{log log n}} = n; thus overall we get a depth reduction by a factor of log log n at the expense of a linear blowup in size. Now by substituting these AC circuits into the TC⁰ circuits of the repeated squaring we get an AC circuit of depth O(log n). Thus we get:

Lemma 4.
Let A be an n × n matrix with rational entries, each represented with n bits of precision. Then computing A^ℓ, where ℓ = O(log n), is in AC¹.

In this section, we present some basic results about logarithmic space computations which will be useful to us. First, we show that reachability in graphs can be decided by bounded depth Boolean circuits of subexponential size.
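The repeated-squaring scheme behind Lemma 4 can be illustrated with a short sequential sketch; exact rationals stand in for the fixed-precision entries, and ordinary Python loops stand in for the TC⁰/AC circuitry (the Fibonacci example matrix is our own toy instance):

```python
from fractions import Fraction

def mat_mul(A, B):
    """Multiply two square matrices with exact rational entries."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, ell):
    """Compute A^ell with O(log ell) multiplications by repeated squaring."""
    n = len(A)
    result = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    square = [row[:] for row in A]
    while ell > 0:
        if ell & 1:                       # current bit of the exponent
            result = mat_mul(result, square)
        square = mat_mul(square, square)  # one level of squaring
        ell >>= 1
    return result

# Toy check: powers of the Fibonacci matrix.
A = [[Fraction(1), Fraction(1)], [Fraction(1), Fraction(0)]]
assert mat_pow(A, 10) == [[Fraction(89), Fraction(55)],
                          [Fraction(55), Fraction(34)]]
```

The O(log ℓ) multiplication levels correspond to the O(log log n) squaring stages in the proof above when ℓ = O(log n).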
Lemma 5.
Given an input graph G with |V(G)| = n and two fixed vertices s and t, there is a circuit of depth d and size n^{n^{1/d}} which can decide if there is a path from s to t in G.

See [COST16], pg. 613 for a proof.

In the following, we denote by A^{≤l} the words in the language A that are of length at most l.

Lemma 6.
Suppose A ∈ L is a language. Then for every constant c > 0, A^{≤ log^c n} has an AC circuit of depth O(1) and size n^{O(1)}.

Proof. Undirected reachability is L-hard under first order reductions (Cook and McKenzie [CM87]). Hence A reduces to undirected reachability by first order reductions. Thus given N, we can construct, using first order formulas, an undirected graph G_N of size some N^k and two vertices s, t thereof, such that for all w of length at most N, w ∈ A iff s, t are connected in G_N. But Lemma 5 tells us that there exists a (very uniform) AC circuit of size N^{kN^{k/d}} and depth 2d that determines connectivity in G_N. Taking N = log^c n, the size of the circuit becomes 2^{kc log log n · log^{kc/d} n}. Now pick d = kc + 1; then the size becomes sublinear in n (because the exponent is sublogarithmic).

3 Expansion testing

In this section, we are interested in the problem of maintaining expansion in a dynamically updating bounded degree graph. For a degree-bounded graph G, let λ_G denote the second largest eigenvalue of the normalized adjacency matrix of G. First, we define the problem of interest, which we call Expansion Testing:

Definition 7. (Expansion testing) Given a graph G, degree bound d, and a parameter α, decide whether λ_G ≤ α or λ_G ≥ α′, where α′ = 1 − (1 − α)²/c₀ for a sufficiently large absolute constant c₀.

In this section, we aim to prove the following theorem:
Theorem 8. (Dynamic Expansion test)
Given the promise that the graph remains bounded degree (degree at most d) after every round of updates, Expansion testing can be maintained in DynAC⁰ under O(log n / log log n) changes.

Our algorithm is based on Kale and Seshadhri's work on testing expansion in the property testing model [KS11]. Our algorithm differs from theirs in that we are working on a dynamic graph, and the major technical challenge is an efficient way to maintain the powers of the normalized adjacency matrix. In this section, we describe the algorithm and its correctness. In the subsequent sections, we detail the method to update the power of the normalized adjacency matrix when a small number of entries change.

To prove the theorem, we will first look at the conductance of the graph G. For a vertex cut (S, S̄) with |S| ≤ n/2, the conductance of the cut is the probability that one step of the lazy random walk leaves the set S. We will denote by Φ_G(S) the conductance of the cut. Formally, Φ_G(S) = |E(S, S̄)| / (2d|S|). The conductance of the graph, Φ_G, is the minimum of Φ_G(S) over all vertex cuts (S, S̄). The following inequality between the conductance of a graph and the second largest eigenvalue will be useful in our analysis (see [HLW06]):

1 − 2Φ_G ≤ λ_G ≤ 1 − Φ_G²/2.

For a d-degree-bounded graph G, we will think of G as a 2d-regular graph where each vertex v ∈ V has 2d − d(v) self-loops.

The main idea behind the algorithm in [KS11] is to perform many lazy random walks of length ℓ = O(log n) from a fixed vertex s and count the number of pairwise collisions between the endpoints of these walks. A lazy random walk on a graph, from a vertex v, chooses a neighbor uniformly at random with probability 1/2d each and chooses to stay at v with probability 1 − d(v)/2d. We can compute exactly the probability that two different random walks starting at s collide at their endpoints by computing S_s = Σ_{u ∈ [n]} T^ℓ[s][u] · T^ℓ[s][u], where T is the transition matrix of the graph. Since T is symmetric, T^ℓ is also symmetric, and S_s is equal to the (s, s) entry of the matrix T^{2ℓ}. Hence, it suffices to maintain the (s, s) entry of the matrix T^{2ℓ}.

To analyze the lazy random walks in our setting, we will look at transition matrices T such that T[v, v] = 1 − d(v)/2d for every v ∈ V and T[u, v] = 1/2d for every edge (u, v) ∈ G. Notice that this is exactly a random walk on a 2d-regular graph where each vertex u with degree d(u) has 2d − d(u) self-loops, and therefore we can use the inequality stated above on the graph.

For a vertex v ∈ G, let π_v^ℓ denote the distribution over V of lazy random walks of length ℓ starting from v.
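The identity behind maintaining only a diagonal entry, namely S_s = Σ_u T^ℓ[s][u]² = T^{2ℓ}[s][s] for symmetric T, is easy to check numerically. A sketch (the 5-cycle instance and the numerical tolerance are our own choices, not part of the algorithm):

```python
import numpy as np

def lazy_transition_matrix(n, edges, d):
    """Lazy walk matrix: T[u,v] = 1/(2d) for each edge (u,v), and
    T[v,v] = 1 - deg(v)/(2d), i.e. 2d - deg(v) self-loops out of 2d."""
    T = np.zeros((n, n))
    deg = [0] * n
    for u, v in edges:
        T[u][v] = T[v][u] = 1.0 / (2 * d)
        deg[u] += 1
        deg[v] += 1
    for v in range(n):
        T[v][v] = 1.0 - deg[v] / (2 * d)
    return T

# Toy instance: a 5-cycle with degree bound d = 2.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
T = lazy_transition_matrix(5, edges, d=2)
ell, s = 8, 0
Tl = np.linalg.matrix_power(T, ell)
collision = sum(Tl[s][u] ** 2 for u in range(5))  # sum_u T^ell[s,u]^2
assert abs(collision - np.linalg.matrix_power(T, 2 * ell)[s][s]) < 1e-12
```

Each row of T sums to one, so T is indeed a transition matrix of the lazy walk described above.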
The distance of this distribution from the stationary distribution (which is uniform in this case), denoted by D_ℓ(v), is given by

D_ℓ(v) = Σ_{u ∈ V} (π_v^ℓ(u) − 1/n)² = Σ_{u ∈ V} π_v^ℓ(u)² − 1/n.

Observe that Σ_{u ∈ V} π_v^ℓ(u)² = T^{2ℓ}[v, v], as shown above. We now state a technical lemma about the existence of a vertex v such that D_ℓ(v) is high if the graph has low conductance.

Lemma 9 ([KS11]). For a graph G(V, E), let S ⊂ V be a set of size s ≤ n/2 such that the cut (S, S̄) has conductance less than δ. Then, for any integer l > 0, there exists a vertex v ∈ S such that D_l(v) > (1/√s)(1 − δ)^l.

We can now describe our algorithm for testing expansion. After each update, we use Theorem 35 to obtain a matrix T̃ such that |T̃ − T^{2ℓ}| ≤ 1/n³, where ℓ = log n/Φ. Therefore for each v we have T̃[v, v] such that |T̃[v, v] − Σ_{u ∈ V} π_v^ℓ(u)²| ≤ 1/n³. We now test if T̃[v, v] ≤ (1/n)(1 + 1/n) for each v ∈ G, and reject if this is not the case even for one v ∈ G. The correctness of this algorithm follows from the two lemmas stated below.

Lemma 10. If λ_G ≤ α, then T̃[v, v] ≤ (1/n)(1 + 1/n) for every v ∈ G.

Proof. If λ_G ≤ α, then Φ_G ≥ (1 − α)/2 = Φ. Now, D_ℓ(v) = ‖π_v^ℓ − 1/n‖² ≤ λ_G^{2ℓ} ≤ α^{2ℓ} ≤ 1/n⁴. Therefore, T^{2ℓ}[v, v] = Σ_{u ∈ V} π_v^ℓ(u)² ≤ 1/n + 1/n⁴. Since |T̃[v, v] − T^{2ℓ}[v, v]| ≤ 1/n³, we have T̃[v, v] ≤ (1/n)(1 + 1/n) for every v ∈ V.

Lemma 11. If λ_G ≥ α′, then there exists a vertex v ∈ G such that T̃[v, v] > (1/n)(1 + 1/n).

Proof. If λ_G ≥ α′, then we know that Φ_G ≤ kΦ for the small constant k < 1 determined by the constant c₀ in the definition of α′. Therefore, there exists a vertex cut (S, S̄) such that Φ_G(S) ≤ kΦ. From Lemma 9 we can conclude that there exists a vertex v such that

D_ℓ(v) > (1/√s)(1 − kΦ)^ℓ ≥ (1/√n)(1 − kΦ)^ℓ.

For ℓ = log n/Φ and a sufficiently small k < 1, we have D_ℓ(v) > 1/n^{1/2+ε} for a small constant ε > 0. The collision probability Σ_{u ∈ V} π_v^ℓ(u)² is therefore at least (1/n)(1 + n^{1/2−ε}). From Theorem 35, we know that T̃[v, v] ≥ (1/n)(1 + n^{1/2−ε}) − 1/n³ > (1/n)(1 + 1/n).

The key ingredient in the algorithm is a procedure to maintain the logarithmic powers of a weighted adjacency matrix when only a small number of entries change. In the next section we describe how to do this in DynAC⁰.

4 Maintaining powers of the transition matrix

We are given an n × n lazy transition matrix T that varies dynamically with the batch insertion/deletion of almost logarithmically (O(log n / log log n)) many edges per time step. We want to maintain each entry of the sum of powers Σ_{i=0}^{log n} (xT)^i. Notice that the exponent log n arises from the Kale-Seshadhri expansion testing algorithm, which needs the probabilities of walks of length log n. On the other hand, the almost logarithmic bound on the number of changes is a consequence of the reductions, described below, from the dynamic problem above to, ultimately, determinants of small matrices and interpolation of small degree polynomials. Here interpolation can be done for degrees up to polylogarithmic, but known techniques [DMVZ18] permit determinants of at most almost logarithmic size in AC⁰, yielding this bottleneck. Another way to view this bottleneck: while by Lemmata 5 and 6 polylogarithmic length inputs of languages in L (or even NL: see [DMVZ18]) can be decided in AC⁰, such bounds are not known for languages reducible to determinants.

Definition 12. A b-bit rational is a pair consisting of an integer α and a natural number β such that |α| < β ≤ 2^b. Its value is α/β. By a mild abuse of notation we conflate the pair (α, β) with its value α/β.

Remark 13.
First, notice that every b-bit rational is smaller than 1 in magnitude, by definition. Second, a B-bit approximation r̃ to a rational r may itself be a b-bit rational for some b ≠ B. This is because the two statements |r − r̃| ≤ 2^{−B} and r̃ = α/β with |α| < β ≤ 2^b are independent.

We need some definitions, and begin with the definition of a dynamic matrix and the associated problems of maintaining dynamic matrix powers.
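The truncation step used later to keep entries short can be made concrete with a tiny sketch of b-bit rationals (the rounding-down convention is our own choice of approximation):

```python
from fractions import Fraction

def truncate(r, b):
    """Round a rational r in [0, 1) down to a b-bit rational with
    denominator 2^b; the introduced error is at most 2^(-b)."""
    beta = 2 ** b
    alpha = (r.numerator * beta) // r.denominator  # floor(r * 2^b)
    return Fraction(alpha, beta)

r = Fraction(2, 7)
rt = truncate(r, 10)
assert abs(r - rt) <= Fraction(1, 2 ** 10)        # a 10-bit approximation
assert rt.numerator < rt.denominator <= 2 ** 10   # and a 10-bit rational
```

Note that, as in Remark 13, the output happens to be a b-bit rational here only because we force the denominator to be 2^b; a B-bit approximation in general need not be.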
Definition 14.
Let l ∈ N. A matrix A ∈ Q^{n×n}[x] is said to be (n, d, b, l)-dynamic if:

• each entry is a polynomial of degree at most d;
• each coefficient of the polynomials is a b-bit rational;
• at every step there is a change in the entries of some l × l submatrix of A to yield a new matrix A′. The change matrix is ∆A = A′ − A.

Definition 15. DynMatPow(n, d, b, k, l) is the problem of maintaining the value of each entry of Σ_{i=0}^k (xA)^i for an (n, d, b, l)-dynamic matrix A.

Let DynBipMatPow(n, d, b, k, l) be the special case of DynMatPow(n, d, b, k, l) where the change matrix ∆A has a support that is a bipartite graph with all edges from one bipartition to the other.

Next, we define the problems to which the dynamic problems will be reduced. We begin with polynomial matrix powering. The last condition in the following, bounding the constant term of the entries of the powered matrix, is a technical one for controlling the error.
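The object Σ_{i=0}^k (xA)^i being maintained is just the truncated walk-generating function: for a 0/1 adjacency matrix A, the coefficient of x^i in entry (s, t) counts the s-t walks of length i. A sketch with polynomials represented as coefficient lists (the triangle graph is our own toy instance):

```python
def poly_mat_mul(A, B, k):
    """Multiply matrices of polynomials (coefficient lists, lowest degree
    first), truncating products at degree k."""
    n = len(A)
    C = [[[0] * (k + 1) for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for m in range(n):
                for da, ca in enumerate(A[i][m]):
                    for db, cb in enumerate(B[m][j]):
                        if ca and cb and da + db <= k:
                            C[i][j][da + db] += ca * cb
    return C

def walk_generating_matrix(adj, k):
    """A_H = sum_{i=0}^k (xA)^i; entry (s, t) lists, degree by degree,
    the number of s-t walks of each length up to k."""
    n = len(adj)
    xA = [[[0, adj[i][j]] + [0] * (k - 1) for j in range(n)] for i in range(n)]
    power = [[[int(i == j)] + [0] * k for j in range(n)] for i in range(n)]
    H = [[coeffs[:] for coeffs in row] for row in power]   # the i = 0 term
    for _ in range(k):
        power = poly_mat_mul(power, xA, k)
        for i in range(n):
            for j in range(n):
                for deg in range(k + 1):
                    H[i][j][deg] += power[i][j][deg]
    return H

# Triangle graph: 2 closed walks of length 2 at a vertex,
# 3 walks of length 3 from vertex 0 to vertex 1.
adj = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
H = walk_generating_matrix(adj, 3)
assert H[0][0][2] == 2 and H[0][1][3] == 3
```

The dynamic problem asks to keep this matrix of polynomials current as a few entries of A change per step.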
Definition 16.
Let
MatPow(n, d, b, k) be the problem of determining Σ_{i=0}^k (xA)^i for a matrix A ∈ Q^{n×n}[x] where all of the following hold:

• the degree of the polynomials is upper bounded by d;
• each coefficient is a b-bit rational;
• the constant term of each polynomial entry is upper bounded by (3n)^{−1}.

The next group of definitions involves the problems to which we ultimately reduce the intermediate matrix powering algorithm. These include various determinant problems, polynomial interpolation and polynomial division.
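Of these target problems, the determinant modulo a prime is the computational core. A sequential stand-in for Det_p(n) is Gaussian elimination with Fermat inverses (this is only a toy reference implementation, not the AC⁰ circuit obtained via [DMVZ18]):

```python
def det_mod_p(M, p):
    """Determinant of a square matrix over Z_p, p prime, by Gaussian
    elimination; row swaps flip the sign, pivots multiply into the result."""
    n = len(M)
    A = [[x % p for x in row] for row in M]
    det = 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if A[r][col]), None)
        if pivot is None:
            return 0                        # singular matrix
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            det = -det % p
        det = det * A[col][col] % p
        inv = pow(A[col][col], p - 2, p)    # inverse via Fermat's little theorem
        for r in range(col + 1, n):
            f = A[r][col] * inv % p
            for c in range(col, n):
                A[r][c] = (A[r][c] - f * A[col][c]) % p
    return det

assert det_mod_p([[1, 2], [3, 4]], 7) == 5  # det = -2 = 5 (mod 7)
```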
Definition 17.
Let
Det(n, b, v) be the problem of computing the value of the determinant of an n × n matrix with entries that are b-bit rationals bounded by v < 1 in magnitude.

Let Det_p(n) be the problem of computing the value of the determinant of an n × n matrix with entries that are from Z_p for a prime p.

Let DetPoly(n, d, b) be the problem of computing the value of the determinant of an n × n matrix with entries that are degree d polynomials with b-bit rational coefficients.

Definition 18.
Let
Interpolate(d, b) be the problem of computing the coefficients of a univariate polynomial of degree d, where the coefficients are rationals (not necessarily smaller than one) and where d + 1 evaluations of the polynomial on b-bit rationals are given.

Definition 19.
Let
Div(n, m, b) be the problem of computing the quotient of a univariate polynomial g(x) of degree n when it is divided by a polynomial f(x) of degree m, where both polynomials are monic with the other coefficients being b-bit rationals.

In the rest of this section, we will use the following variables consistently:

• n, the number of nodes in the graph;
• l = O(log n / log log n), the number of changes in one batch;
• k = O(log n), the exponent to which we want to raise the transition matrix;
• b = log^{O(1)} n, the number of bits in the representation;
• d ≤ log^{O(1)} n, the degree of a polynomial.

Let us start with the first lemma above:

Lemma 20. DynMatPow(n, d, b, k, l) reduces to DynBipMatPow(2n, d, b, k, l) via a local AC⁰ reduction. (That is, changing a "small" submatrix of the input dynamic matrix results in a "small" submatrix change in the output matrix of the reduction; the notion of smallness being almost logarithmic.)

Proof. Let A be an (n, d, b, l)-dynamic matrix. Let B be the following 2n × 2n matrix, given in n × n blocks:

B = ( 0_n  A   )
    ( I_n  0_n )

Here 0_n, I_n are respectively the n × n all-zeroes and identity matrices. Then clearly,

B^{2k} = ( A^k  0_n )
         ( 0_n  A^k )

Notice that

B′ − B = ( 0_n  A′ − A )
         ( 0_n  0_n   ),

whose support, viewed as a graph, is a directed bipartite graph with all edges from the first partition of n vertices to the second partition of n vertices, completing the proof.

Let G be a weighted directed graph with a weight function w : E → R⁺ and weighted adjacency matrix A. Let H = H_G^{(k)}(x) denote the weighted graph with weighted adjacency matrix A_H = Σ_{i=0}^k (xA)^i, where k is an integer and x is a formal (scalar) variable. Let G′ be a graph on the vertices of G differing from G in a "few" edges, and let A′ be its adjacency matrix. Denote ∆A = A′ − A. Notice that ∆A contains both positive and negative entries. Let ∆A⁺ be the matrix consisting of the positive and ∆A⁻ of the negative entries of ∆A. Let U be the affected vertices, i.e.
the vertices on which any of the inserted/deleted edges in ∆A (i.e. the support of the edges whose adjacency matrices are ∆A⁺ and −∆A⁻) are incident.

Lemma 21.
Suppose there exists a partition of the affected vertices into two sets U_i, U_o such that all inserted and deleted edges are from a vertex in U_i to a vertex in U_o. Consider the matrices ∆_σ, for σ ∈ {+, −}, of dimension |U| + 2, viewed as weighted adjacency matrices of a graph on U ∪ {s, t}, where s, t ∈ V(G) \ U, and whose entries are defined as below:

∆_σ[u, v] = σ w_{uv} x   if u ∈ U_i and v ∈ U_o
            A_H[u, v]    if u ∈ U_o and v ∈ U_i
            A_H[s, v]    if u = s and v ∈ U_i
            A_H[u, t]    if v = t and u ∈ U_o
            0            otherwise

Then the number of s, t walks in G′ of length k ≤ ℓ is given by the coefficient of x^k in A_H[s, t] + ∆_σ^k[s, t].

Proof. We will prove this separately for the cases σ = + and σ = −. The proof follows the general strategy of Hesse's proof in [Hes03].

When σ = +, we insert edges into the graph G. When new edges are added to G, the total number of walks from s to t is the sum of the number of walks that are already present and the new walks due to the insertion of the new edges. Observe that all the s–t walks in the graph on U ∪ {s, t} must pass through the new edges, and every such walk of length k is counted exactly once in ∆_+^k[s, t].

The more interesting case is when σ = −, and edges are deleted from G. In this case A_H[s, t] counts all walks from s to t, including those through the deleted edges, and we need to subtract exactly those walks that contain at least one deleted edge. The proof follows along the same lines as Hesse's proof for the case when a single edge is deleted. The idea is to show that every walk from s to t of length k is counted exactly once in A_H[s, t] + ∆_−^k[s, t].

Let P be any s–t walk in G that contains edges that are deleted. Firstly, P is counted exactly once in A_H[s, t]. Suppose that r of the deleted edges occur in P and the i-th edge occurs k_i times.
Among the k_i occurrences of the i-th edge we can choose l_i occurrences, for each i, and this gives a walk where these are the edges from U_i to U_o that we choose in the graph on U ∪ {s, t}, while the remaining portions are counted in the walks from s to U_i, from U_o to U_i, and from U_o to t. There are ∏_{i=1}^r (k_i choose l_i) such choices, and for each choice the corresponding summand for the walk in ∆_−^k[s, t] carries the sign (−1)^{l_1+l_2+⋯+l_r}. When l_1 = l_2 = … = l_r = 0, the walk is the one counted in A_H[s, t]. Therefore, the contribution of the walk P to the sum is given by

Σ_{l_1=0}^{k_1} Σ_{l_2=0}^{k_2} ⋯ Σ_{l_r=0}^{k_r} (−1)^{l_1+⋯+l_r} ∏_{i=1}^r (k_i choose l_i) = ∏_{i=1}^r Σ_{l_i=0}^{k_i} (−1)^{l_i} (k_i choose l_i) = 0.

Since the walks that do not pass through the deleted edges never appear in the new graph on U ∪ {s, t} that we created, and are hence counted once in A_H[s, t], this completes the proof for the case σ = −.

From the lemma above, we can conclude the following reduction.

Lemma 22. DynBipMatPow(2n, d, b, k, l) reduces to MatPow(l, d, b, k) via an AC⁰ reduction.

Proof. Let A denote the (2n, d, b, l)-dynamic matrix such that the support of the changes is a bipartite graph. Let A_H = Σ_{i=0}^k (xA)^i. From Lemma 21, we know that if l entries of A change, then the (s, t) entry of the new sum, A_{H′}[s, t], can be computed in two steps: first by computing A_H[s, t] + ∆_−^k[s, t] to obtain A_{H″}[s, t], and then computing A_{H′}[s, t] as A_{H″}[s, t] + ∆_+^k[s, t]. The lemma follows from these observations.

In the following lemma, we analyze the error incurred in the matrix A_{H′} due to error in the matrix A_H:
Let Ã_H be a b-bit approximation of the matrix A_H. Then the corresponding matrix Ã_H′ obtained from Ã_H is a (b − O(1))-bit approximation of A_H′.

Proof. Let Ã_H = A_H − E, where E denotes an error matrix with each entry a polynomial with coefficients upper-bounded by 1/2^b. Each entry in Ã_H is represented by a degree-d polynomial with b-bit rational coefficients. We can compute Ã_H′[s, t] = Ã_H[s, t] + ∆̃_σ^k[s, t], where ∆̃_σ can be constructed from Ã_H. We can write ∆̃_σ^k[s, t] = ∆_σ^k[s, t] − E′[s, t], where E′ is an error matrix consisting of polynomials of degree at most d. We will now show that the coefficients of these polynomials are upper-bounded by 1/2^b.

First observe that every entry of ∆_σ is a degree-d polynomial with coefficients at most 1/2. We bound the term corresponding to ∆_σ^k and the remainder separately. Since each entry of ∆_σ is at most 1/2, we can bound the first term by k^2/2^{2b}. The remainder of the sum can be upper-bounded by k^2/2^{2b} as well. Therefore, each coefficient of the polynomials of this matrix is bounded by k^2/2^{2b} + k^2/2^{2b} ≤ 1/2^b.

Therefore, we can write Ã_H′[s, t] = Ã_H[s, t] + ∆̃_σ^k[s, t] = A_H[s, t] + ∆_σ^k[s, t] − E″[s, t], where E″ is an error matrix with each entry bounded by 1/2^{b−1}.

We first need to reduce the exponent from logarithmic to almost logarithmic. The following lemma in fact reduces it from polylogarithmic to almost logarithmic.
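Before stating it, here is a minimal numerical sketch of the Cayley-Hamilton trick the reduction rests on: z^k = q(z)·χ_M(z) + r(z) with deg(r) ≤ l − 1 implies M^k = r(M). The helper name and test matrix below are ours, and the paper of course works with exact rationals inside AC^0 circuits rather than floating point; this only illustrates the identity.

```python
import numpy as np

# Sketch (names ours): reduce a large matrix power M^k to (i) the
# characteristic polynomial chi_M, (ii) one polynomial division
# z^k = q(z)*chi_M(z) + r(z) with deg(r) <= l-1, and (iii) the small
# powers needed to evaluate r(M), since chi_M(M) = 0 gives M^k = r(M).
def matrix_power_via_charpoly(M, k):
    l = M.shape[0]
    chi = np.poly(M)               # characteristic polynomial, highest degree first
    zk = np.zeros(k + 1)
    zk[0] = 1.0                    # coefficients of the polynomial z^k
    _, r = np.polydiv(zk, chi)     # remainder r(z), degree <= l-1
    R = np.zeros_like(M)           # evaluate r(M) by Horner's rule
    for c in r:
        R = R @ M + c * np.eye(l)
    return R

M = np.array([[0.0, 0.5], [0.5, 0.0]])
assert np.allclose(matrix_power_via_charpoly(M, 10),
                   np.linalg.matrix_power(M, 10))
```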
Lemma 24. MatPow(l, d, b, k) AC^0-reduces to the conjunction of the following: MatPow(l, 0, lb, l), DetPoly(l, 1, lb + 2kd), Interpolate(dk, klb + kld) and Div(k, l, lb + 2kld).

We use a trick (see e.g. [ABD14, HV06]; notice that the treatment is similar but not identical to that in [ABD14], because there we had to power only constant-sized matrices) to reduce large exponents, for any matrix powering problem, to exponents bounded by the size of the matrix, via the Cayley-Hamilton theorem (see e.g. Theorem 4, Section 6.3 in Hoffman-Kunze [HK71]).

Proof.
Given a matrix M ∈ Q^{l×l}[x], let M_i be the value of the polynomial matrix M with the rational x_i substituted for x, for i ∈ {0, ..., dk}. Here x_0, ..., x_{dk} are dk + 1 distinct, sufficiently small rationals (say x_i = i/(3dk)). Let χ_{M_i}(z) denote the characteristic polynomial det(zI − M_i). We write z^k = q_i(z)·χ_{M_i}(z) + r_i(z) for unique polynomials q_i, r_i such that deg(r_i) ≤ l − 1; since χ_{M_i}(M_i) = 0 by the Cayley-Hamilton theorem, M_i^k = r_i(M_i). Finally, computing M^k reduces to interpolating each entry from the corresponding entries of the M_i^k.

Now we analyse this algorithm. First we evaluate the matrix at the dk + 1 points x_0, x_1, ..., x_{dk}, where x_i = i/(3dk). This yields a matrix M_i whose entries (each a sum of at most dk + 1 coefficients weighted by the powers x_i^j = i^j·(3dk)^{−j}) are bounded by (3l)^{−1} in magnitude.

We then compute the characteristic polynomial of M_i. Notice that det(zI − M_i) is a monic polynomial with the coefficient of z^{l−j} bounded by j!·\binom{l}{j}·(3l)^{−j} for j > 0. Suppose the denominator of an entry of M is bounded by 2^β. Then the denominator of an entry of M_i is bounded by 2^β·(3dk)^{dk}. Moreover, the denominator of this coefficient is further bloated to at most 2^{βl}·(3dk)^{ldk} ≤ 2^{lb + 2kd} (where we use that l·log(3dk) ≈ k). Thus this corresponds to an instance of DetPoly(l, 1, lb + 2kd).

In the next step, we divide z^k by the characteristic polynomial χ_{M_i}(z) of M_i. This corresponds to an instance of Div(k, l, lb + 2kld).

For computing the evaluation of the remainder polynomial on an M_i, we need to power an l × l matrix M_i with lb-bit rational entries to exponents bounded by at most l − 1. This can be accomplished by MatPow(l, 0, lb, l), recalling that each entry of M_i is bounded by (3l)^{−1}.

Finally, we obtain M^k by interpolation. Every entry of M^k is a polynomial of degree at most dk.
Every coefficient of this polynomial is a kb-bit rational, and moreover the evaluations, namely the entries of the r_i(M_i), are given as klb + 2kld-bit rationals; i.e., this is an instance of Interpolate(dk, klb + 2kld).

Next, we reduce almost logarithmic powers of almost-logarithmic-sized matrices to almost-logarithmic-sized determinants of polynomials.

Lemma 25. MatPow(l, d, b, l) AC^0-reduces to the conjunction of DetPoly(l, 1, lb), Det(l + 1, lb, (l+1)^{−1}), Interpolate(dl, lb).

Proof. Let A^{(j)} be the univariate polynomial matrix A = A(x) evaluated at the point x = x_j, where x_0, ..., x_{dl} are dl + 1 distinct rationals, say x_j = j/(3dl). Consider the infinite power series p^{(s,t,j)}(z) = (I − zA^{(j)})^{−1}[s, t]. Since (I − zA^{(j)})^{−1} = ∑_{i=0}^{∞} z^i (A^{(j)})^i, the series p^{(s,t,j)}(z) is the generating function of (A^{(j)})^i[s, t], parameterised on i. By Cramer's rule (see, for example, Section 5.4, p. 161, Hoffman-Kunze [HK71]), p^{(s,t,j)}(z) can also be written as the ratio of two determinants: the numerator being the determinant of the (t, s)-th minor of (I − zA^{(j)}), say D^{(s,t,j)}(z), and the denominator being the determinant D^{(j)}(z) of I − zA^{(j)}. Thus p^{(s,t,j)}(z) = D^{(s,t,j)}(z) / D^{(j)}(z). In other words, D^{(s,t,j)}(z) = p^{(s,t,j)}(z)·D^{(j)}(z). Now let us compare the coefficients of z^i on both sides:

D^{(s,t,j)}_i = ∑_{k=0}^{i} p^{(s,t,j)}_k · D^{(j)}_{i−k}

Letting i run from 0 to the degree of D^{(j)}, which is l (the dimension of A), we get l + 1 equations in the l + 1 unknowns p^{(s,t,j)}_i, for i ∈ {0, ..., l} and j ∈ {0, ..., dl}. Equivalently, this can be written as the matrix equation M^{(j)} π = d^{(j)}, where M^{(j)} is an (l+1) × (l+1) matrix with entries M^{(j)}_{ik} = D^{(j)}_{i−k} for 0 ≤ k ≤ i ≤ l, and zero for all other values of i, k lying in {0, ..., l}. Similarly, d^{(j)} is the vector with entries d^{(j)}_k = D^{(s,t,j)}_k, and π the vector of l + 1 unknowns π_k = p^{(s,t,j)}_k, again for k ∈ {0, ..., l}. Notice that, specifically in this argument, indices of matrices/vectors start at 0 instead of 1 for convenience.

Next, we show that the matrix M^{(j)} is invertible. We make the trivial but crucial observation:

Observation 26. The constant term in D^{(j)}(z) = det(I − zA^{(j)}) is 1. (Here we use the convention that a_i denotes the coefficient of z^i in a(z), where a(z) is a power series, or in particular a polynomial.)

This implies that:

Proposition 27. M^{(j)} is a lower triangular matrix with all principal diagonal entries equal to 1, hence has determinant 1.

Next we can interpolate the values of A^i[s, t] from the values of (A^{(j)})^i[s, t] = [z^i] p^{(s,t,j)} for i ∈ {0, ..., l}, j ∈ {0, ..., dl}.

Now we analyse this algorithm. First, we evaluate the matrix at dl + 1 distinct rationals. Each entry of the j-th matrix is now bounded by (l + 1)^{−1} in magnitude. We then compute the determinant of the matrix I − zA^{(j)}, where A^{(j)} ∈ Q^{l×l}. The number of bits in the denominators is less than 2bl, and moreover the value of the coefficient of z^j is less than j!·\binom{l}{j}·(l+1)^{−j} for j > 0. Thus the coefficients are lb-bit rationals. Hence, computing the determinant of I − zA^{(j)} corresponds to an instance of DetPoly(l, 1, lb).

The next step is to compute the inverse of the (l+1) × (l+1) matrix M^{(j)} above. The (a, b) entry of M^{(j)} is either zero or equals the coefficient of z^{a−b} in a cofactor of (I − zA^{(j)}). By a logic similar to that used in the proof of Lemma 24, these are all lb-bit rationals. Further, a crude upper bound on their values is (l+1)^{−1}, as above. Thus we get an instance of Det(l + 1, lb, (l+1)^{−1}).

Finally, we find the matrix A^i from the values of the (A^{(j)})^i.
This corresponds to an instance of Interpolate(dl, lb).

First, we reduce the problem of computing almost-logarithmic-sized determinants of small polynomials to computing almost-logarithmic-sized determinants over small rationals.

Lemma 28. DetPoly(l, d, b) AC^0-reduces to the conjunction of Interpolate(dl, lb), Det(l, lb, (l+1)^{−1}).

Proof. Here, we need to compute the determinant of an l × l matrix with each entry a degree ≤ d polynomial with b-bit coefficients. Clearly, the determinant is a polynomial of degree at most ld. So, we plug the ld + 1 different values x_0, x_1, ..., x_{ld}, where x_i = i/(3ld), into the determinant polynomial. This yields a determinant with entries bounded by (l+1)^{−1} in magnitude, as in the proof of Lemma 24. The next step is to interpolate a polynomial of degree at most ld. The coefficient of x^m, where m ≤ ld, is bounded by 1 as in the previous lemmas, and the number of bits is at most lb as well.

Then we show how to compute small rational determinants by using Chinese Remaindering and the computation of determinants over small fields.

Lemma 29. For every c > 0, Det(log n / log log n, log^c n, v) ∈ AC^0.

Proof. Let b = log^c n. The basic idea is to use the Chinese Remainder Theorem, CRT (see e.g. [HAB02]), with prime moduli of magnitude at most log^{c+1} n, to obtain the determinant of 2^b A, which is an integer matrix (since the entries are b-bit rationals of magnitude at most 2^b). For n^{O(1)} primes this problem is solvable in TC^0 by [HAB02], and hence in L. Thus, by Lemma 6, it is in AC^0 for primes of polylogarithmic magnitude. We of course need to compute the determinants modulo the small primes, for which we use Lemma 31.

Notice that in the definition of Det, the third argument, that is v, is needed for the error analysis, which is done in the following lemma. For our purposes, v ≤ 1/(l+1).

Lemma 30. Let A be an l × l matrix with entries that are b-bit rationals smaller than 1/l.
Let Ã be an l × l matrix each of whose entries is a B-bit approximation to the corresponding entry of A. Then, assuming B = Ω(l), det(Ã) is a B-bit approximation to det(A).

Proof. The difference between corresponding monomials in the two determinants is easily seen to be upper-bounded by 2^{−B}·l^{−(l−1)} in magnitude. Notice that the assumption B = Ω(l) tacitly implies that we can neglect all monomials that contain more than one factor of magnitude 2^{−B}, and just need to consider the terms that consist of exactly one factor of magnitude 2^{−B} with the rest being the actual entries. Hence the (signed) sum over all monomial differences is upper-bounded by 2^{−B} in magnitude.

Lemma 31. (Paraphrased from Theorem 8 of [DMVZ18]) If p ∈ O(n^c) is a prime, then Det_p(log n / log log n) ∈ AC^0.

We use a slight modification of the Kung-Sieveking algorithm as described in [ABD14, HV06]. The algorithm in [ABD14] worked over finite fields, while here we apply it to divide polynomials of small heights and degrees over the rationals. The algorithm and its proof of correctness follow Lemma 7 from [ABD14] verbatim. We reproduce the relevant part for completeness (with minor emendments to accommodate the characteristic):

Lemma 32. Let g(x) of degree n and f(x) of degree m be monic univariate polynomials over Q[x], such that g(x) = q(x)f(x) + r(x) for some polynomials q(x) of degree n − m and r(x) of degree m − 1. Then, given the coefficients of g and f, the coefficients of r can be computed in TC^0. In other words, Div(n, m, b) ∈ TC^0 if m < n and b = n^{O(1)}.

Proof. Let f(x) = ∑_{i=0}^{m} a_i x^i, g(x) = ∑_{i=0}^{n} b_i x^i, r(x) = ∑_{i=0}^{m−1} r_i x^i and q(x) = ∑_{i=0}^{n−m} q_i x^i. Since f, g are monic, we have a_m = b_n = 1. Denote by f^R(x), g^R(x), r^R(x) and q^R(x) respectively the polynomials whose i-th coefficients are a_{m−i}, b_{n−i}, r_{m−i−1} and q_{n−m−i}, respectively.
Then note that x^m f(1/x) = f^R(x), x^n g(1/x) = g^R(x), x^{n−m} q(1/x) = q^R(x) and x^{m−1} r(1/x) = r^R(x).

We use the Kung-Sieveking algorithm (as implemented in [ABD14]). The algorithm is as follows:
1. Compute f̃^R(x) = ∑_{i=0}^{n−m} (1 − f^R(x))^i via interpolation.
2. Compute h(x) = f̃^R(x)·g^R(x) = c_0 + c_1 x + ... + c_{m(n−m)+n} x^{m(n−m)+n}, from which the coefficients of q(x) can be obtained as q_i = c_{m(n−m)+n−i}.
3. Compute r(x) = g(x) − q(x)f(x).

The proof of correctness of the algorithm is identical to that in [ABD14]. The proof of the lemma is immediate because polynomial product is in TC^0 from [HAB02].

Lemma 33. Div(k, l, b) AC^0-reduces to Interpolate(kl, kb).

Proof. In the first step of the algorithm from the proof of Lemma 32, we need to interpolate a polynomial of degree at most (k − l)l. Also, the coefficients of the polynomial are rationals that are (k − l)b bits long. Notice that we do not require the coefficients of the polynomial to be smaller than 1.

The following lemma shows that no precision is lost during each call to Interpolate.

Lemma 34. Let f(z) be a polynomial of degree d with coefficients that are rationals, not necessarily smaller than 1. Suppose z_i = i/(3d), for i ∈ {0, ..., d}, are d + 1 values. If we know B-bit approximations f̃_i to the values f(z_i), then the interpolant of these values is a function f̃ whose coefficients are at least B-bit approximations of the corresponding coefficients of f.

Proof. (Lagrange) interpolation can be viewed as computing V^{−1}F, where V is a (d+1) × (d+1) Vandermonde matrix, such that V_{ij} = z_i^j, while the F_i = f(z_i) are the entries of a column vector. The determinant of the Vandermonde matrix is ∏_{0 ≤ j < i ≤ d} (z_i − z_j).
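As a concrete illustration of the reversal trick in the proof of Lemma 32, here is a small exact-arithmetic sketch. Coefficient lists are lowest-degree first, the function names are ours, and this is not the TC^0 implementation (which evaluates the truncated series via interpolation); it only demonstrates the correctness of reversing, inverting f^R by a truncated geometric series, and reading off q.

```python
# Sketch (names ours) of Kung-Sieveking division of monic g by monic f:
# reverse both, invert f^R modulo x^{n-m+1} using the truncated series
# sum_{i=0}^{n-m} (1 - f^R)^i, recover q from the product, set r = g - q*f.
def trunc_mul(a, b, t):
    # Multiply polynomials a, b (lowest-degree-first) modulo x^t.
    res = [0.0] * t
    for i, ai in enumerate(a[:t]):
        for j, bj in enumerate(b[:t - i]):
            res[i + j] += ai * bj
    return res

def poly_divmod_ks(g, f):
    # g, f: monic, lowest-degree-first coefficient lists, deg g > deg f.
    n, m = len(g) - 1, len(f) - 1
    t = n - m + 1
    gR, fR = g[::-1], f[::-1]                        # reversed polynomials
    one_minus = [1 - fR[0]] + [-c for c in fR[1:]]   # 1 - f^R (constant term 0)
    inv, term = [0.0] * t, [1.0] + [0.0] * (t - 1)
    for _ in range(t):                               # truncated geometric series
        inv = [a + b for a, b in zip(inv, term)]
        term = trunc_mul(term, one_minus, t)
    qR = trunc_mul(inv, gR, t)                       # q^R = g^R * (f^R)^{-1} mod x^t
    q = qR[::-1]
    qf = [0.0] * (n + 1)                             # q(x) * f(x)
    for i, qi in enumerate(q):
        for j, fj in enumerate(f):
            qf[i + j] += qi * fj
    r = [gi - c for gi, c in zip(g, qf)][:m]
    return q, r

# g = (x + 2)(x^2 + 1) + (3 - x) = x^3 + 2x^2 + 5
q, r = poly_divmod_ks([5.0, 0.0, 2.0, 1.0], [1.0, 0.0, 1.0])
assert q == [2.0, 1.0] and r == [3.0, -1.0]
```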
Theorem 35. Let T be an (n, log n, log n, log n / log log n)-dynamic adjacency matrix. Then we can maintain in DynAC^0 a matrix T̃ such that |T̃ − T^{log n}| < n^{−ω(1)}.

Proof. We use the reductions presented in Lemmas 20, 22, 24, 25, 28 and 33 to prove the result. (We assume that the indices of the Vandermonde matrix run in {0, ..., d} and that 0^0 = 1, for convenience.)

DynMatPow(n, d, b, k, l)
  ≤_{AC^0} DynBipMatPow(2n, d, b, k, l)    (Lemma 20)
  ≤_{AC^0} MatPow(l, d, b, k)    (Lemma 22)
  ≤_{AC^0} MatPow(l, 0, lb, l) ∧ DetPoly(l, 1, lb + 8kd) ∧ Interpolate(2dk, klb + 8kld) ∧ Div(2k, l, lb + 8kld)    (Lemma 24)
  ≤_{AC^0} DetPoly(l, 1, lb) ∧ Det(l + 1, lb, (l+1)^{−1}) ∧ Interpolate(0, lb) ∧ DetPoly(l, 1, lb + 8kd) ∧ Interpolate(2dk, klb + 8kld) ∧ Div(2k, l, lb + 8kld)    (Lemma 25)
  ≤_{AC^0} Interpolate(dl, ldb) ∧ Det(l, ldb, (l+1)^{−1}) ∧ Interpolate(2dk, klb + 8kld) ∧ Interpolate(2kl, klb + 8kld)    (Lemmas 28 and 33)
  ≡ Interpolate(2 log n, 10 log n (log log n)) ∧ Det(log n / log log n, log n (log log n), (log n / log log n + 1)^{−1})

Each DynMatPow call thus boils down to a number of Det and Interpolate calls as above. Though there is no loss of precision in each call to Det (Lemma 30) and Interpolate (Lemma 34), in Lemma 22 we lose O(1) bits of precision (see Lemma 23). However, the length of the bit representation grows by a factor of 9 log n / log log n at every batch. Thus, to keep the number of bits under control, we need to truncate the matrix at log n bits again, so that the powered matrix is now a (log n − O(1))-bit approximation. This O(1) deteriorates at every step, so that we can afford to perform at least Ω(log n) steps before we recompute the results from scratch, that is, do muddling. We can do muddling by invoking Lemma 3, where we pick A to be the algorithm from Lemma 4, and with the above sequence of reductions as the dynamic program P for handling a batch (or actually two batches, one old and one new) of changes.
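The Interpolate steps in the chain above all follow the evaluate-then-interpolate pattern of Lemmas 24, 25 and 28. A small numerical sketch of that pattern, for powering a polynomial matrix (helper name, evaluation points and sizes are ours; the paper works with exact rationals and bounded-depth circuits, not floating point):

```python
import numpy as np

# Evaluate-then-interpolate sketch (names ours): to compute M(x)^k for a
# polynomial matrix M(x) = sum_i coeffs[i] * x^i of degree d, evaluate at
# d*k + 1 distinct points, power each numeric matrix, then recover each
# entry of M(x)^k (a polynomial of degree <= d*k) by interpolation.
def poly_matrix_power(coeffs, k):
    d, l = len(coeffs) - 1, coeffs[0].shape[0]
    xs = np.linspace(0.1, 0.9, d * k + 1)          # distinct sample points
    powered = np.array([np.linalg.matrix_power(
        sum(c * x**i for i, c in enumerate(coeffs)), k) for x in xs])
    # np.polyfit with exactly deg+1 points returns the unique
    # interpolating polynomial (coefficients highest degree first).
    return [[np.polyfit(xs, powered[:, s, t], d * k)
             for t in range(l)] for s in range(l)]

A0 = np.array([[0.1, 0.2], [0.0, 0.1]])
A1 = np.array([[0.0, 0.1], [0.1, 0.0]])
P = poly_matrix_power([A0, A1], 3)   # entries of (A0 + x*A1)^3 as polynomials
```

Evaluating the recovered entry polynomials at a fresh point and comparing against a direct computation confirms the reconstruction.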
In this paper we solve a gap version of the expansion testing problem, wherein we want to test if the expansion is greater than α or less than α′. The dependence of α′ on α in this paper is due to the approach of using random walks and testing the conductance. It is natural to ask if there is an alternative method leading to a better dependence. A more natural question is whether we can maintain an approximation of the second largest eigenvalue of a dynamic graph.

An alternative direction of work would be to improve on the number of updates allowed per round. In this paper, we show how to test expansion when almost logarithmically many (O(log n / log log n)) changes are allowed per round. The largest determinant that we can compute in AC^0 is of at most almost-logarithmic size, and this is the bottleneck that prevents us from improving this bound. We know of another way (obtained via a careful adaptation of a proof in [Nis94]) to approximate the powers of the transition matrix when log^{O(1)} n changes are allowed per round. Unfortunately, we do not get a strong enough approximation to yield an algorithm for approximating the conductance.

Acknowledgements

SD would like to thank Anish Mukherjee, Nils Vortmeier and Thomas Zeume for many interesting and illuminating conversations over the years, and in particular for discussions that ultimately crystallized into Lemma 21. We would like to thank Eric Allender for clarification regarding previous work. SD was partially funded by a grant from Infosys foundation and SERB-MATRICS grant MTR/2017/000480. AT was partially funded by a grant from Infosys foundation.

References

[ABD14] Eric Allender, Nikhil Balaji, and Samir Datta. Low-depth uniform threshold circuits and the bit-complexity of straight line programs. In Erzsébet Csuhaj-Varjú, Martin Dietzfelbinger, and Zoltán Ésik, editors, Mathematical Foundations of Computer Science 2014 - 39th International Symposium, MFCS 2014, Budapest, Hungary, August 25-29, 2014.
Proceedings, Part II, volume 8635 of Lecture Notes in Computer Science, pages 13–24. Springer, 2014.

[ACL07] Reid Andersen, Fan R. K. Chung, and Kevin J. Lang. Using pagerank to locally partition a graph. Internet Mathematics, 4(1):35–64, 2007.

[BIS90] David A. Mix Barrington, Neil Immerman, and Howard Straubing. On uniformity within NC^1. J. Comput. Syst. Sci., 41(3):274–306, 1990.

[BJ17] Patricia Bouyer and Vincent Jugé. Dynamic complexity of the Dyck reachability. In Foundations of Software Science and Computation Structures - 20th International Conference, FOSSACS 2017, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2017, Uppsala, Sweden, April 22-29, 2017, Proceedings, pages 265–280, 2017.

[BRZ18] Pablo Barceló, Miguel Romero, and Thomas Zeume. A more general theory of static approximations for conjunctive queries. In , pages 7:1–7:22, 2018.

[CM87] Stephen A. Cook and Pierre McKenzie. Problems complete for deterministic logarithmic space. J. Algorithms, 8(3):385–394, 1987.

[COST16] Xi Chen, Igor Carboni Oliveira, Rocco A. Servedio, and Li-Yang Tan. Near-optimal small-depth lower bounds for small distance connectivity. In Daniel Wichs and Yishay Mansour, editors, Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016, Cambridge, MA, USA, June 18-21, 2016, pages 612–625. ACM, 2016.

[CS10] Artur Czumaj and Christian Sohler. Testing expansion in bounded-degree graphs. Combinatorics, Probability & Computing, 19(5-6):693–709, 2010.

[DHK14] Samir Datta, William Hesse, and Raghav Kulkarni. Dynamic complexity of directed reachability and other problems. In Automata, Languages, and Programming - 41st International Colloquium, ICALP 2014, Copenhagen, Denmark, July 8-11, 2014, Proceedings, Part I, pages 356–367, 2014.

[DKM+18] Samir Datta, Raghav Kulkarni, Anish Mukherjee, Thomas Schwentick, and Thomas Zeume. Reachability is in DynFO.
J. ACM, 65(5):33:1–33:24, 2018.

[DKM+20] Samir Datta, Pankaj Kumar, Anish Mukherjee, Anuj Tawari, Nils Vortmeier, and Thomas Zeume. Dynamic complexity of reachability: How many changes can we handle? In Artur Czumaj, Anuj Dawar, and Emanuela Merelli, editors, ICALP 2020, volume 168 of LIPIcs, pages 122:1–122:19. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2020.

[DMS+19] Samir Datta, Anish Mukherjee, Thomas Schwentick, Nils Vortmeier, and Thomas Zeume. A strategy for dynamic programs: Start over and muddle through. Logical Methods in Computer Science, 15(2), 2019.

[DMVZ18] Samir Datta, Anish Mukherjee, Nils Vortmeier, and Thomas Zeume. Reachability and distances under multiple changes. In Ioannis Chatzigiannakis, Christos Kaklamanis, Dániel Marx, and Donald Sannella, editors, ICALP 2018, volume 107 of LIPIcs. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2018.

[DST95] Guozhu Dong, Jianwen Su, and Rodney W. Topor. Nonrecursive incremental evaluation of datalog queries. Ann. Math. Artif. Intell., 14(2-4):187–223, 1995.

[Gor19] Gramoz Goranci. Dynamic graph algorithms and graph sparsification: New techniques and connections. CoRR, abs/1909.06413, 2019.

[GR11] Oded Goldreich and Dana Ron. On testing expansion in bounded-degree graphs. In Oded Goldreich, editor, Studies in Complexity and Cryptography. Miscellanea on the Interplay between Randomness and Computation - In Collaboration with Lidor Avigad, Mihir Bellare, Zvika Brakerski, Shafi Goldwasser, Shai Halevi, Tali Kaufman, Leonid Levin, Noam Nisan, Dana Ron, Madhu Sudan, Luca Trevisan, Salil Vadhan, Avi Wigderson, David Zuckerman, volume 6650 of Lecture Notes in Computer Science, pages 68–75. Springer, 2011.

[GRST20] Gramoz Goranci, Harald Räcke, Thatchaphol Saranurak, and Zihan Tan. The expander hierarchy and its applications to dynamic graph algorithms. CoRR, abs/2005.02369, 2020.

[HAB02] William Hesse, Eric Allender, and David A.
Mix Barrington. Uniform constant-depth threshold circuits for division and iterated multiplication. J. Comput. Syst. Sci., 65(4):695–716, 2002.

[Hes03] William Hesse. The dynamic complexity of transitive closure is in DynTC^0. Theor. Comput. Sci., 296(3):473–485, 2003.

[HK71] Kenneth Hoffman and Ray Kunze. Linear Algebra. Englewood Cliffs, New Jersey, 1971.

[HLW06] Shlomo Hoory, Nathan Linial, and Avi Wigderson. Expander graphs and their applications. Bull. Amer. Math. Soc., 43(4):439–561, 2006.

[HV06] Alexander Healy and Emanuele Viola. Constant-depth circuits for arithmetic in finite fields of characteristic two. In Bruno Durand and Wolfgang Thomas, editors, STACS 2006, 23rd Annual Symposium on Theoretical Aspects of Computer Science, Marseille, France, February 23-25, 2006, Proceedings, volume 3884 of Lecture Notes in Computer Science, pages 672–683. Springer, 2006.

[KRV09] Rohit Khandekar, Satish Rao, and Umesh V. Vazirani. Graph partitioning using single commodity flows. J. ACM, 56(4):19:1–19:15, 2009.

[KS11] Satyen Kale and C. Seshadhri. An expansion tester for bounded degree graphs. SIAM J. Comput., 40(3):709–720, 2011.

[Mad10] Aleksander Madry. Fast approximation algorithms for cut-based problems in undirected graphs. In , pages 245–254. IEEE Computer Society, 2010.

[MVZ16] Pablo Muñoz, Nils Vortmeier, and Thomas Zeume. Dynamic graph queries. In , pages 14:1–14:18, 2016.

[Nis94] Noam Nisan. RL ⊆ SC. Computational Complexity, 4:1–11, 1994.

[PI97] Sushant Patnaik and Neil Immerman. Dyn-FO: A parallel, dynamic complexity class. J. Comput. Syst. Sci., 55(2):199–209, 1997.

[She09] Jonah Sherman. Breaking the multicommodity flow barrier for O(√log n)-approximations to sparsest cut. In , pages 363–372. IEEE Computer Society, 2009.

[SSV+20] Jonas Schmidt, Thomas Schwentick, Nils Vortmeier, Thomas Zeume, and Ioannis Kokkinis. Dynamic complexity meets parameterised algorithms. In , pages 36:1–36:17, 2020.

[ST13] Daniel A. Spielman and Shang-Hua Teng.
A local clustering algorithm for massive graphs and its application to nearly linear time graph partitioning. SIAM J. Comput., 42(1):1–26, 2013.

[SVZ18] Thomas Schwentick, Nils Vortmeier, and Thomas Zeume. Dynamic complexity under definable changes. ACM Trans. Database Syst., 43(3):12:1–12:38, 2018.

[Vol99] Heribert Vollmer. Introduction to Circuit Complexity - A Uniform Approach. Texts in Theoretical Computer Science. An EATCS Series. Springer, 1999.

[Zeu17] Thomas Zeume. The dynamic descriptive complexity of k-clique. Inf. Comput., 256:9–22, 2017.

[ZS15] Thomas Zeume and Thomas Schwentick. On the quantifier-free dynamic complexity of reachability.