Coarse-Grained Complexity for Dynamic Algorithms
Sayan Bhattacharya∗   Danupon Nanongkai†   Thatchaphol Saranurak‡

Abstract
To date, the only way to argue polynomial lower bounds for dynamic algorithms is via fine-grained complexity arguments. These arguments rely on strong assumptions about specific problems, such as the Strong Exponential Time Hypothesis (SETH) and the Online Matrix-Vector Multiplication Conjecture (OMv). While they have led to many exciting discoveries, dynamic algorithms still miss out on some benefits and lessons from the traditional "coarse-grained" approach, which relates classes of problems such as P and NP to one another. In this paper we initiate the study of coarse-grained complexity theory for dynamic algorithms. Below are some of the questions that this theory can answer.
What if dynamic Orthogonal Vector (OV) is easy in the cell-probe model?
A research program for proving polynomial unconditional lower bounds for dynamic OV in the cell-probe model is motivated by the fact that many conditional lower bounds can be shown via reductions from the dynamic OV problem (e.g. [Abboud, V.-Williams, FOCS 2014]). Since the cell-probe model is more powerful than word RAM and has historically allowed smaller upper bounds (e.g. [Larsen, Williams, SODA 2017; Chakraborty, Kamma, Larsen, STOC 2018]), it might turn out that dynamic OV is easy in the cell-probe model, making this research direction infeasible. Our theory implies that if this is the case, there will be very interesting algorithmic consequences: if dynamic OV can be maintained in polylogarithmic worst-case update time in the cell-probe model, then so can several important dynamic problems such as k-edge connectivity, (1+ε)-approximate mincut, (1+ε)-approximate matching, planar nearest neighbors, Chan's subset union, and 3-vs-4 diameter. The same conclusion can be made when we replace dynamic OV by, e.g., subgraph connectivity, single-source reachability, Chan's subset union, or 3-vs-4 diameter.

Lower bounds for k-edge connectivity via dynamic OV? The ubiquity of reductions from dynamic OV raises the question of whether we can prove conditional lower bounds for, e.g., k-edge connectivity, approximate mincut, and approximate matching via the same approach. Our theory provides a method to refute such a possibility (so-called non-reducibility). In particular, we show that there are no "efficient" reductions (in both the cell-probe and word RAM models) from dynamic OV to k-edge connectivity, under an assumption about the classes of dynamic algorithms whose analogue in the static setting is widely believed. We are not aware of any existing assumptions that can play the same role. (The NSETH of Carmosino et al. [ITCS 2016] is the closest one, but is not enough.)
To show similar results for other problems, one only needs to develop efficient randomized verification protocols for such problems.

∗ University of Warwick, UK
† KTH Royal Institute of Technology, Sweden
‡ Toyota Technological Institute at Chicago, USA. Work partially done while at KTH Royal Institute of Technology.

Contents

Part I  Extended Abstract
   2.1  P_dy and NP_dy
   2.2  NP_dy-Completeness
   2.3  Dynamic Polynomial Hierarchy
   2.4  Other Results and Remarks
   2.5  Relationship to Fine-Grained Complexity
   2.6  Complexity classes for dynamic problems in the word RAM model
   5    NP_dy-Completeness Proof
   5.1  P_dy and NP_dy
   5.2  A complete problem in NP_dy

Part II  Basic Complexity Classes (in the Bit-probe Model)
   …    P_dy and NP_dy
   …    PH_dy

Part III  NP_dy-completeness (in the Bit-probe Model)
   10   First Shallow Decision Tree: an intermediate problem
   10.1 …
   10.2 NP_dy-hardness of First-DT_dy
   10.3 Proof of Theorem 10.2
   10.4 Proof of Corollary 10.3
   11   NP_dy-complete/hard Problems
   11.1 NP_dy-completeness of DNF_dy
   11.2 Reformulations of DNF_dy
   11.3 NP_dy-hardness of Some Known Dynamic Problems

Part IV  Further Complexity Classes
   12   Randomized Classes
   13   Search Problems

Part V  A Coarse-grained Approach to Connectivity Problems
   14   Is dynamic connectivity in coNP_dy?
   14.1 Conn_dy ∈ coNP_dy
   15   Complexity of dynamic k-edge connectivity
   15.1 … to k-EdgeConn_dy
   15.2 Proof outline
   15.3 Reviewing Thorup's reduction to k-MinTreeCut_dy
   15.4 Proof of Lemma 15.6
   15.5 k-MinTreeCut_dy ∈ NP_dy

Part VI  Back to RAM
   16   Completeness of DNF_dy in RAM with Large Space

Part VII  Appendices
   A    Extending our results to promise problems
   B    Proofs: putting problems into classes
   B.1  BPP_dy
   B.2  NP_dy
   B.3  PH_dy
   C    Reformulations of DNF_dy: proof
   D    An Issue Regarding Promise Problems and P_dy-reductions

Part I
Extended Abstract
In a dynamic problem, we first get an input instance for preprocessing, and subsequently we have to handle a sequence of updates to the input. For example, in the graph connectivity problem [HK99, KKM13], an n-node graph G is given to an algorithm to preprocess. Then the algorithm has to answer whether G is connected after each edge insertion and deletion in G. (Some dynamic problems also consider queries; for example, in the connectivity problem an algorithm may be queried whether two nodes are in the same connected component. Since queries can be phrased as input updates themselves, we will focus only on updates in this paper; see Section 6 for formal definitions and examples.) Algorithms that handle dynamic problems are known as dynamic algorithms. The preprocessing time of a dynamic algorithm is the time it takes to handle the initial input, whereas the worst-case update time is the maximum time it takes to handle any update. Although dynamic algorithms are also analyzed in terms of their amortized update times, we emphasize that the results in this paper deal only with worst-case update times. A holy grail for many dynamic problems, especially those concerning dynamic graphs under edge deletions and insertions, is to design algorithms with polylogarithmic update times. From this perspective, the computational status of many classical dynamic problems remains widely open.

Example: Family of Connectivity Problems.
A famous example of a widely open question concerns the family of connectivity problems: (i) The problem of maintaining whether the input dynamic graph is connected (the dynamic connectivity problem) admits a randomized algorithm with polylogarithmic worst-case update time. It is an active, unsettled line of research to determine whether it admits deterministic polylogarithmic worst-case update time (e.g. [Fre85, EGIN92, HK99, HdLT98, Tho00, PD06, KKM13, Wul17, NS17, NSW17]). (ii) The problem of maintaining whether the input dynamic graph can be disconnected by deleting an edge (the dynamic 2-edge connectivity problem) admits polylogarithmic amortized update time [HdLT98, HRT18], but its best known worst-case update time (even with randomization) remains polynomial [Tho01]. (iii) For dynamic k-edge connectivity with k ≥ 3, the best update time, with amortization and randomization allowed, suddenly jumps to Õ(√n), where Õ hides polylogarithmic terms. Indeed, it is a major open problem to maintain a (1+ε)-approximation to the value of the global minimum cut in a dynamic graph in polylogarithmic update time [Tho01]. Doing so for k-edge connectivity with k = O(log n) is already sufficient to solve the general case.

Other dynamic problems that are not known to admit polylogarithmic update times include approximate matching, shortest paths, diameter, max-flow, etc. [Tho01, San07]. Thus, it is natural to ask: can one argue that these problems do not admit efficient dynamic algorithms?

A traditionally popular approach to answering this question is to use information-theoretic arguments in the bit-probe/cell-probe model. In this model of computation, all operations are free except memory accesses. (In more detail, the bit-probe model counts the number of bits accessed, while the cell-probe model counts the number of accessed cells, typically of logarithmic size.) Lower bounds via this approach are usually unconditional, meaning that they do not rely on any assumption. Unfortunately, this approach has so far yielded only small lower bounds, and getting a super-polylogarithmic lower bound for any natural dynamic problem is an outstanding open question in this area [LWY18].

More recent advances towards answering this question arose from a new area called fine-grained complexity. While traditional complexity theory (henceforth coarse-grained complexity) focuses on classifying problems based on resources and relating the resulting classes to one another (e.g. P and NP), fine-grained complexity gives us conditional lower bounds in the word RAM model based on various assumptions about specific problems.
For example, assumptions that are particularly useful for dynamic algorithms are the Strong Exponential Time Hypothesis (SETH), which concerns the running time for solving SAT, and the Online Matrix-Vector Multiplication Conjecture (OMv), which concerns the time needed to multiply a fixed matrix with a sequence of vectors arriving online (see, e.g., [Pat10, AW14, HKNS15]). In sharp contrast to cell-probe lower bounds, these assumptions often lead to polynomial lower bounds in the word RAM model, many of which are tight.

While the fine-grained complexity approach has led to many exciting lower bound results, a number of traditional results in the static setting seem to have no analogues in the dynamic setting. For example, one reason that makes the P ≠ NP assumption so central in the static setting is that proving and disproving it would both lead to stunning consequences: if the assumption is false, then hundreds of problems in NP and bigger complexity classes like the polynomial hierarchy (PH) admit efficient algorithms; otherwise, the situation is the opposite. In contrast, we do not see any immediate consequence for dynamic algorithms if someone falsified SETH, OMv, or any other such assumption. As another example, comparing complexity classes allows us to speculate on various phenomena such as non-reducibility (e.g. [AR05, KvM02, Con92]), the existence of NP-intermediate problems [Lad75], and derandomization possibilities (e.g. [IW97]). (See more in Section 4.) We cannot anticipate results like these in the dynamic setting without the coarse-grained approach, i.e. without considering analogues of P, NP, BPP and other complexity classes that are defined based on computational resources.
Our Main Contributions.
We initiate a systematic study of coarse-grained complexity theory for dynamic problems in the bit-probe/cell-probe model of computation. We now mention a couple of concrete implications that follow from this study.

Consider the dynamic Orthogonal Vector (OV) problem (see Definition 2.9). Lower bounds conditional on SETH for many natural problems (e.g. subgraph connectivity, ST-reachability, Chan's subset union, 3-vs-4 diameter) are based on reductions from dynamic OV [AW14]. This suggests two research directions: (I) Prove strong unconditional lower bounds for many natural problems in one shot by proving a polynomial cell-probe lower bound for dynamic OV. (II) Prove lower bounds conditional on SETH for the family of connectivity problems mentioned above via reductions (in the word RAM model) from dynamic OV. Below are some questions about the feasibility of these research directions that our theory can answer. We are not aware of any other technique in the existing literature that can provide similar conclusions.

(I) What if dynamic OV is easy in the cell-probe model?
For the first direction, there is a risk that dynamic OV might turn out to admit a polylogarithmic update time algorithm in the cell-probe model. This is because lower bounds in the word RAM model do not necessarily extend to the cell-probe model. For example, it was shown by Larsen and Williams [LW17], and later by Chakraborty et al. [CKL17], that the OMv conjecture [HKNS15] is false in the cell-probe model. Will all the efforts be wasted if dynamic OV turns out to admit polylogarithmic update time in the cell-probe model? Our theory implies that this would also lead to very interesting algorithmic consequences: if dynamic OV can be maintained in polylogarithmic worst-case update time in the cell-probe model, then so can k-edge connectivity, (1+ε)-approximate mincut, (1+ε)-approximate matching, planar nearest neighbors, Chan's subset union, and 3-vs-4 diameter. The same conclusion can be made when we replace dynamic OV by, e.g., subgraph connectivity, single-source reachability, Chan's subset union, or 3-vs-4 diameter (see Theorem 2.1). Thus, there will be interesting consequences regardless of the outcome of this line of research.

(Footnotes: For further consequences see, e.g., [Aar17, Coo06, Coo03] and references therein. An indirect consequence would be that some barriers were broken and we might hope to get better upper bounds; this is however different from P = NP, where many hard problems would immediately admit efficient algorithms. Note that some consequences of falsifying SETH have been shown recently, e.g. [ABDN18, Wil13, GIKW17, CDL+…].)

Roughly, we reach the above conclusions by proving, in the dynamic setting, an analogue of the fact that if P = NP (in the static setting), then the polynomial hierarchy (PH) collapses. This is done by carefully defining the classes P_dy, NP_dy and PH_dy as dynamic analogues of P, NP, and PH, so that we can prove such statements, along with NP_dy-completeness and NP_dy-hardness results for natural dynamic problems including dynamic OV. We sketch how to do this in Sections 2 and 5.

(II) Lower bounds for k-edge connectivity via dynamic OV?
As discussed above, whether dynamic k-edge connectivity admits polylogarithmic update time for k ∈ [3, O(log n)] is a very important open question. There is hope of answering this question negatively via reductions (in the word RAM model) from dynamic OV. Our theory provides a method to refute such a possibility (so-called non-reducibility). First, note that any reduction from dynamic OV in the word RAM model also holds in the (stronger) cell-probe model. Armed with this simple observation, we show that there are no "efficient" reductions from dynamic OV to k-edge connectivity under an assumption about the complexity classes for dynamic problems in the cell-probe model, namely PH_dy ⊈ AM_dy ∩ coAM_dy (see Theorem 2.2). We defer defining the classes AM_dy and coAM_dy, but note two things. (i) Just as the classes AM and coAM (where AM stands for Arthur-Merlin) extend NP in the static setting, the classes AM_dy and coAM_dy extend the class NP_dy in a similar manner. (ii) In the static setting it is widely believed that PH ⊈ AM ∩ coAM, because otherwise the PH collapses. Roughly, the phrase "efficient reduction" from problem X to problem Y refers to a way of processing each update for problem X by quickly feeding a polylogarithmic number of updates as input to an algorithm for Y (see Section 8 for details). All reductions from dynamic OV in the literature that we are aware of are efficient reductions.

Remark.
We define our complexity classes in the cell-probe model, whereas the reductions from dynamic OV are in the word RAM model. This does not make any difference, however, since any reduction in the word RAM model continues to have the same guarantees in the (stronger) cell-probe model.

To show a similar non-reducibility result for any problem X, one needs to prove that X ∈ AM_dy ∩ coAM_dy, which boils down to developing efficient randomized verification protocols for such problems. We explain this in more detail in Sections 2 and 12.

We are not aware of any existing assumptions that can lead to the same conclusion as above. To our knowledge, the only conjecture that can imply results of this nature is the Nondeterministic Strong Exponential Time Hypothesis (NSETH) of [CGI+16], but using it requires a property of k-edge connectivity that is not yet known to be true. (In particular, Theorem 2.2 follows from the fact that k-edge connectivity is in AM_dy ∩ coAM_dy. To use NSETH, we would need to show that it is in NP_dy ∩ coNP_dy.) Moreover, even if such a property held, it would only rule out deterministic reductions, since NSETH only holds for deterministic algorithms.

Paper Organization.
Part I of the paper is organized as follows. Section 2 explains our contributions in detail, including the conclusions above and beyond. We discuss related work and future directions in Sections 3 and 4. An overview of our main NP_dy-completeness proof is in Section 5. In Part II we formalize the required notions; although some of them are tedious, they are crucial to get things right. Part III provides detailed proofs of our NP_dy-completeness results. Part IV extends our study to other classes, including those with randomization. This is needed for the non-reducibility result mentioned above, discussed in detail in Part V. The focus of this paper is on the cell-probe model. Our results in the word RAM model are mostly similar; we discuss them in more detail in Part VI.

We show that coarse-grained complexity results similar to those in the static setting can be obtained for dynamic problems in the bit-probe/cell-probe model of computation, provided the notion of "nondeterminism" is carefully defined. Recall that the cell-probe model is similar to the word RAM model, but time complexity is measured by the number of memory reads and writes (probes); other operations are free (see Section 6.2). As in the static setting, we only consider decision dynamic problems, meaning that the output after each update is either "yes" or "no". Note the following remarks.

• Readers who are familiar with traditional complexity theory may wonder why we do not consider the Turing machine. This is because the Turing machine is not suitable for implementing dynamic algorithms, since we cannot access an arbitrary tape location in O(1) time. There is no efficient Turing-machine algorithm even for a basic data structure like the binary search tree.

• Our results for decision problems extend naturally to promise problems, which are useful when we discuss approximation algorithms. We do not discuss promise problems here to keep the discussion simple.
• Readers who are familiar with the oblivious adversary assumption for randomized dynamic algorithms may wonder if we consider this assumption here. This assumption plays no role for decision problems, since an algorithm that is correct with high probability (w.h.p.) under this assumption is also correct w.h.p. without it (in other words, its output reveals no information about its randomness). Because of this, we do not discuss this assumption in this paper.

We start with our main results, which can be obtained with appropriate definitions of complexity classes P_dy ⊆ NP_dy ⊆ PH_dy for dynamic problems. These classes are described in detail later; for now they should be thought of as analogues of the classes P, NP and PH (the polynomial hierarchy).

Theorem 2.1 (P_dy vs. NP_dy). Below, the phrase "efficient algorithms" refers to dynamic algorithms that are deterministic and require polylogarithmic worst-case update time and polynomial space to handle a polynomial number of updates in the bit-probe/cell-probe model.

1. The dynamic orthogonal vector (OV) problem is "NP_dy-complete", and there are a number of dynamic problems that are "NP_dy-hard" in the sense that if P_dy ≠ NP_dy, then they admit no efficient algorithms. These problems include decision versions of subgraph connectivity, ST-reachability, Chan's subset union, and 3-vs-4 diameter (see Tables 2 and 3 for more).

2. If P_dy = NP_dy, then P_dy = PH_dy, meaning that all problems in PH_dy (which contains the class NP_dy) admit efficient algorithms. These problems include decision versions of k-edge connectivity, (1+ε)-approximate matching, (1+ε)-approximate mincut, planar nearest neighbors, Chan's subset union and 3-vs-4 diameter (see Tables 2 and 4 for more).

(Throughout the paper, we use the cell-probe and bit-probe models interchangeably, since the complexities in these models are the same up to polylogarithmic factors.)

Thus, proving or disproving P_dy = NP_dy will both lead to interesting consequences: if P_dy ≠ NP_dy, then many dynamic problems do not admit efficient algorithms. Otherwise, if P_dy = NP_dy, then many problems admit efficient algorithms which are not known or even believed to exist.

Remark.
We can obtain similar results in the word RAM model, but we need a notion of "efficient algorithms" that is slightly non-standard in that quasi-polynomial preprocessing time is allowed. (In contrast, all our results hold in the standard cell-probe setting.) We postpone the discussion of word RAM results to later in the paper to avoid confusion. As another showcase, our study implies a way to show non-reducibility, as follows.
Theorem 2.2.
Assuming PH_dy ⊈ AM_dy ∩ coAM_dy, the k-edge connectivity problem cannot be NP_dy-hard. Consequently, there is no "efficient reduction" from the dynamic Orthogonal Vector (OV) problem to k-edge connectivity.

From the discussion in Section 1, recall that the k-edge connectivity problem is currently known to admit a polylogarithmic amortized update time algorithm for k ≤ 2, and an O(√n polylog(n)) update time algorithm for k ∈ [3, O(log n)]. It is a very important open problem whether it admits polylogarithmic worst-case update time. Theorem 2.2 rules out one way to prove lower bounds and suggests that an efficient algorithm might exist.

A more important point, beyond the k-edge connectivity problem, is that one can prove a similar result for any dynamic problem X by showing that X ∈ AM_dy ∩ coAM_dy or, even better, X ∈ NP_dy ∩ coNP_dy (see Section 4 for some candidate problems for X). This is easier than designing a dynamic algorithm for X itself. Thus, this method is an example of the by-products of our study that we expect to be useful for developing algorithms and lower bounds for dynamic problems in the future. See Section 2.4 and Part V for more details. As noted in Section 1, we are not aware of any existing technique that is capable of deriving a non-reducibility result of this kind.

The key challenge in deriving the above results is to come up with the right set of definitions for various dynamic complexity classes. We provide some of these definitions and discussions here, but defer more details to later in the paper.

2.1 P_dy and NP_dy

Class P_dy. We start with P_dy, the class of dynamic problems that admit "efficient" algorithms in the cell-probe model. For any dynamic problem, define its update size to be the number of bits needed to describe each update. Note that we have not yet defined what dynamic problems are formally. Such a definition is needed for a proper, rigorous description of our complexity classes, and can be found in the full version of the paper. For intuition, it suffices to keep in mind that most dynamic graph problems, where each update is an edge deletion or insertion, have logarithmic update size (since it takes O(log n) bits to specify an edge in an n-node graph).

Definition 2.3 (P_dy; brief).
A dynamic problem with polylogarithmic update size is in P_dy if it admits a deterministic algorithm with polylogarithmic worst-case update time for handling a sequence of polynomially many updates. △

(Footnote: Technically speaking, (1+ε)-approximate matching is a promise or gap problem in the sense that for some input instances all answers are correct. It is in promise-PH_dy, which is bigger than PH_dy. We can make the same conclusion for promise problems: if P_dy = NP_dy, then all problems in promise-PH_dy admit efficient algorithms.)

Problems in P_dy include connectivity on plane graphs and predecessor; for more, see Table 1. Note that one can define P_dy more generally to include problems with larger update sizes. Our complexity results hold even with this more general definition. However, since our results are most interesting for problems with polylogarithmic update size, we focus on this case in this paper to avoid cumbersome notation.

Class NP_dy and nondeterminism with rewards. Next, we introduce our complexity class NP_dy. Recall that in the static setting the class NP consists of the problems that admit efficiently verifiable proofs or, equivalently, that are solvable in polynomial time by a nondeterministic algorithm. Our notion of nondeterminism is captured by the proof-verification definition where, after receiving a proof, the verifier does not only output YES/NO, but also a reward, which is supposed to be maximized at every step.

Before defining NP_dy more precisely, we remark that the notion of reward is key to our NP_dy-completeness proof. The constraint about rewards potentially makes NP_dy contain fewer problems. Interestingly, all natural problems that we are aware of remain in NP_dy even with this constraint. This might not be a big surprise once one realizes that, in the static setting, imposing a similar constraint about the reward does not make the class (static) NP smaller; see the discussion below. We now define NP_dy more precisely.
Definition 2.4 (NP_dy; brief). A dynamic problem Π with polylogarithmic update size is in NP_dy if there is a verifier that can do the following over a sequence of polynomially many updates: (i) after every update, the verifier takes the update and a polylogarithmic-size proof as input, and (ii) after each update, the verifier outputs in polylogarithmic time a pair (x, y), where x ∈ {YES, NO} and y is an integer (representing a reward), with the following properties.
1. If the current input instance is a YES-instance and the verifier has so far always received a proof that maximizes the reward at every step, then the verifier outputs x = YES.

2. If the current input instance is a NO-instance, then the verifier outputs x = NO regardless of the sequence of proofs it has received so far. △

To digest the above definition, first consider the static setting. One can redefine the class NP for static problems in a fashion similar to Definition 2.4 by removing the preprocessing part and letting the only update be the whole input. Let us refer to this new (static) complexity class as "reward-NP". To show that a static problem is in reward-NP, a verifier has to output some reward in addition to the YES/NO answer. Since a proof received in the static setting is usually a solution itself, a natural choice of reward is the cost of the solution (i.e., the proof). For example, a "proof" in the maximum clique problem is a big enough clique, and in this case an intuitive reward would be the size of the clique given as a proof. Observe that this is sufficient to show that max clique is in reward-NP. In fact, it turns out that in the static setting the complexity classes NP and reward-NP are equal. (Proof sketch: let Π be any problem in the original static NP and V be a corresponding verifier. We extend V to V′, which outputs x = YES/NO as V does, and outputs reward y = 1 if x = YES and y = 0 otherwise. It is not hard to check that V′ satisfies the conditions in Definition 2.4.)

To further clarify Definition 2.4, we now consider examples of some well-known dynamic problems that happen to be in NP_dy. Later in the paper, we use x = 1 and x = 0 to represent x = YES and x = NO, respectively.

Example 2.5 (Subgraph Detection). In the dynamic subgraph detection problem, an n-node graph G and a k-node graph H are given at preprocessing, for some k = polylog(n). Each update is an edge insertion or deletion in G.
We want an algorithm to output YES if and only if G has H as a subgraph.

This problem is in NP_dy due to the following verifier: the verifier outputs x = YES if and only if the proof (given after each update) is a mapping of the edges in H to the edges of a subgraph of G that is isomorphic to H. With output x = YES, the verifier gives reward y = 1; with output x = NO, it gives reward y = 0. Observe that the proof is of polylogarithmic size (since k = polylog(n)), and the verifier can calculate its outputs (x, y) in polylogarithmic time. Observe further that the properties stated in Definition 2.4 are satisfied: if the current input instance is a YES-instance, then the reward-maximizing proof is a mapping between H and the subgraph of G isomorphic to H, causing the verifier to output x = YES; otherwise, no proof will make the verifier output x = YES. △

The above example is in fact too simple to show the strength of our definition, which allows NP_dy to include many natural problems (for one thing, y is simply 0/1 depending on x). The next example demonstrates how the definition allows us to develop more sophisticated verifiers for other problems.

Example 2.6 (Connectivity). In the dynamic connectivity problem, an n-node graph G is given at preprocessing. Each update is an edge insertion or deletion in G. We want an algorithm to output YES if and only if G is connected.

This problem is in NP_dy due to the following verifier. After every update, the verifier maintains a forest F of G. A proof (given after each update) is an edge insertion into F, or a ⊥ symbol indicating that there is no update to F. The verifier handles each update as follows.

• After an edge e is inserted into G, the verifier checks if e can be inserted into F without creating a cycle. This can be done in O(polylog(n)) time using a link/cut tree data structure [ST81]. It outputs reward y = 0. (No proof is needed in this case.)
• After an edge e is deleted from G, the verifier checks if F contains e. If not, it outputs reward y = 0 (no proof is needed in this case). If e is in F, the verifier reads the proof (given after e is deleted). If the proof is ⊥, it outputs reward y = 0. Otherwise, let the proof be an edge e′. The verifier checks if F′ = F \ {e} ∪ {e′} is a forest; this can be done in O(polylog(n)) time using a link/cut tree data structure [ST81]. If F′ is a forest, the verifier sets F ← F′ and outputs reward y = 1; otherwise, it outputs reward y = −1.

After each update, the verifier outputs x = YES if and only if F is a spanning tree of G. Observe that if the verifier gets a proof that maximizes the reward after every update, the forest F will always be a maximal spanning forest (since inserting an edge e′ into F has a higher reward than giving ⊥ as a proof). Thus, the verifier will always output x = YES for YES-instances in this case. It is not hard to see that the verifier never outputs x = YES for NO-instances, no matter what proofs it receives. △

In short, a proof for the connectivity problem is the maximal spanning forest. Since such a proof is too big to specify and verify after every update, our definition allows the proof to be updated as the input changes. (This is as opposed to specifying the whole witness subgraph from scratch every time, as in Example 2.5.) Allowing this is crucial for most problems to be in NP_dy, but creates difficulties in proving NP_dy-completeness. We remedy this by introducing rewards.

Note that if there were no reward in Definition 2.4, then it would be even easier to show that dynamic connectivity and other problems are in NP_dy. Having an additional constraint about rewards potentially makes fewer problems verifiable. Luckily, all natural problems that we are aware of that were verifiable without rewards remain verifiable with rewards. Problems in NP_dy include decision/gap versions of (1+ε)-approximate matching, planar nearest neighbor, and dynamic 3SUM; see Table 2 for more.
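The reward-driven verifier of Example 2.6 can be sketched in code. The following Python sketch is our own illustration, not the paper's implementation: it replaces the O(polylog n) link/cut tree [ST81] with a naive DFS connectivity test on F (costing O(n) per operation, which only serves to make the logic concrete), and the class and method names are ours. It also checks that a proof edge e′ actually belongs to the current graph G, a sanity check implicit in the example.

```python
# Illustrative sketch of the connectivity verifier with rewards (Example 2.6).
# Assumption: naive forest bookkeeping stands in for the link/cut tree of [ST81].

class ConnVerifier:
    def __init__(self, n, edges=()):
        self.n = n
        self.G = set()   # current edge set of the dynamic graph
        self.F = set()   # maintained forest (ideally a maximal spanning forest)
        for e in edges:  # treat the initial edges as insertions
            self.insert(e)

    def _connected_in_F(self, u, v):
        # DFS over forest edges; a link/cut tree would do this in polylog time.
        adj = {}
        for a, b in self.F:
            adj.setdefault(a, []).append(b)
            adj.setdefault(b, []).append(a)
        stack, seen = [u], {u}
        while stack:
            x = stack.pop()
            if x == v:
                return True
            for y in adj.get(x, []):
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return False

    def insert(self, e):
        u, v = e
        self.G.add(frozenset(e))
        if not self._connected_in_F(u, v):
            self.F.add(frozenset(e))   # keep F a maximal forest
        return self._answer(), 0       # (x, reward); no proof needed

    def delete(self, e, proof=None):
        fe = frozenset(e)
        self.G.discard(fe)
        if fe not in self.F:
            return self._answer(), 0   # deleted a non-forest edge
        self.F.discard(fe)
        if proof is None:              # proof "⊥": prover claims no replacement
            return self._answer(), 0
        u, v = proof
        # Accept the proof edge only if it is a real edge of G and reconnects F
        # without creating a cycle; otherwise punish with reward -1.
        if frozenset(proof) in self.G and not self._connected_in_F(u, v):
            self.F.add(frozenset(proof))
            return self._answer(), 1
        return self._answer(), -1

    def _answer(self):
        # F is a forest, so it is a spanning tree iff it has n-1 edges.
        return "YES" if len(self.F) == self.n - 1 else "NO"
```

On a triangle, for instance, deleting a forest edge with the reconnecting edge as proof earns reward 1 and keeps the answer YES; a later deletion with proof ⊥ leaves F smaller than a spanning tree, so the answer turns to NO, matching the disconnected graph.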
The concept of rewards (introduced while defining the class NP_dy) will turn out to be crucial when we attempt to show the existence of a complete problem in NP_dy. See Section 2.2 and Section 5 for more details. It is fairly easy to show that P_dy ⊆ NP_dy, and we conjecture that P_dy ≠ NP_dy. Previous nondeterminism in the dynamic setting.
The idea of nondeterministic dynamic algorithms is not completely new. It was considered by Husfeldt and Rauhe [HR03] and their follow-ups [PD06, Pat08, Yin10, Lar, WY14], and has played a key role in proving cell-probe lower bounds in some of these papers. As discussed in [HR03], although it is straightforward to define a nondeterministic dynamic algorithm as one that can make nondeterministic choices to process each update and query, there are different ways to handle how nondeterministic choices affect the states of algorithms, which in turn affects how the algorithms handle future updates (called the "side effects" in [HR03]). For example, in [HR03] nondeterminism is allowed only for answering a query, which happens to occur only once at the very end. In [PD06], nondeterministic query answering may happen throughout, but an algorithm is allowed to write in the memory (thus change its state) only if all nondeterministic choices lead to the same memory state. In this paper we define a different notion of nondeterminism and thus the class NP_dy. It is more general than the previous definitions in that if a dynamic problem admits an efficient nondeterministic algorithm according to the previous definitions, it is in our NP_dy. In a nutshell, the key differences are that (i) we allow nondeterministic steps while processing both updates and queries, and (ii) different choices of nondeterminism can affect the algorithm's states in different ways; however, we distinguish different choices by giving them different rewards. These differences allow us to include more problems in our NP_dy (we do not know, for example, whether dynamic connectivity admits nondeterministic algorithms according to the previous definitions). NP_dy-Completeness. Here, we sketch the idea behind our NP_dy-completeness and hardness results. We begin by introducing a problem called the dynamic narrow DNF evaluation problem (in short, DNF_dy), as follows. Definition 2.7 (DNF_dy; informally).
Initially, we have to preprocess (i) an m-clause n-variable DNF formula where each clause contains O(polylog(m)) literals, and (ii) an assignment of (boolean) values to the variables. Each update changes the value of one variable. After each update, we have to answer whether the DNF formula is true or false. △ It is fairly easy to see that DNF_dy ∈ NP_dy: After each update, if the DNF formula happens to be true, then the proof only needs to point towards one satisfied clause, and the verifier can quickly check whether this clause is satisfied or not since it contains only O(polylog(m)) literals. Surprisingly, it turns out that this is also a complete problem in the class NP_dy. Theorem 2.8 (NP_dy-completeness of DNF_dy). The DNF_dy problem is NP_dy-complete. This means that DNF_dy ∈ NP_dy, and if DNF_dy ∈ P_dy, then P_dy = NP_dy. Recall that a DNF formula is of the form C_1 ∨ · · · ∨ C_m, where each "clause" C_i is a conjunction (AND) of literals.
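The verifier for DNF_dy can be sketched as follows. This is a minimal illustration with our own naming: a clause is a list of signed literals (variable index, polarity), a proof is the index of an allegedly satisfied clause, and checking it touches only the clause's polylog-many literals.

```python
# Sketch (naming ours) of the DNF_dy problem and its NP_dy verifier.
# A clause is a list of (variable_index, polarity); polarity True stands
# for the literal z_i, polarity False for the literal ¬z_i.

class DNFdy:
    def __init__(self, clauses, assignment):
        self.clauses = clauses       # each clause: list of (index, polarity)
        self.phi = list(assignment)  # boolean assignment to the variables

    def flip(self, i):
        """Update: flip the value of one variable."""
        self.phi[i] = not self.phi[i]

    def verify(self, proof):
        """Proof: index of one allegedly satisfied clause (O(log m) bits).
        Verification reads only the O(polylog(m)) literals of that clause."""
        if proof is None:
            return ("NO", 0)
        clause = self.clauses[proof]
        if all(self.phi[i] == polarity for i, polarity in clause):
            return ("YES", 1)        # the pointed-to clause is indeed true
        return ("NO", 0)             # a wrong pointer cannot fool the verifier
```

On a YES-instance, the reward-maximizing proof points to a satisfied clause; on a NO-instance every clause is false, so no proof yields x = YES.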
To start with, recall the following intuition for proving NP-completeness in the static setting (see e.g. [AB09, Section 6.1.2] for details): Since
Boolean circuits can simulate polynomial-time Turing machine computation (i.e. P ⊆ P/poly), we view the computation of the verifier V for any problem Π in NP as a circuit C. The input of C is the proof that V takes as an input. Then, determining whether there is an input (proof) that satisfies this circuit (known as CircuitSAT) is NP-complete, since such information would allow the verifier to find a desired proof on its own. Attempting to extend this intuition to the dynamic setting might encounter the following roadblocks.
1. Boolean circuits cannot efficiently simulate algorithms in the RAM model without losing a linear factor in running time. Furthermore, an alternative such as circuits with "indirect addressing" gates seems useless, because this complex gate makes the model more complicated. This makes it more difficult to prove NP_dy-hardness.
2. Since the verifier has to work through several updates in the dynamic setting, the YES/NO output from the verifier alone is insufficient to indicate proofs that can be useful for future updates. For example, suppose that in Example 2.6 the connectivity verifier is allowed to output only x ∈ {YES, NO}, and we get rid of the concept of a reward. Consider a scenario where an edge e (which is part of F) gets deleted from G, and G was disconnected even before this deletion. In this case, the verifier can indicate no difference between having e′ (i.e. finding a reconnecting edge) and ⊥ (i.e. doing nothing) as a proof (because it has to output x = NO in both cases). Having e′ as a proof, however, is more useful for the future, since it helps maintain a spanning forest.
It so happens that we can solve (2) if the verifier additionally outputs an integer y as a reward. Asking more from the verifier makes fewer problems verifiable (thus a smaller NP_dy).
Luckily, all natural problems we are aware of that were verifiable without rewards remain verifiable with rewards! To solve (1), we use the fact that in the cell-probe model a polylogarithmic-update-time algorithm can be modeled by a polylogarithmic-depth decision assignment tree [Mil99], which naturally leads to a complete problem about a decision tree (we omit the details here; see Section 5 and Section 10 for more). It turns out that we can reduce from this problem to DNF_dy (Definition 2.7); the intuition being that each bit in the main memory corresponds to a boolean variable and each root-to-leaf path in the decision assignment tree can be thought of as a DNF clause. The only downside of this approach is that a polylogarithmic-depth decision tree has quasi-polynomial size. A straightforward reduction would cause quasi-polynomial space in the cell-probe model. By exploiting the special property of DNF_dy and the fact that the cell-probe model only counts memory accesses, we can avoid this space blowup by "hardwiring" some space usage into the decision tree and reconstructing some memory when needed. The fact that the DNF_dy problem is NP_dy-complete (almost) immediately implies that many well-known dynamic problems are NP_dy-hard. To explain why this is the case, we first recall the definition of the dynamic sparse orthogonal vector (OV_dy) problem. Definition 2.9 (OV_dy). Initially, we have to preprocess a collection of m vectors V = {v_1, . . . , v_m} where each v_j ∈ {0, 1}^n, and another vector u ∈ {0, 1}^n. It is guaranteed that each v_j has at most O(polylog(m)) many nonzero entries. Each update flips the value of one entry in the vector u. After each update, we have to answer whether there is a vector v ∈ V that is orthogonal to u (i.e., whether u^T v = 0). △ The key observation is that the OV_dy problem is equivalent to the DNF_dy problem, in the sense that OV_dy ∈ P_dy iff DNF_dy ∈ P_dy.
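One direction of this equivalence can be made concrete in a few lines (naming ours): a sparse vector v_j is orthogonal to u exactly when every coordinate in its support is 0 in u, which is a narrow DNF clause over the entries of u.

```python
# Minimal sketch (naming ours) of the OV_dy -> DNF_dy direction of the
# equivalence: v_j ⊥ u iff the clause AND_{i in supp(v_j)} (¬u_i) is true.

def ov_to_dnf(vectors):
    """Turn each sparse 0/1 vector into a clause, encoded as the list of
    coordinates that must all be 0 in u."""
    return [[i for i, bit in enumerate(v) if bit] for v in vectors]

def dnf_answer(clauses, u):
    # YES iff some clause is satisfied, i.e. some v_j is orthogonal to u.
    return any(all(u[i] == 0 for i in clause) for clause in clauses)

def ov_answer(vectors, u):
    # Direct OV_dy answer, for comparison.
    return any(sum(a * b for a, b in zip(v, u)) == 0 for v in vectors)
```

Since each v_j has polylog-many nonzero entries, the resulting clauses are narrow, as DNF_dy requires; flipping an entry of u is exactly a DNF_dy variable flip.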
The proof is relatively straightforward (the vectors v_j and the entries of u respectively correspond to the clauses and the variables in DNF_dy), and we defer it to Appendix C. In [AW14], Abboud and Williams show SETH-hardness for all of the problems in Table 3. In fact, they actually show a reduction from OV_dy to these problems. Therefore, we immediately obtain the following result. Corollary 2.10.
All problems in Table 3 are NP_dy-hard. By introducing the notion of oracles (see Section 6.2), it is not hard to extend the class NP_dy into a polynomial hierarchy for dynamic problems, denoted by PH_dy. Roughly, PH_dy is the union of classes Σ^dy_i and Π^dy_i, where (i) Σ^dy_1 = NP_dy and Π^dy_1 = coNP_dy, and (ii) we say that a dynamic problem is in class Σ^dy_i (resp. Π^dy_i) if we can show that it is in NP_dy (resp. coNP_dy) assuming that there are efficient dynamic algorithms for problems in Σ^dy_{i−1}. The details appear in Section 9. Example 2.11 (k- and (< k)-edge connectivity). In the dynamic k-edge connectivity problem, an n-node graph G = (V, E) and a parameter k = O(polylog(n)) are given at the time of preprocessing. Each update is an edge insertion or deletion in G. We want an algorithm to output YES if and only if G has connectivity at least k, i.e. removing at most k − 1 edges does not disconnect G. We claim that this problem is in Π^dy_2. To avoid dealing with coNP_dy, we consider the complement of this problem, called dynamic (< k)-edge connectivity, where x = YES if and only if G has connectivity less than k. We show that (< k)-edge connectivity is in Σ^dy_2. We already argued in Example 2.6 that dynamic connectivity is in NP_dy = Σ^dy_1. Assuming that there exists an efficient (i.e. polylogarithmic-update-time) algorithm A for dynamic connectivity, we will show that (< k)-edge connectivity is in NP_dy. Consider the following verifier V. After every update in G, the verifier V reads a proof that is supposed to be a set S ⊆ E of at most k − 1 edges. V then sends the update to A and also tells A to delete the edges in S from G. If A says that G is not connected at this point, then the verifier V outputs x = YES with reward y = 1; otherwise, the verifier V outputs x = NO with reward y = 0. Finally, V tells A to add the edges in S back to G.
Observe that if G has connectivity less than k and the verifier always receives a proof that maximizes the reward, then the proof will be a set of edges disconnecting the graph, and V will answer YES. Otherwise, no proof can make V answer YES. Thus the dynamic (< k)-edge connectivity problem is in NP_dy if A exists. In other words, the problem is in Σ^dy_2. △ By arguments similar to the above example, we can show that other problems such as Chan's subset union and small diameter are in PH_dy; see Table 4 and Appendix B.3 for more. The theorem that plays an important role in our main conclusion (Theorem 2.1) is the following. Theorem 2.12. If P_dy = NP_dy, then PH_dy = P_dy. To get an idea of how to prove the above theorem, observe that if P_dy = NP_dy, then A in Example 2.11 exists and thus dynamic (< k)-edge connectivity is in Σ^dy_2 = NP_dy by the argument in Example 2.6; consequently, it is in P_dy! This type of argument can be extended to all other problems in PH_dy. In the previous subsections, we have stated two complexity results, namely NP_dy-completeness/hardness and the collapse of PH_dy when P_dy = NP_dy. With the right definitions in place, it is not a surprise that more can be proved. For example, we obtain the following results:
1. If NP_dy ⊆ coNP_dy, then PH_dy = NP_dy ∩ coNP_dy.
2. If NP_dy ⊆ AM_dy ∩ coAM_dy, then PH_dy ⊆ AM_dy ∩ coAM_dy.
Here, coNP_dy, AM_dy, and coAM_dy are analogues of the complexity classes coNP, AM, and coAM. See Theorem 9.6 and Proposition 12.5 for more details. While the coarse-grained complexity results in this paper are mostly resource-centric (in contrast to fine-grained complexity results that are usually centered around problems), we also show that this approach is helpful for understanding the complexity of specific problems as well, in the form of non-reducibility. In particular, the following results are shown in Part V: 1.
Assuming PH_dy ≠ NP_dy ∩ coNP_dy, the following two statements cannot hold at the same time: (a) Connectivity is in coNP_dy. (This would be the case if it is in P_dy.) (b) One of the following problems is NP_dy-hard: approximate minimum spanning forest (MSF), d-weight MSF, bipartiteness, and k-edge connectivity.
2. k-edge connectivity is in AM_dy ∩ coAM_dy. Consequently, assuming PH_dy ⊈ AM_dy ∩ coAM_dy, k-edge connectivity cannot be NP_dy-hard.
Note that both PH ≠ NP ∩ coNP and PH ⊈ AM ∩ coAM are widely believed in the static setting, since refuting them means collapsing PH. While we can show that PH_dy would also collapse if PH_dy = NP_dy ∩ coNP_dy, it remains open whether this is the case for PH_dy ⊆ AM_dy ∩ coAM_dy; in particular, is PH_dy ⊇ AM_dy ∩ coAM_dy? When a problem Y cannot be NP_dy-hard, there is no efficient reduction from an NP_dy-hard problem X to Y, where an efficient reduction is, roughly, a way to handle each update for problem X by making a polylogarithmic number of updates to an algorithm for Y (such a reduction would make Y an NP_dy-hard problem). Consequently, this rules out efficient reductions from dynamic OV, since it is NP_dy-complete. As a result, this rules out a common way to prove lower bounds based on SETH, since previously this was mostly done via reductions from dynamic OV [AW14]. (A lower bound for dynamic diameter is among the very few exceptions [AW14].) As noted earlier, it turns out that the dynamic OV problem is NP_dy-complete. Since most previous reductions from SETH to dynamic problems (in the word RAM model) are in fact reductions from dynamic OV [AW14], and since any reduction in the word RAM model also applies in the (stronger) cell-probe model, we get many NP_dy-hardness results for free.
In contrast, our results above imply that the following two statements are equivalent: (i) "problem Π cannot be NP_dy-hard" and (ii) "there is no efficient reduction from dynamic OV to Π", where "efficient reductions" are reductions that only polynomially blow up the instance size (all reductions in [AW14] are efficient). In other words, we may not expect reductions from SETH that are similar to the previous ones for k-edge connectivity, bipartiteness, etc. Finally, we emphasize that the coarse-grained approach should be viewed as a complement of the fine-grained approach, as the above results exemplify. We do not expect to replace results from one approach by those from another. Complexity classes for dynamic problems in the word RAM model. As an aside, we managed to define complexity classes and completeness results for dynamic problems in the word RAM model as well. We refer to P_dy and NP_dy as RAM-P_dy and RAM-NP_dy in the word RAM model. One caveat is that for technical reasons we need to allow quasipolynomial preprocessing time and space while defining the complexity classes RAM-P_dy and RAM-NP_dy. See Part VI for more details. There are several previous attempts to classify dynamic problems. First, there is a line of work called "dynamic complexity theory" (see e.g. [DKM+
15, WS05, SZ16]) where the general question asks whether a dynamic problem is in the class called DynFO. Roughly speaking, a problem is in DynFO if it admits a dynamic algorithm expressible in first-order logic. This means, in particular, that given an update, such an algorithm runs in O(1) parallel time, but might take an arbitrary poly(n) amount of work when the input size is n. A notion of reduction is defined, and complete problems of DynFO and related classes are proven in [HI02, WS05]. However, as the total work of algorithms from this field can be large (or even larger than computing from scratch using sequential algorithms), they do not give fast dynamic algorithms in our sequential setting. Therefore, this setting is somewhat irrelevant to ours. Second, a problem called the circuit evaluation problem has been shown to be complete in the following sense. First, it is in P (the class of static problems). Second, if the dynamic version of the circuit evaluation problem, which is defined as DNF_dy where the DNF formula is replaced with an arbitrary circuit, admits a dynamic algorithm with polylogarithmic update time, then for any static problem L ∈ P, a dynamic version of L also admits a dynamic algorithm with polylogarithmic update time. This idea was first sketched informally in 1987 by Reif [Rei87]. Miltersen et al. [MSVT94] then formalized this idea and showed that other P-complete problems listed in [MSS90, GHR91] are also complete in the above sense. The drawback of this completeness result is that the dynamic circuit evaluation problem is extremely difficult. Similar to the case for static problems, where reductions from
EXP-complete problems to problems in NP are unlikely, reductions from the dynamic circuit evaluation problem to other natural dynamic problems studied in the field seem unlikely. Hence, this does not give a framework for proving hardness of other dynamic problems. (Miltersen et al. also show, however, that the statement above is not true for all P-complete problems.) Our result can be viewed as a more fine-grained completeness result than the above, as we show that a very special case of the dynamic circuit evaluation problem, namely DNF_dy, is already a complete problem. An important point is that DNF_dy is simple enough that reductions to other natural dynamic problems are possible. Finally, Ramalingam and Reps [RR96] classify dynamic problems according to some measure (they compare the update time with the size of the change in the input and output, instead of the size of the input itself), but did not give any reduction or completeness result. One byproduct of our paper is a way to prove non-reducibility. It is interesting to use this method to shed more light on the hardness of other dynamic problems. To do so, it suffices to show that a problem is in AM_dy ∩ coAM_dy (or, even better, in NP_dy ∩ coNP_dy). One particular question is whether connectivity is in NP_dy ∩ coNP_dy. It is known to be in AM_dy ∩ coAM_dy due to the randomized algorithm of Kapron et al. [KKM13]. It is also in NP_dy (see Example 2.6). The main question is whether it is in coNP_dy. (Techniques from [NSW17, Wul17, NS17] almost give this, with verification time n^{o(1)} instead of polylogarithmic.) Having connectivity in NP_dy ∩ coNP_dy would be strong evidence that it is in P_dy, meaning that it admits a deterministic algorithm with polylogarithmic update time. Achieving such an algorithm would be a major breakthrough. Another specific question is whether the promise version of the (2 − ǫ)-approximate matching problem is in AM_dy ∩ coAM_dy.
This would rule out efficient reductions from dynamic OV to this problem. Whether this problem admits a randomized algorithm with polylogarithmic update time is a major open problem. Other problems that can be studied in this direction include approximate minimum spanning forest (MSF), d-weight MSF, bipartiteness, dynamic set cover, dominating set, and st-cut. It is also very interesting to rule out efficient reductions from the following variant of the OuMv conjecture: At the preprocessing, we are given a boolean n × n matrix M and boolean n-dimensional row and column vectors u and v. Each update changes one entry in either u or v. We then have to output the value of uMv. Most lower bounds that are hard under the OMv conjecture [HKNS15] are via efficient reductions from this problem. It is interesting to rule out such efficient reductions since SETH and OMv are the two conjectures that imply most lower bounds for dynamic problems. Now that we can prove completeness and relate some basic complexity classes of dynamic problems, one big direction to explore is whether more results from coarse-grained complexity for static problems can be reconstructed for dynamic problems. Below are a few samples.
1. Derandomization:
Making dynamic algorithms deterministic is an important issue. Derandomization efforts have so far focused on specific problems (e.g. [NS17, NSW17, BC16, BC17, Ber17, BHI18, BCHN18, BHN16]). Studying this issue via the class
BPP_dy might lead us to a more general understanding. For example, the Sipser–Lautemann theorem [Sip83, Lau83] states that
BPP ⊆ Σ_2 ∩ Π_2, Yao [Yao82] showed that the existence of some pseudorandom generators would imply that P = BPP, and Impagliazzo and Wigderson [IW97] showed that
BPP = P (assuming that some problem in E = DTIME(2^{O(n)}) has circuit complexity 2^{Ω(n)}). We do not know anything similar to these for dynamic problems.
2. NP-intermediate:
Many static problems (e.g. graph isomorphism and factoring) are considered good candidates for being NP-intermediate, i.e. being neither in P nor NP-complete. This paper leaves many natural problems in NP_dy unproven to be NP_dy-complete. Are these problems in fact NP_dy-intermediate? The first step towards this question might be proving an analogue of Ladner's theorem [Lad75], i.e. that an NP_dy-intermediate dynamic problem exists, assuming P_dy ≠ NP_dy. It is also interesting to prove analogues of the time-hierarchy theorems, i.e. that with more time, more dynamic problems can be solved. (Both theorems are proved by diagonalization in the static setting.)
3. This work and lower bounds from fine-grained complexity have focused mostly on decision problems. There are also search dynamic problems, which always have valid solutions, where the challenge is how to maintain them. These problems include maximal matching, maximal independent set, minimal dominating set, coloring vertices with (∆ + 1) or more colors, and coloring edges with (1 + ǫ)∆ or more colors, where ∆ is the maximum degree (e.g. [BGS15, BCHN18, AOSS18, SW18, DZ18, GK18, OSSW18]). These problems do not seem to correspond to any decision problems. Can we define complexity classes for these problems and argue that some of them might not admit polylogarithmic update time? Analogues of TFNP and its subclasses (e.g. PPAD) might be helpful here.
Dynamic problem | Preprocess | Update | Queries | Ref.
Numbers:
• Sum/max | a set S of numbers | insert/delete a number in S | return Σ_{x∈S} x or max_{x∈S} x | binary search trees
• Predecessor | (same) | (same) | given x, return the maximum y ∈ S where y ≤ x | binary search trees
Geometry:
• Range counting | a set S of points on a plane | insert/delete a point in S | given [x_1, x_2] × [y_1, y_2], return |S ∩ ([x_1, x_2] × [y_1, y_2])| | [Ove87, after Theorem 7.6.3]
• Incremental planar nearest neighbor | (same) | insert a point into S | given a point q, return the p ∈ S closest to q | [Ove87, Theorem 7.3.4.1]
• Vertical ray shooting | a set S of segments on a plane | insert/delete a segment in S | given a point q, return the segment immediately above q | [CN15, Theorem 3.7]
Graphs:
• Dynamic problems on forests | a forest F | insert/delete an edge in F such that F remains a forest (many more kinds of updates are supported) | given two nodes u and v, decide if u and v are connected in F; given a node u, return the size of the tree containing u (many more kinds of queries are supported) | [ST81, HK99, AHdLT05]
• Connectivity on plane graphs | a plane graph G (i.e. a planar graph on a fixed embedding) | insert/delete an edge in G such that G has no crossing on the embedding | given two nodes u and v, decide if u and v are connected in G | [Fre85, EIT+...]
• 2-edge connectivity on plane graphs | (same) | (same) | given two nodes u and v, decide if u and v are 2-edge connected in G | [Fre97]
• (2 + ǫ)-approx. size of maximum matching | a general graph G | insert/delete an edge in G | decide whether the size of the maximum matching is at most k or at least (2 + ǫ)k, for some k and constant ǫ > 0 |
Table 1: Problems in P_dy. Some problems are strictly promise problems, but our class can be extended easily to include them (see Appendix A).
There are also other concepts that have not been discussed in this paper at all, such as interactive proofs, probabilistically checkable proofs (PCP), counting problems (e.g. Toda's theorem), relativization and other barriers. Finally, in this paper we did not discuss amortized update time. It is a major open problem whether similar results, especially an analogue of NP-hardness, can be proved for algorithms with amortized update time. NP_dy-Completeness Proof. In this section, we present an overview of one of our main technical contributions (the proof of Theorem 2.8) at a finer level of granularity. In order to explain the main technical insights, we focus on a nonuniform model of computation called the bit-probe model, which has been studied since the 1970s [Fre78, Mil99]. P_dy and NP_dy. We begin by reviewing (informally) the concepts of a dynamic problem and an algorithm in the bit-probe model. See Section 6 for a formal description. Consider any dynamic problem D_n.
Here, the subscript n serves as a reminder that the bit-probe model is nonuniform, and it also indicates that each instance I of this problem can be specified using n bits. We will mostly be concerned with
Dynamic problem | Preprocess | Update | Queries | Ref.
• Connectivity | a general undirected unweighted graph G | insert/delete an edge in G | given two nodes u and v, decide if u and v are connected in G | Proposition B.3
• (1 + ǫ)-approx. size of maximum matching | (same) | (same) | decide whether the size of the maximum matching is at most k or at least (1 + ǫ)k, for some k and constant ǫ > 0 |
• Subgraph detection | G and H where |V(H)| = polylog(|V(G)|) | insert/delete an edge in G | decide whether H is a subgraph of G | Proposition B.5
• uMv (entry update) | u, v ∈ {0,1}^n and M ∈ {0,1}^{n×n} | update an entry of u or v | decide whether u^T M v = 1 (multiplication over the Boolean semiring) |
• 3SUM | a set S of numbers | insert/delete a number in S | decide whether there are a, b, c ∈ S with a + b = c |
• Planar nearest neighbor | a set S of points on a plane | insert a point into S | given a point q, return the p ∈ S closest to q |
• Erickson's problem [Pat10] | a matrix M | choose a row or a column and increment all numbers in that row or column | given k, is the maximum entry in M at least k? |
• Langerman's problem [Pat10] | an array A | given (i, x), set A[i] = x | is there a k such that Σ_{i=1..k} A[i] = 0? |
Table 2: Problems in NP_dy that are not known to be in P_dy. Some problems are strictly promise problems, but our class can be extended easily to include them (see Appendix A).
Dynamic problem | Preprocess | Update | Queries
Pagh’s problem withemptiness query[AW14] A collection X of sets X , . . . , X k ⊆ [ n ] given i, j , insert X i ∩ X j into X given i , is X i = ∅ ?Chan’s subset unionproblem [AW14] A collection of sets X , . . . , X n ⊆ [ m ]. Aset S ⊆ [ n ]. insert/deletion anelement in S is ∪ i ∈ S X i = [ m ]?Single sourcereachability Count( s -reach) a directed graph G anda node s insert/delete an edge count the nodes reachable from s .2 Strong components(SC2) a directed graph G insert/delete an edge are there more than 2 stronglyconnected components? st -max-flow a capacitated directedgraph G and nodes s and t insert/delete an edge the size of s - t max flow.Subgraph globalconnectivity a fixed undirectedgraph G turn on/off a node is a graph induced by turned on nodesconnected?3 vs. 4 diameter an undirected graph G insert/delete an edge is a diameter of G ST -reachability a directed graph G andsets of node S and T insert/delete an edge is there s ∈ S and t ∈ T where s canreach t ? Table 3: Problems that are NP dy -hard. The proof is discussed in Section 11.3.15 ynamicProblems Preprocess Update Queries Ref. Smalldominating set a graph G insert/deletean edge Is there a dominating set of size at most k ? Lemma B.6Small vertexcover a graph G insert/deletean edge Is there a vertex cover of size at most k ? Lemma B.7Small maximalindependent set a graph G insert/deletean edge Is there a maximal independent set ofsize at most k ? Lemma B.8Small maximalmatching a graph G insert/deletean edge Is there a maximal matching of size atmost k ? Lemma B.9Chan’s SubsetUnion Problem a collection ofsets X , . . . , X n from universe[ m ], and a set S ⊆ [ n ] insert/deletean index in S is ∪ i ∈ S X i = [ m ]? Lemma B.103 vs. 
4 diameter a graph G insert/deletean edge Is the diameter of G k -center a point set X ⊆ R d and athreshold T ∈ R insert/delete apoint Is there a set C ⊆ X where | C | ≤ k andmax u ∈ X min v ∈ C d ( u, v ) ≤ T Lemma B.12 k -edgeconnectivity a graph G insert/deletean edge Is G k -edge connected? Lemma B.13
Table 4: Problems in PH_dy that are not known to be in NP_dy. The parameter k in every problem must be at most polylog(n), where n is the size of the instance.
dynamic decision problems, where the answer D_n(I) ∈ {0, 1} to every instance I can be specified using a single bit. We say that I is a YES instance if D_n(I) = 1, and a NO instance if D_n(I) = 0. An algorithm A_n for this dynamic problem D_n has access to a memory mem_n, and the total number of bits available in this memory is called the space complexity of A_n. The algorithm A_n works in steps t = 0, 1, 2, . . . , in the following manner. Preprocessing:
At step t = 0 (also called the preprocessing step), the algorithm gets a starting instance I_0 ∈ D_n as input. Upon receiving this input, it initializes the bits in its memory mem_n and then outputs the answer D_n(I_0) to the current instance I_0. Updates:
Subsequently, at each step t ≥
1, the algorithm gets an instance-update (I_{t−1}, I_t) as input. The sole purpose of this instance-update is to change the current instance from I_{t−1} to I_t. Upon receiving this input, the algorithm probes (reads/writes) some bits in the memory mem_n, and then outputs the answer D_n(I_t) to the current instance I_t ∈ D_n. The update time of A_n is the maximum number of bit-probes it needs to make in mem_n while handling an instance-update. One way to visualize the above description is as follows. An adversary keeps constructing an instance-sequence (I_0, I_1, . . . , I_k, . . .) one step at a time. At each step t, the algorithm A_n gets the corresponding instance-update (I_{t−1}, I_t), and at this point it is only aware of the prefix (I_0, . . . , I_t). Specifically, the algorithm does not know the future instance-updates. After receiving the instance-update at each step t, the algorithm has to output the answer to the current instance, D_n(I_t). This framework is flexible enough to capture dynamic problems that allow for both update and query operations, because we can easily model a query operation as an instance-update. This fact is illustrated in more detail in Example 6.5. Furthermore, w.l.o.g. we assume that an instance-update in a dynamic problem D_n can be specified using O(log n) bits. See the discussion preceding Assumption 6.7 for a more detailed explanation of this phenomenon. For technical reasons, we will work under the following assumption. This assumption will be implicitly present in the definitions of the complexity classes P_dy and NP_dy below. Assumption 5.1.
A dynamic algorithm A_n for a dynamic problem D_n has to handle at most poly(n) many instance-updates. We now define the complexity class P_dy. Definition 5.2 (Class P_dy). A dynamic decision problem D_n is in P_dy iff there is an algorithm A_n solving D_n which has update time O(polylog(n)) and space complexity O(poly(n)). △ In order to define the class NP_dy, we first introduce the notion of a verifier in Definition 5.3. Subsequently, we introduce the class NP_dy in Definition 5.4. We have already discussed the intuitions behind these concepts in Section 1 after the statement of Definition 2.4. Definition 5.3 (Dynamic verifier). We say that a dynamic algorithm V_n with space complexity O(poly(n)) is a verifier for a dynamic decision problem D_n iff it works as follows. Preprocessing:
At step t = 0, the algorithm V_n gets a starting instance I_0 ∈ D_n as input, and it outputs an ordered pair (x_0, y_0) where x_0 ∈ {0, 1} and y_0 ∈ {0, 1}^{polylog(n)}. Updates:
Subsequently, at each step t ≥
1, the algorithm V_n gets an instance-update (I_{t−1}, I_t) and a proof π_t ∈ {0, 1}^{polylog(n)} as input, and it outputs an ordered pair (x_t, y_t) where x_t ∈ {0, 1} and y_t ∈ {0, 1}^{polylog(n)}. The algorithm V_n has O(polylog(n)) update time, i.e., it makes at most O(polylog(n)) bit-probes in the memory during each step t. Note that the output (x_t, y_t) depends on the instance-sequence (I_0, . . . , I_t) and the proof-sequence (π_1, . . . , π_t) seen so far. △ Definition 5.4 (Class NP_dy). A decision problem D_n is in NP_dy iff it admits a verifier V_n which satisfies the following properties. Fix any instance-sequence (I_0, . . . , I_k). Suppose that the verifier V_n gets I_0 as input at step t = 0 and the ordered pair ((I_{t−1}, I_t), π_t) as input at every step t ≥ 1.
1. For every proof-sequence (π_1, . . . , π_k), we have x_t = 0 for each t ∈ {1, . . . , k} where D_n(I_t) = 0.
2. If the proof-sequence (π_1, . . . , π_k) is reward-maximizing (defined below), then we have x_t = 1 for each t ∈ {1, . . . , k} with D_n(I_t) = 1.
The proof-sequence (π_1, . . . , π_k) is reward-maximizing iff the following holds. At each step t ≥ 1, given (I_0, . . . , I_t) and (π_1, . . . , π_{t−1}), the proof π_t is chosen in such a way that maximizes the value of y_t. We say that such a proof π_t is reward-maximizing. △ Just as in the static setting, we can easily prove that P_dy ⊆ NP_dy, and we conjecture that P_dy ≠ NP_dy. The big question left open in this paper is to resolve this conjecture. Corollary 5.5.
We have P_dy ⊆ NP_dy.

NP_dy-completeness of DNF_dy

One of the main results in this paper shows that a natural problem called dynamic narrow DNF evaluation (denoted by DNF_dy) is NP_dy-complete. Intuitively, this means that (a) DNF_dy ∈ NP_dy, and (b) if DNF_dy ∈ P_dy then P_dy = NP_dy. (To be more precise, condition (b) means that every problem in NP_dy is P_dy-reducible to DNF_dy. See Section 8 for a formal definition.) We now give an informal description of this problem.

Dynamic narrow DNF evaluation (DNF_dy): An instance I of this problem consists of a triple (Z, C, φ), where Z = {z_1, . . . , z_N} is a set of N variables, C = {C_1, . . . , C_M} is a set of M DNF clauses, and φ : Z → {0, 1} is an assignment of values to the variables. Each clause C_j is a conjunction (AND) of at most polylog(N) literals, where each literal is of the form z_i or ¬z_i for some variable z_i ∈ Z. This is a YES instance if at least one clause C ∈ C is true under the assignment φ, and this is a NO instance if every clause in C is false under the assignment φ. Finally, an instance-update changes the assignment φ by flipping the value of exactly one variable in Z.

It is easy to see that the above problem is in NP_dy. Specifically, if the current instance is a YES instance, then a proof π_t simply points to a specific clause C_j ∈ C that is true under the current assignment φ. The proof π_t can be encoded using O(log M) bits. Furthermore, since each clause contains at most polylog(N) literals, the verifier can check that the clause C_j specified by the proof π_t is true under the assignment φ in O(polylog(N)) time. On the other hand, no proof can fool the verifier if the current instance is a NO instance (where every clause is false). All these observations can be formalized in a manner consistent with Definition 5.4. We will prove the following theorem.

Theorem 5.6.
The DNF_dy problem described above is NP_dy-complete.

In order to prove Theorem 5.6, we consider an intermediate dynamic problem called First-DNF_dy.

First-DNF_dy: An instance I of First-DNF_dy consists of a tuple (Z, C, φ, ≺). Here, the symbols Z, C and φ denote exactly the same objects as in the DNF_dy problem described above. In addition, the symbol ≺ denotes a total order on the set of clauses C. The answer to this instance I is defined as follows. If every clause in C is false under the current assignment φ, then the answer to I is 0. Otherwise, the answer to I is the first clause C_j ∈ C, according to the total order ≺, that is true under φ. It follows that First-DNF_dy is not a decision problem. Finally, as before, an instance-update for First-DNF_dy changes the assignment φ by flipping the value of exactly one variable in Z.

We prove Theorem 5.6 as follows. (1) We first show that First-DNF_dy is NP_dy-hard. Specifically, if there is an algorithm for First-DNF_dy with polylog update time and polynomial space complexity, then P_dy = NP_dy. We explain this in more detail in Section 5.2.1. (2) Using a standard binary-search trick, we show that there exists an O(polylog(n)) time reduction from First-DNF_dy to DNF_dy. Specifically, this means that if DNF_dy ∈ P_dy, then we can use an algorithm for DNF_dy as a subroutine to design an algorithm for First-DNF_dy with polylog update time and polynomial space complexity. Theorem 5.6 follows from (1) and (2), and the observation that DNF_dy ∈ NP_dy.

NP_dy-hardness of First-DNF_dy

Consider any dynamic decision problem D_n ∈ NP_dy. Thus, there exists a verifier V_n for D_n with the properties mentioned in Definition 5.4. Throughout Section 5.2.1, we assume that there is an algorithm for First-DNF_dy with polynomial space complexity and polylog update time.
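Before going into the reduction, it may help to see the two problems in executable form. The following is a minimal reference sketch of DNF_dy and First-DNF_dy (the class name, literal encoding, and index-order choice for ≺ are ours, not the paper's). It maintains, for each clause, the number of literals violated by the current assignment; note that a flip costs time proportional to the number of clauses mentioning the flipped variable, which in general is far above the polylog update time the complexity question asks about. We assume each variable occurs at most once per clause.

```python
from typing import List, Tuple

class DynamicDNF:
    """Reference (non-polylog) implementation of dynamic narrow DNF
    evaluation. A literal is a pair (i, pol): pol=True means z_i,
    pol=False means the negated literal on z_i."""

    def __init__(self, n_vars: int,
                 clauses: List[List[Tuple[int, bool]]],
                 phi: List[bool]) -> None:
        self.phi = list(phi)
        self.clauses = clauses
        # occurs[i] = indices of clauses that mention variable z_i
        self.occurs = [[] for _ in range(n_vars)]
        # falsified[j] = number of literals of clause j violated by phi
        self.falsified = []
        for j, clause in enumerate(clauses):
            for (i, pol) in clause:
                self.occurs[i].append(j)
            self.falsified.append(
                sum(1 for (i, pol) in clause if self.phi[i] != pol))
        self.satisfied = {j for j, b in enumerate(self.falsified) if b == 0}

    def flip(self, i: int) -> None:
        """Instance-update: flip the value of variable z_i."""
        self.phi[i] = not self.phi[i]
        for j in self.occurs[i]:
            pol = next(p for (v, p) in self.clauses[j] if v == i)
            # The literal on z_i in clause j toggles its status.
            self.falsified[j] += -1 if self.phi[i] == pol else 1
            if self.falsified[j] == 0:
                self.satisfied.add(j)
            else:
                self.satisfied.discard(j)

    def answer(self) -> bool:
        """DNF_dy answer: is some clause true under the current phi?"""
        return bool(self.satisfied)

    def first_true_clause(self):
        """First-DNF_dy answer when the total order is the index order
        (None plays the role of the answer 0)."""
        return min(self.satisfied) if self.satisfied else None
```

The binary-search trick of step (2) can be read off from `first_true_clause`: with the clauses sorted by ≺, O(log M) queries to a DNF_dy oracle on prefixes of the clause list suffice to locate the first true clause.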
Under this assumption, we will show that there exists an algorithm A_n for D_n that also has O(poly(n)) space complexity and O(polylog(n)) update time. This will imply the NP_dy-hardness of First-DNF_dy.

The high-level strategy:
The algorithm A_n will use the following two subroutines: (1) the verifier V_n for D_n as specified in Definition 5.3 and Definition 5.4, and (2) a dynamic algorithm A* that solves the First-DNF_dy problem with polylog update time and polynomial space complexity. To be more specific, consider any instance-sequence (I_0, . . . , I_k) for the problem D_n. At step t = 0, after receiving the starting instance I_0, the algorithm A_n calls the subroutine V_n with the same input I_0. The subroutine V_n returns an ordered pair (x_0, y_0). At this point, the algorithm A_n outputs the bit x_0. Subsequently, at each step t ≥
1, the algorithm A_n receives the instance-update (I_{t-1}, I_t) as input. It then calls the subroutine A* in such a manner which ensures that A* returns a reward-maximizing proof π_t for the verifier V_n (see Definition 5.4). This is explained in more detail below. The algorithm A_n then calls the verifier V_n with the input ((I_{t-1}, I_t), π_t), and the verifier returns an ordered pair (x_t, y_t). At this point, the algorithm A_n outputs the bit x_t.

To summarize, the algorithm A_n uses A* as a dynamic subroutine to construct a reward-maximizing proof-sequence (π_1, . . . , π_k) – one step at a time. Furthermore, at each step t ≥ 1, A_n calls the verifier V_n with the input ((I_{t-1}, I_t), π_t). The verifier V_n returns (x_t, y_t), and the algorithm A_n outputs x_t. Item (1) in Definition 5.4 implies that the algorithm A_n outputs 0 on all the NO instances (where D_n(I_t) = 0). Since the proof-sequence (π_1, . . . , π_k) is reward-maximizing, item (2) in Definition 5.4 implies that the algorithm A_n outputs 1 on all the YES instances (where D_n(I_t) = 1). So the algorithm A_n always outputs the correct answer and solves the problem D_n. We now explain how the algorithm A_n calls the subroutine A*, and then analyze the space complexity and update time of A_n. The key observation is that we can represent the verifier V_n as a collection of decision trees, and each root-to-leaf path in each of these trees can be modeled as a DNF clause.

The decision trees that define the verifier V_n: Let mem_{V_n} denote the memory of the verifier V_n. We assume that during each step t ≥
1, the instance-update (I_{t-1}, I_t) is written in a designated region mem^{(0)}_{V_n} ⊆ mem_{V_n} of the memory, and the proof π_t is written in another designated region mem^{(1)}_{V_n} ⊆ mem_{V_n} of the memory. Each bit in mem_{V_n} can be thought of as a boolean variable z ∈ {0, 1}. We view the region mem_{V_n} \ mem^{(1)}_{V_n} as a collection of boolean variables Z = {z_1, . . . , z_N}, and the contents of mem_{V_n} \ mem^{(1)}_{V_n} as an assignment φ : Z → {0, 1}. For example, if φ(z_j) = 1 for some z_j ∈ Z, then it means that the bit z_j in mem_{V_n} \ mem^{(1)}_{V_n} is currently set to 1. Upon receiving an input ((I_{t-1}, I_t), π_t), the verifier V_n makes some probes in mem_{V_n} \ mem^{(1)}_{V_n} according to some pre-defined procedure, and then outputs an answer (x_t, y_t). This procedure can be modeled as a decision tree T_{π_t}. Each internal node (including the root) in this decision tree is either a "read" node or a "write" node. Each read-node has two children and is labelled with a variable z ∈ Z. Each write-node has one child and is labelled with an ordered pair (z, λ), where z ∈ Z and λ ∈ {0, 1}. Finally, each leaf-node of T_{π_t} is labelled with an ordered pair (x, y), where x ∈ {0, 1} and y ∈ {0, 1}^{polylog(n)}. Upon receiving the input ((I_{t-1}, I_t), π_t), the verifier V_n traverses this decision tree T_{π_t}. Specifically, it starts at the root of T_{π_t}, and then inductively applies the following steps until it reaches a leaf-node.

• Suppose that it is currently at a read-node of T_{π_t} labelled with z ∈ Z. If φ(z) = 0 (resp. φ(z) = 1), then it goes to the left (resp. right) child of the node. On the other hand, suppose that it is currently at a write-node of T_{π_t} which is labelled with (z, λ). Then it writes λ into the memory-bit z (by setting φ(z) = λ) and then moves on to the only child of this node. Finally, when it reaches a leaf-node, the verifier V_n outputs the corresponding label (x, y).
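The traversal just described is entirely mechanical; a minimal sketch may make it concrete (the tuple-based node encoding is our own illustration, not the paper's). The number of iterations equals the number of probes, i.e., the length of the execution path.

```python
def run_tree(node, phi):
    """Traverse a verifier decision tree T_pi on the assignment phi
    (a mutable list of bits), following the read/write/leaf rules above.
    Node encodings (ours): ("read", z, left, right),
                           ("write", z, bit, child),
                           ("leaf", x, y)."""
    while node[0] != "leaf":
        if node[0] == "read":
            _, z, left, right = node
            node = right if phi[z] else left   # phi(z) = 1 -> right child
        else:                                  # write-node
            _, z, bit, child = node
            phi[z] = bit                       # write lambda into bit z
            node = child
    return (node[1], node[2])                  # the leaf label (x, y)

# A toy tree: read z_0; on 0 answer (0, 0); on 1 set z_1 := 1, answer (1, 5).
toy = ("read", 0,
       ("leaf", 0, 0),
       ("write", 1, 1, ("leaf", 1, 5)))
```

Reading off the DNF clause C_P of a root-to-leaf path P, as done in the next paragraph, amounts to recording, for each read-node on P, the literal ¬z or z according to which child P takes.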
This is the way the verifier operates when it is called with an input ((I_{t-1}, I_t), π_t). The depth of the decision tree (the maximum length of any root-to-leaf path) is at most polylog(n), since as per Definition 5.3 the verifier makes at most polylog(n) many bit-probes in the memory while handling any input. Each possible proof π for the verifier can be specified using polylog(n) bits. Hence, we get a collection of O(2^{polylog(n)}) many decision trees T = {T_π} – one tree T_π for each possible proof π. This collection of decision trees T completely characterizes the verifier V_n.

DNF clauses corresponding to a decision tree T_π: Suppose that the proof π is given as part of the input to the verifier during some update step. Consider any root-to-leaf path P in the decision tree T_π. We can naturally associate a DNF clause C_P with this path P. To be more specific, suppose that the path P traverses a read-node labelled with z ∈ Z and then goes to its left (resp. right) child. Then we have a literal ¬z (resp. z) in the clause C_P that corresponds to this read-node. The clause C_P is the conjunction (AND) of these literals, and C_P is true iff the verifier V_n traverses the path P when π is the proof given to it as part of the input. Let C = {C_P : P is a root-to-leaf path in some tree T_π ∈ T} be the collection of all these DNF clauses.

Defining a total order ≺ over C: We now define a total order ≺ over C which satisfies the following property. Consider any two root-to-leaf paths P and P' in the collection of decision trees T. Let (x, y) and (x', y') respectively denote the labels associated with the leaf-nodes of the paths P and P'. If C_P ≺ C_{P'}, then y ≥ y'. Thus, the paths with higher y values appear earlier in ≺.

Finding a reward-maximizing proof:
Suppose that (I_0, . . . , I_{t-1}) is the instance-sequence of D_n received by A_n till now. By induction, suppose that A_n has managed to construct a reward-maximizing proof-sequence (π_1, . . . , π_{t-1}) till this point, and has fed it as input to the verifier V_n (which is used as a subroutine). At the present moment, suppose that A_n receives an instance-update (I_{t-1}, I_t) as input. Our goal now is to find a reward-maximizing proof π_t at the current step t.

Consider the tuple (Z, C, φ, ≺), where Z = mem_{V_n} \ mem^{(1)}_{V_n} is the set of variables, C = {C_P : P is a root-to-leaf path in some decision tree T_π} is the set of DNF clauses, the assignment φ : Z → {0, 1} reflects the current contents of the memory-bits in mem_{V_n} \ mem^{(1)}_{V_n}, and ≺ is the total order over C described above. Let C_{P'} ∈ C be the answer to this First-DNF_dy instance (Z, C, φ, ≺), and suppose that the path P' belongs to the decision tree T_{π'} corresponding to the proof π'. A moment's thought will reveal that π_t = π' is the desired reward-maximizing proof at step t, for the following reason. Let (x', y') be the label associated with the leaf-node of P'. By definition, if the verifier gets the ordered pair ((I_{t-1}, I_t), π') as input at this point, then it will traverse the path P' in the decision tree T_{π'} and return the ordered pair (x', y'). Furthermore, the path P' comes first according to the total order ≺, among all the paths that can be traversed by the verifier at this point. Hence, the path P' maximizes y', and accordingly, we conclude that π_t = π' is a reward-maximizing proof at step t.

Wrapping up: handling an instance-update (I_{t-1}, I_t). To summarize, when the algorithm A_n receives an instance-update (I_{t-1}, I_t), it works as follows. It first writes down the instance-update (I_{t-1}, I_t) in mem^{(0)}_{V_n} and accordingly updates the assignment φ : Z → {0, 1}.
It then calls the subroutine A* on the First-DNF_dy instance (Z, C, φ, ≺). The subroutine A* returns a reward-maximizing proof π_t. The algorithm A_n then calls the verifier V_n as a subroutine with the ordered pair ((I_{t-1}, I_t), π_t) as input. The verifier updates at most polylog(n) many bits in mem_{V_n} and returns an ordered pair (x_t, y_t). The algorithm A_n now updates the assignment φ : Z → {0, 1} to ensure that it is synchronized with the current contents of mem_{V_n}. This requires O(polylog(n)) many calls to the subroutine A* for the First-DNF_dy instance. Finally, A_n outputs the bit x_t ∈ {0, 1}.

Bounding the update time of A_n: Notice that after each instance-update (I_{t-1}, I_t), the algorithm A_n makes one call to the verifier V_n and at most polylog(n) many calls to A*. By Definition 5.3, the call to V_n requires O(polylog(n)) time. Furthermore, we have assumed that A* has polylog update time. Hence, each call to A* takes O(polylog(N, M)) = O(polylog(2^{polylog(n)})) = O(polylog(n)) time. Since the algorithm A_n makes at most polylog(n) many calls to A*, the total time spent in these calls is still O(polylog(n)). Thus, we conclude that A_n has O(polylog(n)) update time.

Bounding the space complexity of A_n: The space complexity of A_n is dominated by the space complexities of the subroutines V_n and A*. As per Definition 5.3, the verifier V_n has space complexity O(poly(n)). (W.l.o.g. we can assume that no two read-nodes on the same path are labelled with the same variable.)

We next bound the memory space used by the subroutine A*. Note that in the First-DNF_dy instance, we have a DNF clause C_P ∈ C for every root-to-leaf path P of every decision tree T_π. Since a proof π consists of polylog(n) bits, there are at most O(2^{polylog(n)}) many decision trees of the form T_π.
Furthermore, since every root-to-leaf path is of length at most polylog(n), each decision tree T_π has at most O(2^{polylog(n)}) many root-to-leaf paths. These two observations together imply that the set of clauses C is of size at most O(2^{polylog(n)} · 2^{polylog(n)}) = O(2^{polylog(n)}). Furthermore, as per Definition 5.3 there are at most O(poly(n)) many bits in the memory mem_{V_n}, which means that there are at most O(poly(n)) many variables in Z. Thus, the First-DNF_dy instance (Z, C, φ, ≺) is defined over a set of N = poly(n) variables and a set of M = 2^{polylog(n)} clauses (where each clause consists of at most polylog(n) many literals). We have assumed that A* has quasipolynomial space complexity. Thus, the total space needed by the subroutine A* is O(2^{polylog(N, M)}) = O(2^{polylog(n)}).

Unfortunately, the bound of 2^{polylog(n)} is too large for us. Instead, we would like to have a space complexity of O(poly(n)). Towards this end, we introduce a new subroutine S*_n that acts as an interface between the subroutine A* and the memory mem_{A*} used by A* (see Section 10.4 for details). Specifically, as we observed in the preceding paragraph, the memory mem_{A*} consists of 2^{polylog(n)} many bits, and we cannot afford to store all these bits during the execution of the algorithm A_n. The subroutine S*_n has the nice properties that (a) it has space complexity O(poly(n)) and (b) it can still return the content of a given bit in mem_{A*} in O(polylog(n)) time. In other words, the subroutine S*_n stores the contents of mem_{A*} in an implicit manner, and whenever the subroutine A* wants to read/write a given bit in mem_{A*}, it does so by calling the subroutine S*_n. This ensures that the overall space complexity of A* remains O(poly(n)).
However, the subroutine S ∗ n will be able perform its designated task with polylog( n ) update time and poly( n ) space complexityonly if the algorithm A n is required to handle at most poly( n ) many instance-updates after thepreprocessing step. This is why we need Assumption 5.1 while defining the complexity classes P dy and NP dy .To summarize, we have shown that the algorithm A n has polylog update time and polynomialspace complexity. This implies that the First-DNF dy problem is NP dy -hard.21 art II Basic Complexity Classes (in the Bit-probeModel)
This part is organized as follows. In Section 6, we formally define the notion of a dynamic problem and a dynamic algorithm in the bit-probe model. In Section 7, we define the dynamic complexity classes P_dy, NP_dy and coNP_dy. We also present multiple examples of dynamic problems belonging to these classes; see Appendix B.2 for more details. In Section 8, we formally define the notion of an "efficient" reduction between dynamic problems, and define the notion of a complete problem within any given dynamic complexity class. Finally, in Section 9 we introduce the dynamic polynomial hierarchy (denoted by PH_dy), and show that PH_dy collapses to P_dy if P_dy = NP_dy. We refer the reader to Appendix B.3 for multiple examples of dynamic problems that happen to belong to PH_dy (but are not known to be in NP_dy or coNP_dy).

We work with the bit-probe model of computation. In Section 6.1, we formally define the notion of a dynamic problem in this model. In Section 6.2, we present a formal description of a bit-probe algorithm for solving a dynamic problem.
Static Problems.
We begin by recalling the standard definition of a computational problem in the static setting. A problem is a function P : {0, 1}* → {0, 1}* which maps each instance I ∈ {0, 1}* to an answer P(I) ∈ {0, 1}*. We say that P is a decision problem iff the range of P is {0, 1}. If P is a decision problem, then we respectively refer to P^{-1}(0) and P^{-1}(1) as the set of NO instances and YES instances of P.

Example 6.1.
Let P : {0, 1}* → {0, 1} be the problem that, given a planar graph G, decides whether G is Hamiltonian. Then P^{-1}(0) and P^{-1}(1) are the sets of non-Hamiltonian planar graphs and Hamiltonian planar graphs, respectively. △

For any integer n ≥
1, let P_n : {0, 1}^n → {0, 1}* be obtained by restricting the domain of P to the set of all bit-strings of length n. We say that P_n is an n-slice of P, and we write P = {P_n}_n. We refer to each bit-string I ∈ {0, 1}^n as an instance of P_n.

Dynamic Problems.
We define a dynamic problem to be a graph structure imposed on instances. Formally, a dynamic problem is an ordered pair D = (P, G), where P is a static problem and G = {G_n}_n is a family of directed graphs such that the node-set of each G_n is equal to the set of all instances of P_n. Thus, for each integer n ≥
1, the directed graph G_n = (U_n, E_n) has node-set U_n = {0, 1}^n. (We use U_n, instead of V_n, to denote the set of nodes of G_n, since later on V will be used frequently for a "verifier".) We refer to the ordered pair D_n = (P_n, G_n) as the n-slice of D, and we write D = {D_n}_n. Each I ∈ U_n is called an instance of D_n. Each (I, I') ∈ E_n is called an instance-update of D_n. We refer to the graph G_n as the update-graph of D_n. We also call G the family of update-graphs of D, or simply the update-graphs of D. We will use the definition below often:

Definition 6.2 (Instance-sequence). We say that a tuple (I_0, . . . , I_k) is an instance-sequence of D iff (I_0, . . . , I_k) is a directed path in the graph G_n for some n. △

For each instance I of D, we write D(I) = P(I) as the answer of I. We say that D = (P, G) is a dynamic decision problem iff P is a decision problem. From now on, we usually use D to denote some dynamic problem, and we just call D a problem for brevity.

For each integer n ≥
1, consider the function u : E_n → {0, 1}* that maps each instance-update (I, I') ∈ E_n to the bit-string which represents the positions of the bits where I' differs from I. We call u(I, I') the standard encoding of (I, I'), and write I' = I + u(I, I'). The length of this encoding is denoted by |u(I, I')|. More generally, for any instance-sequence (I_0, . . . , I_k) of D, we write I_k = I_0 + u(I_0, I_1) + · · · + u(I_{k-1}, I_k). Note that it is quite possible for two different instance-updates (I_1, I_2) ∈ E_n and (I_3, I_4) ∈ E_n to have the same standard encoding, so that we get u(I_1, I_2) = u(I_3, I_4).

Let λ_D : N → N be an integer-valued function such that λ_D(n) = max_{(I,I') ∈ E_n} |u(I, I')| is equal to the maximum length over all standard encodings of instance-updates in G_n, for each positive integer n ∈ N. We refer to λ_D(·) as the instance-update-size of D.

Fact 6.3.
We have λ_D(n) ≥ log n if there is some instance-update between instances of D of size n.

Proof. The standard encoding u needs at least log n bits to specify a single bit-position where two instances of D_n differ from one another.

Recall that in the static setting, two decision problems P and Q are complements of each other iff the set of YES instances of P is the set of NO instances of Q, and vice versa. We now define when two dynamic decision problems are complements of each other.

Definition 6.4.
We say that a dynamic decision problem D = (P, G) is the complement of another dynamic decision problem D' = (Q, G') (and vice versa) iff the following two conditions hold:

• The corresponding static problems P and Q are complements of each other. Specifically, we have P^{-1}(1) = Q^{-1}(0) and P^{-1}(0) = Q^{-1}(1).

• For each n ≥
1, we have G_n = G'_n. In other words, the dynamic problems D and D' have the same update-graphs for each n ≥ 1. △

Our formalization captures many dynamic problems – even the ones that allow for query operations (in addition to update operations).
Example 6.5 (Dynamic problems with queries). Consider the dynamic connectivity problem. In this problem, we are given an undirected graph G with N nodes which is updated via a sequence of edge insertions/deletions. At any time, given a query (u, v), we have to decide whether the nodes u and v are connected in the current graph G. We can capture this problem using our formalization. Set n = N^2 + 2 log N, and define the update-graph G_n = (U_n, E_n) as follows. Each instance I ∈ U_n = {0, 1}^n represents a triple (G, u, v) where G is an N-node graph and u, v ∈ [N] are two nodes in G. There is an instance-update (I, I') ∈ E_n iff either {I = (G, u, v) and I' = (G, u', v')} or {I = (G, u, v) and I' = (G', u, v) and G, G' differ in exactly one edge}. Intuitively, the former case corresponds to a query operation, whereas the latter case corresponds to the insertion/deletion of an edge in G. Since an N-node graph can be represented as a string of N^2 bits using an adjacency matrix, a triple (G, u, v) can be represented as a string of N^2 + 2 log N = n bits. Let P_n : {0, 1}^n → {0, 1} be such that (G, u, v) ∈ P_n^{-1}(1) is a YES instance if u and v are connected in G, and (G, u, v) ∈ P_n^{-1}(0) is a NO instance otherwise. Let P = {P_n}_n and G = {G_n}_n. Then the ordered pair D = (P, G) captures the dynamic connectivity problem. It is easy to see that D has an instance-update-size of λ_D(n) = Θ(log n). △

Example 6.6 (Partially dynamic problems). The decremental connectivity problem is the same as the dynamic connectivity problem, except that the update sequence consists only of edge deletions. Our formalization captures this problem in a similar manner as in Example 6.5. The only difference is this: for each n ∈ N, there exists an instance-update (I, I') ∈ E_n iff either {I = (G, u, v) and I' = (G, u', v')} or {I = (G, u, v) and I' = (G', u, v) and G' is obtained from G by deleting an edge}.
△

In the two examples described above, we observed that λ_D(n) = Θ(log n). This happens to be the case for almost all the dynamic problems considered in the literature. For example, in a dynamic graph problem an update typically consists of the insertion or deletion of an edge in an input graph, and it can be specified using O(log N) bits if the input graph contains N nodes. Accordingly, we will make the following assumption throughout the rest of the paper.

Assumption 6.7.
Every dynamic problem D has λ_D(n) = Θ(log n).

One of the key ideas in this paper is to work with a nonuniform model of computation called the bit-probe model, which has been studied since the 1970s by Fredman [Fre78] (see also the survey by Miltersen [Mil99]). This allows us to view a dynamic algorithm as a clean combinatorial object, which turns out to be very useful in deriving our main results (defining complexity classes and showing the completeness and hardness of specific problems with respect to these classes).

An algorithm-family A = {A_n}_{n ≥ 1} is a collection of algorithms. For each n, an algorithm A_n operates on an array of bits mem ∈ {0, 1}* called the memory. The memory contains two designated sub-arrays called the input memory mem^in and the output memory mem^out. A_n works in steps t = 0, 1, 2, . . .. At the beginning of any step t, an input in(t) ∈ {0, 1}* is written down in mem^in, and then A_n is called. Once A_n is called, A_n reads and writes mem in a certain way described below. Then A_n returns the call. The bit-string stored in mem^out is the output at step t. Let in(0 → t) = (in(0), . . . , in(t)) denote the input transcript up to step t. We denote the output of A_n at step t by A_n(in(0 → t)), as it can depend on the whole sequence of inputs received so far.

After A_n is called in each step, how A_n probes (i.e., reads or writes) the memory mem is determined by 1) a preprocessing function prep_n : {0, 1}* → {0, 1}* and 2) a decision tree T_n (to be defined soon). At step t = 0 (also called the preprocessing step), A_n initializes the memory by setting mem ← prep_n(in(0)). We also call in(0) an initial input. At each step t ≥ 1 (called an update step), A_n uses the decision tree T_n to operate on mem.

A decision tree is a rooted tree with three types of nodes: read nodes, write nodes, and end nodes. Each read node u has two children and is labeled with an index i_u.
Each write node u has one child and is labeled with a pair (i_u, b_u), where i_u is an index and b_u ∈ {0, 1}. End nodes are simply leaves of the tree. For any index i, let mem[i] be the i-th bit of mem. To say that T_n operates on mem, we mean the following:

Start from the root of T_n. If the current node u is a read node, then proceed to the left-child if mem[i_u] = 0, otherwise proceed to the right-child. If u is a write node, then set mem[i_u] ← b_u and proceed to u's only child. Else, u is a leaf (an end node), and we stop.

The root-to-leaf path followed by the decision tree while operating on its memory is called the execution path. Clearly, the execution path depends on the contents of the memory bits. Also, note that the number of probes made by the algorithm during a call at an update step is equal to the length of the execution path traversed during the call. Thus, the update time of the algorithm A_n is defined as the depth (the maximum length of any root-to-leaf path) of T_n. Similarly, the space complexity of the algorithm A_n is defined to be the number of bits available in its memory mem.

We denote the update time of the algorithm-family A by a function Time_A(n), where Time_A(n) is the update time of A_n. Similarly, the space complexity of the algorithm-family A is denoted by a function Space_A(n), where Space_A(n) is the space complexity of A_n.

From now on, whenever we have to distinguish between multiple different algorithms, we will add the subscript A_n to the notations introduced above (e.g., prep_{A_n}, T_{A_n}, mem_{A_n}, in_{A_n}(0)).

It is usually too cumbersome to specify an algorithm at the level of its preprocessing function and decision tree. Hence, throughout this paper, we usually only describe how A_n reads and writes the memory at each step, which determines its preprocessing function prep_{A_n} and decision tree T_{A_n}.

Solving problems.
A problem D is solved by an algorithm-family A if, for any n, we have:

1. In the preprocessing step, A_n is given an initial instance I_0 of size n (i.e. in(0) = I_0), and it outputs A_n(in(0 → 0)) = D(I_0).
2. In each update step t ∈ {1, . . . , poly(n)}, A_n is given an instance-update (I_{t-1}, I_t) as input (i.e. in(t) = (I_{t-1}, I_t)), and it outputs A_n(in(0 → t)) = D(I_t).

We also say that the algorithm A_n solves the n-slice D_n of the problem D. For each step t, we say I_t is the instance maintained by A_n at step t. Note that t ∈ {1, . . . , poly(n)} in item (2) above. This is emphasized in the assumption below.

Assumption 6.8.
When we say that an algorithm A_n solves an n-slice D_n of a problem D, we mean that the algorithm gives the correct output for polynomially many update-steps t = 1, . . . , poly(n).

Subroutines.
Let A and B be algorithm-families, and consider two algorithms A_n ∈ A and B_m ∈ B. We say that the algorithm A_n uses B_m as a subroutine iff the following holds. The memory mem_{A_n} of the algorithm A_n contains a designated sub-array mem_{B_m} for the subroutine B_m to operate on. As in Section 6.2.1, mem_{B_m} has two designated sub-arrays, mem^in_{B_m} for input and mem^out_{B_m} for output. A_n might read and write anywhere in mem_{B_m}. At each step t_{A_n} of A_n, A_n can call B_m several times.

The term "call" is consistent with how it is used in Section 6.2.1. Let t_{B_m} denote the step of B_m, which is set to zero initially. When A_n calls B_m with an input x ∈ {0, 1}*, the following happens. First, A_n writes the input in_{B_m}(t_{B_m}) = x for step t_{B_m} of B_m in mem^in_{B_m}. Then B_m reads and writes mem_{B_m} according to its preprocessing function prep_{B_m} and decision tree T_{B_m}. Then B_m returns the call with the output B_m(in_{B_m}(0 → t_{B_m})) on mem^out_{B_m}. Finally, the step t_{B_m} of B_m gets incremented: t_{B_m} ← t_{B_m} + 1.

For each call, the update time of B_m contributes to the update time of A_n. At a low level, we can see that the preprocessing function prep_{A_n} is defined by "composing" prep_{B_m} with some other functions, and the decision tree T_{A_n} is a decision tree having T_{B_m} as sub-trees in several places.

Oracles. Suppose that O is an algorithm-family which solves some problem D. We say that the algorithm A_n uses O_m as an oracle if A_n uses O_m as a subroutine just like above, except for the following differences.

1. (Black-box access): A_n has very limited access to mem_{O_m}. A_n can call O_m as before, but must write only in mem^in_{O_m} and can read only from mem^out_{O_m}. More specifically, suppose that A_n calls O_m when the step of O_m is t_{O_m} = 0. Then, A_n must write in_{O_m}(0) = I'_0 in mem^in_{O_m}, where I'_0 is some instance of the problem D, and I'_0 will from then on be called an instance maintained by O_m. If the step of O_m is t_{O_m} ≥
1, then A_n must write in_{O_m}(t_{O_m}) = u(I'_{t_{O_m}-1}, I'_{t_{O_m}}) in mem^in_{O_m}, where (I'_{t_{O_m}-1}, I'_{t_{O_m}}) is some instance-update of the problem D. After each call, A_n can read the output O_m(in_{O_m}(0 → t_{O_m})) = D(I'_{t_{O_m}}), which is the answer of the instance I'_{t_{O_m}}.

2. (Free call): The update time of O_m does not contribute to the update time of A_n. We model this high-level description as follows. We already observed that the decision tree T_{A_n} is a decision tree which has T_{O_m} as sub-trees in several places. For each occurrence T' of T_{O_m} in T_{A_n}, we assign the weight of the edges between two nodes of T' to be zero. The update time of A_n is the weighted depth of T_{A_n}, i.e. the maximum weighted length of any root-to-leaf path.

3. (Space complexity): The memory mem_{O_m} is not part of the memory mem_{A_n}. In other words, the space complexity of the oracle does not contribute to the space complexity of A_n.

Oracle-families and blow-up size.
Let A be an algorithm-family, and let BlowUp_A : N → N be a function. We say that A uses an oracle-family O with blow-up size BlowUp_A if, for each n, A_n uses O_m as an oracle where m ≤ BlowUp_A(n) (or, more generally, A_n uses many oracles O_{m_1}, . . . , O_{m_k} where m_i ≤ BlowUp_A(n) for all i).

P_dy and NP_dy

We start with an informal description of the complexity class P_dy, which is a natural analogue of the class P in the dynamic setting. First, we recall that in almost all the dynamic problems known in the literature, an instance-update (I_{t-1}, I_t) can be specified using O(log n) bits (see Assumption 6.7). Hence, intuitively, a dynamic decision problem D_n should be in the class P_dy if it admits an algorithm A_n whose update time is polynomial in log n (the number of bits needed to represent an instance-update). Thus, it is natural to say that a dynamic decision problem is in P_dy if it admits a dynamic algorithm with O(polylog(n)) update time. In addition, for technical reasons that will become apparent later on, we need to allow the algorithm to have quasipolynomial space complexity. This is summarized in the definition below.

Definition 7.1 (Class P_dy). A dynamic decision problem D is in P_dy iff there is an algorithm-family A for solving D with update time Time_A(n) = O(polylog(n)) and space complexity Space_A(n) = O(poly(n)). △

Next, in order to define the class NP_dy, we first introduce the notion of a verifier in Definition 7.2. Note that this is almost analogous to the definition of a verifier in the static setting, except for the fact that at each step t the verifier V_n outputs an ordered pair (x_t, y_t) where x_t ∈ {0, 1} and y_t ∈ {0, 1}^{polylog(n)} (instead of outputting a single bit).
Intuitively, the bit x_t ∈ {0,1} corresponds to the standard single-bit output of a verifier in the static setting, whereas y_t, when thought of as a polylog(n)-bit integer, captures the reward obtained by the verifier.

Definition 7.2 (Verifier-family). An algorithm-family V is a verifier-family for a dynamic decision problem D iff the following holds for each n ≥ 1.

• Preprocessing: At step t = 0, the algorithm V_n gets a starting instance I_0 of D_n as input, and it outputs an ordered pair (x_0, y_0), where x_0 ∈ {0,1} and y_0 ∈ {0,1}^{polylog(n)}.

• Updates: Subsequently, at each step t ≥ 1, the algorithm V_n gets an instance-update (I_{t-1}, I_t) of D_n and a proof π_t ∈ {0,1}^{polylog(n)} as input, and it outputs an ordered pair (x_t, y_t), where x_t ∈ {0,1} and y_t ∈ {0,1}^{polylog(n)}. Note that the output (x_t, y_t) depends on the instance-sequence (I_0, . . . , I_t) and the proof-sequence (π_1, . . . , π_t) seen so far. △

We now define the complexity class NP_dy. Intuitively, a dynamic decision problem D is in NP_dy iff it admits a verifier-family V with polylogarithmic update time and polynomial space complexity that satisfies the following two properties for every n ≥
1. (1) The verifier V_n always outputs x_t = 0 on the NO instances of D_n, regardless of the proof-sequence given to it as part of the input. (2) The verifier V_n always outputs x_t = 1 on the YES instances of D_n, provided the proof-sequence given to it is reward-maximizing, in the sense that at each step t ≥ 1 the proof π_t is chosen in such a way that maximizes the reward y_t (when we think of y_t as a polylog(n)-bit integer).

Definition 7.3 (Class NP_dy). A decision problem D is in NP_dy iff it admits a verifier-family V with update time Time_V(n) = O(polylog(n)) and space complexity Space_V(n) = O(poly(n)) which satisfies the following properties for each n ≥ 1. Fix any instance-sequence (I_0, . . . , I_k) of D_n. Suppose that V_n gets I_0 as input at step t = 0, and ((I_{t-1}, I_t), π_t) as input at every step t ≥ 1, and that V_n outputs (x_t, y_t) at each step t. Then:

1. For every proof-sequence (π_1, . . . , π_k), we have x_t = 0 for each t ∈ {0, . . . , k} where D_n(I_t) = 0.

2. If the proof-sequence (π_1, . . . , π_k) is reward-maximizing (defined below), then we have x_t = 1 for each t ∈ {0, . . . , k} with D_n(I_t) = 1.

The proof-sequence (π_1, . . . , π_k) is reward-maximizing iff at each step t ≥ 1, given the past history (I_0, . . . , I_t) and (π_1, . . . , π_{t-1}), the proof π_t is chosen in such a way that maximizes y_t (when we think of y_t as a polylog(n)-bit integer). We say that such a proof π_t is reward-maximizing. △

We can now define the dynamic complexity class coNP_dy in a natural manner. Intuitively, we get the class coNP_dy if we switch the phrase "D_n(I_t) = 0" with "D_n(I_t) = 1", and the phrase "x_t = 0" with "x_t = 1" in Definition 7.3. In other words, a decision problem D is in coNP_dy iff its complement decision problem D' (see Definition 6.4) is in NP_dy. A more formal definition is given below.

Definition 7.4 (Class coNP_dy). A decision problem D is in coNP_dy iff it admits a verifier-family V' with update time Time_{V'}(n) = O(polylog(n)) and space complexity Space_{V'}(n) = O(poly(n)) which satisfies the following properties for each n ≥ 1. Fix any instance-sequence (I_0, . . . , I_k) of D_n. Suppose that V'_n gets I_0 as input at step t = 0, and ((I_{t-1}, I_t), π'_t) as input at every step t ≥ 1, and that V'_n outputs (x'_t, y'_t) at each step t. Then:

1. For every proof-sequence (π'_1, . . . , π'_k), we have x'_t = 1 for each t ∈ {0, . . . , k} where D_n(I_t) = 1.

2. If the proof-sequence (π'_1, . . . , π'_k) is reward-maximizing (defined below), then we have x'_t = 0 for each t ∈ {0, . . . , k} with D_n(I_t) = 0.

The proof-sequence (π'_1, . . . , π'_k) is reward-maximizing iff at each step t ≥
1, given the past history (I_0, . . . , I_t) and (π'_1, . . . , π'_{t-1}), the proof π'_t is chosen in such a way that maximizes y'_t (when we think of y'_t as a polylog(n)-bit integer). We say that such a proof π'_t is reward-maximizing. △

Just as in the static setting, we can easily prove that P_dy ⊆ NP_dy ∩ coNP_dy, and we conjecture that P_dy ≠ NP_dy. The big question left open in this paper is to resolve this conjecture.

Corollary 7.5.
We have P_dy ⊆ NP_dy ∩ coNP_dy.

Proof. (Sketch) Consider any dynamic decision problem D that belongs to the class P_dy. Then it admits an algorithm-family A with polynomial space complexity and polylogarithmic update time. For each n ≥
1, we can easily modify A_n to get a verifier V_n for D_n which satisfies the conditions stated in Definition 7.3. Specifically, upon receiving an input ((I_{t-1}, I_t), π_t) at step t ≥
1, theverifier V n ignores the proof π t and simulates the execution of A n on the instance-update ( I t − , I t ).The verifier then outputs the ordered pair ( x t , y t ), where x t is equal to the output of A n and y t is any arbitrary polylog( n ) bit string. This implies that the problem D also belongs to the class NP dy , and hence we get P dy ⊆ NP dy .Using a similar argument, we can also show that P dy ⊆ coNP dy . P dy and NP dy In this section, we classify many dynamic problems that were previously studied in various contextsinto our complexity classes. We will only give a high-level description of each problem . We alsolist a problem which is not decision problem but a correct answer is a number with logarithmicbits (e.g. what is the number a ∈ [ n ]?) as well. This is because there is a corresponding decisionproblem (e.g. is a > k ?). It is easy to see that if a dynamic algorithm for one problem impliesanother algorithm for the corresponding problem with essentially the same update time (up to alogarithmic factor), and vice versa.For P dy , we give a list of some problems in Table 1 which is not at all comprehensive. The onlygoal is to show that there are problems from various contexts that are solvable by fast deterministicdynamic algorithms. For each problem D in Table 1, when an instance can be represented using n bits, it holds that the instance-update size is λ D ( n ) = Θ(log n ). This corroborates Assumption 6.7.Problems in NP dy that are not known to be in P dy are listed in Table 2. The complexityclass NP dy is huge, in the sense that it contains many problems which are not known to be in P dy . It is easy to show that dynamic connectivity on general graphs is NP dy by giving a spanningforest as a proof (see Proposition B.3). For any constant ǫ >
0, dynamic (1 + ǫ)-approximate maximum matching is also in NP_dy, by giving a short augmenting path of length O(1/ǫ) as a proof (see Proposition B.4). There is also a general way to show that a problem is in NP_dy: for any problem D whose yes-instances have a "small certificate", we have D ∈ NP_dy. This includes many problems such as dynamic subgraph detection, dynamic uMv, dynamic 3SUM, dynamic planar nearest neighbor, Erickson's problem, and Langerman's problem (see Proposition B.5).

(Footnotes: For each n, it should be clear how to describe each problem as the tuple (P_n, G_n) from Section 6.1. Also, if a problem can handle both updates and queries, then we can formalize both as instance-updates in G_n, as shown in Example 6.5. The best known algorithm for planar nearest neighbor is by Chan [Cha10], which has polylogarithmic amortized update time. The algorithm is randomized, but it was later derandomized using the result of Chan and Tsakalidis [CT16].)

8 Reductions, Hardness and Completeness
We first define the concept of a P_dy-reduction between two dynamic problems. This notion is analogous to Turing reductions for static problems.

Definition 8.1 (P_dy-reduction). A dynamic problem D = (P, G) is P_dy-reducible to another dynamic problem D' iff there is an algorithm-family A that solves D using an oracle-family O for the problem D', and has update time Time_A(n) = O(polylog(n)), space complexity Space_A(n) = O(poly(n)) and blow-up size BlowUp_A(n) = O(poly(n)). We write D ≤ D' and refer to the algorithm-family A as a P_dy-reduction from D to D'. △

The above definition is almost the same as showing that
D ∈ P_dy, except that A can use an oracle-family for D'.

We now show some basic properties of P_dy-reductions. In Proposition 8.2 below, item (1) implies that if D is reducible to an "easy" problem, then D is "easy" as well. Item (2), on the other hand, shows that the reduction is transitive.

The idea behind the proof of item (1) in Proposition 8.2 is straightforward and standard: Given that D ≤ D', there is an algorithm-family R solving D efficiently using an oracle-family O for D'. Now, if D' can be solved efficiently by some algorithm A', then every time R needs to call O, we instead call A' as a subroutine; we thereby obtain an algorithm A for solving D without calling an oracle, and we are done. The proof of item (2) is just an extension of the same argument.

Proposition 8.2.
Suppose that
D ≤ D'. Then we have:

1. If D' ∈ P_dy, then D ∈ P_dy.
2. If D' ≤ D'', then D ≤ D''.

Proof. Let R be a P_dy-reduction from D to D' as per Definition 8.1.

(1): Suppose that A' is a P_dy-algorithm-family for D'. We construct a P_dy-algorithm-family A for D. For each n, A_n simply simulates R_n step by step, except that whenever R_n calls an oracle O_m with m = BlowUp_R(n), the algorithm A_n calls A'_m instead. Thus, the update time of A_n is given by:

Time_A(n) ≤ Time_R(n) × Time_{A'}(m) = polylog(n) × polylog(m) = polylog(n) × polylog(BlowUp_R(n)) = polylog(n) × polylog(poly(n)) = polylog(n).

The space complexity of A_n is given by:

Space_A(n) = Space_R(n) + Space_{A'}(m) = poly(n) + poly(m) = poly(n) + poly(BlowUp_R(n)) = poly(n) + poly(poly(n)) = poly(n).

As both A'_m and O_m solve D'_m, it must be the case that A_n solves D_n. Hence, we have D ∈ P_dy.

(2): The proof is similar in spirit to the argument used to prove (1). Since D' ≤ D'', let A' be an algorithm-family that solves D' using an oracle-family O' for D''. We have Time_{A'}(m) = O(polylog(m)) and BlowUp_{A'}(m) = O(poly(m)).

Similar to the proof of item (1), we construct an algorithm-family A for the problem D which uses the oracle-family O' for D'', with the following parameters. For each n, let m = BlowUp_R(n) and ℓ = BlowUp_{A'}(m). The algorithm A_n uses O'_ℓ as an oracle to solve D_n with update time O(polylog(n)). Let BlowUp_A be the blow-up size of A with respect to O'. We infer that:

BlowUp_A(n) = ℓ = BlowUp_{A'}(m) = poly(m) = poly(BlowUp_R(n)) = poly(poly(n)) = poly(n).

This shows that A is a P_dy-reduction from D to D'', and hence D ≤ D''.

Next, we define the notions of NP_dy-hardness and completeness.

Definition 8.3 (Hardness and Completeness). Let D be a dynamic problem.

1.
We say that D is NP_dy-hard iff the following condition holds: if D ∈ P_dy, then D' ∈ P_dy for every problem D' ∈ NP_dy.

2. We say that D is NP_dy-complete iff D ∈ NP_dy and D is NP_dy-hard. △

With this notion of hardness, the following is true.
Corollary 8.4.
Assuming that NP_dy ≠ P_dy, if a problem D is NP_dy-hard, then D ∉ P_dy.

Corollary 8.5.
Consider any two dynamic problems D and D' such that (a) D ≤ D' and (b) D is NP_dy-hard. Then D' is also NP_dy-hard.

Proof. Suppose that D' ∈ P_dy. Then Proposition 8.2 implies that D ∈ P_dy. Since D is NP_dy-hard, this leads us to the conclusion that if D' ∈ P_dy, then every problem in NP_dy is in P_dy. So D' is NP_dy-hard.

9 A Dynamic Polynomial Hierarchy

In this section, we define a hierarchy of dynamic complexity classes that is analogous to the polynomial hierarchy in the static setting. We begin by introducing a useful notation. For any dynamic complexity classes C_1 and C_2, we define the class (C_1)^{C_2} as follows. Intuitively, a dynamic problem D_1 belongs to the class (C_1)^{C_2} iff there is an algorithm-family A for D_1, of the type used to define membership in C_1, that is allowed access to an oracle-family O for a problem D_2 ∈ C_2. We illustrate this by considering the following example.

Example 9.1.
Consider any dynamic complexity class C and a dynamic problem D.

• We say that the problem D belongs to the class (P_dy)^C iff there is an algorithm-family A for D that uses an oracle-family O for some problem D' ∈ C with BlowUp_A(n) = O(poly(n)), and satisfies the conditions stated in Definition 7.1.

• We say that the problem D belongs to the class (NP_dy)^C iff there is a verifier-family V for D that uses an oracle-family O for some problem D' ∈ C with BlowUp_V(n) = O(poly(n)), and satisfies the conditions stated in Definition 7.3.

• Similarly, we say that the problem D belongs to the class (coNP_dy)^C iff there is a verifier-family V' for D that uses an oracle-family O for some problem D' ∈ C with BlowUp_{V'}(n) = O(poly(n)), and satisfies the conditions stated in Definition 7.4. △

We are now ready to introduce the dynamic polynomial hierarchy.
Definition 9.2 (Dynamic polynomial hierarchy). We first inductively define the complexity classes Σ^dy_i and Π^dy_i in the following manner.

• For i = 1, we have Σ^dy_1 = NP_dy and Π^dy_1 = coNP_dy.

• For i >
1, we have Σ^dy_i = (NP_dy)^{Σ^dy_{i-1}} and Π^dy_i = (coNP_dy)^{Σ^dy_{i-1}}.

Finally, we define PH_dy = ∪_{i ≥ 1} (Σ^dy_i ∪ Π^dy_i). △

We refer the reader to Appendix B.3 for a list of dynamic problems that belong to PH_dy. As in the static setting, the successive levels of PH_dy are contained within each other.

Lemma 9.3.
For each i ≥ 1, we have Σ^dy_i ⊆ Σ^dy_{i+1} and Π^dy_i ⊆ Π^dy_{i+1}.

Proof. We use induction on i. For the base case i = 1, we have Σ^dy_1 = NP_dy ⊆ (NP_dy)^{NP_dy} = Σ^dy_2, and similarly Π^dy_1 ⊆ Π^dy_2. Suppose that the lemma holds for some i. We now observe that Σ^dy_{i+1} = (NP_dy)^{Σ^dy_i} ⊆ (NP_dy)^{Σ^dy_{i+1}} = Σ^dy_{i+2}. In this derivation, the second step holds because of our induction hypothesis that Σ^dy_i ⊆ Σ^dy_{i+1}. Similarly, we can show that our induction hypothesis implies that Π^dy_{i+1} ⊆ Π^dy_{i+2}. This completes the proof.

Similar to the static setting, we can show that if P_dy = NP_dy, then PH_dy collapses to P_dy.

Theorem 9.4. If P_dy = NP_dy, then PH_dy = P_dy.

Proof. Throughout the proof, we assume that P_dy = NP_dy = Σ^dy_1. We use induction on i. For the base case, we already have Σ^dy_1 = P_dy. Now, suppose that Σ^dy_i = P_dy for some i ≥
1. Then we get:

Σ^dy_{i+1} = (NP_dy)^{Σ^dy_i} = (NP_dy)^{P_dy} = (P_dy)^{P_dy} = P_dy.

Thus, we derive that Σ^dy_i = P_dy for all i ≥
1. Since each problem in Π^dy_i is the complement of some problem in Σ^dy_i, we also infer that Π^dy_i = P_dy for all i ≥
1. This concludes the proof.

9.1 Further results regarding the collapse of PH_dy

In this section, we prove that PH_dy collapses to the second level if NP_dy ⊆ coNP_dy. Towards this end, we first state the following important lemma, whose proof appears in Section 9.1.1.

Lemma 9.5.
We have (NP_dy)^{NP_dy ∩ coNP_dy} = NP_dy.

Theorem 9.6. If NP_dy ⊆ coNP_dy, then PH_dy = coNP_dy ∩ NP_dy.

Proof. Throughout the proof, we assume that NP_dy ⊆ coNP_dy. This implies that NP_dy = NP_dy ∩ coNP_dy. We now claim that Σ^dy_i = NP_dy for all i ≥ 1, and prove it by induction on i. The base case is clearly true, since we have Σ^dy_1 = NP_dy by definition. By the induction hypothesis, suppose that Σ^dy_i = NP_dy for some i ≥
1. But this implies that Σ^dy_{i+1} = (NP_dy)^{Σ^dy_i} = (NP_dy)^{NP_dy} = (NP_dy)^{NP_dy ∩ coNP_dy} = NP_dy. We thus conclude that:

Σ^dy_i = NP_dy for all i ≥ 1. (1)

Recall that every problem in Π^dy_i is the complement of some problem in Σ^dy_i. Hence, Equation (1) implies that:

Π^dy_i = coNP_dy for all i ≥ 1. (2)

The theorem follows from Equation (1), Equation (2) and the observation that PH_dy is closed under complements.

We conclude this section with one more lemma that will be useful later on. Its proof is analogous to the proof of Lemma 9.5 and is therefore omitted.

Lemma 9.7.
We have (NP_dy ∩ coNP_dy)^{NP_dy ∩ coNP_dy} = NP_dy ∩ coNP_dy.

9.1.1 Proof of Lemma 9.5

Since it is clearly the case that NP_dy ⊆ (NP_dy)^{NP_dy ∩ coNP_dy}, to complete the proof we only need to show that (NP_dy)^{NP_dy ∩ coNP_dy} ⊆ NP_dy. Consider any decision problem D* ∈ (NP_dy)^{NP_dy ∩ coNP_dy}. We will show that D* ∈ NP_dy. We begin by setting up some notation that will be used throughout the proof. By definition, the problem D* admits a verifier-family V* with the following properties.

1. The verifier-family V* uses an oracle-family O for a decision problem D ∈ NP_dy ∩ coNP_dy with BlowUp_{V*}(m) = O(poly(m)). Let n(m) = BlowUp_{V*}(m). Thus, for each m ≥
1, the verifier V*_m uses the oracle O_{n(m)}. To ease notation, we simply write n instead of n(m).

2. The verifier-family V* has Time_{V*}(m) = O(polylog(m)) and Space_{V*}(m) = O(poly(m)).

3. Fix any m ≥
1, and consider any instance-sequence (I*_0, . . . , I*_k) of D*_m. Suppose that V*_m gets I*_0 as input at step t = 0, and the ordered pair ((I*_{t-1}, I*_t), π*_t) as input at each step t ≥ 1. Let (x*_t, y*_t) denote the output of the verifier V*_m at each step t ≥
0. Then:

• For every proof-sequence (π*_1, . . . , π*_k), we have x*_t = 0 for each t ∈ {0, . . . , k} with D*_m(I*_t) = 0.

• If the proof-sequence is reward-maximizing, then we have x*_t = 1 for each t ∈ {0, . . . , k} with D*_m(I*_t) = 1.

4. Since D ∈ NP_dy ∩ coNP_dy, the decision problem D admits two verifier-families V and V' that respectively satisfy Definition 7.3 and Definition 7.4. While referring to the verifier-families V and V', we use the same notation that was introduced in Definition 7.3 and Definition 7.4.

We will now construct a verifier-family V̂ for the problem D* that does not make any call to an oracle. Fix any m ≥
1. At a high level, instead of using the oracle O_n, the verifier V̂_m uses V_n and V'_n as subroutines in order to simulate the behavior of the verifier V*_m. Each time V*_m makes a call to the oracle O_n, the verifier V̂_m makes two calls, to the verifiers V_n and V'_n for the problem D_n. The verifier V̂_m also checks that the answers returned by V_n and V'_n are consistent with each other.

Constructing the verifier V̂_m: To be more specific, after receiving an instance I*_0 of D*_m as input in the preprocessing step, the verifier V̂_m simulates the behavior of V*_m on the same input I*_0 and returns the same answer as V*_m. Now, suppose that the verifier V̂_m has received an instance-sequence (I*_0, . . . , I*_{t*-1}) of D*_m and a proof-sequence (π̂_1, . . . , π̂_{t*-1}) as input up to this point. Furthermore, suppose that the verifier V̂_m has successfully been able to simulate the behavior of V*_m on the same instance-sequence (I*_0, . . . , I*_{t*-1}) and a (different) proof-sequence (π*_1, . . . , π*_{t*-1}) up to this point. Now, at step t*, the verifier V*_m gets an ordered pair ((I*_{t*-1}, I*_{t*}), π*_{t*}) as input, and the verifier V̂_m gets an ordered pair ((I*_{t*-1}, I*_{t*}), π̂_{t*}) as input. Note that the instance-update part of the input at step t* remains the same for V*_m and V̂_m. In contrast, the proofs π*_{t*} and π̂_{t*} differ from each other. The proof π̂_{t*} is supposed to consist of π*_{t*} followed by a sequence of proofs {π_t, π'_t} (for V_n and V'_n, respectively) corresponding to all the calls to the oracle O_n made by V*_m.

We now proceed with the description of the verifier V̂_m. After receiving the input ((I*_{t*-1}, I*_{t*}), π̂_{t*}) at step t*, the verifier V̂_m starts simulating the verifier V*_m (also at step t*) on input ((I*_{t*-1}, I*_{t*}), π*_{t*}).
• During this simulation, whenever V*_m calls the oracle O_n for D_n at some step (say) t with input (I_{t-1}, I_t), the verifier V̂_m calls V_n and V'_n as subroutines, respectively with inputs ((I_{t-1}, I_t), π_t) and ((I_{t-1}, I_t), π'_t). We emphasize that π_t and π'_t are specified within the proof π̂_{t*} received by V̂_m. Let (x_t, y_t) and (x'_t, y'_t) respectively denote the answers returned by V_n and V'_n at the end of this call. If x_t ≠ x'_t, then we say that the verifier V̂_m enters the "Invalid" mode. Specifically, this means that at every future update step (including step t*), the verifier V̂_m outputs the answer (0, 0). Otherwise, if x_t = x'_t, then we claim that:

– x_t (or, equivalently, x'_t) is equal to the answer returned by the call to O_n made by V*_m.

The claim holds because either the call to the oracle O_n returns a 0 (in which case item (1) in Definition 7.3 implies that x_t = 0), or the call to the oracle O_n returns a 1 (in which case item (1) in Definition 7.4 implies that x'_t = 1). This claim ensures that if x_t = x'_t, then the verifier V̂_m can continue with its simulation of the behavior of V*_m. And this is precisely what the verifier V̂_m does in this case.

Suppose that the verifier V̂_m manages to complete the simulation of V*_m at step t* without ever entering the Invalid mode. Let (x*_{t*}, y*_{t*}) denote the answer returned by V*_m at step t*. Then the verifier V̂_m returns the answer (x̂_{t*}, ŷ_{t*}) at step t*, where x̂_{t*} = x*_{t*} and ŷ_{t*} = 1y*_{t*}, i.e., the bit 1 followed by y*_{t*}. Here, the "1" in front of y*_{t*} represents the fact that the verifier V̂_m never entered the Invalid mode. At this point, the verifier V̂_m is ready to handle the next update at step t* + 1.

Analysis of correctness:
Suppose that the verifier V̂_m has received the instance-sequence (I*_0, . . . , I*_{t*}) and the proof-sequence (π̂_1, . . . , π̂_{t*}) as input up to this point. Let (π*_1, . . . , π*_{t*}) denote the corresponding proof-sequence received by the verifier V*_m up to this point. For each k ∈ {0, . . . , t*}, let (x̂_k, ŷ_k) and (x*_k, y*_k) respectively denote the outputs of the verifiers V̂_m and V*_m at step k.

In order to prove Lemma 9.5, we need to show that the verifier V̂_m for the problem D*_m satisfies the properties outlined in Definition 7.2 and Definition 7.3. These properties are shown in Claim 9.8, Claim 9.9 and Claim 9.10.

Claim 9.8. If the proof-sequence (π̂_1, . . . , π̂_{t*}) is reward-maximizing for the verifier V̂_m w.r.t. the input-sequence (I*_0, . . . , I*_{t*}), then we have x̂_k = D*_m(I*_k) at each step k ∈ {0, . . . , t*}. Thus, the verifier V̂_m always produces the correct output when it works with a reward-maximizing proof-sequence.

Proof.
Throughout the proof, we assume that the proof-sequence (π̂_1, . . . , π̂_{t*}) is reward-maximizing for V̂_m w.r.t. the input-sequence (I*_0, . . . , I*_{t*}). This implies that the verifier V̂_m never enters the Invalid state during steps k ∈ {1, . . . , t*}, for the following reason.

• Consider any step k ∈ {1, . . . , t*}. If the verifier V̂_m enters the Invalid state during this step, then it outputs (x̂_k, ŷ_k) where x̂_k = ŷ_k = 0. Otherwise, the verifier V̂_m outputs (x̂_k, ŷ_k) where ŷ_k starts with a "1". Hence, the value of ŷ_k is maximized when the verifier V̂_m does not enter the Invalid state. Since the entire sequence (π̂_1, . . . , π̂_{t*}) is reward-maximizing, it follows that the verifier V̂_m does not enter the Invalid state during steps 1, . . . , t*.

Next, we note that the corresponding proof-sequence (π*_1, . . . , π*_{t*}) is also reward-maximizing for the verifier V*_m w.r.t. the same instance-sequence (I*_0, . . . , I*_{t*}), for the following reason.

• We have already shown that the verifier V̂_m never enters the Invalid state during steps 1, . . . , t*. This implies that ŷ_k = 1y*_k for each k ∈ {1, . . . , t*}, where (x̂_k, ŷ_k) and (x*_k, y*_k) are respectively the outputs of the verifiers V̂_m and V*_m at step k. Thus, maximizing the value of ŷ_k is equivalent to maximizing the value of y*_k. Since the proof-sequence (π̂_1, . . . , π̂_{t*}) is reward-maximizing for the verifier V̂_m as per our assumption, it necessarily follows that the corresponding proof-sequence (π*_1, . . . , π*_{t*}) is also reward-maximizing for the verifier V*_m.

Since the proof-sequence (π*_1, . . . , π*_{t*}) is reward-maximizing for the verifier V*_m w.r.t. the instance-sequence (I*_0, . . . , I*_{t*}), we infer that x*_k = D*_m(I*_k) for each k ∈ {0, . . . , t*}. Furthermore, since the verifier V̂_m never enters the Invalid state during steps 1, . . .
, t*, we have x̂_k = x*_k = D*_m(I*_k) for each k ∈ {0, . . . , t*}. In other words, the verifier V̂_m always outputs the correct answer when it receives a reward-maximizing proof-sequence.

Claim 9.9. Fix any instance-sequence (I*_0, . . . , I*_{t*}). For every proof-sequence (π̂_1, . . . , π̂_{t*}), the verifier V̂_m outputs x̂_k = 0 at each step k ∈ {0, . . . , t*} where D*_m(I*_k) = 0.

Proof.
If the verifier V̂_m ever enters the Invalid state at some step k*, then it keeps returning the answer x̂_k = 0 at every step k ≥ k*, so the claim trivially holds in this case. Thus, throughout the rest of the proof, w.l.o.g. we assume that the verifier V̂_m never enters the Invalid state. In that case, we have x̂_k = x*_k at each step k ∈ {0, . . . , t*}, where (x*_k, y*_k) is the output of the verifier V*_m at step k when it receives the same instance-sequence (I*_0, . . . , I*_{t*}) and the proof-sequence (π*_1, . . . , π*_k) corresponding to (π̂_1, . . . , π̂_{t*}). Now, from the definition of V*_m it follows that x̂_k = x*_k = 0 on all the instances I*_k where D*_m(I*_k) = 0.

Claim 9.10. The verifier V̂_m has space complexity Space_{V̂}(m) = O(poly(m)) and update time Time_{V̂}(m) = O(polylog(m)).

Proof. The space complexity of the verifier V̂_m is dominated by the space complexities of the subroutines V_n, V'_n, and that of the verifier V*_m. Thus, we have:

Space_{V̂}(m) = Space_{V*}(m) + Space_V(n) + Space_{V'}(n) = O(poly(m) + poly(n) + poly(n)) = O(poly(m)).

The last equality holds since the verifier V*_m uses the oracle O_n for the problem D_n, and n = BlowUp_{V*}(m) = O(poly(m)).

Moving on, the verifier V*_m has update time O(polylog(m)) and it uses the oracle O_n for the problem D_n. In the verifier V̂_m, each call to the oracle O_n is replaced by two calls, to the verifiers V_n and V'_n. Since each of the verifiers V_n and V'_n has update time O(polylog(n)), we get:

Time_{V̂}(m) = O(polylog(m)) · O(polylog(n)) = O(polylog(m)).

Again, the last equality holds since n = O(poly(m)).

Lemma 9.5 follows from Claim 9.8, Claim 9.9 and Claim 9.10.

Part III: NP_dy-completeness (in the Bit-probe Model)

This part is organized as follows.
In Section 10, we define a dynamic problem called "First Shallow Decision Tree" (First-DT_dy for short) and show that this problem is NP_dy-hard. In Section 11, we define another problem called the dynamic narrow DNF evaluation problem (DNF_dy for short). We show that First-DT_dy is P_dy-reducible to DNF_dy. This means that DNF_dy is NP_dy-complete. We conclude this part by explaining (in Section 11.3) why the NP_dy-completeness of DNF_dy almost immediately implies that many natural dynamic problems are NP_dy-hard.
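Informally, dynamic DNF evaluation asks to maintain the value of a DNF formula while the underlying assignment changes one bit at a time. As a warm-up before the formal treatment, here is a hedged sketch of this flavor of problem; the clause representation, the assumption that each variable occurs at most once per clause, and the per-update cost (proportional to the number of clauses containing the flipped variable, which is small when clauses are narrow and variables have few occurrences) are our own illustrative choices, not the formal definition of DNF_dy from Section 11.

```python
# Sketch: maintain the value of F = C_1 v ... v C_k under single-bit flips.
# For each clause we track how many of its literals are currently falsified;
# F is true iff some clause has zero falsified literals.
class DynamicDNF:
    def __init__(self, clauses, bits):
        # each clause is a list of (variable, wanted_bit) literals
        self.clauses, self.bits = clauses, list(bits)
        self.bad = [sum(1 for v, w in c if bits[v] != w) for c in clauses]
        self.sat = sum(1 for b in self.bad if b == 0)   # clauses currently true
        self.occ = {}                                   # variable -> clause indices
        for j, c in enumerate(clauses):
            for v, _ in c:
                self.occ.setdefault(v, []).append(j)

    def flip(self, v):                                  # one instance-update
        self.bits[v] ^= 1
        for j in self.occ.get(v, []):
            wanted = next(w for x, w in self.clauses[j] if x == v)
            if self.bits[v] == wanted:                  # literal became satisfied
                self.bad[j] -= 1
                if self.bad[j] == 0:
                    self.sat += 1
            else:                                       # literal became falsified
                if self.bad[j] == 0:
                    self.sat -= 1
                self.bad[j] += 1
        return self.sat > 0                             # current value of F

# F = (x0 AND x1) v (NOT x2), assignment x = (0, 0, 1): initially false.
f = DynamicDNF([[(0, 1), (1, 1)], [(2, 0)]], [0, 0, 1])
print(f.flip(2))   # x2 -> 0 satisfies the second clause -> True
```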
10 First Shallow Decision Tree: an intermediate problem
This section is organized as follows. In Section 10.1, we define a dynamic problem called First Shallow Decision Tree (First-DT_dy). In Section 10.2, we show that this problem is NP_dy-hard.

10.1 The problem First-DT_dy

We start with a definition of the First Shallow Decision Tree problem in the static setting. We denote this static problem by P', and emphasize that it is not a decision problem. An instance I ∈ P' is an ordered pair (mem_I, 𝒯_I) such that:

• mem_I is an array of bits and 𝒯_I is a collection of decision trees.

• Each leaf node v in each decision tree T ∈ 𝒯_I is labelled with a polylog(|𝒯_I|)-bit integer r(v) ∈ {0,1}^{polylog(|𝒯_I|)}. We refer to r(v) as the rank of v. This rank r(v) is independent of the contents of the array mem_I.

• Each decision tree T ∈ 𝒯_I operates on the same memory mem_I.

• All the decision trees in 𝒯_I are shallow, in the sense that the depth (the maximum length of a root-to-leaf path) of each tree T ∈ 𝒯_I is at most O(polylog |𝒯_I|).

For every decision tree T ∈ 𝒯_I, we define v*_T to be the leaf node of the execution path of T when it operates on mem_I. Note that v*_T depends on the contents of the array mem_I, since the latter determines the execution path of T when it operates on mem_I. The answer P'(I) to an instance I ∈ P' points to the first decision tree T ∈ 𝒯_I which maximizes the rank r(v*_T). Thus, the answer P'(I) can be encoded using O(log |𝒯_I|) bits. We emphasize that the answer P'(I) depends on the contents of the array mem_I (which determines the leaf node v*_T for every tree T ∈ 𝒯_I).

Intuitively, in the dynamic version of the problem, denoted by First-DT_dy, the instance I keeps changing via a sequence of updates, where each update flips one bit in the memory mem_I, and we have to keep track of the answer P'(I) for the current instance I. Below, we give a formal description. For each integer n ≥
1, let P'_n denote the n-slice of the First-DT_dy problem, which consists of all instances I ∈ P' that are encoded using n bits. In the dynamic setting, we impose the following graph structure G'_n = (U'_n, E'_n) on P'_n, with node-set U'_n = {0,1}^n: For any two instances I, I' ∈ P'_n, there is an instance-update (I, I') ∈ E'_n iff 𝒯_I = 𝒯_{I'}, and mem_I and mem_{I'} differ in exactly one bit.

(Footnote: Recall the definitions of "operating on" and "execution path" from Section 6.2.1.)
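The static answer P'(I), and how it changes under a one-bit instance-update, can be sketched concretely. The tuple encoding of decision trees below and the tie-breaking by smallest index are our own assumptions for illustration; the paper's formal decision-tree model is the one from Section 6.2.1.

```python
# Sketch: a tree node is ("read", i, left, right) — branch on bit mem[i] —
# or ("leaf", rank). P'(I) is the index of the first tree whose execution
# path ends at a leaf of maximum rank.
def run_tree(tree, mem):
    while tree[0] == "read":
        _, i, left, right = tree
        tree = right if mem[i] else left
    return tree[1]                      # rank r(v*_T) of the reached leaf

def first_dt(mem, trees):
    ranks = [run_tree(t, mem) for t in trees]
    return ranks.index(max(ranks))      # first tree maximizing r(v*_T)

mem = [1, 0]
t0 = ("read", 0, ("leaf", 1), ("leaf", 3))
t1 = ("leaf", 3)
print(first_dt(mem, [t0, t1]))   # both trees reach rank 3; the first wins -> 0

mem[0] = 0                        # one-bit instance-update
print(first_dt(mem, [t0, t1]))   # t0 now reaches rank 1 -> 1
```

A dynamic algorithm for First-DT_dy must maintain this answer without rerunning every tree after each bit flip.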
We denote the problem First-DT_dy by D' = (P', G'), and the n-slice of this problem by D'_n = (P'_n, G'_n). We now derive a simple upper bound on the instance-update size of D', which is consistent with Assumption 6.7.

Corollary 10.1. For every integer n ≥ 1, we have λ_{D'}(n) ≤ log n.

Proof. There is an instance-update (
I, I') of D' iff mem_I and mem_{I'} differ in exactly one bit. Hence, the standard encoding of (I, I') can be specified using at most log n bits, and we get λ_{D'}(n) ≤ log n.

10.2 NP_dy-hardness of First-DT_dy

We will show that one can efficiently solve any problem in NP_dy using an oracle for First-DT_dy. Specifically, we will prove the following theorem.

Theorem 10.2.
Every problem
D ∈ NP_dy admits an algorithm-family A that solves D with update time Time_A(n) = O(polylog(n)) and space complexity Space_A(n) = O(poly(n)), and uses an oracle-family O' for First-DT_dy with BlowUp_A(n) = O(2^{polylog(n)}).

As a corollary of the above theorem, we will derive that First-DT_dy is NP_dy-hard.

Corollary 10.3.
First-DT_dy is NP_dy-hard.

The proofs of Theorem 10.2 and Corollary 10.3 appear in Section 10.3 and Section 10.4, respectively.

10.3 Proof of Theorem 10.2
Throughout the proof, we use the notations and concepts introduced in Section 7. We fix a problem
D ∈ NP_dy and construct an algorithm-family A for Theorem 10.2. In more detail, we define a function m : N_+ → N_+ of the form m(n) = O(2^{polylog(n)}), and show that for every n ∈ N_+ there exists an algorithm A_n that solves D_n with Time_A(n) = O(polylog(n)) and Space_A(n) = O(poly(n)), using the oracle O'_{m(n)} for First-DT_dy. To ease notation, we henceforth write m instead of m(n).

Suppose that the algorithm A_n is given the instance-sequence (I_0, . . . , I_k) of D_n. Since D ∈ NP_dy, there exists a verifier V_n for D_n as per Definition 7.3. At a high level, the algorithm A_n uses the verifier V_n as a subroutine and works as follows.

Preprocessing step.
At step t = 0, A_n does the following:

1. Call the verifier subroutine V_n with input in_{V_n}(t_{V_n}) = I_0 at step t_{V_n} = 0. Then V_n returns an ordered pair (x_0, y_0).

2. Call the oracle O'_m in a certain way.

3. Output A_n(in_{A_n}(0)) = x_0.

Update step. Subsequently, at each step t > 0, A_n does the following:

1. Call the oracle O'_m (several times) in a certain way, and use the outputs of these calls to come up with a reward-maximizing proof π_t as per Definition 7.3.

2. Call the verifier subroutine V_n with input in_{V_n}(t_{V_n}) = ((I_{t-1}, I_t), π_t) at step t_{V_n} = t. The verifier V_n returns an ordered pair (x_t, y_t).

3. Call the oracle O'_m (several times) in a certain way.

4. Output A_n(in_{A_n}(0 → t)) = x_t.

As the verifier V_n is given a reward-maximizing proof-sequence as input, Definition 7.3 implies that the algorithm A_n solves D_n:

Lemma 10.4.
For every t ∈ {0, . . . , k}, we have A_n(in_{A_n}(0 → t)) = D(I_t).

Proof. As A_n calls V_n exactly once at each step, we always have t_{V_n} = t. Recalling Definition 7.3, observe that if V_n is given a reward-maximizing proof at each step, i.e. if (π_1, . . . , π_k) is a reward-maximizing proof-sequence w.r.t. (I_0, . . . , I_k), then x_t = D(I_t) for all t ∈ {0, . . . , k}. Since the algorithm A_n outputs x_t at every step t, the lemma holds.

It now remains to specify Item 2 in the preprocessing step, and Item 1 and Item 3 in the update step. These steps are key to constructing the reward-maximizing proof-sequence for V_n. We specify these steps respectively in the subsections below.

We specify Item 2 in the preprocessing step of A_n as follows: A_n calls O′_m by giving it the initial input I′, where I′ = (mem_{I′}, T_{I′}) is an instance of First-DT_dy as defined below.

• We set mem_{I′} = mem_{V_n}(0), where mem_{V_n}(0) is just the memory state of V_n after Item 1 in the preprocessing step of A_n (i.e. after V_n returns).

• We set T_{I′} = T_{V_n}, where the collection T_{V_n} of decision-trees is defined as follows. Recall that the input of V_n at step t is of the form ((I_{t−1}, I_t), π_t), consisting of the instance-update (I_{t−1}, I_t) and the proof π_t, respectively. Let T_{V_n} be the decision tree of V_n. For each possible proof π ∈ {0, 1}^{polylog(n)}, let T_π be the decision tree obtained from T_{V_n} by "fixing" the "proof-part" of the input to be π. More specifically, consider every read node u ∈ T_{V_n} whose index points to some i-th bit π[i] of π (in the proof-part of the input). Let p(u), r(u), l(u) be the parent, right child, and left child of u, respectively. To construct T_π, we remove u; if π[i] = 0, then we add an edge (p(u), l(u)) and remove the subtree rooted at r(u). Else, if π[i] = 1, then we add the edge (p(u), r(u)) and remove the subtree rooted at l(u).
We set T_{V_n} = { T_π | π ∈ {0, 1}^{polylog(n)} }.

• We now define the ranks r(v). Consider any proof π ∈ {0, 1}^{polylog(n)} and the corresponding decision tree T_π ∈ T_{V_n}. Definition 7.3 guarantees that, when given any instance-update (I_{t−1}, I_t) and the proof π as input, the verifier V_n outputs some ordered pair (x, y) where x ∈ {0, 1} and y ∈ {0, 1}^{polylog(n)}. This has the following important implication.

Recall the formal description of a decision tree from Section 6.2.1. Consider any leaf node v in the decision tree T_π. We can associate an ordered pair (x_v, y_v) with this leaf-node v, where x_v ∈ {0, 1} and y_v ∈ {0, 1}^{polylog(n)}, such that whenever the decision-tree T_π follows the root-to-leaf execution path ending at the node v, it writes (x_v, y_v) in the output memory. Note that the ordered pair (x_v, y_v) does not depend on the contents of the memory the decision tree T_π operates on. We define the rank of a leaf-node v in T_π to be r(v) = y_v.

In order to prove that I′ is indeed an instance of First-DT_dy, it remains to show the following:

Lemma 10.5.
All the decision trees T_π ∈ T_{I′} are shallow.

Proof. Note that there are 2^{polylog(n)} many decision trees in the collection T_{I′}, one for each bit string π ∈ {0, 1}^{polylog(n)}. Furthermore, since the verifier V_n has O(polylog(n)) update time, each decision tree T_π ∈ T_{I′} has depth O(polylog(n)). Hence, the depth of each tree T_π ∈ T_{I′} is at most O(polylog |T_{I′}|), which implies that all the trees in T_{I′} are shallow according to our definition.

We maintain the following two invariants at the beginning of every step t > 0:

1. The sequence of proofs (π_1, . . . , π_{t−1}) that the verifier V_n received so far is the reward-maximizing proof-sequence w.r.t. (I_0, . . . , I_{t−1}).
2. The First-DT_dy instance I′ = (mem_{I′}, T_{I′}) maintained by O′_m is such that mem_{I′} = mem_{V_n}(t − 1), where mem_{V_n}(t −
1) is the memory state of V_n after finishing step t − 1 of A_n.

Specifically, we show how to call O′_m several times to keep the invariant. After receiving the input in_{A_n}(t) = (I_{t−1}, I_t) at step t, the precise description of Item 1 in the update step of A_n is as follows:

• Write (I_{t−1}, I_t) in the input part mem^in_{V_n} of mem_{V_n}.
• Make a sequence of calls to O′_m to update mem_{I′} so that mem_{I′} = mem_{V_n}.

At this point, let T_{π_t} ∈ T_{I′} = T_{V_n} be the output of the oracle O′_m. We claim the following:

Claim. The proof π_t is the reward-maximizing proof at step t.

Proof.
From Section 10.1 and Section 10.3.2, it follows that the answer from O′_m is P′(I′) = T_{π_t}, where T_{π_t} is the decision tree T ∈ T_{V_n} which maximizes the rank r(v*_T). By the invariant, the sequence of proofs (π_1, . . . , π_{t−1}) that the verifier V_n received so far is the reward-maximizing proof-sequence w.r.t. (I_0, . . . , I_{t−1}). Moreover, recall that (I_{t−1}, I_t) has just been written in the input part mem^in_{V_n} of mem_{V_n}. This implies that π_t is the proof which maximizes y_t (the second part of the output (x_t, y_t) of the verifier V_n when it is given ((I_{t−1}, I_t), π_t) as input in step t). Hence, we conclude that π_t is the desired reward-maximizing proof at step t.

Finally, Item 3 in the update step of A_n does the following:

• Make a sequence of calls to O′_m to update mem_{I′} so that mem_{I′} = mem_{V_n}.

Clearly, this ensures that the invariant is maintained. (Recall that the contents of the memory determine which root-to-leaf path becomes the execution-path followed by T_π, and every root-to-leaf path is associated with an ordered pair (x_v, y_v), where v is the end leaf-node of the concerned path.)

We are now ready to prove Theorem 10.2. The theorem holds since A_n solves D_n (Lemma 10.4) and has small blow-up size (Lemma 10.7) and small update time and space complexity (Lemma 10.8).

Lemma 10.7.
It takes m = O(2^{polylog(n)}) bits to encode the First-DT_dy instance I′. In other words, the algorithm A_n has blow-up size BlowUp_A(n) = O(2^{polylog(n)}).

Proof. From the proof of Lemma 10.5, we deduce that there are O(2^{polylog(n)}) decision trees in the collection T_{I′}. Furthermore, each tree T_π ∈ T_{I′} has depth at most O(polylog(n)), which implies that each tree T_π ∈ T_{I′} contains at most O(2^{polylog(n)}) nodes. Thus, there are at most O(2^{polylog(n)}) × O(2^{polylog(n)}) = O(2^{polylog(n)}) many nodes over the collection of trees T_{I′}. This also implies that the memory mem_{I′} contains at most O(2^{polylog(n)}) bits, for the number of bits in mem_{I′} can w.l.o.g. be assumed to be upper bounded by the number of nodes in T_{I′} (otherwise, there will be some bits in mem_{I′} that are not accessible to any tree T_π ∈ T_{I′}). We therefore conclude that the running instance I′ = (mem_{I′}, T_{I′}), which the algorithm A_n asks the oracle O′_m to maintain an answer to, can be encoded using m = O(2^{polylog(n)}) bits.

Lemma 10.8.
The algorithm A_n has update time O(polylog(n)) and space complexity O(poly(n)).

Proof. We analyze the time taken by A_n in the update steps. In Item 1, the algorithm A_n writes (I_{t−1}, I_t) in the input-part mem^in_{V_n}. Since (I_{t−1}, I_t) is specified using O(polylog(n)) bits (see Assumption 6.7), the algorithm A_n makes at most O(polylog(n)) many calls to O′_m to update mem_{I′} = mem_{V_n}. In Item 2, the algorithm A_n calls V_n, which takes O(polylog(n)) time. Hence, the call to V_n changes at most O(polylog(n)) bits in mem_{V_n}. Accordingly, in Item 3, the algorithm A_n makes at most O(polylog(n)) many calls to update mem_{I′} = mem_{V_n}. Thus, in total the algorithm A_n takes O(polylog(n)) time. Finally, the space complexity of A_n is given by Space_A(n) = O(Space_V(n)) = O(poly(n)). The last equality follows from Definition 7.3.

Consider any
D ∈ NP_dy. Throughout this section, we assume that First-DT_dy ∈ P_dy. Under this assumption, we will prove that D ∈ P_dy. This will imply that First-DT_dy is NP_dy-hard.

As per Theorem 10.2, there is an algorithm-family A that solves D using an oracle O′ for First-DT_dy, with update time Time_A(n) = O(polylog(n)), space complexity Space_A(n) = O(poly(n)), and blow-up size BlowUp_A(n) = O(2^{polylog(n)}). Throughout the rest of the proof, fix any integer n ≥ 1 and let m = m(n) = BlowUp_A(n) = O(2^{polylog(n)}). Thus, the algorithm A_n solves D_n by using the oracle O′_m for First-DT_dy. As described in Section 10.3.2, the oracle O′_m keeps track of the output of the First-DT_dy problem on the instance (mem_{I′}, T_{I′}).

Since First-DT_dy ∈ P_dy, there is an algorithm-family A′ that solves First-DT_dy with polylogarithmic update time and polynomial space complexity. Using this fact, we will now design an algorithm A*_n for D_n that has O(polylog(n)) update time and O(poly(n)) space complexity, and does not use any oracle. This will imply that D ∈ P_dy, thereby concluding the proof of the corollary.

As a first attempt, let us try to design A*_n as follows. The algorithm A*_n mimics the behavior of the algorithm A_n (as specified in Section 10.3). The only difference is that instead of using the oracle O′_m, the algorithm A*_n uses A′_m as a subroutine. To be more specific, whenever the algorithm A_n calls the oracle O′_m (see Section 10.3), the algorithm A*_n calls the subroutine A′_m. Clearly, if we design A*_n in this manner, then it will always give the same output as A_n. Unfortunately, however, we have m = 2^{polylog(n)}, and thus if we design A*_n in this manner then the space complexity of A*_n will be dominated by Space_{A′}(m) = O(poly(m)) = O(2^{polylog(n)}). To address this concern, we ensure that A*_n uses the subroutine A′_m in a white-box manner.
In particular:

• The subroutine A′_m does not have direct access to its memory mem_{A′_m}. Indeed, since m = 2^{polylog(n)} and Space_{A′}(m) = O(poly(m)), there are poly(m) = 2^{polylog(n)} bits in mem_{A′_m}. As we want A*_n to have poly(n) space complexity, we cannot afford to store all the bits of mem_{A′_m}.

• During a call to the subroutine A′_m, whenever A′_m wants to read the content of (say) the i-th bit in mem_{A′_m}, it passes the value of i to the algorithm A*_n, and the algorithm A*_n returns the content of the i-th bit in mem_{A′_m} to A′_m by calling a different subroutine S*_n (to be described below). Similarly, whenever A′_m wants to write some bit b ∈ {0, 1} into (say) the i-th bit of mem_{A′_m}, it passes the ordered pair (i, b) to the algorithm A*_n, and the algorithm A*_n in turn calls the subroutine S*_n with (i, b) as input to handle this operation.

To summarize, the subroutine S*_n acts as an interface between A′_m and its memory mem_{A′_m}. The crucial point is this: although mem_{A′_m} is of size poly(m) = 2^{polylog(n)}, the memory mem_{S*_n} of the subroutine S*_n itself will be of size poly(n). Hence, the subroutine S*_n is able to store only a tiny fraction of the memory bits in mem_{A′_m}. In spite of this severe restriction, we will show that we can design such a subroutine S*_n with polylog(n) update time to act as an interface between A′_m and mem_{A′_m}, as long as A*_n has to deal with at most poly(n) update-steps for D_n.

It now remains to describe the subroutine S*_n. Towards this end, we first need to define the notion of a canonical instance for the First-DT_dy problem with respect to the problem D.

Canonical instance I′_0: From Section 10.3.2, recall that the oracle O′_m (and hence the subroutine A′_m) deals with input instances of the form (mem_{I′}, T_{I′}). Note that mem_{I′} contains only poly(n) many bits.
This is because the memory mem_{V_n} of the verifier V_n is of poly(n) size and mem_{I′} reflects the state of mem_{V_n}. The total size of the decision trees in T_{I′}, however, is 2^{polylog(n)}, and this is the reason why we have m = O(2^{polylog(n)}).

The canonical instance of First-DT_dy with respect to D_n is defined as I′_0 = (mem_{I′_0}, T_{I′}), where all the bits in mem_{I′_0} are set to 0. Let mem_{A′_m}(I′_0) denote the state of the memory mem_{A′_m} when A′_m receives the canonical instance I′_0 as input in the preprocessing step. Crucially, note that the contents of mem_{A′_m}(I′_0) are completely determined by the n-slice D_n of the problem D. Since there are poly(m) = 2^{polylog(n)} bits in mem_{A′_m}, it is easy to design a subroutine Z*_n in the bit-probe model that takes as input an index i ∈ {0, 1}^{polylog(n)} and returns the content of the i-th bit of mem_{A′_m}(I′_0). Specifically, the decision tree of Z*_n will have depth polylog(n), and each leaf in this decision tree will correspond to a unique index i ∈ {0, 1}^{polylog(n)}. The leaf corresponding to i will contain the i-th bit of mem_{A′_m}(I′_0). Since the contents of mem_{A′_m}(I′_0) do not change (they are determined by I′_0), the subroutine Z*_n will need to access its own memory mem_{Z*_n} only when reading the input i ∈ {0, 1}^{polylog(n)} and producing the desired output b ∈ {0, 1}. In other words, the subroutine Z*_n returns the content of any given bit of mem_{A′_m}(I′_0) in O(polylog(n)) time, and it only uses O(polylog(n)) space.

The set Y: The subroutine S*_n will also store a collection of ordered pairs of the form (i, b), where i ∈ {0, 1}^{polylog(n)} and b ∈ {0, 1}, in a set Y. The set Y will be maintained as a balanced search tree. The subroutine S*_n will use Y, along with the subroutine Z*_n described above, to return the content of a given bit of mem_{A′_m}.
Specifically, suppose that the subroutine S*_n is asked for the content of the i-th bit in mem_{A′_m}. It will first check if the set Y contains an ordered pair of the form (i, b). If yes, then the subroutine S*_n will return b as output. If no, then the subroutine S*_n will return the i-th bit of mem_{A′_m}(I′_0) as output (after making a single call to Z*_n with input i). We are now ready to state the subroutine S*_n in detail; see Section 10.4.1.

The subroutine S*_n: The job of the subroutine S*_n is to act as an interface between A′_m and mem_{A′_m}. We assume that A′_m gets I′_0 as input in the preprocessing step. (In Section 10.4.2 we will get rid of this assumption.)

Initialization: A′_m gets I′_0 as input in the preprocessing step. At this point, we have mem_{A′_m} = mem_{A′_m}(I′_0), and the subroutine S*_n sets Y = ∅.

Writing a bit in mem_{A′_m}: Suppose that A′_m wants to write b ∈ {0, 1} in the i-th bit of mem_{A′_m}. Accordingly, we call the subroutine S*_n with the ordered pair (i, b) as input. The subroutine S*_n first checks if there is any ordered pair of the form (i, b′) in the set Y, and if yes, then it deletes that ordered pair (i, b′) from Y. Next, it inserts the ordered pair (i, b) into Y.

Reading a bit from mem_{A′_m}: Suppose that A′_m wants to read the content of the i-th bit in mem_{A′_m}. Accordingly, we call the subroutine S*_n with i as input. If this bit was modified by A′_m after the initialization step, then there is an ordered pair of the form (i, b) in the set Y, where b ∈ {0, 1} denotes the current content of the i-th bit of mem_{A′_m}. Accordingly, the subroutine S*_n first searches for an ordered pair of the form (i, b), b ∈ {0, 1}, in the set Y. If it finds such an ordered pair (i, b) in Y, then it returns b as the output. Otherwise, if it fails to find such an ordered pair in Y, then A′_m has not modified the i-th bit in mem_{A′_m} since the initialization step.
Accordingly, in this event S*_n returns the content of the i-th bit of mem_{A′_m}(I′_0) after calling the subroutine Z*_n with input i.

Update time and space complexity of S*_n: The key observation is this. Suppose that, since the initialization step described above, A′_m has made at most poly(n) many (read/write) probes to its memory mem_{A′_m}. To implement each such probe, one call was made to the subroutine S*_n, for it acts as an interface between A′_m and its memory mem_{A′_m}. Then the set Y is of size at most poly(n), because each call to S*_n can add at most one ordered pair to Y, and initially we had Y = ∅. Now, the space complexity of the subroutine S*_n is dominated by the space needed to store the set Y, and its update time is at most O(polylog(n)) + O(log |Y|). The O(polylog(n)) term comes from the fact that a call to Z*_n takes O(polylog(n)) time, whereas the O(log |Y|) term comes from the fact that the set Y is stored as a balanced search tree, and hence searching for an element in Y takes O(log |Y|) time. Thus, as long as we ensure that A′_m makes poly(n) many probes to its memory mem_{A′_m}, the subroutine S*_n can act as an interface between A′_m and its memory mem_{A′_m} with update time O(polylog(n)) + O(log |Y|) = O(polylog(n)) and space complexity O(|Y|) = O(poly(n)).

How S*_n is used by the algorithm A*_n: The algorithm A*_n mimics the behavior of the algorithm A_n from Section 10.3, with two differences:

1. Whenever A_n calls the oracle O′_m, A*_n calls the subroutine A′_m in a white-box manner.
2. The subroutine S*_n acts as an interface between A′_m and mem_{A′_m}.

From the discussion in Section 10.4.1, it becomes clear that as long as A′_m makes at most poly(n) many probes to mem_{A′_m}, the space complexity of S*_n will be bounded by O(poly(n)). We now show how to enforce this condition.
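Before doing so, it may help to see the read/write interface of S*_n from Section 10.4.1 in code. The following is a minimal Python sketch, not the paper's construction: all identifiers are ours, a hash map stands in for the balanced search tree Y, and the callback `frozen_bit` plays the role of Z*_n (read-only access to the fixed initial state mem_{A′_m}(I′_0)).

```python
# Hypothetical sketch of the interface S*_n (names are ours, not the paper's).
# `frozen_bit(i)` plays the role of Z*_n: it returns the i-th bit of the fixed
# memory state mem_{A'_m}(I'_0), which depends only on the slice D_n.

class Interface:
    """Acts as S*_n: a small overlay over the huge (implicit) memory mem_{A'_m}."""

    def __init__(self, frozen_bit):
        self.frozen_bit = frozen_bit  # Z*_n: bits of mem_{A'_m}(I'_0)
        self.overlay = {}             # the set Y, keyed by bit index i

    def write(self, i, b):
        # Record (i, b), replacing any earlier pair (i, b'); the dict performs
        # the delete-then-insert of the balanced search tree implicitly.
        self.overlay[i] = b

    def read(self, i):
        # If A'_m rewrote bit i after initialization, Y holds its current
        # value; otherwise fall back to the frozen initial state via Z*_n.
        if i in self.overlay:
            return self.overlay[i]
        return self.frozen_bit(i)
```

With a dict, each read/write costs O(1) expected time instead of the O(log |Y|) of a balanced search tree; either way the space used is O(|Y|), i.e. proportional only to the number of bits actually written, exactly as in the analysis above.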
Preprocessing step for A*_n: Recall the discussion in Section 10.3.2. Here, A*_n wants to initialize A′_m with the input I′ = (mem_{I′}, T_{I′}), where mem_{I′} = mem_{V_n}(0). This is done as follows. As in Section 10.4.1, A*_n starts by giving I′_0 as input to A′_m. Recall that I′_0 = (mem_{I′_0}, T_{I′}), where each bit in mem_{I′_0} is set to 0. Furthermore, recall that mem_{I′} consists of poly(n) many bits, since mem_{I′} is supposed to reflect the state of mem_{V_n}, which in turn has at most poly(n) bits. Accordingly, A*_n now asks the subroutine A′_m to handle poly(n) many instance-updates on I′_0 = (mem_{I′_0}, T_{I′}), where each instance-update changes one bit in mem_{I′_0}, in such a way that at the end of these instance-updates the subroutine A′_m ends up with the input (mem_{V_n}(0), T_{I′}). This is how A*_n initializes the subroutine A′_m. (For comparison, recall how A_n initializes the oracle O′_m in Section 10.3.2.) Note that until this point, A′_m clearly makes at most poly(n) many probes to mem_{A′_m}, and hence the space complexity of S*_n is at most O(poly(n)).

Update-steps for A*_n: The algorithm A*_n mimics the behavior of A_n during an update-step at time t > 0. The differences between A*_n and A_n have been emphasized in Item 1 and Item 2 above. It follows from the proof of Lemma 10.8 that A*_n makes O(polylog(n)) many calls to A′_m during each update-step. Since A′_m has update time O(polylog(m)) = O(polylog(2^{polylog(n)})) = O(polylog(n)), we conclude that A′_m makes at most O(polylog(n)) · O(polylog(n)) = O(polylog(n)) many probes to mem_{A′_m} during each update-step of A*_n.

From the discussion above, we reach the following conclusion. As long as A*_n has to deal with at most poly(n) many update-steps, A′_m makes at most poly(n) · polylog(n) = poly(n) many probes to mem_{A′_m} in total, and so the space complexity and the update time of the subroutine S*_n remain at most poly(n) and polylog(n) respectively.
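The preprocessing step above drives A′_m from the all-zero canonical memory to the target state mem_{V_n}(0) by a sequence of single-bit instance-updates. A minimal sketch of this idea (identifiers are ours; we assume, hypothetically, that A′_m exposes a one-bit `update(i, b)` operation as in the prose):

```python
# Hypothetical sketch of the preprocessing step of A*_n (identifiers are ours).
# `solver` stands for the subroutine A'_m; we assume it exposes a method
# `update(i, b)` handling one instance-update that sets bit i of mem_{I'} to b.

def initialize(solver, target_bits):
    """Drive A'_m from the canonical instance I'_0 (all-zero memory)
    to the memory state mem_{V_n}(0), one instance-update per bit."""
    # The canonical memory is all zeros, so only the 1-bits of the target
    # need an explicit instance-update: at most poly(n) updates in total.
    for i, b in enumerate(target_bits):
        if b == 1:
            solver.update(i, b)
```

Since the target state mem_{V_n}(0) has poly(n) bits, this issues at most poly(n) updates, matching the probe bound used in the analysis.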
Bounding the space complexity and the update time of A*_n: Consider a sequence of at most poly(n) many instance-updates given to A*_n as input (see Assumption 6.8). The space complexity of A*_n is bounded by the sum of the space complexities of the verifier V_n and the subroutine S*_n. By definition, we have Space_V(n) = O(poly(n)) since D ∈ NP_dy. From the discussion above, it follows that the space complexity of S*_n is also at most O(poly(n)). Thus, we conclude that the overall space complexity of A*_n is also O(poly(n)). To bound the update time of A*_n, recall the proof of Lemma 10.8. During each update-step at t >
0, the algorithm A*_n spends at most O(polylog(n)) time (excluding the calls to A′_m) and makes O(polylog(n)) many calls to A′_m. Since A′_m has update time O(polylog(m)) = O(polylog(2^{polylog(n)})) = O(polylog(n)), each call to A′_m requires O(polylog(n)) time (excluding the calls to S*_n) and requires a further O(polylog(n)) calls to the subroutine S*_n. Finally, under Assumption 6.8, we have already shown in Section 10.4.2 that each call to S*_n takes O(polylog(n)) time. Thus, we conclude that the update time of A*_n is at most O(polylog(n)).

To summarize, the algorithm A*_n correctly solves D_n on any sequence of instance-updates of length at most poly(n). It has polylog(n) update time and poly(n) space complexity. Thus, we have D ∈ P_dy (see Assumption 6.8 and Definition 7.1). In other words, if First-DT_dy ∈ P_dy, then every problem D ∈ NP_dy belongs to the class P_dy. Hence, we derive that First-DT_dy is NP_dy-hard.

NP_dy-complete/hard Problems

In this section, we consider a problem called the dynamic narrow DNF evaluation problem (or DNF_dy for short). In Section 11.1, we show that this problem is NP_dy-complete, which, in turn, implies NP_dy-hardness of many other problems (see Corollary 11.10 in Section 11.3).

DNF formula: A w-width DNF formula F with n variables and m clauses is defined as follows. Let X = {x_1, . . . , x_n} be the set of variables and C = {C_1, . . . , C_m} be the set of clauses. We have F = C_1 ∨ · · · ∨ C_m, where each clause C_j is a conjunction (AND) of at most w literals (i.e. x_i or ¬x_i, where x_i ∈ X). Let φ : X → {0, 1} be an assignment of the variables. Let C_j(φ) ∈ {0, 1} be the value of a clause C_j after assigning to each x_i the value φ(x_i). If C_j(φ) = 1, then we say that C_j is satisfied by φ. Similarly, let F(φ) = C_1(φ) ∨ · · · ∨ C_m(φ) be the value of F under the assignment φ. We say that F is satisfied by φ if F(φ) = 1.

Definition 11.1.
In the dynamic narrow DNF evaluation problem (DNF_dy), we are first given

• a w-width DNF formula F over n variables X = {x_1, . . . , x_n} and m clauses C = {C_1, . . . , C_m}, where w = polylog(m), and
• an assignment φ : X → {0, 1}.

An update is denoted by (i, b) ∈ [n] × {0, 1}, which modifies the assignment φ by setting φ(x_i) = b. After each update, we ask if F(φ) = 1. △

Note that, for any instance (
F, φ) of DNF_dy, we can assume w.l.o.g. that n ≤ mw = Õ(m), because we can ignore all variables that do not appear in any clause. So we have:

Proposition 11.2.
An instance (F, φ) where F has m clauses can be represented using Õ(m) bits. Also, there is a trivial dynamic algorithm for the DNF_dy problem with Õ(m) update time.

NP_dy-completeness of DNF_dy

Our main result in this section is summarized below.
Theorem 11.3.
The DNF_dy problem is NP_dy-complete.

It is easy to see that DNF_dy ∈ NP_dy. Let (F, φ) be an instance of DNF_dy where F has n variables and m clauses, and which is represented using m′ = Õ(m) bits. After each update, the proof is simply an index j where C_j(φ) = 1. Then, a verifier V_{m′} accepts (i.e. outputs 1) if C_j(φ) = 1; otherwise V_{m′} rejects (i.e. outputs 0). In order to check the value of C_j(φ), the verifier V_{m′} only needs to read the (at most w) literals of C_j, which takes O(w) = polylog(m) = polylog(m′) time. Hence, the update time of the verifier V_{m′} is O(polylog(m′)). This shows that DNF_dy ∈ NP_dy.

We devote the rest of this section to showing that the DNF_dy problem is NP_dy-hard. We do this in steps. We first define an intermediate problem called First-DNF_dy. Then we give a P_dy-reduction from First-DT_dy to First-DNF_dy (see Lemma 11.4) and then from First-DNF_dy to DNF_dy (see Lemma 11.5). This implies that there is a P_dy-reduction from First-DT_dy to DNF_dy (see Proposition 8.2). Since First-DT_dy is NP_dy-hard (see Theorem 10.2), we derive that DNF_dy is also NP_dy-hard (see Corollary 8.5). We have already established that DNF_dy is in NP_dy, and hence we conclude that DNF_dy is NP_dy-complete.

The First-DNF_dy problem: The definition of First-DNF_dy is the same as that of DNF_dy, except for the following. There is a total order ≺ defined on the set of clauses C. After each update, we must instead return the first clause C_j (according to the total order ≺) which has C_j(φ) = 1, provided such a clause exists, and 0 otherwise (which indicates that F(φ) = 0).

Lemma 11.4.
First-DT_dy is P_dy-reducible to First-DNF_dy.

Proof. Let (mem, T) be an instance of First-DT_dy. We construct an instance (F, φ) of First-DNF_dy as follows. For each decision tree T ∈ T, we construct a DNF formula F_T using a standard mapping from decision trees to DNF instances (see e.g. [O'D14, Proposition 4.5]). More precisely, for every root-to-leaf path P in T, the DNF formula F_T contains a conjunctive clause C_P. The set of literals in C_P corresponds to the set of read nodes in P, as follows. (Recall the formal description of a decision tree from Section 6.2.1.)

• Suppose that u is a read node in P with index i_u, and v is the child of u with v ∈ P. If v is the left child of u, then C_P contains ¬x_{i_u}. Otherwise, if v is the right child of u, then C_P contains x_{i_u}.

Finally, we associate with this clause a rank τ(C_P) = r(v′), equal to the rank r(v′) of the leaf-node v′ in the path P. We set F = ⋁_{T ∈ T} F_T. The total order ≺ on the set of clauses is defined in such a way that for any two clauses C_P and C_P′, we have C_P ≺ C_P′ iff τ(C_P) ≥ τ(C_P′).

Let X = {x_1, . . . , x_n} be variables corresponding to the bits of mem; that is, let φ be such that φ(x_i) = mem[i]. Given an update to mem in the problem First-DT_dy, we update φ accordingly. Suppose that C_P is the first clause among all clauses in F (according to the total order ≺) with C_P(φ) = 1, and suppose that the path P belongs to the decision tree T. We can find the path P by calling the oracle for First-DNF_dy. Clearly, P is the execution path of T when it operates on mem, and v*_T is the leaf-node of the path P. Now, the total order ≺ is defined in such a way that T is the decision tree that maximizes r(v*_T). Thus, we derive that T is the current output for the problem First-DT_dy.

Lemma 11.5.
First-DNF_dy is P_dy-reducible to DNF_dy.

Proof. Let (
F, φ) be an instance of First-DNF_dy where F has n variables X = {x_1, . . . , x_n} and m clauses C = {C_1, . . . , C_m}. W.l.o.g., we assume that the total order ≺ defined over C is such that C_1 ≺ · · · ≺ C_m. In other words, after each update we have to return the first satisfied clause (the one with the minimum index), provided such a clause exists. We construct an instance (F′, φ′) of DNF_dy as follows. Let F′ have n + 2 log m variables X′ = (x_1, . . . , x_n, s^0_1, . . . , s^0_{log m}, s^1_1, . . . , s^1_{log m}). We call the variables s^b_i the search variables; they are for "searching for the first satisfied clause". F′ has m clauses. For each j ∈ [m], we write the binary expansion of j as j = j_1 j_2 . . . j_{log m}. For each clause C_j in F, we construct

C′_j = C_j ∧ ⋀_{i=1}^{log m} s^{j_i}_i.

Let φ′ be such that φ′(x_i) = φ(x_i) for all i ∈ [n], and φ′(s^b_i) = 1 for all i ∈ [log m] and b ∈ {0, 1}. Given an update of φ, we update φ′ accordingly so that φ′ and φ agree on X. As we set all search variables to 1, at this point we have F(φ) = F′(φ′). So if F(φ) = 0, then we notice this by looking at F′(φ′). But if F(φ) = 1, then we need to find the first index j where C_j(φ) = 1. To do this, we apply the following binary search trick. We repeat the following steps for i = 1, . . . , log m:

1. Set φ′(s^1_i) = 0 (i.e. all C_j's where j_i = 1 are "killed").
2. If F′(φ′) = 0, then there is no j where C_j(φ) = 1 and j_i = 0. Set φ′(s^0_i) = 0 and φ′(s^1_i) = 1 (i.e. all C_j's where j_i = 0 are "killed").
3. Else, if F′(φ′) = 1, then there is some j where C_j(φ) = 1 and j_i = 0. Set φ′(s^0_i) = 1 and φ′(s^1_i) = 0 (i.e. all C_j's where j_i = 1 are "killed").

As in the above binary search we always have a "preference" for the "first half", we will obtain the first j where C_j(φ) = 1.

In this section, we define three problems (OV_dy, Indep_dy and Allwhite_dy) which are just different formulations of the same problem DNF_dy.
However, these different views are useful for showing reductions between DNF_dy and other problems. We define a problem called the dynamic all-white problem (Allwhite_dy):

Definition 11.6.
In the dynamic all-white problem (Allwhite_dy), we are first given

• a bipartite graph G = (L, R, E) where |L| = n, |R| = m and deg(u) = polylog(m) for each u ∈ R, and
• a coloring of each node in L as black or white.

Then, the color of each node in L can be updated. After each update, we ask whether there is a node in R whose neighbors are all white. △

Next, for a matrix V = (v_1, . . . , v_m) ∈ {0, 1}^{n×m}, let v_j denote the j-th column of V and let nnz(v_j) be the number of non-zero entries in v_j. The following problem is called the dynamic sparse orthogonal vector problem (OV_dy):

Definition 11.7.
In the dynamic sparse orthogonal vector problem (OV_dy), we are first given

• a matrix V ∈ {0, 1}^{n×m} where nnz(v_j) = polylog(m) for each j ∈ [m], and
• a vector u ∈ {0, 1}^n.

Then, each entry of u can be updated. After each update, we ask if there is a column v in V orthogonal to u, i.e. with u^T v = 0. △

Next, consider a hypergraph H = (V, E). We say that a set S is independent in H if there is no e ∈ E such that e ⊆ S. We define a problem called the independent set query problem (Indep_dy).

Definition 11.8.
In the independent set query problem (Indep_dy), we are first given

• a hypergraph H = (V, E) where |V| = n, |E| = m and |e| = polylog(m) for each edge e ∈ E, and
• a set of nodes S ⊆ V.

Then, the set S can be updated by inserting a node into S or deleting a node from S. After each update, we ask if S is independent in H. △

All three problems are the same problem under different representations, so it holds that:
Proposition 11.9.
An algorithm with update and query time at most O(u(m)) for any one of the following problems implies algorithms with the same update and query time for all the other problems:

1. DNF_dy on a formula F with m clauses,
2. Allwhite_dy on a graph G = (L, R, E) with |R| = m,
3. Indep_dy on a hypergraph H with m edges, and
4. OV_dy on a matrix V ∈ {0, 1}^{n×m}.

As the proof is very straightforward, we defer it to Appendix C.

NP_dy-hardness of Some Known Dynamic Problems

In [AW14], Abboud and Williams show SETH-hardness for all of the problems in Table 3. In their reduction, they actually show a P_dy-reduction from OV_dy to these problems. Therefore, we immediately obtain the following:
Corollary 11.10.
All problems in Table 3 are NP_dy-hard.

(For the 3 vs. 4 diameter and ST-reach problems, they actually show a stronger reduction from a dynamic version of the 3-OV problem, and not from OV_dy. But OV_dy is trivially reducible to this dynamic version of 3-OV.)

Part IV

Further Complexity Classes
In this part, we define some more complexity classes, including randomized classes and classes of search problems. This formalization is needed for arguing about the complexity of connectivity problems in Part V.
12 Randomized Classes
In this section, we define the randomized versions of the classes P_dy and NP_dy, which are BPP_dy and AM_dy respectively. The only difference between algorithms for problems in P_dy and BPP_dy is that, for BPP_dy, algorithms can be randomized, and we only require the answers of the algorithms to be correct with probability at least 1 − 1/n when the instance is of size n. The same analogy goes for the difference between NP_dy and AM_dy.

First, we formally define randomized algorithm-families. A randomized algorithm-family A is just an algorithm-family such that, for each n ≥
1, at step t = 0, A n is additionally given arandom string r ∈ { , } poly( n ) , and at step t ≥ A n is additionally given a random string r t ∈ { , } polylog( n ) . The internal states and the answers of A n at each step can depend on previousrandom strings. Hence, at the step t , the answer x t is a random variable. We can formally definerandomized verifier-families as a randomized counterpart of deterministic verifier-family defined inDefinition 7.2 exactly the same way. Definition 12.1 (Class
BPP dy ) . A decision problem D is in BPP dy iff it admits a randomizedalgorithm-family A with update-time Time A ( n ) = O (polylog( n )) and space-complexity Space A ( n ) = O (poly( n )). On an instance of size n , for each step t , the answer x t of A n must be correct withprobability at least 1 − /n , i.e. Pr[ x t = D ( I t )] ≥ − /n for each t . △ Definition 12.2 (Class AM dy ) . A decision problem D is in AM dy iff it admits a randomizedverifier-family V with update-time Time A ( n ) = O (polylog( n )) and space-complexity Space A ( n ) = O (poly( n )) which satisfy the following properties for each n ≥
1. Fix any instance-sequence( I , . . . , I k ) of D n . Suppose that V n gets I as input at step t = 0, and (( I t − , I t ) , π t ) as inputat every step t ≥
1. Then:1. For every proof-sequence ( π , . . . , π k ), we have x t = 0 with probability at least 1 − /n foreach t ∈ { , . . . , k } where D n ( I t ) = 0.2. If the proof-sequence ( π , . . . , π k ) is reward-maximizing (defined as in Definition 7.3 andrepeated below), then we have x t = 1 with probability at least 1 − /n for each t ∈ { , . . . , k } with D n ( I t ) = 1.The proof-sequence ( π , . . . , π k ) is reward-maximizing iff at each step t ≥
1, given the past history( I , . . . , I t ), ( r , . . . , r t ), and ( π , . . . , π t − ), the proof π t is chosen in such a way that maximizes y t (when we think of y t as a polylog( n ) bit integer). We say that such a proof π t is reward-maximizing . △ Being correct with probability ≥ / π t at step t can depend on all the random choices r ,. . . , r t up to the current step t . We emphasize that it is the prover who “sees” the previousrandom choices. The adversary only sees the answers of the algorithm.By definition of AM dy , we have the following: Proposition 12.3. ( NP dy ) BPP dy ⊆ AM dy . Next, we list a randomized counterpart of Lemmas 9.5 and 9.7 and Theorem 9.6. The proofsgo exactly the same and hence are omitted.
Proposition 12.4. ( AM dy ∩ coAM dy ) AM dy ∩ coAM dy = AM dy ∩ coAM dy and ( AM dy ) AM dy ∩ coAM dy = AM dy . Proposition 12.5. If NP dy ⊆ coAM dy , then PH dy ⊆ AM dy ∩ coAM dy .
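As an aside on Definition 12.1: the 1 − 1/n correctness threshold is robust, since any constant advantage can be boosted by independent repetition and majority vote at only a polylog(n) overhead in update time. The following self-contained sketch (ours, not from the paper; all names are made up) simulates this standard amplification.

```python
import random
from collections import Counter

def noisy_answer(truth, p_correct, rng):
    """One run of a randomized algorithm: returns the correct bit with
    probability p_correct, and the wrong bit otherwise."""
    return truth if rng.random() < p_correct else not truth

def amplified_answer(truth, p_correct, reps, rng):
    """Majority vote over `reps` independent runs (reps should be odd)."""
    votes = Counter(noisy_answer(truth, p_correct, rng) for _ in range(reps))
    return votes.most_common(1)[0][0]

rng = random.Random(42)
trials = 2000
# Empirical success rate of a single run vs. a 41-fold majority vote,
# when each run is correct with probability only 2/3.
single = sum(noisy_answer(True, 2 / 3, rng) for _ in range(trials)) / trials
boosted = sum(amplified_answer(True, 2 / 3, 41, rng) for _ in range(trials)) / trials
print(single, boosted)
```

With success probability 2/3 per run, 41 repetitions already push the empirical error well below 5%; Θ(log n) repetitions suffice for error 1/n.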
13 Search Problems
In this section, we define complexity classes of dynamic search problems. Intuitively, dynamic search problems are problems of maintaining some object under updates. For example, in graphs, there are problems of maintaining spanning forests, matchings, shortest-path trees, etc. The important characteristic of dynamic search problems is that the size of the maintained object is usually larger than the update time. Hence, at each step, the algorithm only outputs how the object should be changed. Search problems are very natural in the literature on dynamic algorithms and are, perhaps, even more well-studied than their decision versions.
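To make the "output only the change" convention concrete, here is a minimal sketch (ours, not from the paper) for the insertions-only case: a spanning forest is maintained via union-find, and each update emits O(1) words describing how the forest changes, even though the forest itself has up to n − 1 edges.

```python
class IncrementalSpanningForest:
    """Maintains a spanning forest of a graph under edge insertions.

    The maintained object (the forest) can be large, but each update
    outputs only the delta: ("add", (u, v)) if the new edge joins two
    components, or None if the forest is unchanged."""

    def __init__(self, n):
        self.parent = list(range(n))

    def _find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def insert(self, u, v):
        ru, rv = self._find(u), self._find(v)
        if ru == rv:
            return None                # u, v already connected: no change
        self.parent[ru] = rv
        return ("add", (u, v))         # the only change to the forest

f = IncrementalSpanningForest(4)
print(f.insert(0, 1))  # ("add", (0, 1))
print(f.insert(2, 3))  # ("add", (2, 3))
print(f.insert(1, 0))  # None: would create a cycle, forest unchanged
```

Handling deletions is exactly the hard part that much of this paper is about; this sketch only illustrates the input/output convention.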
Dynamic search problems and algorithms.
A dynamic search problem D is defined as a dynamic decision problem except that the answer on an instance I, denoted by D(I), need not be only 0 or 1. For each I, D(I) can be an arbitrary set. For any x ∈ D(I), we say that x is a correct answer for I. An algorithm A for the search problem D is defined as an algorithm for decision problems except for one difference: A designates a part of its memory for the answer for the search problem. At step t, we denote this answer by x_t, and A can update it by reading and writing on x_t at each step. Note that x_t can be a large string. We say that A solves D if, at every step t, x_t ∈ D(I_t) for any instance sequence (I_0, I_1, ...). We define a verifier and a randomized verifier for search problems in a similar way.

Complexity Classes.
The search versions of P dy, NP dy and BPP dy are denoted by FP dy, FNP dy and FBPP dy respectively. Another important class of search problems is the class TFNP dy. We can intuitively think of this class as a search version of NP dy ∩ coNP dy. The definitions are motivated by the definitions in the static setting (see e.g. [MP91, GGH18] and [Ric08]).

Definition 13.1 (Class FP dy). A search problem D is in FP dy iff it admits an algorithm-family A with update-time Time_A(n) = O(polylog(n)) and space-complexity Space_A(n) = O(poly(n)). On an instance of size n, for each step t, the answer x_t of A_n must be correct, i.e. x_t ∈ D(I_t). △

Definition 13.2 (Class FBPP dy). A search problem D is in FBPP dy iff it admits a randomized algorithm-family A with update-time Time_A(n) = O(polylog(n)) and space-complexity Space_A(n) = O(poly(n)). On an instance of size n, for each step t, the answer x_t of A_n must be correct with probability at least 1 − 1/n, i.e. Pr[x_t ∈ D(I_t)] ≥ 1 − 1/n for each t. △

Remark. For search problems that can be solved with randomized algorithms, there is an important distinction between two models of adversaries. We say that an adversary is oblivious if the updates it generates must not depend on the previous outputs of the algorithm. Otherwise, we say that the adversary is adaptive. Observe that, in all definitions in this paper, we never assume oblivious adversaries, and so algorithms must work against adaptive adversaries.
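The distinction matters even in toy settings. In the sketch below (ours; a hypothetical scenario, not from the paper), an algorithm maintains a uniformly random "representative" of a shrinking set. An oblivious adversary rarely happens to delete the representative, so few resamplings occur; an adaptive adversary, who sees the output, destroys it on every single update.

```python
import random

def count_resamples(adaptive, n=100, seed=0):
    """Delete n-1 elements from {0,...,n-1}; count how often the maintained
    random representative is destroyed and must be resampled."""
    alg_rng = random.Random(seed)        # algorithm's coins
    adv_rng = random.Random(seed + 1)    # oblivious adversary's coins
    alive = set(range(n))
    rep = alg_rng.choice(sorted(alive))  # the algorithm's (public) output
    resamples = 0
    for _ in range(n - 1):
        # Adaptive: delete exactly the element the algorithm just output.
        # Oblivious: delete an element chosen independently of the output.
        victim = rep if adaptive else adv_rng.choice(sorted(alive))
        alive.discard(victim)
        if rep not in alive:
            rep = alg_rng.choice(sorted(alive))
            resamples += 1
    return resamples

print(count_resamples(adaptive=True))   # 99: every deletion kills the output
print(count_resamples(adaptive=False))  # small: about ln(n) in expectation
```

The guarantees in this paper are the stronger, adaptive-adversary kind; the oblivious model reappears in Section 15 when we use the algorithm of [KKM13].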
Definition 13.3 (Class FNP dy). A search problem D is in FNP dy iff it admits a verifier-family V with update-time Time_V(n) = O(polylog(n)) and space-complexity Space_V(n) = O(poly(n)) which satisfies the following property for each n ≥ 1. Fix any instance-sequence (I_0, ..., I_k) of D_n. Suppose that V_n gets I_0 as input at step t = 0, and ((I_{t−1}, I_t), π_t) as input at every step t ≥ 1. If the proof-sequence (π_1, ..., π_k) is reward-maximizing (defined as in Definition 7.3), then we have x_t ∈ D_n(I_t) for each t ∈ {1, ..., k} where D_n(I_t) ≠ ∅. △

Note that it makes sense to say that a search problem D is NP dy -hard under P dy -reductions. This just means that NP dy ⊆ (P dy)^D, i.e. any decision problem in NP dy can be solved efficiently given an oracle to D.

Example 13.4.
The search problems First-DT dy and First-DNF dy are NP dy -hard. This is shown in Sections 10 and 11. Note that both First-DT dy and First-DNF dy are in FNP dy. △

We say that a search problem D is total if, for any instance I, D(I) ≠ ∅.

Definition 13.5 (Class TFNP dy). A search problem D is in TFNP dy iff D ∈ FNP dy and D is total. △

The class TFNP dy is a search version of NP dy ∩ coNP dy for the same reason as in the static setting (see [MP91]).

Example 13.6.
Dynamic spanning tree is not total, because there is no spanning tree if the graph is not connected. However, dynamic spanning forest is total. Dynamic spanning tree is in FNP dy using the same algorithm which shows that dynamic connectivity is in NP dy. It is not known if dynamic spanning forest is in TFNP dy. We will show in Section 14 that dynamic spanning forest is in TFNP dy iff dynamic connectivity is in coNP dy. △

The proofs of the following observations are basically the same as those of Lemmas 9.5 and 9.7; we just have an oracle for a search problem.
Proposition 13.7. (P dy)^TFNP dy ⊆ NP dy ∩ coNP dy.

Proposition 13.8. (NP dy)^TFNP dy = NP dy.

Part V
A Coarse-grained Approach to Connectivity Problems
In contrast to fine-grained complexity, which is problem-centric, the coarse-grained approach in this paper is resource-centric. The goal of this part, however, is to show that this approach is helpful for understanding the complexity of specific problems as well (even though these problems are not known to be complete for any class). As a show-case, we give new observations on the well-studied dynamic connectivity and k-edge connectivity problems. More specifically, we show a new "nondeterministic equivalence" between dynamic connectivity and dynamic spanning forest, and its consequences. We show that non-determinism together with randomization can help speed up the best current dynamic k-edge connectivity algorithms from ˜O(√n) to polylogarithmic time. Then, by applying our results on coarse-grained complexity theory, this implies a certain fine-grained "non-hardness" result for dynamic k-edge connectivity and other problems.
14 Is dynamic connectivity in coNP dy ?

Dynamic connectivity (Conn dy) is one of the most well-studied dynamic graph problems. In this section, we study this problem from the non-deterministic perspective.

Previous works.
Here, we briefly review the best update times for this problem in the worst-case setting. Kapron, King and Mountjoy [KKM13] show a Monte Carlo algorithm with polylogarithmic worst-case time for Conn dy. In other words, they show the following (see Proposition B.1 for the formal proof):

Theorem 14.1 ([KKM13]). Conn dy ∈ BPP dy.

Nanongkai, Saranurak and Wulff-Nilsen [NSW17] show a Las Vegas algorithm for dynamic minimum spanning forest (which implies dynamic connectivity). On an n-node graph, their algorithm has n^{o(log log log n / log log n)} worst-case update time, which was later slightly improved to n^{O(log log n / √log n)} [SW19]. However, sub-polynomial update times are not captured by our complexity classes. For deterministic algorithms, the best update time is still O(√n · (log log n)/√(log n)) [KKPT16], slightly improving the long-standing O(√n) bound of [Fre85, EGIN92]. The ultimate goal in this line of research is to show that there is a deterministic algorithm with polylog(n) worst-case update time, as in the case when amortized update time is allowed [HdLT98]. In our language, this question translates to the following:

Question 14.2. Is Conn dy ∈ P dy ?

When we allow non-determinism, it is easy to show that Conn dy ∈ NP dy (see Proposition B.3).
Conn dy ∈ NP dy . Surprisingly, it is not clear at all whether
Conn dy ∈ coNP dy . As a stepping stone towards theanswer whether Conn dy ∈ P dy , we believe that studying this question might lead to further insight. Question 14.4. Is Conn dy ∈ coNP dy ? Conn dy ∈ coNP dy . These consequenceswill later help us argue why some problems should not be NP dy -hard in a non-straightforward wayin Section 15. Dynamic connectivity (
Conn dy ) and dynamic spanning forest ( SpanningForest dy ) are closely relatedproblems. Given that we can maintain a spanning forest F of a graph G , we will implement thetop tree [AHdLT05] on F . This allows us to count the number of connected components, so thatwe know whether G is connected or not. Moreover, we can answer a connectivity query for any pairof nodes in logarithmic time. Conversely, given that we can maintain if G is connected, it is notclear how to maintain a spanning forest of G . Interestingly, all the previous dynamic connectivityalgorithms actually maintain dynamic spanning forest . It turns out that, when non-determinismis allowed, the two problems are indeed equivalent : Theorem 14.5.
Conn dy ∈ NP dy ∩ coNP dy iff SpanningForest dy ∈ TFNP dy.

Proof. First, note that a spanning forest exists in any graph. As argued above, it is clear that SpanningForest dy ∈ TFNP dy implies that Conn dy ∈ NP dy ∩ coNP dy. We will only prove the converse. Suppose that Conn dy ∈ NP dy ∩ coNP dy. We will describe a dynamic nondeterministic algorithm for dynamic spanning forest. Suppose that we have maintained a graph G_t and its spanning forest F_t up to time step t, and that F_t = {T_1, ..., T_{k_t}} has k_t connected components. For 1 ≤ i ≤ k_t, let u_i ∈ T_i be an arbitrary node in T_i. We will maintain a graph G′_t where V(G′_t) = V(G_t) ∪ {s} and E(G′_t) = E(G_t) ∪ {(s, u_i) | 1 ≤ i ≤ k_t}. Intuitively, G′_t is obtained from G_t by connecting each connected component, represented by u_i in G_t, to the node s in G′_t. So G′_t is connected at all times. Now, given an edge insertion, it is easy to maintain G_t, F_t and G′_t using a link-cut tree [ST81]. An edge deletion is also easy to handle if the deleted edge e is not in F_t. So it remains to deal with the case where e ∈ F_t. Suppose that e ∈ T_j, and write T_j = T_jL ∪ {e} ∪ T_jR. That is, deleting e disconnects T_j into the two parts T_jL and T_jR. Our goal is to find a replacement edge f = (u_L, u_R) where f ≠ e, u_L ∈ T_jL and u_R ∈ T_jR. We have the following key observation:

Claim. G′_t − e is connected iff there is a replacement edge.

Therefore, we can call an NP dy ∩ coNP dy -algorithm A for dynamic connectivity on G′_t. If A returns that G′_t − e is not connected, then there is no replacement edge, and we update G_{t+1} = G_t − e and F_{t+1} = F_t − e. Either T_jL or T_jR is a new connected component, and so we can update G′_{t+1} accordingly. If A returns that G′_t − e is connected, then there is a replacement edge f = (u_L, u_R) which a prover can provide us, and, more importantly, we can check that f ∈ E_t, f ≠ e, u_L ∈ T_jL and u_R ∈ T_jR, i.e. that f is indeed a replacement edge. Then, we can update G_{t+1}, F_{t+1} and G′_{t+1} accordingly. To conclude, we can verify quickly that, at each step t, F_t is indeed a spanning forest.
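The key claim in the proof, that a replacement edge exists iff G′_t − e stays connected, can be sanity-checked by brute force. The sketch below is ours (helper names are made up) and assumes, as in the proof, that `forest` is a spanning forest of the input graph.

```python
def components(nodes, edges):
    """Connected components of an undirected graph, as a list of frozensets."""
    adj = {v: set() for v in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for src in nodes:
        if src in seen:
            continue
        comp, stack = set(), [src]
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] - comp)
        seen |= comp
        comps.append(frozenset(comp))
    return comps

def claim_holds(nodes, edges, forest, tree_edge):
    """Check: (a replacement edge for tree_edge exists) == (G' - tree_edge is
    connected), where G' adds a super node "s" wired to one representative
    per component of the forest."""
    u, v = tree_edge
    # The two sides of the tree split by deleting tree_edge.
    halves = components(nodes, [e for e in forest if e != tree_edge])
    left = next(c for c in halves if u in c)
    right = next(c for c in halves if v in c)
    has_replacement = any(
        e != tree_edge and ((e[0] in left and e[1] in right) or
                            (e[1] in left and e[0] in right))
        for e in edges)
    # Build G' and test connectivity after deleting tree_edge.
    reps = [min(c) for c in components(nodes, forest)]
    g_prime = [e for e in edges if e != tree_edge] + [("s", r) for r in reps]
    connected = len(components(list(nodes) + ["s"], g_prime)) == 1
    return has_replacement == connected

nodes = [0, 1, 2, 3]
forest = [(0, 1), (1, 2), (2, 3)]
print(claim_holds(nodes, forest + [(0, 2)], forest, (1, 2)))  # True (edge exists)
print(claim_holds(nodes, forest, forest, (1, 2)))             # True (no edge)
```

Note that the claim relies on F_t being spanning: no edge of G_t crosses between two different trees of F_t, so the only way to reconnect the two sides of T_j is via a replacement edge.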
Hence SpanningForest dy ∈ TFNP dy.

Consequences of Conn dy ∈ coNP dy.

Corollary 14.7 ([HK99]). Suppose that Conn dy ∈ coNP dy. Then dynamic bipartiteness is in NP dy ∩ coNP dy. Moreover, the following problems are in TFNP dy:
1. dynamic minimum spanning forest on graphs with d distinct edge weights, where d = polylog(n),
2. dynamic (1 + ǫ)-approximate minimum spanning forest, where ǫ > 1/polylog(n), and
3. dynamic k-edge connectivity certificate, where k = polylog(n).
The number n above is the number of nodes in the underlying graphs.

(Footnote: there is one exception, in a more restricted sensitivity setting [PT07].)

Proof. As we already know that Conn dy ∈ NP dy, if Conn dy ∈ coNP dy, then SpanningForest dy ∈ TFNP dy by Theorem 14.5. Henzinger and King [HK99] show several reductions from spanning forest to the problems listed above.

From this, we can infer the "non-hardness" of the problems in Corollary 14.7 unless PH dy collapses:

Corollary 14.8.
Unless PH dy = NP dy ∩ coNP dy, the following two statements cannot hold simultaneously:
1. Conn dy ∈ coNP dy.
2. Some problem D in the list of Corollary 14.7 is NP dy -hard.

Proof. Suppose that Conn dy ∈ coNP dy. Then dynamic bipartiteness is in NP dy ∩ coNP dy, and all the search problems D from Corollary 14.7 are in TFNP dy. If dynamic bipartiteness is NP dy -hard, then NP dy ⊆ (P dy)^(NP dy ∩ coNP dy) = NP dy ∩ coNP dy by Lemma 9.7. If any problem D from Corollary 14.7 is NP dy -hard, then NP dy ⊆ (P dy)^TFNP dy ⊆ NP dy ∩ coNP dy by Proposition 13.7. Both of these imply that PH dy collapses to NP dy ∩ coNP dy by Theorem 9.6.
15 Complexity of dynamic k-edge connectivity

Recall that the dynamic k-edge connectivity problem is to maintain whether a graph is k-edge connected. The current best worst-case upper bound for this problem is ˜O(√n), by a deterministic algorithm [Tho01]. For k ≤ 3, the problem has been extensively studied in the amortized update time setting (see [HdLT98] for the history). However, for k ≥ 4, there are no better algorithms with amortized update time. It remains open whether the ˜O(√n) bound can be improved. It turns out that non-determinism and randomization can significantly speed up the update time to polylogarithmic worst-case. More precisely, we show the following:

Theorem 15.1. k-EdgeConn dy ∈ AM dy ∩ coAM dy.

Moreover, randomization can be removed if Conn dy ∈ coNP dy:

Theorem 15.2. If Conn dy ∈ coNP dy, then k-EdgeConn dy ∈ NP dy ∩ coNP dy.

Before proving the above theorems, we discuss how they provide some "evidence" that dynamic k-edge connectivity should not be NP dy -hard. This also has some interesting consequences for the fine-grained complexity of k-EdgeConn dy. Theorem 15.2 adds another problem to the list of Corollary 14.7. Using the same proof as Corollary 14.8, we have:

Corollary 15.3.
Unless PH dy = NP dy ∩ coNP dy, either Conn dy ∉ coNP dy or k-EdgeConn dy is not NP dy -hard.

(A k-connectivity certificate H of G is a subgraph of G where (1) H has O(kn) edges, and (2) H is k-edge connected iff G is k-edge connected.)

The evidence that k-EdgeConn dy should not be NP dy -hard is relatively more interesting than for the other problems in Corollary 14.7. This is because the problems in Corollary 14.7 already admit very fast algorithms. Using the dynamic spanning forest algorithm against an oblivious adversary from [KKM13], there are Monte Carlo algorithms against an oblivious adversary with polylogarithmic worst-case update time. Using [NSW17], there are Las Vegas algorithms against adaptive adversaries with sub-polynomial worst-case update time for these problems. On the contrary, the current best update time for k-EdgeConn dy is still ˜O(√n), by [Tho01]. We can also show a similar theorem without the assumption that Conn dy ∈ coNP dy:

Corollary 15.4.
Unless PH dy ⊆ AM dy ∩ coAM dy, k-EdgeConn dy is not NP dy -hard.

Proof. If k-EdgeConn dy is NP dy -hard, then Theorem 15.1 implies that NP dy ⊆ (P dy)^(AM dy ∩ coAM dy) = AM dy ∩ coAM dy by Proposition 12.4. This implies that PH dy ⊆ AM dy ∩ coAM dy by Proposition 12.5.

Consequences for the fine-grained complexity of k-EdgeConn dy. Suppose that we believe that k-EdgeConn dy is not NP dy -hard. By definition, this means that there is no reduction from DNF dy to (many instances of) k-EdgeConn dy with any polynomial-size blow-up. This is useful information about the fine-grained complexity of k-EdgeConn dy for the following reason. Without this information, it is conceivable that there might exist a reduction from DNF dy to k-EdgeConn dy where the size of the instance is blown up by a quadratic factor, i.e. a reduction from DNF dy with m clauses to k-EdgeConn dy on n = m^2 nodes. By SETH, this would immediately imply a tight lower bound of Ω(n^{0.5−o(1)}) for k-EdgeConn dy. In this section, we show that such a reduction is unlikely, assuming for example that PH dy ⊈ AM dy ∩ coAM dy.

First, we prove the easy part of the above theorems:
Lemma 15.5. k-EdgeConn dy ∈ (coNP dy)^Conn dy. In particular, k-EdgeConn dy ∈ coAM dy and k-EdgeConn dy ∈ Π_2 dy. Moreover, if Conn dy ∈ coNP dy, then k-EdgeConn dy ∈ coNP dy.

Proof sketch. At any time step, if the graph G = (V, E) is not k-edge connected, then the prover sends a cut set C ⊂ E of size less than k to the verifier. The verifier can then try deleting the edges of C and check whether G − C is connected using the dynamic connectivity oracle. That is, we can verify quickly that G is not k-edge connected. As Conn dy ∈ BPP dy, we get k-EdgeConn dy ∈ coAM dy by Proposition 12.3. As Conn dy ∈ NP dy, we get k-EdgeConn dy ∈ Π_2 dy by definition. If Conn dy ∈ coNP dy, then Conn dy ∈ NP dy ∩ coNP dy by Proposition 14.3, and we have k-EdgeConn dy ∈ (coNP dy)^(NP dy ∩ coNP dy) = coNP dy by Lemma 9.5.

It remains to prove the following:

Lemma 15.6. k-EdgeConn dy ∈ AM dy. Moreover, if Conn dy ∈ coNP dy, then k-EdgeConn dy ∈ NP dy.

The proof is based on the previous algorithm by Thorup [Tho01]. There are three parts. The first part reviews Thorup's algorithm. The second part shows that the above lemma follows if we can show that another dynamic problem, called k-MinTreeCut dy, is in NP dy. The last part shows that the YES-instances of k-MinTreeCut dy can be verified quickly, i.e. k-MinTreeCut dy ∈ NP dy. This is done by adjusting the algorithm of [Tho01] and using non-determinism to bypass the ˜O(√n)-time bottleneck in [Tho01].

k-MinTreeCut dy

Before we define the problem k-MinTreeCut dy formally, we need the following definitions. Let G = (V, E), F ⊆ E and e ∈ F. Let T ∋ e be the connected component of F containing e, and write T = T_L ∪ {e} ∪ T_R. The cover number of e is cover_{(G,F)}(e) = |{(u, v) ∈ E | u ∈ T_L and v ∈ T_R}|.

Definition 15.7 (k-MinTreeCut dy).
In the dynamic k min tree cut problem (k-MinTreeCut dy), we have to maintain a data structure on a graph G and a forest F that can handle the following operations:
1. edge insertions or deletions in G, and
2. edge insertions or deletions in F, as long as F ⊆ G and F remains a forest.
After each update, we must return whether all edges e ∈ F have cover number at least k, i.e. whether min_{e ∈ F} cover_{(G,F)}(e) ≥ k. △

Now, we review how Thorup reduces k-EdgeConn dy to k-MinTreeCut dy. The main goal in [Tho01] is a dynamic randomized algorithm for (1 + ǫ)-approximate mincut with ˜O(√n) worst-case update time on a graph with n nodes. To this end, Thorup first shows a randomized reduction, based on Karger's sampling [Kar99], from (1 + ǫ)-approximate mincut to the k-edge connectivity problem where k = O(log n). This is the only randomized part of his algorithm. Then, he actually gives a dynamic deterministic algorithm for k-EdgeConn dy with O(poly(k log n)√n) worst-case update time. To solve k-EdgeConn dy, he maintains a greedy tree packing with d = poly(k log n) many forests. A greedy tree packing T = {F_1, ..., F_d} on G is a collection of forests in G, i.e. F_i ⊆ G. If G is connected, then each F_i is a tree. We omit the precise definition here and only state the crucial property of the greedy tree packing T:

Theorem 15.8 ([Tho01]). For some d = poly(k log n), let T be a greedy tree packing on G with d forests. G is k-edge connected iff G is connected and min_{e ∈ F} cover_{(G,F)}(e) ≥ k for all F ∈ T.

That is, given a greedy tree packing, k-EdgeConn dy reduces to the "AND" of many instances of k-MinTreeCut dy. Another property, about maintaining a greedy tree packing, is the following:

Lemma 15.9 ([TK00, Tho01]).
For any d = polylog(n), dynamic greedy tree packing with d forests is P dy -reducible to dynamic minimum spanning forest on graphs with d distinct edge weights.

Recall that the above two problems in Lemma 15.9 are search problems, but the statement makes sense by the discussion in Section 13.
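To make the cover-number condition of Theorem 15.8 concrete, the following brute-force sketch (ours; the real algorithm maintains these quantities with top trees rather than by re-scanning) computes cover_{(G,F)}(e) directly from the definition.

```python
def tree_sides(forest_adj, e):
    """The two sides T_L, T_R obtained by deleting tree edge e = (x, y):
    DFS from each endpoint, refusing to cross e."""
    def reach(start):
        seen, stack = {start}, [start]
        while stack:
            v = stack.pop()
            for w in forest_adj[v]:
                if {v, w} == set(e) or w in seen:
                    continue
                seen.add(w)
                stack.append(w)
        return seen
    x, y = e
    return reach(x), reach(y)

def cover(edges, forest_adj, e):
    """cover_{(G,F)}(e): edges of G with one endpoint on each side of e."""
    left, right = tree_sides(forest_adj, e)
    return sum(1 for a, b in edges
               if (a in left and b in right) or (a in right and b in left))

# Triangle graph (2-edge connected), forest F = {(0,1), (1,2)}.
edges = [(0, 1), (1, 2), (0, 2)]
forest_adj = {0: {1}, 1: {0, 2}, 2: {1}}
print(cover(edges, forest_adj, (0, 1)))  # 2: edges (0,1) and (0,2) cross
print(cover(edges, forest_adj, (1, 2)))  # 2: edges (1,2) and (0,2) cross
```

As the definition above counts all edges of E crossing the split (including e itself), every tree edge of the triangle has cover number 2, so min_{e ∈ F} cover(e) ≥ k holds for k = 2, matching the triangle being connected and 2-edge connected.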
We will show in the next section (Theorem 15.10) that k-MinTreeCut dy ∈ NP dy. Given this, our goal in this section is to prove Lemma 15.6, which implies the main results (Theorem 15.1 and Theorem 15.2).

Proof: if Conn dy ∈ coNP dy, then k-EdgeConn dy ∈ NP dy. Here, we assume that Conn dy ∈ coNP dy. By Corollary 14.7 and Lemma 15.9, there are TFNP dy -algorithms for dynamic minimum spanning forest on graphs with d distinct edge weights where d = polylog(n), and for dynamic greedy tree packing T with d forests, respectively. From Theorem 15.8, we need to verify whether min_{e ∈ F} cover_{(G,F)}(e) ≥ k for every F ∈ T. Let us denote this problem (i.e. the "AND" of d = polylog(n) many instances of k-MinTreeCut dy) by "d-AND-k-MinTreeCut dy". The key observation is that, if k-MinTreeCut dy ∈ NP dy (as will be shown in Theorem 15.10), then d-AND-k-MinTreeCut dy is also in NP dy. This is true simply by running d instances of the verifier for k-MinTreeCut dy. Theorem 15.8 implies that, to solve k-EdgeConn dy, it is enough to maintain a dynamic greedy tree packing T with d forests, feed the graph and the d forests from T as input to the subroutine for d-AND-k-MinTreeCut dy, and then return, in every step, the same YES/NO answer returned by the subroutine for d-AND-k-MinTreeCut dy.

We claim that k-EdgeConn dy ∈ NP dy. This is because, first, T can be maintained by a TFNP dy -algorithm, so the prover only needs to provide the proofs showing that T is correctly maintained. Second, assuming that T is correctly maintained at every step, given a reward-maximizing proof sequence for d-AND-k-MinTreeCut dy, the verifier algorithm must return a correct answer for d-AND-k-MinTreeCut dy (as d-AND-k-MinTreeCut dy ∈ NP dy). This answer is the same as the one for k-EdgeConn dy, so we obtain an NP dy -algorithm for k-EdgeConn dy.

Proof: k-EdgeConn dy ∈ AM dy. The proof goes in almost the same way as above. However, we need to be very careful about the notion of oblivious adversary. We first claim that a dynamic greedy tree packing T with d = polylog(n) forests can be maintained in polylog(n) time against an oblivious adversary.
This is true by observing that the algorithm of Kapron, King and Mountjoy [KKM13], which shows Conn dy ∈ BPP dy, can actually maintain a dynamic spanning forest against an oblivious adversary. This result can further be extended to maintaining a dynamic minimum spanning forest against an oblivious adversary on graphs with d distinct edge weights where d = polylog(n). (We note that this cannot be obtained using the reduction in [HK99], because that reduction requires the algorithm to work against an adaptive adversary. Nevertheless, the algorithm in [KKM13] can be directly extended to d-weight minimum spanning forest, roughly by leveling the edges of each weight in increasing order and finding a spanning forest on the graph with smaller weights first.) As greedy tree packing is reducible to d-weight minimum spanning forest by Lemma 15.9, we obtain the claim.

As before, Theorem 15.8 implies that, to solve k-EdgeConn dy, it is enough to maintain a dynamic greedy tree packing T with d forests, feed the graph and the d forests from T as input to the subroutine for d-AND-k-MinTreeCut dy, and then return, in every step, the same YES/NO answer returned by the subroutine for d-AND-k-MinTreeCut dy.

We claim that, in this algorithm, the greedy tree packing T can be correctly maintained with high probability, although the algorithm of [KKM13] only works against an oblivious adversary. This is because (1) the update sequence fed to the dynamic greedy tree packing algorithm comes only from the adversary (we do not adaptively generate more updates that depend on the answers of the subroutine for d-AND-k-MinTreeCut dy), and (2) assuming that the algorithm has been returning only correct YES/NO answers, the adversary learns nothing about the internal random choices of the algorithm. So adaptive adversaries have no more power than oblivious adversaries against this algorithm, and hence T is correctly maintained with high probability.

Next, we claim that k-EdgeConn dy ∈ AM dy. This is because we can assume that, with high probability, T is correctly maintained at every step. So given a reward-maximizing proof sequence for d-AND-k-MinTreeCut dy, the verifier algorithm must return a correct answer for d-AND-k-MinTreeCut dy (as d-AND-k-MinTreeCut dy ∈ NP dy). This answer is the same as the one for k-EdgeConn dy, so we obtain an AM dy -algorithm for k-EdgeConn dy.

k-MinTreeCut dy ∈ NP dy

In this section, it remains to prove the following:
Theorem 15.10. k-MinTreeCut dy ∈ NP dy.

We essentially use the algorithm in [Tho01], so we refer to [Tho01] for most of the definitions. We will only point out which part of the algorithm can be sped up using non-determinism.
Review of [Tho01].
Let G = (V, E) and F ⊆ E be the graph and the forest we are maintaining. We will implement a top tree on F. The top tree [AHdLT05] hierarchically decomposes F into clusters. Each cluster C is a connected subgraph of F. Clusters are edge-disjoint, and the hierarchy of clusters forms a binary tree whose root cluster corresponds to F itself. Each cluster C has two children C_L and C_R which partition the edges of C into two connected parts. The important property of clusters is that each cluster C shares at most 2 nodes with the clusters which are not descendants of C. Hence, there are two types of clusters: path clusters (ones that share two nodes) and point clusters (ones that share one node). For a path cluster C, let a and b be the two nodes which C shares with other non-descendant clusters. We call a and b the boundary nodes of C, and we call the path π = (a, ..., b) ⊆ C connecting a and b the cluster path of C.

After each update to G and F, there is an algorithm which dictates how the hierarchy of top tree clusters should change so that the binary tree corresponding to the hierarchy has logarithmic depth. This is done by "joining" and "destroying" O(log n) clusters. This would normally imply efficiency, except that we will also maintain some information on each cluster. Then, the problem reduces to how to obtain this information when a new cluster C is created by joining its two children clusters C_L and C_R.

Recall that our goal is to know whether min_{e ∈ F} cover_{(G,F)}(e) ≥ k. It thus suffices to implicitly maintain, for each edge e ∈ F, the value cover′_{(G,F)}(e) = min{cover_{(G,F)}(e), k} in such a way that the implicit representation allows us to compute min_{e ∈ F} cover′_{(G,F)}(e); indeed, min_{e ∈ F} cover′_{(G,F)}(e) ≥ k iff min_{e ∈ F} cover_{(G,F)}(e) ≥ k. Towards this implicit representation, we do the following. For every cluster C, we implicitly maintain a local cover number w.r.t. C (using the top tree as well).
These local cover numbers are such that, for the root cluster C, the local cover number of e w.r.t. C, denoted by lcover_{(G,C)}(e), equals cover′_{(G,F)}(e). Moreover, we can obtain min_{e ∈ C} lcover_{(G,C)}(e) = min_{e ∈ F} cover′_{(G,F)}(e) when C is the root cluster. Therefore, the main task is to maintain the local cover numbers on each cluster. As described above, the only non-trivial task in maintaining information on each cluster arises when we create a new cluster C = C_L ∪ C_R from two clusters C_L and C_R. It turns out that if C_L and C_R are point clusters, then we need to do nothing. So we only need to describe what to do when C_L and/or C_R are path clusters. We describe the case for C_R only, because the case for C_L is symmetric. Let π = (a_0, ..., a_s) be the cluster path of C_R, where C_L and C_R share the node a_0. Let E′(C_L, C_R) = {(u_L, u_R) ∈ E − F | u_L ∈ C_L, u_R ∈ C_R} be the set of non-tree edges with one endpoint in C_L and the other in C_R.

We state without proof that what we need is just to find k non-tree edges from E′(C_L, C_R) that "cover" π as much as possible (see [Tho01] for the argument). More precisely, for e ∈ E′(C_L, C_R), let P_e ⊆ F be the path in F connecting the two endpoints of e. Observe that P_e ∩ π = (a_0, ..., a_{s_e}) for some 0 ≤ s_e ≤ s. We want to find k different edges e from E′(C_L, C_R) whose s_e is as large as possible. This is the bottleneck in [Tho01]: to find these edges, Thorup builds a data structure based on the 2-dimensional topology tree of Frederickson [Fre85, Fre97], which results in ˜O(√n) update time.

Using proofs from the prover. We will let the prover give us k non-tree edges from E′(C_L, C_R) that "cover" π as much as possible. Given an edge e, the verifier can check whether e ∈ E′(C_L, C_R) and can compute the value s_e. Then, the verifier will use s_e as a "reward" for the prover (see Definition 7.3).
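The reward s_e can be computed by brute force as follows (our sketch with made-up helper names; the actual verifier computes s_e in polylogarithmic time using the top tree).

```python
def tree_path(adj, a, b):
    """The unique a-b path in a forest, as a list of nodes (DFS with parents)."""
    prev, stack = {a: None}, [a]
    while stack:
        v = stack.pop()
        if v == b:
            break
        for w in adj[v]:
            if w not in prev:
                prev[w] = v
                stack.append(w)
    path, v = [], b
    while v is not None:
        path.append(v)
        v = prev[v]
    return path[::-1]

def reward(adj, cluster_path, e):
    """s_e: since P_e ∩ pi = (a_0, ..., a_{s_e}) is a prefix of the cluster
    path pi, count how many consecutive nodes of pi the tree path P_e covers."""
    p_e = set(tree_path(adj, *e))
    s_e = 0
    while s_e + 1 < len(cluster_path) and cluster_path[s_e + 1] in p_e:
        s_e += 1
    return s_e

# Forest is the path 0-1-2-3-4; cluster path pi = (1, 2, 3).
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(reward(adj, [1, 2, 3], (0, 2)))  # 1: P_e = 0-1-2 covers (a_0, a_1)
print(reward(adj, [1, 2, 3], (0, 4)))  # 2: P_e = 0-1-2-3-4 covers all of pi
```

Since e has one endpoint in C_L and one in C_R, its tree path P_e always passes through the shared boundary node a_0, which is why P_e ∩ π is a prefix of π.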
Therefore, the "honest" prover will always try to maximize s_e. To summarize, after each update, O(log n) many new clusters are created. For each cluster, the verifier needs O(k) edges maximizing the rewards defined above. By, for example, concatenating the rewards of all O(k log n) edges into one number, the prover can provide these O(k log n) edges as desired. If at any time min_{e ∈ F} cover_{(G,F)}(e) ≥ k, then, with the help of the prover, the verifier can indeed verify that min_{e ∈ F} cover_{(G,F)}(e) ≥ k. Hence, k-MinTreeCut dy ∈ NP dy.

Part VI
Back to RAM
16 Completeness of DNF dy in RAM with Large Space

The main goal of this section is to prove that the NP dy -completeness of DNF dy also holds in the word-RAM model, in the setting where we allow quasi-polynomial preprocessing time and space. Given this main result, and since our other results never exploit the non-uniformity of the bit-probe model, we conclude that the other results in this paper also transfer to the word-RAM model.

The word-RAM model and complexity classes.
The standard assumption for most algorithms in the word-RAM model is that an index to a cell in memory fits in a word. More precisely, if the space is s, then the word size is w ≥ lg s. We also adopt this assumption here; so when the space can be s = 2^{polylog(n)}, the word size is w = polylog(n). Recall that the cost in the word-RAM model is the number of “standard” operations (e.g. addition, multiplication) on words. From now on, we just say the RAM model.

For each complexity class formally defined in the bit-probe model in this paper, we can easily define an analogous class in the word-RAM model. The only difference is that there is a notion of preprocessing time in the word-RAM model: when given a size-n instance, we allow algorithms to use 2^{polylog(n)} preprocessing time and space. To be more precise, we denote the analogue of the class P_dy in the RAM model by RAM−P_dy, and we do similarly for the other classes. As it would be very tedious to formally restate all the definitions of complexity classes in the word-RAM model, we omit them in this version.

The main result we prove in this section is as follows: Theorem 16.1.
If DNF_dy ∈ RAM−P_dy, then RAM−P_dy = RAM−NP_dy. Theorem 16.1 implies that, unless
RAM−P_dy = RAM−NP_dy, there is no algorithm for DNF_dy with polylogarithmic update time, even when allowing quasi-polynomial space and preprocessing time. It is clear that DNF_dy ∈ RAM−NP_dy. So, this implies that DNF_dy is RAM−NP_dy-complete in the sense of Definition 8.3.

Why one might believe that
RAM−P_dy ≠ RAM−NP_dy. Consider the following problem: we are given a graph G with n nodes undergoing edge insertions and deletions and, after each update, must check whether G contains a clique of size at least s = polylog(n). This problem is in RAM−NP_dy: in the preprocessing step, we trivially check every set of s nodes for being a clique; in each update step, given a proof indicating a set S of nodes forming a clique, we can check in polylogarithmic time whether S is indeed a clique. If RAM−NP_dy = RAM−P_dy, then this implies that we can do the following. Given an empty graph with n nodes, we preprocess this empty graph using quasi-polynomial space and time. Then, given an online sequence of arbitrary graphs G_1, G_2, . . . with n nodes and m edges, we can decide whether G_i contains a clique of size polylog(n), spending O(polylog(n)) time per edge update, before G_{i+1} arrives. This means that, by spending quasi-polynomial time and space on preprocessing, even without any knowledge about the graph, we can then solve the polylog(n)-clique problem in near-linear time, which would be quite surprising. Moreover, the same kind of argument applies to every problem in RAM−NP_dy.

The rest of this section is devoted to proving Theorem 16.1. The proof is essentially the same as that of the NP_dy-completeness of DNF_dy in the bit-probe model, because all the reductions readily extend to the relaxed RAM model. Suppose that DNF_dy ∈ RAM−P_dy. Then there is a RAM−P_dy-algorithm O for the First Shallow Decision Tree (First-DT_dy) problem (recall the definition from Section 10.1); this is because all reductions in Section 11.1 extend to the RAM model. Let D ∈
RAM−NP_dy. Let V be the RAM−NP_dy-verifier for D. We want to devise a RAM−P_dy-algorithm A for D using V and O. The idea from here is simply to repeat the reduction showing that First-DT_dy is Probe−NP_dy-hard from Section 10.2. Most of the reductions readily extend to the RAM model; we only need to bound the preprocessing time (which also gives the bound on space). We describe how A works as follows.

Preprocessing.
In the preprocessing step, A is given an n-size initial instance I of the problem D. A calls V to preprocess I, which takes 2^{polylog(n)} time. We know that, when V is given an input in each update step from now on, V takes at most polylog(n) time. Hence, the way V operates can be described by a decision tree T_V. By simulating V step by step and branching whenever V reads any cell in the memory, A can construct the decision tree T_V in 2^{polylog(n)} time (as T_V can contain at most 2^{polylog(n)} nodes). Now, we construct the collection 𝒯_V = {T_π | π ∈ {0, 1}^{polylog(n)}}, where T_π is obtained from T_V by “fixing” the proof-update cell in mem_V to be π; see Section 10.3.2 for the same construction.

Next, we basically follow the same process as in Section 10.3.3. That is, we construct an initial instance I′ of the problem First-DT_dy (recall the definition of First-DT_dy from Section 10.1) to feed to the algorithm O. The approach is the same as in Section 10.3.2. Let I′ = (mem_{I′}, T_{I′}), where mem_{I′} = mem_V(0) is the memory state of V after V preprocesses I, and T_{I′} = 𝒯_V. Observe that the instance I′ can be described using 2^{polylog(n)} bits, and I′ can be constructed in 2^{polylog(n)} time as well. Now, we call O to preprocess I′. This takes 2^{polylog(2^{polylog(n)})} = 2^{polylog(n)} time. In total, the preprocessing time is 2^{polylog(n)}.

Update.
We follow the same process as in Section 10.3.3. That is, at step t, given an instance-update u(I_{t−1} → I_t) of the problem D, A does the following:

1. A writes u(I_{t−1} → I_t) in the input-memory part mem_V^in of V.
2. A calls O |u(I_{t−1} → I_t)| = λ_D(n) many times to update mem_{I′} = mem_V.
3. O returns the guaranteed proof-update π_t.
4. A calls V with input (u(I_{t−1} → I_t), π_t).
5. A calls O polylog(n) many times to update mem_{I′} = mem_V.
6. A outputs the same output as V.

As V always gets the guaranteed proof-update, the answer from V is correct, and so is that of A. Now, we analyze the update time. Each call to O takes polylog(2^{polylog(n)}) = polylog(n) time. The total number of calls is polylog(n), so the total time is polylog(n); other operations are subsumed by this.

To conclude, A takes 2^{polylog(n)} preprocessing time and polylog(n) update time, and A returns a correct answer for D at every step. Therefore, D ∈
RAM−P_dy. Hence, RAM−P_dy = RAM−NP_dy.

Acknowledgement
Nanongkai and Saranurak thank Thore Husfeldt for bringing [HR03] to their attention. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 715672. Nanongkai and Saranurak were also partially supported by the Swedish Research Council (Reg. No. 2015-04659).
Part VII
Appendices
A Extending our results to promise problems
For simplicity of exposition, we focussed on decision problems throughout this paper. However, almost all the results derived in this paper, except the ones from Section 9.1, hold for decision problems with a promise as well. The results from Section 9.1 do not hold for promise problems for the following reason: Lemma 9.5 about NP_dy ∩ coNP_dy does not hold for promise problems (see Appendix D for a deeper explanation). For a similar reason, Propositions 12.4 and 12.5 do not hold for promise problems.

B Proofs: putting problems into classes
B.1
BPP_dy

Proposition B.1.
Dynamic connectivity is in
BPP_dy.

Proof. The goal of this proof is only to verify that the algorithm by Kapron, King and Mountjoy [KKM13] really satisfies our definition of
BPP_dy in Definition 12.1. To point out some small differences: by Definition 12.1, the algorithm needs to handle an infinite sequence of updates, and the algorithm should be correct on each update with good probability. However, by [KKM13], there is a dynamic connectivity algorithm such that, on n-node graphs, the algorithm is correct on all of the first poly(n) updates with probability 1 − 1/n^c for any constant c. This algorithm has ˜O(cm) = ˜O(cn^2) preprocessing time if the initial graph has m edges, and it has ˜O(1) update time.

Because the preprocessing time is small enough, we can apply the standard technique of periodically rebuilding the data structure every, say, ˜O(n^2) updates. The work for rebuilding the data structure is spread over many updates so that the worst-case update time is still ˜O(1). For each period, the answers to all updates are correct with high probability; hence each individual answer is trivially correct with high probability as well. This holds for every period, so dynamic connectivity is in BPP_dy.

(Footnote: A decision problem with a promise allows for “don’t care” instances, in addition to YES and NO instances. An algorithm for such a problem is allowed to provide any arbitrary output on don’t care instances.)

Proposition B.2.
For any constant ǫ > 0, there is a dynamic algorithm that can maintain a (2 + ǫ)-approximation of the size of the minimum cut in an unweighted graph with n nodes undergoing edge insertions and deletions with O(polylog(n)) update time. The answer at each step is correct with probability at least 1 − 1/poly(n). In particular, the dynamic (2 + ǫ)-approximate mincut problem is in BPP_dy.

Proof sketch. In [TK00], Karger and Thorup show that a dynamic algorithm for (2 + ǫ)-approximating the size of the minimum cut with polylogarithmic worst-case update time can be reduced to a dynamic algorithm for maintaining a minimum spanning forest in a graph of size n where the weight of each edge is between 1 and polylog(n). Kapron, King and Mountjoy [KKM13] give an algorithm for this problem with polylogarithmic worst-case update time, and their algorithm is correct at each step with high probability.

B.2 NP_dy

In this section, we show that all the problems listed in Table 2 belong to the class NP_dy. In particular, we need to show that all these problems admit verifier-families satisfying Definition 7.3. We say that the proof at each step is given to the verifier by a prover P. We think of ourselves as the verifier V. At each step t, the prover gives us some proof π_t. At some step, the prover also claims that the current instance is a YES instance, and must convince us that this is indeed the case.

Proposition B.3.
The dynamic graph connectivity problem is in NP_dy.

Proof. (Sketch) This problem is in NP_dy due to the following verifier. After every update, the verifier, with proofs from the prover, maintains a forest F of G. A proof (given after each update) is either an edge insertion into F or a ⊥ symbol indicating that there is no update to F. The verifier handles each update as follows.

• After an edge e is inserted into G, the verifier checks if e can be inserted into F without creating a cycle. This can be done in O(polylog(n)) time using a link/cut tree data structure [ST81]. It outputs reward y = 0. (No proof from the prover is needed in this case.)

• After an edge e is deleted from G, the verifier checks if F contains e. If not, it outputs reward y = 0 (no proof from the prover is needed in this case). If e is in F, the verifier reads the proof (given after e is deleted). If the proof is ⊥, it outputs reward y = 0. Otherwise, let the proof be an edge e′. The verifier checks if F′ = F \ {e} ∪ {e′} is a forest; this can be done in O(polylog(n)) time using a link/cut tree data structure [ST81]. If F′ is a forest, the verifier sets F ← F′ and outputs reward y = 1; otherwise, it outputs reward y = −1.

(Footnote: In [KKM13], they only claim that their algorithm can handle polynomially many updates. But this restriction can be easily avoided using a standard trick of periodically rebuilding the data structure.)

After getting two nodes u, v in G as part of a query, the verifier outputs x = YES if and only if both u and v belong to the same component of F (this again can be checked in O(polylog(n)) time using a link/cut tree data structure).

Observe that if the prover gives a proof that maximizes the reward after every update, the forest F will always be a spanning forest (since inserting an edge e′ into F has a higher reward than giving ⊥ as a proof). Thus, the verifier will always output x = YES for YES-instances in this case.
It is not hard to see that the verifier never outputs x = YES for NO-instances, no matter what the prover does.
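To make the bookkeeping concrete, here is a toy Python rendering of this verifier. All names are our own, and the naive union-find scans stand in for the link/cut trees of [ST81] (so each operation costs near-linear rather than O(polylog(n)) time); only the reward logic is the point of the sketch:

```python
class ConnectivityVerifier:
    """Toy sketch of the NP_dy verifier for dynamic connectivity.
    It maintains a forest F of G; the prover's proof after a deletion
    is either a replacement edge e' or None (playing the role of ⊥).
    For brevity we do not re-check that a proof edge is present in G."""

    def __init__(self, n):
        self.n = n
        self.forest = set()  # edge set of the maintained forest F

    def _find(self, parent, x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def _is_forest(self, edges):
        parent = list(range(self.n))
        for u, v in edges:
            ru, rv = self._find(parent, u), self._find(parent, v)
            if ru == rv:
                return False  # edge (u, v) would close a cycle
            parent[ru] = rv
        return True

    def insert(self, e):
        # e was inserted into G: add it to F if F stays acyclic.
        if self._is_forest(self.forest | {e}):
            self.forest.add(e)
        return 0  # reward y = 0; no proof needed

    def delete(self, e, proof=None):
        # e was deleted from G.
        if e not in self.forest:
            return 0                      # reward y = 0
        self.forest.remove(e)
        if proof is None:                 # proof was the symbol ⊥
            return 0                      # reward y = 0
        if self._is_forest(self.forest | {proof}):
            self.forest.add(proof)        # F <- F \ {e} ∪ {e'}
            return 1                      # reward y = 1
        return -1                         # invalid proof: reward y = -1

    def connected(self, u, v):
        # Query: answer YES iff u and v lie in the same component of F.
        parent = list(range(self.n))
        for a, b in self.forest:
            parent[self._find(parent, a)] = self._find(parent, b)
        return self._find(parent, u) == self._find(parent, v)
```

For example, on the path 0-1-2-3, deleting edge (1, 2) with proof (0, 3) earns reward 1 and keeps F spanning, while a lazy ⊥ proof earns reward 0 and may leave F non-spanning.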
Proposition B.4.
The dynamic problem of maintaining a (1 + ǫ)-approximation to the size of the maximum matching is in NP_dy.

Proof. (Sketch) We consider the gap version of the problem. Let G be an n-node graph undergoing edge updates and let k be some number. Let opt(G) be the size of a maximum matching in G. We say that G is a YES-instance if opt(G) > (1 + ǫ)k, and a NO-instance if opt(G) ≤ k. Suppose that M is a matching of size |M| ≤ k. If opt(G) > (1 + ǫ)k, then it is well known that there is an augmenting path P for M of length at most O(1/ǫ). Now, our verifier for this problem works as follows.

In the preprocessing step, we compute a maximum matching M of G. After each edge update, the prover gives us an augmenting path P for M of length O(1/ǫ). We (as the verifier) can check whether P is indeed an augmenting path for M of length O(1/ǫ) in O(1/ǫ) time. Then we do the following.

• If P is not an augmenting path, then we output reward y = −1.
• Else, if P is a valid and nontrivial augmenting path of length O(1/ǫ), then we output reward y = 1. In this case, we also augment M along P in O(1/ǫ) time.
• Otherwise, if P is a valid but trivial augmenting path (i.e., it has length zero), then we output reward y = 0.

At this point, we check the size of our matching M. If we find that |M| > k, then we (as the verifier) output x = 1 (YES). Otherwise, we output x = 0 (NO).

If opt(G) ≤ k, then obviously we always output x = NO. If opt(G) > (1 + ǫ)k and the prover so far has always maximized the reward y (by giving us an augmenting path of length O(1/ǫ) whenever possible), then it must follow that |M| > k, and so we will output x = YES. This shows that this problem is in NP_dy.

Proposition B.5.
The following dynamic problems from Table 2 are in NP_dy:

• subgraph detection,
• uMv,
• 3SUM,
• planar nearest neighbor,
• Erickson’s problem,
• Langerman’s problem.

Proof. (Sketch) The general idea for showing that the above problems are in NP_dy is as follows. The prover is supposed to say nothing when the current instance is a NO-instance. On the other hand, when the current instance is a YES-instance, the prover is supposed to give the whole certificate (which is small), and we quickly verify the validity of this certificate.

(subgraph detection): This problem is in NP_dy due to the following verifier: the verifier outputs x = YES if and only if the proof (given after each update) is a mapping of the edges of H to the edges of a subgraph of G that is isomorphic to H. With output x = YES, the verifier gives reward y = 1; with output x = NO, the verifier gives reward y = 0. Observe that the proof is of polylogarithmic size (since |V(H)| = polylog(|V(G)|)), and the verifier can calculate its outputs (x, y) in polylogarithmic time. Observe further that:

• (1) If the current input instance is a YES-instance, then the reward-maximizing proof is a mapping between H and the subgraph of G isomorphic to H, causing the verifier to output x = YES.
• (2) If the current input instance is a NO-instance, then no proof will make the verifier output x = YES.

The arguments for the problems below are exactly similar to the one sketched above.

(uMv): After each update, if uMv = 1, then the prover gives indices i and j, and we check whether u_i · M_ij · v_j = 1.

(3SUM): After each update, if there are a, b, c ∈ S with a + b = c, then the prover just gives pointers to a, b and c.

(planar nearest neighbor): We consider the decision version of the problem. Given a queried point q, we ask if there is a point p with d(p, q) ≤ k for some k.
After each update, if there is a point p with d(p, q) ≤ k, then the prover gives the point p to us, and we verify that d(p, q) ≤ k in O(1) time.

(Erickson’s problem): We consider the decision version of the problem. Given a queried value α, we need to answer whether there is an entry in the matrix with value at least α. After each update, if there is an entry in the matrix with value at least α, then the prover simply points to that entry in the matrix.

(Langerman’s problem): After each update, if there is an index k such that Σ_{i=1}^{k} A[i] = 0, then the prover simply points to such an index k. We (as the verifier) then check whether it is indeed the case that Σ_{i=1}^{k} A[i] = 0. This takes O(log n) time if we use a standard balanced tree data structure built on top of the array A[1 . . . n].

B.3 PH_dy

In this section, we show that all the problems listed in Table 4 are in PH_dy.

Small dominating set:
An instance of this problem consists of an undirected graph G = (V, E) on |V| = N nodes (say) and a parameter k = O(polylog(N)). It is a YES instance if there is a dominating set S ⊆ V in G of size at most k, and a NO instance otherwise. An instance-update consists of inserting/deleting an edge in the graph G.

Lemma B.6.
The small dominating set problem described above is in the complexity class Σ_2^{dy}.

Proof. (Sketch) We first define a new dynamic problem D as follows.

• D: An instance of D is an ordered pair (G, S), where G = (V, E) is a graph on |V| = N nodes (say), and S ⊆ V is a subset of nodes of G of size at most k (i.e., |S| ≤ k). It is a YES instance if S is not a dominating set of G, and a NO instance otherwise. An instance-update consists of either inserting/deleting an edge in G, or inserting/deleting a node in S (the node-set V remains unchanged).

It is easy to check that the problem D is in NP_dy as per Definition 7.3. Specifically, in this problem a proof π_t for the verifier encodes a node v ∈ V. Upon receiving this proof, the verifier checks whether or not v ∉ S, and it also goes through all the neighbors of v in S (which takes O(polylog(N)) time since |S| ≤ k = O(polylog(N))). The verifier then outputs (x_t, y_t) where x_t = y_t ∈ {0, 1}, and x_t = 1 iff v ∉ S and v does not have any neighbor in S (YES instance).

The lemma holds since the small dominating set problem is in NP_dy if the verifier V′ (for small dominating set) can use an oracle for the problem D. We now explain why this is the case. Specifically, a proof π′_t for V′ encodes a subset S ⊆ V of nodes of G of size at most k. Since k = O(polylog(N)), the proof π′_t also requires only O(polylog(N)) bits. Upon receiving this proof π′_t, the verifier V′ calls the oracle for D on the instance (G, S). If this oracle returns a NO answer, then we know that S is a dominating set in G, and the verifier V′ outputs (x′_t, y′_t) where x′_t = y′_t = 1. Otherwise, if the oracle returns YES, then the verifier V′ outputs (x′_t, y′_t) where x′_t = y′_t = 0.

Small vertex cover:
An instance of this problem consists of an undirected graph G = (V, E) on |V| = N nodes (say) and a parameter k = O(polylog(N)). It is a YES instance if there is a vertex cover S ⊆ V in G of size at most k, and a NO instance otherwise. An instance-update consists of inserting/deleting an edge in the graph G.

Lemma B.7.
The small vertex cover problem described above is in the complexity class Σ_2^{dy}.

Proof. (Sketch) We first define a new dynamic problem D as follows.

• D: An instance of D is an ordered pair (G, S), where G = (V, E) is a graph on |V| = N nodes (say), and S ⊆ V is a subset of nodes of G. It is a YES instance if S is not a vertex cover of G, and a NO instance otherwise. An instance-update consists of either inserting/deleting an edge in G, or inserting/deleting a node in S (the node-set V remains unchanged).

It is easy to check that the problem D is in NP_dy as per Definition 7.3. Specifically, in this problem a proof π_t for the verifier encodes an edge (v, u) ∈ E. The verifier outputs (x_t, y_t) where x_t = y_t ∈ {0, 1}, and x_t = 1 iff u ∉ S and v ∉ S (YES instance).

The lemma holds since the small vertex cover problem is in NP_dy if the verifier V′ (for small vertex cover) can use an oracle for the problem D. We now explain why this is the case. Specifically, a proof π′_t for V′ encodes a subset S ⊆ V of nodes of G of size at most k. Since k = O(polylog(N)), the proof π′_t also requires only O(polylog(N)) bits. Upon receiving this proof π′_t, the verifier V′ calls the oracle for D on the instance (G, S). If this oracle returns a NO answer, then we know that S is a vertex cover in G, and the verifier V′ outputs (x′_t, y′_t) where x′_t = y′_t = 1. Otherwise, if the oracle returns YES, then the verifier V′ outputs (x′_t, y′_t) where x′_t = y′_t = 0.

Small maximal independent set:
An instance of this problem consists of an undirected graph G = (V, E) on |V| = N nodes (say) and a parameter k = O(polylog(N)). It is a YES instance if there is a maximal independent set S ⊆ V in G of size at most k, and a NO instance otherwise. An instance-update consists of inserting/deleting an edge in the graph G.

Lemma B.8. The small maximal independent set problem described above is in the complexity class Σ_2^{dy}.

Proof. (Sketch) We first define a new dynamic problem D as follows.

• D: An instance of D is an ordered pair (G, S), where G = (V, E) is a graph on |V| = N nodes (say), and S ⊆ V is a subset of nodes of G of size at most k (i.e., |S| ≤ k). It is a YES instance if S is not a maximal independent set of G, and a NO instance otherwise. An instance-update consists of either inserting/deleting an edge in G, or inserting/deleting a node in S (the node-set V remains unchanged).

It is easy to check that the problem D is in NP_dy as per Definition 7.3. Specifically, in this problem a proof π_t for the verifier encodes a node v ∈ V. Upon receiving this proof, the verifier checks whether or not v ∉ S, and it also goes through all the neighbors of v in S (which takes O(polylog(N)) time since |S| ≤ k = O(polylog(N))). The verifier then outputs (x_t, y_t) where x_t = y_t ∈ {0, 1}, and x_t = 1 iff either (1) v ∉ S and v does not have any neighbor in S, or (2) v ∈ S and v has at least one neighbor in S (YES instance).

The lemma holds since the small maximal independent set problem is in NP_dy if the verifier V′ (for small maximal independent set) can use an oracle for the problem D. We now explain why this is the case. Specifically, a proof π′_t for V′ encodes a subset S ⊆ V of nodes of G of size at most k. Since k = O(polylog(N)), the proof π′_t also requires only O(polylog(N)) bits. Upon receiving this proof π′_t, the verifier V′ calls the oracle for D on the instance (G, S).
If this oracle returns a NO answer, then we know that S is a maximal independent set in G of size at most k, and the verifier V′ outputs (x′_t, y′_t) where x′_t = y′_t = 1. Otherwise, if the oracle returns YES, then the verifier V′ outputs (x′_t, y′_t) where x′_t = y′_t = 0.

Small maximal matching:
An instance of this problem consists of an undirected graph G = (V, E) on |V| = N nodes (say) and a parameter k = O(polylog(N)). It is a YES instance if there is a maximal matching M ⊆ E in G of size at most k (i.e., |M| ≤ k), and a NO instance otherwise. An instance-update consists of inserting/deleting an edge in the graph G.

Lemma B.9.
The small maximal matching problem described above is in the complexity class Σ_2^{dy}.

Proof. (Sketch) We first define a new dynamic problem D as follows.

• D: An instance of D is an ordered pair (G, M), where G = (V, E) is a graph on |V| = N nodes (say), and M ⊆ E is a matching in G of size at most k (i.e., |M| ≤ k). It is a YES instance if M is not a maximal matching in G, and a NO instance otherwise. An instance-update consists of either inserting/deleting an edge in G, or inserting/deleting an edge in M (ensuring that M remains a matching in G).

It is easy to check that the problem D is in NP_dy as per Definition 7.3. Specifically, in this problem a proof π_t for the verifier encodes an edge (u, v) ∈ E. Upon receiving this proof, the verifier checks whether or not both endpoints u, v are currently matched in M. The verifier then outputs (x_t, y_t) where x_t = y_t ∈ {0, 1}, and x_t = 1 iff both u and v are currently unmatched in M (YES instance).

The lemma holds since the small maximal matching problem is in NP_dy if the verifier V′ (for small maximal matching) can use an oracle for the problem D. We now explain why this is the case. Specifically, a proof π′_t for V′ encodes a matching M ⊆ E in G of size at most k. Since k = O(polylog(N)), the proof π′_t also requires only O(polylog(N)) bits. Upon receiving this proof π′_t, the verifier V′ calls the oracle for D on the instance (G, M). If this oracle returns a NO answer, then we know that M is a maximal matching in G of size at most k, and the verifier V′ outputs (x′_t, y′_t) where x′_t = y′_t = 1. Otherwise, if the oracle returns YES, then the verifier V′ outputs (x′_t, y′_t) where x′_t = y′_t = 0.

Chan’s subset union problem:
An instance of this problem consists of a collection of sets X_1, . . . , X_n from the universe [m], and a set S ⊆ [n]. It is a YES instance if ∪_{i∈S} X_i = [m], and a NO instance otherwise. An instance-update consists of inserting/deleting an index in S.

Lemma B.10.
Chan’s subset union problem described above is in the complexity class Π_2^{dy}.

Proof. (Sketch) We first define a new dynamic problem D as follows.

• D: An instance of D is a triple (X, S, e), where X = {X_1, . . . , X_n} is a collection of n sets from the universe [m], S ⊆ [n] is a set of indices, and e ∈ [m] is an element of the universe [m]. It is a YES instance if e ∈ ∪_{i∈S} X_i, and a NO instance otherwise. An instance-update consists of inserting/deleting an index in S and/or changing the element e.

It is easy to check that the problem D is in NP_dy as per Definition 7.3. Specifically, in this problem a proof π_t for the verifier encodes an index j ∈ S. The verifier outputs (x_t, y_t) where x_t = y_t ∈ {0, 1}, and x_t = 1 iff e ∈ X_j.

The lemma holds since Chan’s subset union problem is in coNP_dy if the verifier V′ (for Chan’s subset union) can use an oracle for the problem D. We now explain why this is the case. Specifically, a proof π′_t for V′ encodes an element e ∈ [m]. Upon receiving this proof π′_t, the verifier V′ calls the oracle for D on the instance (X, S, e). If this oracle returns a NO answer, then we know that e ∉ ∪_{i∈S} X_i, and hence the verifier V′ outputs (x′_t, y′_t) where x′_t = 0 and y′_t = 1. Otherwise, if the oracle returns YES, then the verifier V′ outputs (x′_t, y′_t) where x′_t = 1 and y′_t = 0.

3-vs-4 diameter: An instance of this problem consists of an undirected graph G = (V, E) on |V| = N nodes (say). It is a YES instance if the diameter of G is at most 3, and a NO instance if the diameter of G is at least 4. An instance-update consists of inserting/deleting an edge in G.

Lemma B.11.
The 3-vs-4 diameter problem described above is in the complexity class Π_2^{dy}.

Proof. (Sketch) We first define a new dynamic problem D as follows.

• D: An instance of D is a triple (G, u, v), where G = (V, E) is a graph on |V| = N nodes (say), and u, v ∈ V are two nodes of G. It is a YES instance if the distance (length of the shortest path) between u and v is at most 3, and a NO instance otherwise. An instance-update consists of inserting/deleting an edge in the graph G and/or changing the nodes u, v (the node-set V remains unchanged).

It is easy to check that the problem D is in NP_dy as per Definition 7.3. Specifically, in this problem a proof π_t for the verifier encodes a path P between u and v in G. The verifier outputs (x_t, y_t) where x_t = y_t ∈ {0, 1}, and x_t = 1 iff the length of P is at most 3 (YES instance).

The lemma holds since the 3-vs-4 diameter problem is in coNP_dy if the verifier V′ (for 3-vs-4 diameter) can use an oracle for the problem D. We now explain why this is the case. Specifically, a proof π′_t for V′ encodes two nodes u, v ∈ V of G. Upon receiving this proof π′_t, the verifier V′ calls the oracle for D on the instance (G, u, v). If this oracle returns a NO answer, then we know that the distance between u and v is at least 4 (hence, the diameter of G is also at least 4), and the verifier V′ outputs (x′_t, y′_t) where x′_t = 0 and y′_t = 1. Otherwise, if the oracle returns YES, then the verifier V′ outputs (x′_t, y′_t) where x′_t = 1 and y′_t = 0.

Euclidean k-center: An instance of this problem consists of a set of points X ⊆ R^d and a threshold T ∈ R. It is a YES instance if there is a subset C ⊆ X of points of size at most k (i.e., |C| ≤ k) such that every point in X is within distance T of some point in C; and it is a NO instance otherwise. The parameter k is such that k = O(polylog(n)), where n bits are needed to encode an instance. An instance-update consists of inserting or deleting a point in X.
Lemma B.12.
The Euclidean k-center problem described above is in the complexity class Σ_2^{dy}.

Proof. (Sketch) We first define a new dynamic problem D as follows.

• D: An instance of D is a triple (X, C, T), where X ⊆ R^d is a set of points, C ⊆ X is a subset of points of size at most k (i.e., |C| ≤ k), and T ∈ R is a threshold. It is a YES instance if there is a point x ∈ X that is more than distance T away from every point in C, and a NO instance otherwise. An instance-update consists of inserting/deleting a point in X and/or inserting/deleting a point in C (ensuring that C remains a subset of X).

It is easy to check that the problem D is in NP_dy as per Definition 7.3. Specifically, in this problem a proof π_t for the verifier encodes a point p ∈ X \ C. The verifier outputs (x_t, y_t) where x_t = y_t ∈ {0, 1}, and x_t = 1 iff the point p is more than distance T away from every point in C (YES instance). Note that the verifier can check whether this is the case in O(k) = O(polylog(n)) time, since there are at most k points in C.

The lemma holds since the Euclidean k-center problem is in NP_dy if the verifier V′ (for Euclidean k-center) can use an oracle for the problem D. We now explain why this is the case. Specifically, a proof π′_t for V′ encodes a subset C ⊆ X of size at most k. Upon receiving this proof π′_t, the verifier V′ calls the oracle for D on the instance (X, C, T). If this oracle returns a NO answer, then we know that every point in X is within distance T of some point in C, and the verifier V′ outputs (x′_t, y′_t) where x′_t = y′_t = 1. Otherwise, if the oracle returns YES, then the verifier V′ outputs (x′_t, y′_t) where x′_t = y′_t = 0.

k-edge connectivity: An instance of this problem consists of an undirected graph G = (V, E) on |V| = N nodes (say). It is a YES instance if the graph G is k-edge connected, and a NO instance otherwise. We assume that k = O(polylog(N)).
An instance-update consists of inserting/deleting an edge in G (the node-set V remains unchanged).

Lemma B.13.
The k-edge connectivity problem described above is in the complexity class Π_2^{dy}.

Proof. (Sketch) The lemma holds since the k-edge connectivity problem is in coNP_dy if the verifier V′ (for k-edge connectivity) can use an oracle for the dynamic connectivity problem (which is in NP_dy). We now explain why this is the case. Specifically, a proof π′_t for V′ encodes a subset E′ ⊆ E of at most k edges of G. Upon receiving this proof π′_t, the verifier V′ calls the oracle for dynamic connectivity on the instance G′ = (V, E \ E′). If this oracle returns a NO answer, then we know that the graph G becomes disconnected if we delete the (at most k) edges of E′ from G, and hence the graph G is not k-edge connected. Therefore, in this scenario the verifier V′ outputs (x′_t, y′_t) where x′_t = 0 and y′_t = 1. Otherwise, if the oracle returns YES, then the verifier V′ outputs (x′_t, y′_t) where x′_t = 1 and y′_t = 0.

C Reformulations of DNF_dy: proof

Proposition C.1 (Restatement of Proposition 11.9). An algorithm with update and query time at most O(u(m)) for any one of the following problems implies algorithms with the same update and query time for all the other problems:

1. DNF_dy on a formula F with m clauses,
2. Allwhite_dy on a graph G = (L, R, E) where |R| = m,
3. Indep_dy on a hypergraph H with m edges, and
4. OV_dy on a matrix V ∈ {0, 1}^{n×m}.

Proof. We will show how to map any instance of each problem to an instance of another problem. It is clear from the mappings how we should handle the updates.

(1 → 2): Given an instance (F, φ) of DNF_dy where F has n variables and m clauses, we construct G = (L, R, E) where |L| = 2n and |R| = m. We write L = (l_1, . . . , l_{2n}) and R = (r_1, . . . , r_m). If a clause C_j contains x_i, then (l_{2i−1}, r_j) ∈ E. If C_j contains ¬x_i, then (l_{2i}, r_j) ∈ E. If φ(x_i) = 1, then l_{2i−1} is white and l_{2i} is black. Otherwise, l_{2i−1} is black and l_{2i} is white.
It is easy to see that F(φ) = 1 iff there is a node in R whose neighbors are all white.

(2 → 3): Given an instance G = (L, R, E) of Allwhite_dy where the nodes in L are colored, we construct a hypergraph H = (V, E_H) where V = L and, for each j, the edge e_j of E_H is the set of neighbors of r_j in G. Let S ⊆ V be the set corresponding to the white nodes in L. It holds that S is not independent in H iff there is a node in R whose neighbors are all white.

(3 → 1): Given an instance (H, S) of Indep_dy where H = (V, E), V = (v_1, ..., v_n) and E = (e_1, ..., e_m), we construct a formula F with variables X = (x_1, ..., x_n) and m clauses. For each e_j ∈ E, we construct a clause C_j where C_j contains x_i iff v_i ∈ e_j. For each v_i ∈ S, we set φ(x_i) = 1; otherwise φ(x_i) = 0. Therefore, F(φ) = 1 iff S is not independent in H.

(2 ⇄ 4): Given an instance G = (L, R, E) of Allwhite_dy where the nodes in L are colored, the corresponding instance of OV_dy is the matrix V ∈ {0,1}^{n×m} where V is the bi-adjacency matrix of G, i.e., v_{ij} = 1 iff (l_i, r_j) ∈ E. The vector u corresponds to the black nodes, i.e., u_i = 1 iff l_i is black. It holds that there is a node r_j ∈ R whose neighbors are all white iff r_j has no black neighbor iff there is a vector v_j ∈ V where u^T v_j = 0.

D An Issue Regarding Promise Problems and P_dy-reductions

Even, Selman and Yacobi [ESY84, Theorem 4] show that there is a promise problem P where P ∈ NP ∩ coNP but P is NP-hard under Turing reductions (see also the survey by Goldreich [Gol06]). As a P_dy-reduction can be viewed as a dynamic version of a Turing reduction, in this section we prove that there is a dynamic problem with similar properties:

Theorem D.1.
There is a promise problem D such that D ∈ NP_dy ∩ coNP_dy but D is NP_dy-hard.

Before giving the proof, we offer some discussion. These seemingly contradictory situations, for both static and dynamic problems, arise only because the reduction can "exploit" the don't-care instances of a promise problem D. If D does not contain any don't-care instance, then Theorem D.1 cannot hold assuming that NP_dy ⊄ coNP_dy. Suppose for contradiction that D ∈ NP_dy ∩ coNP_dy, D is NP_dy-hard, and D does not contain any don't-care instance. Then Proposition 8.2 (3) implies that NP_dy ⊆ NP_dy ∩ coNP_dy, contradicting the assumption that NP_dy ⊄ coNP_dy. In other words, assuming that NP_dy ⊄ coNP_dy, this explains why we need the condition that D does not contain any don't-care instance in Proposition 8.2 (3).

Now we prove Theorem D.1.

Proof.
The proof is analogous to that of the corresponding statement for static problems in [ESY84, Theorem 4]. The problem D is the problem where we are given two instances (F_1, φ_1), (F_2, φ_2) of the DNF_dy problem. The pair (F_1, φ_1), (F_2, φ_2) is a yes-instance if F_1(φ_1) = 1 and F_2(φ_2) = 0. It is a no-instance if F_1(φ_1) = 0 and F_2(φ_2) = 1. Otherwise, it is a don't-care instance. At each step, we can update either the variable assignment φ_1 or φ_2. It is easy to see that this problem is in NP_dy ∩ coNP_dy. Now, there is a P_dy-reduction from First-DNF_dy to D, essentially by the same proof as in Lemma 11.5, so we omit it in this version.

References

[Aar17] Scott Aaronson. P=?NP.
Electronic Colloquium on Computational Complexity(ECCC) , 24:4, 2017. 2[AB09] Sanjeev Arora and Boaz Barak.
Computational Complexity - A Modern Approach .Cambridge University Press, 2009. 9[ABDN18] Amir Abboud, Karl Bringmann, Holger Dell, and Jesper Nederlof. More consequencesof falsifying SETH and the orthogonal vectors conjecture. In
STOC, pages 253–266. ACM, 2018. 2 [AHdLT05] Stephen Alstrup, Jacob Holm, Kristian de Lichtenberg, and Mikkel Thorup. Maintaining information in fully dynamic trees with top trees.
ACM Trans. Algorithms ,1(2):243–264, 2005. 14, 52, 57[AOSS18] Sepehr Assadi, Krzysztof Onak, Baruch Schieber, and Shay Solomon. Fully dynamicmaximal independent set with sublinear update time. In
STOC , pages 815–826. ACM,2018. 13[AR05] Dorit Aharonov and Oded Regev. Lattice problems in NP ∩ coNP. J. ACM , 52(5):749–765, 2005. announced at FOCS’04. 2[AW14] Amir Abboud and Virginia Vassilevska Williams. Popular conjectures imply stronglower bounds for dynamic problems. In
FOCS , pages 434–443, 2014. 2, 10, 11, 15, 47[BC16] Aaron Bernstein and Shiri Chechik. Deterministic decremental single source shortestpaths: beyond the o(mn) bound. In
STOC , pages 389–397. ACM, 2016. 13[BC17] Aaron Bernstein and Shiri Chechik. Deterministic partially dynamic single sourceshortest paths for sparse graphs. In
SODA , pages 453–469. SIAM, 2017. 13[BCHN18] Sayan Bhattacharya, Deeparnab Chakrabarty, Monika Henzinger, and DanuponNanongkai. Dynamic algorithms for graph coloring. In
SODA , pages 1–20. SIAM,2018. 13[Ber17] Aaron Bernstein. Deterministic partially dynamic single source shortest paths inweighted graphs. In
ICALP , volume 80 of
LIPIcs , pages 44:1–44:14. Schloss Dagstuhl- Leibniz-Zentrum fuer Informatik, 2017. 13[BGS15] Surender Baswana, Manoj Gupta, and Sandeep Sen. Fully dynamic maximal matchingin o(log n) update time.
SIAM J. Comput., 44(1):88–113, 2015. Announced at FOCS'11. 13 [BHI18] Sayan Bhattacharya, Monika Henzinger, and Giuseppe F. Italiano. Deterministic fully dynamic data structures for vertex cover and matching.
SIAM J. Comput. , 47(3):859–887, 2018. 13[BHN16] Sayan Bhattacharya, Monika Henzinger, and Danupon Nanongkai. New deterministicapproximation algorithms for fully dynamic matching. In
STOC, pages 398–411. ACM, 2016. 13 [BHN17] Sayan Bhattacharya, Monika Henzinger, and Danupon Nanongkai. Fully dynamic approximate maximum matching and minimum vertex cover in O(log³ n) worst case update time. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2017, Barcelona, Spain, Hotel Porta Fira, January 16-19, pages 470–489, 2017. 14 [CDL+
16] Marek Cygan, Holger Dell, Daniel Lokshtanov, Dániel Marx, Jesper Nederlof, YoshioOkamoto, Ramamohan Paturi, Saket Saurabh, and Magnus Wahlström. On problemsas hard as CNF-SAT.
ACM Trans. Algorithms , 12(3):41:1–41:24, 2016. 2[CGI +
16] Marco L. Carmosino, Jiawei Gao, Russell Impagliazzo, Ivan Mihajlin, RamamohanPaturi, and Stefan Schneider. Nondeterministic extensions of the strong exponentialtime hypothesis and consequences for non-reducibility. In
ITCS , pages 261–270. ACM,2016. 3[Cha10] Timothy M. Chan. A dynamic data structure for 3-d convex hulls and 2-d nearestneighbor queries.
J. ACM , 57(3):16:1–16:15, 2010. 28[CKL17] Diptarka Chakraborty, Lior Kamma, and Kasper Green Larsen. Tight cell probebounds for succinct boolean matrix-vector multiplication.
CoRR , abs/1711.04467, 2017.To appear in STOC’18. 2[CN15] Timothy M. Chan and Yakov Nekrich. Towards an optimal method for dynamic planarpoint location. In
IEEE 56th Annual Symposium on Foundations of Computer Science,FOCS 2015, Berkeley, CA, USA, 17-20 October, 2015 , pages 390–409, 2015. 14[Con92] Anne Condon. The complexity of stochastic games.
Inf. Comput. , 96(2):203–224, 1992.2[Coo03] Stephen A. Cook. The importance of the P versus NP question.
J. ACM , 50(1):27–29,2003. 2[Coo06] Stephen Cook. The P versus NP problem.
The millennium prize problems , page 86,2006. 2[CT16] Timothy M. Chan and Konstantinos Tsakalidis. Optimal deterministic algorithms for2-d and 3-d shallow cuttings.
Discrete & Computational Geometry , 56(4):866–881,2016. 28[DKM +
15] Samir Datta, Raghav Kulkarni, Anish Mukherjee, Thomas Schwentick, and ThomasZeume. Reachability is in dynfo. In
ICALP (2) , volume 9135 of
Lecture Notes in Computer Science, pages 159–170. Springer, 2015. 12 [DW13] Evgeny Dantsin and Alexander Wolpert. Exponential complexity of satisfiability testing for linear-size boolean formulas. In
CIAC , volume 7878 of
Lecture Notes in Com-puter Science , pages 110–121. Springer, 2013. 2[DZ18] Yuhao Du and Hengjie Zhang. Improved algorithms for fully dynamic maximal inde-pendent set.
CoRR , abs/1804.08908, 2018. 13[EGIN92] David Eppstein, Zvi Galil, Giuseppe F. Italiano, and Amnon Nissenzweig.Sparsification-a technique for speeding up dynamic graph algorithms (extended ab-stract). In
FOCS , pages 60–69. IEEE Computer Society, 1992. 1, 51[EIT +
92] David Eppstein, Giuseppe F. Italiano, Roberto Tamassia, Robert Endre Tarjan, JefferyWestbrook, and Moti Yung. Maintenance of a minimum spanning forest in a dynamicplane graph.
J. Algorithms , 13(1):33–54, 1992. 14[ESY84] Shimon Even, Alan L. Selman, and Yacov Yacobi. The complexity of promise problemswith applications to public-key cryptography.
Information and Control , 61(2):159–173,1984. 68[Fre78] Michael L. Fredman. Observations on the complexity of generating quasi-gray codes.
SIAM J. Comput. , 7(2):134–146, 1978. 14, 24[Fre85] Greg N. Frederickson. Data structures for on-line updating of minimum spanning trees,with applications.
SIAM J. Comput. , 14(4):781–798, 1985. 1, 14, 51, 57[Fre97] Greg N. Frederickson. Ambivalent data structures for dynamic 2-edge-connectivity andk smallest spanning trees.
SIAM J. Comput. , 26(2):484–538, 1997. 14, 57[GGH18] Shafi Goldwasser, Ofer Grossman, and Dhiraj Holden. Pseudo-deterministic proofs. In
ITCS , volume 94 of
LIPIcs, pages 17:1–17:18. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2018. 49 [GHR91] Raymond Greenlaw, James Hoover, and Walter L. Ruzzo. A compendium of problems complete for P (preliminary). 1991. 12 [GIKW17] Jiawei Gao, Russell Impagliazzo, Antonina Kolokolova, and R. Ryan Williams. Completeness for first-order properties on sparse structures with algorithmic applications. In
SODA , pages 2162–2181. SIAM, 2017. 2[GK18] Manoj Gupta and Shahbaz Khan. Simple dynamic algorithms for maximal independentset and other problems.
CoRR , abs/1804.01823, 2018. 13[Gol06] Oded Goldreich. On promise problems: A survey. In
Essays in Memory of ShimonEven , volume 3895 of
Lecture Notes in Computer Science , pages 254–290. Springer,2006. 68[HdLT98] Jacob Holm, Kristian de Lichtenberg, and Mikkel Thorup. Poly-logarithmic determin-istic fully-dynamic algorithms for connectivity, minimum spanning tree, 2-edge, andbiconnectivity. In
STOC , pages 79–89. ACM, 1998. 1, 51, 53[HI02] William Hesse and Neil Immerman. Complete problems for dynamic complexity classes.In
LICS, page 313. IEEE Computer Society, 2002. 12 [HK99] Monika Rauch Henzinger and Valerie King. Randomized fully dynamic graph algorithms with polylogarithmic time per operation.
J. ACM, 46(4):502–516, 1999. Appeared in STOC'95. 1, 14, 52, 53, 56 [HKNS15] Monika Henzinger, Sebastian Krinninger, Danupon Nanongkai, and Thatchaphol Saranurak. Unifying and strengthening hardness for dynamic problems via the online matrix-vector multiplication conjecture. In
STOC , pages 21–30, 2015. 2, 13[HR03] Thore Husfeldt and Theis Rauhe. New lower bound techniques for dynamic partialsums and related problems.
SIAM J. Comput., 32(3):736–753, 2003. 8, 60 [HRT18] Jacob Holm, Eva Rotenberg, and Mikkel Thorup. Dynamic bridge-finding in Õ(log² n) amortized time. In SODA, pages 35–52. SIAM, 2018. 1 [IW97] Russell Impagliazzo and Avi Wigderson.
P = BPP if E requires exponential circuits:Derandomizing the XOR lemma. In STOC , pages 220–229. ACM, 1997. 2, 13[JMV15] Hamid Jahanjou, Eric Miles, and Emanuele Viola. Local reductions. In
ICALP (1) ,volume 9134 of
Lecture Notes in Computer Science , pages 749–760. Springer, 2015. 2[Kar99] David R. Karger. Random sampling in cut, flow, and network design problems.
Math.Oper. Res. , 24(2):383–413, 1999. 55[KKM13] Bruce M. Kapron, Valerie King, and Ben Mountjoy. Dynamic graph connectivity inpolylogarithmic worst case time. In
Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2013, New Orleans, Louisiana, USA,January 6-8, 2013 , pages 1131–1142, 2013. 1, 13, 51, 54, 56, 60, 61[KKPT16] Casper Kejlberg-Rasmussen, Tsvi Kopelowitz, Seth Pettie, and Mikkel Thorup. Fasterworst case deterministic dynamic connectivity. In
ESA , volume 57 of
LIPIcs , pages53:1–53:15. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2016. 51[KvM02] Adam R. Klivans and Dieter van Melkebeek. Graph nonisomorphism has subexpo-nential size proofs unless the polynomial-time hierarchy collapses.
SIAM J. Comput. ,31(5):1501–1526, 2002. announced at STOC’99. 2[Lad75] Richard E. Ladner. On the structure of polynomial time reducibility.
J. ACM ,22(1):155–171, 1975. 2, 13[Lar] Kasper Green Larsen. Logarithmic cell probe lower bounds for non-deterministic staticdata structures. 8[Lau83] Clemens Lautemann. BPP and the polynomial hierarchy.
Inf. Process. Lett. , 17(4):215–217, 1983. 13[LW17] Kasper Green Larsen and R. Ryan Williams. Faster online matrix-vector multiplication.In
SODA , pages 2182–2189. SIAM, 2017. 2[LWY18] Kasper Green Larsen, Omri Weinstein, and Huacheng Yu. Crossing the logarithmicbarrier for dynamic boolean data structure lower bounds.
STOC , 2018. 1[Mil99] Peter Bro Miltersen. Cell probe complexity - a survey. In
In 19th Conference on the Foundations of Software Technology and Theoretical Computer Science (FSTTCS), 1999. Advances in Data Structures Workshop, 1999. 9, 14, 24 [MP91] Nimrod Megiddo and Christos H. Papadimitriou. On total functions, existence theorems and computational complexity.
Theor. Comput. Sci. , 81(2):317–324, 1991. 49,50[MSS90] Satoru Miyano, Shuji Shiraishi, and Takayoshi Shoudai.
A list of P-complete problems .Kyushu Univ., Res. Inst. of Fundamental Information Science, 1990. 12[MSVT94] Peter Bro Miltersen, Sairam Subramanian, Jeffrey Scott Vitter, and Roberto Tamassia.Complexity models for incremental computation.
Theor. Comput. Sci., 130(1):203–236, 1994. 12 [NS17] Danupon Nanongkai and Thatchaphol Saranurak. Dynamic spanning forest with worst-case update time: adaptive, Las Vegas, and O(n^{1/2−ε})-time. In STOC, pages 1122–1129. ACM, 2017. 1, 13 [NSW17] Danupon Nanongkai, Thatchaphol Saranurak, and Christian Wulff-Nilsen. Dynamic minimum spanning forest with subpolynomial worst-case update time. In
FOCS , pages950–961. IEEE Computer Society, 2017. 1, 13, 51, 54[O’D14] Ryan O’Donnell.
Analysis of Boolean Functions . Cambridge University Press, NewYork, NY, USA, 2014. 45[OSSW18] Krzysztof Onak, Baruch Schieber, Shay Solomon, and Nicole Wein. Fully dynamicMIS in uniformly sparse graphs. In
ICALP , volume 107 of
LIPIcs , pages 92:1–92:14.Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2018. 13[Ove87] Mark H. Overmars.
Design of Dynamic Data Structures . Springer-Verlag New York,Inc., Secaucus, NJ, USA, 1987. 14[Pat08] Mihai Patrascu.
Lower Bound Techniques for Data Structures . PhD thesis, Cambridge,MA, USA, 2008. AAI0821553. 8[Pat10] Mihai Patrascu. Towards polynomial lower bounds for dynamic problems. In
STOC ,pages 603–610. ACM, 2010. 2, 15[PD06] Mihai Patrascu and Erik D. Demaine. Logarithmic lower bounds in the cell-probemodel.
SIAM J. Comput. , 35(4):932–963, 2006. announced at STOC’04 and SODA’04.1, 8[PT07] Mihai Patrascu and Mikkel Thorup. Planning for fast connectivity updates. In
FOCS ,pages 263–271. IEEE Computer Society, 2007. 52[Rei87] John H. Reif. A topological approach to dynamic graph connectivity.
Inf. Process.Lett. , 25(1):65–70, 1987. 12[Ric08] Elaine Rich.
Automata, computability and complexity: theory and applications . PearsonPrentice Hall Upper Saddle River, 2008. 49[RR96] G. Ramalingam and Thomas W. Reps. On the computational complexity of dynamicgraph problems.
Theor. Comput. Sci. , 158(1&2):233–277, 1996. 12[San07] Piotr Sankowski. Faster dynamic matchings and vertex connectivity. In
SODA, pages 118–126. SIAM, 2007. 1 [Sip83] Michael Sipser. A complexity theoretic approach to randomness. In
STOC , pages330–335. ACM, 1983. 13[SS12] Rahul Santhanam and Srikanth Srinivasan. On the limits of sparsification. In
ICALP(1) , volume 7391 of
Lecture Notes in Computer Science , pages 774–785. Springer, 2012.2[ST81] Daniel Dominic Sleator and Robert Endre Tarjan. A data structure for dynamic trees.In
STOC , pages 114–122. ACM, 1981. 7, 14, 52, 61[SW18] Shay Solomon and Nicole Wein. Improved dynamic graph coloring. In
ESA , volume112 of
LIPIcs , pages 72:1–72:16. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik,2018. 13[SW19] Thatchaphol Saranurak and Di Wang. Expander decomposition and pruning: Faster,stronger, and simpler. 2019. To appear in SODA’19. 51[SZ16] Thomas Schwentick and Thomas Zeume. Dynamic complexity: recent updates.
SIGLOG News , 3(2):30–52, 2016. 12[Tho00] Mikkel Thorup. Near-optimal fully-dynamic graph connectivity. In
STOC , pages 343–350. ACM, 2000. 1[Tho01] Mikkel Thorup. Fully-dynamic min-cut. In
STOC , pages 224–230. ACM, 2001. 1, 53,54, 55, 57[TK00] Mikkel Thorup and David R. Karger. Dynamic graph algorithms with applications. In
Algorithm Theory - SWAT 2000 , pages 1–9, Berlin, Heidelberg, 2000. Springer BerlinHeidelberg. 55, 61[Wil] R Ryan Williams. Some estimated likelihoods for computational complexity. 2[Wil13] Ryan Williams. Improving exhaustive search implies superpolynomial lower bounds.
SIAM J. Comput. , 42(3):1218–1244, 2013. Announced at STOC’10. 2[WS05] Volker Weber and Thomas Schwentick. Dynamic complexity theory revisited. In
STACS , volume 3404 of
Lecture Notes in Computer Science , pages 256–268. Springer,2005. 12[Wul17] Christian Wulff-Nilsen. Fully-dynamic minimum spanning forest with improved worst-case update time. In
STOC , pages 1130–1143. ACM, 2017. 1, 13[WY14] Yaoyu Wang and Yitong Yin. Certificates in data structures. In
ICALP (1) , volume8572 of
Lecture Notes in Computer Science , pages 1039–1050. Springer, 2014. 8[Yao82] Andrew Chi-Chih Yao. Theory and applications of trapdoor functions (extended ab-stract). In
FOCS , pages 80–91. IEEE Computer Society, 1982. 13[Yin10] Yitong Yin. Cell-probe proofs.