Commutative Action Logic
Stepan L. Kuznetsov
Steklov Mathematical Institute of RAS
February 24, 2021
Abstract
We prove undecidability and pinpoint the place in the arithmetical hierarchy for commutative action logic, that is, the equational theory of commutative residuated Kleene lattices (action lattices), and for infinitary commutative action logic, the equational theory of *-continuous commutative action lattices. Namely, we prove that the former is Σ⁰₁-complete and the latter is Π⁰₁-complete. Thus, the situation is the same as in the more well-studied non-commutative case. The methods used, however, are different: we encode infinite and circular computations of counter (Minsky) machines.

1 Introduction

The concept of action lattice, introduced by Pratt [19] and Kozen [7], combines several algebraic structures: a partially ordered monoid with residuals ("multiplicative structure"), a lattice ("additive structure") sharing the same partial order, and the Kleene star. (Pratt introduced the notion of action algebra, which bears only a semi-lattice structure with join, but not meet. Action lattices are due to Kozen.)
Definition.
An action lattice is a structure ⟨A; ⪯, ·, 1, 0, ⊸, ⟜, ∨, ∧, *⟩, where:

1. ⪯ is a partial order on A;
2. 0 is the smallest element for ⪯, that is, 0 ⪯ a for any a ∈ A;
3. ⟨A; ·, 1⟩ is a monoid;
4. ⊸ and ⟜ are residuals of the product (·) w.r.t. ⪯, that is: b ⪯ a ⊸ c ⟺ a · b ⪯ c ⟺ a ⪯ c ⟜ b;
5. ⟨A; ⪯, ∨, ∧⟩ is a lattice;
6. for each a ∈ A, a* = min⪯ { b | 1 ⪯ b and a · b ⪯ b }.

An important subclass of action lattices is formed by *-continuous action lattices.

Definition.
Action lattice A is *-continuous if for any a ∈ A we have a* = sup⪯ { a^n | n ≥ 0 }, where a^n = a · … · a (n times) and a^0 = 1.

Interesting examples of action lattices are mostly *-continuous; non-*-continuous action lattices also exist, but are constructed artificially.

The equational theory of the class of action lattices, or of one of its subclasses (e.g., the class of *-continuous action lattices), is the set of all statements of the form A ⪯ B, where A and B are formulae (terms) built from variables and the constants 0 and 1 using action lattice operations, which are true in any action lattice from the given class under any valuation of variables. More precisely, the previous sentence defines the inequational theory, but in the presence of lattice operations it is equivalent to the equational one: A ⪯ B can be equivalently represented as A ∨ B = B.

In a different terminology, equational theories of classes of action lattices are seen as algebraic logics. These logics are substructural, extending the multiplicative-additive ("full") Lambek calculus [14, 6], which is a non-commutative intuitionistic variant of Girard's linear logic [4].

The equational theory of all action lattices is called action logic and is denoted by ACT. For the subclass of *-continuous action lattices, the equational theory is infinitary action logic
ACTω, introduced by Buszkowski and Palka [1, 18, 2].

The interest in such a weak language—only (in)equations—is motivated by complexity considerations. Namely, for the next more expressive language, the language of Horn theories, the corresponding theory of the class of *-continuous action lattices is already Π¹₁-complete [8], that is, has a non-arithmetical complexity level. In contrast, ACTω is Π⁰₁-complete, as shown by Buszkowski and Palka [1, 18]. For the general case, ACT is Σ⁰₁-complete [9, 11], which is already the maximal possible complexity: iteration in action lattices in general allows a finite axiomatization, unlike the *-continuous situation, which requires infinitary mechanisms.

Kleene algebras and their extensions are used in computer science for reasoning about program correctness. In particular, elements of an action lattice are intended to represent types of actions performed by a computing system (say, transitions in a finite automaton). Multiplication corresponds to composition of actions, and the Kleene star is iteration (perform an action several times, maybe zero). Residuals represent conditional types of actions. An action of type a ⊸ b, being preceded by an action of type a, gives an action of type b. Dually, b ⟜ a is the type of actions which need to be followed by an action of type a in order to achieve b.

The monoid operation (multiplication) in action lattices is in general non-commutative, since so is, in general, composition of actions. However, in his original paper Pratt designates the subclass of commutative action algebras:

"A commutative action algebra is an action algebra satisfying ab = ba. Whereas action logic in general is neutral as to whether ab performs a and b sequentially or concurrently, commutative action logic in effect commits to concurrency."
[19]

Later on, commutative action algebras (lattices) were not studied systematically. Concurrent computations are usually treated using a more flexible approach, with a specific parallel execution connective, ‖, in the framework of concurrent Kleene algebras, CKA [5], and its extensions. In particular, the author is not aware of any study of equational theories (algebraic logics) for commutative action lattices.

Commutative versions of ACT and
ACTω are denoted by CommACT and
CommACTω, respectively. In this article, we prove undecidability and pinpoint the position in the arithmetical hierarchy for both CommACT and
CommACTω. Namely, we prove that:

1. CommACT is Σ⁰₁-complete;
2. CommACTω is Π⁰₁-complete.

The second result was presented at the 3rd DaLí Workshop and published in its proceedings [10]. The first result is new.

The rest of the article is organized as follows. We start with the *-continuous case. In Section 2 we present an infinitary sequent calculus for CommACTω and prove cut elimination and the Π⁰₁ upper bound. This construction basically copies Palka's [18] reasoning in the non-commutative case, for ACTω. Commutativity does not add anything significantly new here.

In contrast, for proving Π⁰₁-hardness (the lower bound), which is performed in Section 3, we could not have used Buszkowski's argument [1], since it uses a reduction from the totality problem for context-free grammars, which is intrinsically non-commutative. Instead, we use an encoding of 3-counter Minsky machines, which are commutative-friendly. The encoding of Minsky instructions and configurations is taken from the work of Lincoln et al. [15], with minor modifications. The principal difference from [15], however, is the usage of the Kleene star to model non-halting behaviour of Minsky machines (while Lincoln et al. use the exponential modality of linear logic for modelling halting computations).

In Section 4 we prove Σ⁰₁-completeness for CommACT by encoding circular behaviour of Minsky machines and using the technique of effective inseparability (Myhill's theorem). This argument is even more straightforward than the one from [9, 11], since we do not need intermediate context-free grammars.

Section 5 concludes the article by showing directions of further research in the area.
2 An Infinitary Sequent Calculus for CommACTω

We present an infinitary sequent calculus for
CommACTω, which is a commutative version of Palka's system for ACTω. Formulae of CommACTω are built from a countable set of variables Var = { p, q, r, … } and the constants 0 and 1, using four binary connectives, ⊸, ·, ∨, and ∧, and one unary connective, *. (Due to commutativity, B ⟜ A is always equivalent to A ⊸ B, so we have only one residual here.) Sequents are expressions of the form Γ ⊢ A, where Γ is a multiset of formulae (that is, the number of occurrences matters, while the order does not) and A is a formula. In our notations, capital Greek letters denote multisets of formulae and capital Latin letters denote formulae.

Axioms and inference rules of CommACTω are as follows:

  (Id)   A ⊢ A
  (0L)   Γ, 0 ⊢ C
  (1L)   from Γ ⊢ C infer Γ, 1 ⊢ C
  (1R)   ⊢ 1
  (⊸L)  from Π ⊢ A and Γ, B ⊢ C infer Γ, Π, A ⊸ B ⊢ C
  (⊸R)  from A, Π ⊢ B infer Π ⊢ A ⊸ B
  (·L)   from Γ, A, B ⊢ C infer Γ, A · B ⊢ C
  (·R)   from Π ⊢ A and Δ ⊢ B infer Π, Δ ⊢ A · B
  (∨L)   from Γ, A ⊢ C and Γ, B ⊢ C infer Γ, A ∨ B ⊢ C
  (∨R)   from Π ⊢ A infer Π ⊢ A ∨ B; from Π ⊢ B infer Π ⊢ A ∨ B
  (∧L)   from Γ, A ⊢ C infer Γ, A ∧ B ⊢ C; from Γ, B ⊢ C infer Γ, A ∧ B ⊢ C
  (∧R)   from Π ⊢ A and Π ⊢ B infer Π ⊢ A ∧ B
  (*Lω)  from Γ, A^n ⊢ C for all n ≥ 0 infer Γ, A* ⊢ C
  (*R_n) from Π_1 ⊢ A, …, Π_n ⊢ A infer Π_1, …, Π_n ⊢ A*, for each n ≥ 0
  (Cut)  from Π ⊢ A and Γ, A ⊢ C infer Γ, Π ⊢ C
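The multiset discipline on antecedents can be made concrete. The following minimal sketch is our own illustration (the tuple encoding of formulae and all names are ours, not the paper's): antecedents are represented by Python's Counter, so two sequents differing only in the order of formulae coincide, while multiplicities are distinguished; a small generator also enumerates the first few of the infinitely many premises of the ω-rule *Lω.

```python
from collections import Counter

# Formulae are modelled as nested tuples, e.g. ('*', 'a') for a* and
# ('mul', 'a', 'b') for a . b; this encoding is ours, for illustration only.

def sequent(antecedent, succedent):
    """A sequent Gamma |- A: the antecedent is a multiset (a Counter),
    so the order of formulae is irrelevant but multiplicities count."""
    return (Counter(antecedent), succedent)

s1 = sequent(['p', 'a', 'a', 'b'], ('*', 'a'))
s2 = sequent(['b', 'a', 'p', 'a'], ('*', 'a'))  # same multiset, reordered
s3 = sequent(['p', 'a', 'b'], ('*', 'a'))       # one copy of 'a' dropped

print(s1 == s2)  # True: order does not matter
print(s1 == s3)  # False: the number of occurrences does

def star_left_premises(gamma, a, c, upto):
    """The first `upto` premises (Gamma, A^n |- C), n = 0, 1, 2, ...,
    of the omega-rule *L, which has one premise for every n >= 0."""
    return [sequent(list(gamma) + [a] * n, c) for n in range(upto)]

print(len(star_left_premises(['p'], 'a', ('*', 'a'), 5)))  # -> 5
```

The choice of a multiset (rather than a list) is exactly what commutativity buys: exchange is built into the data structure instead of being a structural rule.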
The set of derivable sequents (theorems) is the smallest set which includes all instances of axioms and which is closed under the inference rules. Thus, derivation trees in
CommACTω may have infinite branching (at instances of *Lω, which is an ω-rule), but are required to be well-founded (infinite paths are forbidden).

Let us formulate several properties of CommACTω and give proof sketches, following Palka [18], but in the commutative setting. The proofs are essentially the same as Palka's; we give their sketches here in order to make this article logically self-contained.

The sequents of CommACTω presented above enjoy a natural algebraic interpretation on commutative action lattices. Namely, given an action lattice A, we interpret variables as arbitrary elements of A, by a valuation function v: Var → A, and then propagate this interpretation to formulae. Let us denote the interpretation of formula A under valuation v by v̄(A). A sequent of the form A_1, …, A_n ⊢ B (n ≥ 1) is true under this interpretation if v̄(A_1) · … · v̄(A_n) ⪯ v̄(B) (due to commutativity of ·, the order of the A_i's does not matter). For n = 0, the sequent ⊢ B is declared true if 1 ⪯ v̄(B). A soundness-and-completeness theorem holds:

Theorem 1.
A sequent is derivable in
CommACTω if and only if it is true in all commutative *-continuous action lattices under all valuations of variables.

Proof. The "only if" part (soundness) is proved by (transfinite) induction on the structure of the derivation. For the "if" part (completeness), we use the standard Lindenbaum–Tarski canonical model construction.

Thus,
CommACTω is indeed an axiomatization of the equational theory of commutative *-continuous action lattices.

In order to facilitate induction on derivations in the infinitary setting, we define the depth of a derivable sequent in the following way. For an ordinal α, let us define the set S_α by transfinite recursion:

  S_0 = ∅;
  S_{α+1} = { Γ ⊢ A | Γ ⊢ A is derivable by one rule application from sequents in S_α };
  S_λ = ⋃_{α<λ} S_α for λ ∈ Lim.

(In particular, S_1 is the set of all axioms of CommACTω.) For a derivable sequent Γ ⊢ A, let d(Γ ⊢ A) = min { α | (Γ ⊢ A) ∈ S_α } be its depth.

The complexity of a formula A is defined as the total number of subformula occurrences in it.

Theorem 2.
The calculus
CommACTω enjoys cut elimination, that is, any derivable sequent can be derived without using Cut.

Proof. First we eliminate one cut at the bottom of a derivation, that is, we show that if Π ⊢ A and Γ, A ⊢ C are cut-free derivable, then so is Γ, Π ⊢ C. This is established by triple induction on the following parameters: (1) the complexity of A; (2) the depth of Π ⊢ A; (3) the depth of Γ, A ⊢ C. See [18, Theorem 3.1] for details.

Next, let a sequent Γ ⊢ B be derivable using cuts. Let d(Γ ⊢ B) be its depth, counted for the calculus with Cut as an official rule. Let us show that Γ ⊢ B is cut-free derivable by induction on α = d(Γ ⊢ B). Notice that α is not a limit ordinal: otherwise, (Γ ⊢ B) ∈ S_β for some β < α. Also α ≠ 0. Thus, α = β + 1. The sequent Γ ⊢ B is immediately derivable, by one rule application, from a set of sequents from S_β, that is, of smaller depth. By the induction hypothesis, these sequents are cut-free derivable. Now consider the rule which was used to derive Γ ⊢ B. If it is not Cut, then Γ ⊢ B is also cut-free derivable. If it is Cut, we apply the reasoning from the beginning of this proof and establish cut-free derivability of Γ ⊢ B.

The situation with CommACT, the algebraic logic of all commutative action lattices, is different. This logic can be axiomatized, in the presence of
Cut, by the following two axioms and an inductive rule for iteration:

  ⊢ A*    and    A, A* ⊢ A*    (axioms *R)
  from ⊢ B and A, B ⊢ B infer A* ⊢ B    (rule *L-ind)

and the same rules for the other connectives as in CommACTω. This axiomatization of the Kleene star exactly corresponds to its definition as a* = min⪯ { b | 1 ⪯ b and a · b ⪯ b }. Thus, soundness and completeness are established by a standard Lindenbaum–Tarski argument:

Theorem 3.
CommACT if and only if it is true inall commutative action lattices under all valuations of variables.
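To see the defining clause a* = min⪯ { b | 1 ⪯ b and a · b ⪯ b } at work, here is a small sketch on a finite commutative action lattice. The model and all names are our own illustration, not part of the paper: we take the powerset of Z_6 under elementwise addition mod 6, ordered by inclusion, with unit {0}. Finite lattices are automatically *-continuous, so a* can be computed by iterating b ↦ 1 ∨ a · b until a fixpoint is reached.

```python
# Our own toy model (not from the paper): subsets of Z_M with elementwise
# addition mod M form a commutative action lattice under inclusion,
# with unit 1 = {0} and bottom 0 = the empty set.
M = 6

def prod(x, y):
    """Product of two elements: the set of all sums u + v mod M."""
    return frozenset((u + v) % M for u in x for v in y)

def residual(a, c):
    """The residual a -o c: the largest x with prod(a, x) <= c."""
    return frozenset(v for v in range(M)
                     if all((u + v) % M in c for u in a))

def star(a):
    """a* = min{ b | {0} <= b and prod(a, b) <= b }, by fixpoint iteration."""
    b = frozenset({0})                 # start from the unit
    while True:
        nxt = b | prod(a, b)           # close under 1 and a . b
        if nxt == b:
            return b
        b = nxt

print(sorted(star(frozenset({2}))))                         # -> [0, 2, 4]
print(sorted(star(frozenset({1}))))                         # -> [0, 1, 2, 3, 4, 5]
print(sorted(residual(frozenset({1}), frozenset({0, 1, 2}))))  # -> [0, 1, 5]
```

Because the lattice is finite, the iteration stabilizes after at most |A| steps, and the result coincides with sup { a^n | n ≥ 0 }, illustrating *-continuity.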
This calculus for
CommACT, however, does not enjoy cut elimination, and there is no known cut-free formulation of
CommACT.

Π⁰₁ Upper Bound for CommACTω

For
CommACT, there is a trivial Σ⁰₁ upper bound: any logic axiomatized by a calculus with finite proofs is recursively enumerable. In Section 4 we show that this complexity bound is exact, i.e., CommACT is Σ⁰₁-complete.

For CommACTω, the situation is different. In general, such a calculus with an ω-rule can be even Π¹₁-complete [13]. In the non-commutative case, however, the complexity is much lower: ACTω belongs to the Π⁰₁ complexity class [18]. We show that for CommACTω the situation is the same.

In order to prove that CommACTω belongs to the Π⁰₁ complexity class, we use Palka's *-elimination technique. For each sequent, we define its n-th approximation. Informally, we replace each negative occurrence of A* with A^{≤n} = 1 ∨ A ∨ A^2 ∨ … ∨ A^n. The n-th approximation of a sequent A_1, …, A_m ⊢ B is defined as N_n(A_1), …, N_n(A_m) ⊢ P_n(B), where the mappings N_n and P_n are defined by joint recursion:

  N_n(α) = P_n(α) = α,  for α ∈ Var ∪ {0, 1}
  N_n(A ⊸ B) = P_n(A) ⊸ N_n(B)        P_n(A ⊸ B) = N_n(A) ⊸ P_n(B)
  N_n(A · B) = N_n(A) · N_n(B)          P_n(A · B) = P_n(A) · P_n(B)
  N_n(A ∨ B) = N_n(A) ∨ N_n(B)          P_n(A ∨ B) = P_n(A) ∨ P_n(B)
  N_n(A ∧ B) = N_n(A) ∧ N_n(B)          P_n(A ∧ B) = P_n(A) ∧ P_n(B)
  N_n(A*) = 1 ∨ N_n(A) ∨ (N_n(A))^2 ∨ … ∨ (N_n(A))^n
  P_n(A*) = (P_n(A))*

(In Palka's notation, N and P are inverted.)

The *-elimination theorem, resembling Palka's [18] Theorem 5.1, is now formulated as follows:

Theorem 4.
A sequent is derivable in
CommACTω if and only if its n-th approximation is derivable in CommACTω for every n.

Proof. The "only if" part is easier. We establish by induction that A ⊢ P_n(A) and N_n(A) ⊢ A are derivable for any A: see [18, Lemma 4.3] for ACTω; commutativity does not alter this part of the proof. Next, we apply Cut several times: from N_n(A_1) ⊢ A_1, …, N_n(A_m) ⊢ A_m, the given sequent A_1, …, A_m ⊢ B, and B ⊢ P_n(B), successive cuts yield N_n(A_1), …, N_n(A_m) ⊢ P_n(B).

For the "if" part, a specific induction parameter is introduced. This parameter is called the rank of a sequent and is represented by a sequence of natural numbers. These sequences are formally infinite, but contain only zeroes starting from some point. For a sequent Γ ⊢ A, its rank ρ(Γ ⊢ A) is the sequence (c_0, c_1, c_2, …), where c_i is the number of subformulae of complexity i in Γ ⊢ A. The order on ranks is anti-lexicographical: (c_0, c_1, c_2, …) ≺ (c′_0, c′_1, c′_2, …) if there exists a natural number i such that c_i < c′_i and for any j > i we have c_j = c′_j. In any rank (c_0, c_1, c_2, …) of a sequent there exists a k_0 such that c_k = 0 for all k > k_0 (k_0 is the maximal complexity of a subformula in Γ ⊢ A). Hence, any two ranks are comparable. Moreover, the order on ranks is well-founded. Thus, we can perform induction on ranks.

The rules of CommACTω (excluding Cut) enjoy the following property: each premise has a smaller rank than the conclusion. In particular, this holds for *Lω: although A is copied n times, its complexity is smaller than that of A*. Thus, when going from conclusion to premise, we decrease c_i by one, where i is the complexity of A*, and increase c_{i−1} and possibly some c_j's with smaller indices. The rank gets reduced.

Now we prove the "if" part of our theorem by contraposition. Suppose a sequent Π ⊢ B is not derivable in CommACTω. We shall prove that for some n the n-th approximation of this sequent is also not derivable. We proceed by induction on ρ(Π ⊢ B). Consider two cases.
Case 1: one of the formulae in Π is of the form A*. Then Π = Π′, A*, and for some m the sequent Π′, A^m ⊢ B is not derivable (otherwise Π ⊢ B would be derivable by *Lω). Since ρ(Π′, A^m ⊢ B) ≺ ρ(Π′, A* ⊢ B), we can apply the induction hypothesis and conclude that for some k the sequent N_k(Π′), (N_k(A))^m ⊢ P_k(B) is not derivable. Here N_k(Π′), for Π′ = C_1, …, C_s, is defined as N_k(C_1), …, N_k(C_s).

Now take n = max{m, k}. We claim that N_n(Π′), N_n(A*) ⊢ P_n(B) is not derivable. This is indeed the case: otherwise we could derive the sequent N_k(Π′), (N_k(A))^m ⊢ P_k(B) using cuts. The sequents used in these cuts are N_k(C_j) ⊢ N_n(C_j) for each C_j in Π′, (N_k(A))^m ⊢ N_n(A*), and P_n(B) ⊢ P_k(B), all of which are derivable (see [18, Lemma 4.4]).

Case 2: no formula of Π is of the form A*. Thus, our sequent cannot be derived by an (immediate) application of the *Lω rule. All other rules are finitary, and there is only a finite number of possible applications of these rules (for example, for ⊸L there is a finite number of possible splittings of the context into Γ and Π). For each of these possible rule applications, at least one of its premises should be non-derivable (otherwise we could derive the original sequent Π ⊢ B).

The premises have smaller ranks than Π ⊢ B, so we can apply the induction hypothesis. This gives, for each premise, non-derivability of its k-th approximation for some k. Let n be the maximum of these k's. Increasing k keeps each approximation non-derivable, and we get non-derivability of the n-th approximation of the original sequent.

The *-elimination technique yields the upper complexity bound:

Theorem 5.
The derivability problem in
CommACTω belongs to the Π⁰₁ complexity class.

Proof. By Theorem 4, derivability of a sequent is reduced to derivability of all of its n-th approximations. Each n-th approximation, in its turn, is a sequent without negative occurrences of *, so its derivation in CommACTω is always finite (does not use *Lω). For such sequents, the derivability problem is decidable by exhaustive proof search, since all rules, except *Lω, reduce the complexity of the sequent (when looking upwards). The "∀n" quantifier yields Π⁰₁.

3 Π⁰₁-Hardness of CommACTω

In our undecidability proofs, we encode counter machines, or Minsky machines [16], since in the commutative setting it is impossible to maintain the order of letters and thus to encode Turing machines, semi-Thue systems, etc.

Let us recall some basics. A counter machine operates several counters, or registers, whose values are natural numbers. The machine itself is, at each point of operation, in a state q taken from a finite set Q. Instructions of a counter machine are of the following forms, where r is a register and p, q, q_1, q_2 are states:

  inc(p, r, q): being in state p, increase register r by 1 and move to state q;
  jzdec(p, r, q_1, q_2): being in state p, check whether the value of r is 0: if yes, move to state q_1; if no, decrease r by 1 and move to state q_2.

In what follows, we consider only deterministic counter machines, that is, for each state p there exists no more than one instruction with this p as the first parameter. Moreover, there is a unique state for which there is no such instruction; this state is called the final one and is denoted by q_F. The machine halts once it reaches q_F.

Counter machines are used for computing partial functions on natural numbers. One fixed register, denoted by a, is used for input/output: the machine starts at the initial state q_S with its input data (a natural number) put into a; all other registers are assigned 0. If the machine halts, then the resulting value is located in a.
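The operational behaviour just described is easy to make concrete. The following sketch is our own (the dictionary-based representation and the example program are ours; only the instruction names inc and jzdec come from the text): it runs a deterministic counter machine for a bounded number of steps, returning the registers if the final state is reached and None otherwise, since an unbounded run may never halt.

```python
# Sketch of a deterministic 3-counter machine (our own representation).
# `program` maps each non-final state p to its unique instruction.

def run(program, q_start, q_final, x, max_steps):
    """Run from <q_start, x, 0, 0>; return the registers if q_final is
    reached within max_steps, else None (the machine may run forever)."""
    state, regs = q_start, {'a': x, 'b': 0, 'c': 0}
    for _ in range(max_steps):
        if state == q_final:
            return regs
        kind, *args = program[state]
        if kind == 'inc':                      # inc(p, r, q)
            r, q = args
            regs[r] += 1
            state = q
        else:                                  # jzdec(p, r, q1, q2)
            r, q1, q2 = args
            if regs[r] == 0:
                state = q1                     # zero: jump without decrement
            else:
                regs[r] -= 1                   # non-zero: decrement and jump
                state = q2

# Example machine computing f(x) = 2x: empty a into b, adding 2 to b per
# unit of a, then move b back into a.
prog = {
    'q0':  ('jzdec', 'a', 'q2', 'q1'),
    'q1':  ('inc', 'b', 'q1b'),
    'q1b': ('inc', 'b', 'q0'),
    'q2':  ('jzdec', 'b', 'qF', 'q3'),
    'q3':  ('inc', 'a', 'q2'),
}
print(run(prog, 'q0', 'qF', 3, 100))   # -> {'a': 6, 'b': 0, 'c': 0}
print(run(prog, 'q0', 'qF', 3, 5))     # -> None (not enough steps)
```

Determinism is visible in the data structure: each state is a key with exactly one instruction, so every configuration has at most one successor.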
We may suppose that the other registers then hold 0; otherwise we can add extra states and instructions to perform "garbage collection." If the machine does not halt (runs forever), the function on the given input is undefined.

A configuration of a counter machine is a tuple of the form ⟨q, c_1, …, c_n⟩, where q ∈ Q, n is the number of registers, and c_1, …, c_n are natural numbers (the values of the registers). The starting configuration, on input x, is ⟨q_S, x, 0, …, 0⟩.

We restrict ourselves to 3-counter machines, with only three registers, a, b, and c, as three registers are sufficient for Turing completeness. Namely, any computable partial function on natural numbers can be computed on a 3-counter machine as defined above. An accurate translation from Turing machines to 3-counter ones can be found in Schroeppel's memo [21]. Notice that 2-counter machines are also Turing-complete, but in a specific sense: a natural number n should be submitted as an input not as it is, but as 2^n, and the same for output [16]; the function n ↦ 2^n itself is not computable on 2-counter machines [21]. To avoid this inconvenience, we use 3-counter machines.

Proposition 1.
A partial function f: N → N is computable if and only if f is computed by a 3-counter machine. [21]

It will be convenient for us to use the definition of recursively enumerable (r.e., or Σ⁰₁) sets as domains of computable functions: D_f = { x | f(x) is defined } or, in view of Proposition 1, D_M = { x | M halts on input x }. Among r.e. sets, there exist Σ⁰₁-complete ones. Thus, the general halting problem for 3-counter machines is Σ⁰₁-complete. The dual non-halting problem is Π⁰₁-complete; moreover, there exists a concrete machine M such that the set { x | M does not halt on x } is Π⁰₁-complete.

We prove Π⁰₁-hardness of CommACTω by reducing the non-halting problem for deterministic 3-counter Minsky machines to derivability in CommACTω. Our approach is in a sense dual to the undecidability proof for commutative propositional linear logic by Lincoln et al. [15]. They use the exponential modality, !A, which is expanded to A^n for some n, using the contraction rule. The formula A being an encoding of the instruction set of a Minsky machine, this construction represents termination of the Minsky computation after n steps. Dually, we use A*, which is expanded, using the ω-rule *Lω, to an infinite series of sequents with A^n for every n. This corresponds to an infinite run of the Minsky machine: it can perform arbitrarily many steps.

Notice that, as in [15], we essentially use commutativity. It is needed to deliver the instruction to the correct place in the formula encoding the machine configuration. In the non-commutative setting, this is a separate issue, and Buszkowski's Π⁰₁-hardness proof for ACTω [1] uses an indirect reduction from non-halting of Turing machines, via totality for context-free grammars.

Let M be a deterministic 3-counter machine. In CommACTω, configurations of M are encoded as follows. Let the set of variables include the set of states Q of M, and additionally three variables a, b, and c for the counters. Configuration ⟨q, a, b, c⟩ is encoded as follows: q, a, …
, a (a times), b, …, b (b times), c, …, c (c times).

This encoding will appear in antecedents of
CommACTω sequents; thus, it is considered as a multiset. This keeps the numbers of a's, b's, and c's, which is crucial for representing Minsky configurations.

Each instruction I of M is encoded by a specific formula A_I. For inc, the encoding is straightforward:

  A_inc(p,r,q) = p ⊸ (q · r).

For jzdec, the encoding is more involved. We introduce three extra variables, z_a, z_b, and z_c, and encode jzdec(p, r, q_1, q_2) by the following formula:

  A_jzdec(p,r,q_1,q_2) = ((p · r) ⊸ q_2) ∧ (p ⊸ (q_1 ∨ z_r)).

Moreover, we introduce three extra formulae: N_a = z_a ⊸ z_a, N_b = z_b ⊸ z_b, and N_c = z_c ⊸ z_c.

Let us explain the informal idea behind this encoding. Suppose that we wish to model n steps of execution. In our derivations, formulae of the form A_I are going to appear in the left-hand sides of sequents (along with the code of the configuration), instantiated using the Kleene star (we consider the derivation of the n-th premise of *Lω). For inc, when the formula A_inc(p,r,q) gets introduced by ⊸L, we replace p with q · r (looking from bottom to top). This corresponds to changing the state from p to q and increasing register r.

For jzdec, we use the additive connectives, ∧ and ∨. In a negative position (the left-hand side of a sequent), ∧ implements choice and ∨ implements branching (parallel computations). In jzdec, the choice is as follows. If there is at least one copy of variable r (i.e., the value of register r is not zero), we can choose (p · r) ⊸ q_2, which changes the state from p to q_2 and decreases r. We could also choose p ⊸ (q_1 ∨ z_r), for the zero case. This operation continues the main execution thread by changing the state to q_1, but also forks a new thread with a "state" z_r. This new thread is designed to check whether r is actually zero. Since the thread was forked in the middle of the execution, say after k steps, it still has to perform (n − k) steps of execution.
These steps get replaced by dummy instructions, encoded by N_r = z_r ⊸ z_r.

The set of instructions (including the "dummies") is encoded by the formula

  E = N_a ∧ N_b ∧ N_c ∧ ⋀_I A_I,

which is going to be copied using the Kleene star.

The key feature of our encoding is the right-hand side of the sequent, which is going to be

  D = (a* · b* · c* · ⋁_{q∈Q} q) ∨ (b* · c* · z_a) ∨ (a* · c* · z_b) ∨ (a* · b* · z_c).

This formula represents constraints on the configuration after performing n steps of computation. For the main execution thread, it just says that the thread should reach a correctly encoded configuration of the form ⟨q, a, b, c⟩, where q ∈ Q and a, b, c ∈ N. For a zero-checking thread, with "state" z_r, the formula D forces the value of register r to be zero.

In the next subsection, we formulate and prove a theorem which establishes a correspondence between Minsky computations and derivations of specific sequents in CommACTω.

Computations and Derivations

Theorem 6.
Minsky machine M runs forever on input x if and only if the sequent

  E*, q_S, a^x ⊢ D    (∗)

is derivable in CommACTω. Therefore, CommACTω is Π⁰₁-hard.

The proof of this theorem is based on the following lemma.
Lemma 1.
Minsky machine M can perform k steps of execution starting from configuration ⟨p, a, b, c⟩ if and only if the sequent E^k, p, a^a, b^b, c^c ⊢ D is derivable.

Indeed, let p = q_S, a = x, and b = c = 0. Then E*, q_S, a^x ⊢ D is derivable from the premises (E^n, q_S, a^x ⊢ D), n ≥ 0, by *Lω, and the opposite implication is by cut with E^n ⊢ E*. Thus, derivability of (∗) is equivalent to the fact that M can perform arbitrarily many steps starting from ⟨q_S, x, 0, 0⟩. Since M is deterministic, this is equivalent to an infinite run.

Proof of Lemma 1.
The "only if" part, from computation to derivation, is easier. We proceed by induction on k. In the base case, k = 0, we derive the necessary sequent, p, a^a, b^b, c^c ⊢ D, by ∨R (twice) from p, a^a, b^b, c^c ⊢ a* · b* · c* · ⋁_{q∈Q} q. The latter is derived using *R, ∨R, and ·R.

For the induction step, consider the first instruction of M that is executed. If it is inc(p, a, q), we perform the following derivation, read bottom-up. The topmost sequent, E^{k−1}, q, a^{a+1}, b^b, c^c ⊢ D, is derivable by the induction hypothesis, since M can perform k − 1 steps starting from ⟨q, a + 1, b, c⟩. By ·L, it yields E^{k−1}, q · a, a^a, b^b, c^c ⊢ D; by ⊸L, with the axiom p ⊢ p as the other premise, we introduce A_inc(p,a,q) = p ⊸ (q · a) and obtain E^{k−1}, A_inc(p,a,q), p, a^a, b^b, c^c ⊢ D; finally, several applications of ∧L (choosing the conjunct A_inc(p,a,q) in one copy of E) give E^k, p, a^a, b^b, c^c ⊢ D. The instructions inc(p, b, q) and inc(p, c, q) are considered similarly.

For jzdec(p, a, q_1, q_2), we consider two cases. If a ≠ 0, then the derivation is similar to the one for inc. By ·R, from the axioms p ⊢ p and a ⊢ a we get p, a ⊢ p · a; the sequent E^{k−1}, q_2, a^{a−1}, b^b, c^c ⊢ D is derivable by the induction hypothesis; by ⊸L these give E^{k−1}, (p · a) ⊸ q_2, p, a^a, b^b, c^c ⊢ D, and applications of ∧L yield E^{k−1}, A_jzdec(p,a,q_1,q_2), p, a^a, b^b, c^c ⊢ D and then E^k, p, a^a, b^b, c^c ⊢ D.

The interesting part is the zero test. Let a = 0 and perform the following derivation. On the left branch we have E^{k−1}, q_1, b^b, c^c ⊢ D, which is derivable by the induction hypothesis: ⟨q_1, 0, b, c⟩ is the successor of ⟨p, 0, b, c⟩ after applying jzdec(p, a, q_1, q_2). On the right branch we have E^{k−1}, z_a, b^b, c^c ⊢ D, discussed below. By ∨L these give E^{k−1}, q_1 ∨ z_a, b^b, c^c ⊢ D; by ⊸L, with p ⊢ p as a premise, we get E^{k−1}, p ⊸ (q_1 ∨ z_a), p, b^b, c^c ⊢ D; and ∧L yields E^{k−1}, A_jzdec(p,a,q_1,q_2), p, b^b, c^c ⊢ D and then E^k, p, b^b, c^c ⊢ D.

The sequent on the right branch, E^{k−1}, z_a, b^b, c^c ⊢ D, can be derived using ∧L and ∨R from (z_a ⊸ z_a)^{k−1}, z_a, b^b, c^c ⊢ b* · c* · z_a.
Indeed, E is a conjunction which includes N_a = z_a ⊸ z_a, and D is a disjunction which includes b* · c* · z_a. The latter sequent, (z_a ⊸ z_a)^{k−1}, z_a, b^b, c^c ⊢ b* · c* · z_a, is derivable. The cases of jzdec(p, b, q_1, q_2) and jzdec(p, c, q_1, q_2) are similar.

For the "if" part, we analyze a cut-free derivation of E^k, p, a^a, b^b, c^c ⊢ D. It is important to notice that this derivation does not necessarily directly represent a k-step run of M as shown above.

Example. Let M include the following instructions: inc(p, a, q) and jzdec(q, a, p, p), and consider a 4-step execution of M starting from ⟨p, 0, 0, 0⟩. Such an execution is indeed possible (it alternates between ⟨p, 0, 0, 0⟩ and ⟨q, 1, 0, 0⟩), and can be represented by the following "canonical" derivation, read bottom-up: E^4, p ⊢ D is obtained by ∧L from E^3, p ⊸ (q · a), p ⊢ D, which is obtained by ·L and ⊸L (with the axiom p ⊢ p) from E^3, q, a ⊢ D; the latter, in turn, is obtained by ∧L from E^2, (q · a) ⊸ p, q, a ⊢ D, which is obtained by ⊸L (with the axiom q, a ⊢ q · a) from E^2, p ⊢ D; and so on, down to p ⊢ D.

However, there is also an alternative derivation, in which the lowermost ⊸L, introducing p ⊸ (q · a), takes as its left premise not the axiom p ⊢ p but the sequent E^2, p ⊢ p: here E^4, p ⊢ D is obtained by ∧L from E^3, p ⊸ (q · a), p ⊢ D, which is obtained by ⊸L from E^2, p ⊢ p and E^1, q · a ⊢ D. The right premise, E^1, q · a ⊢ D, represents the last two steps (via ·L, ∧L, and ⊸L, with premises q, a ⊢ q · a and p ⊢ D), while the left premise, E^2, p ⊢ p, is derived separately, by the same rules with p instead of D in the succedent.

In this derivation, there is a "subroutine" (the left subtree, deriving E^2, p ⊢ p) which moves from ⟨p, 0, 0, 0⟩ to ⟨p, 0, 0, 0⟩ in 2 steps.

In general, such "subroutines" can be represented by subderivations of sequents of the form E^m, q, a^{a′}, b^{b′}, c^{c′} ⊢ p (while in the "canonical" derivation they are all trivialized to p ⊢ p). This corresponds to ⟨q, a′, b′, c′⟩ → ⟨p, 0, 0, 0⟩ in m steps. The crucial observation, however, is that in such "subroutines" jzdec cannot branch to the zero (r = 0) case. The reason is that in the subtree there is no D which supports the usage of z_r.
Therefore, such a "subroutine" also validates the transition ⟨q, a′ + a, b′ + b, c′ + c⟩ → ⟨p, a, b, c⟩ for arbitrary a, b, c. This allows connecting the "subroutine" to the main execution workflow.

The idea described above is formalized in the usual boring way, by proving several statements by joint induction. Let Ẽ_i denote any formula in the conjunction E, or a conjunction of such formulae (in particular, Ẽ_i could be E itself). For convenience, let R = {a, b, c}, Z = {z_a, z_b, z_c}, and Z_r̄ = Z − {z_r} (e.g., Z_b̄ = {z_a, z_c}).

1. Sequents of the form Ẽ_1, …, Ẽ_k, a^a, b^b, c^c ⊢ t, where t ∈ Q ∪ Z, are never derivable; neither are sequents of the form Ẽ_1, …, Ẽ_k, a^a, b^b, c^c ⊢ t · r, where r ∈ R.

2. Sequents of the form Ẽ_1, …, Ẽ_k, z_r, a^a, b^b, c^c ⊢ t, where r ∈ R and t ∈ Q ∪ Z_r̄, are never derivable; neither are sequents of the form Ẽ_1, …, Ẽ_k, z_r, a^a, b^b, c^c ⊢ t · r′, where r, r′ ∈ R and t ∈ Q ∪ Z_r̄.

3. If Ẽ_1, …, Ẽ_k, z_a, a^a, b^b, c^c ⊢ D is derivable, then a = 0. Similarly for b and c.

4. If Ẽ_1, …, Ẽ_k, q, a^{a′}, b^{b′}, c^{c′} ⊢ p is derivable, where p, q ∈ Q, then M can move from ⟨q, a′ + a, b′ + b, c′ + c⟩ to ⟨p, a, b, c⟩ in k steps for any a, b, c.

5. If Ẽ_1, …, Ẽ_k, q, a^{a′}, b^{b′}, c^{c′} ⊢ p · a, where p, q ∈ Q, is derivable, then M can move from ⟨q, a′ + a, b′ + b, c′ + c⟩ to ⟨p, a + 1, b, c⟩ in k steps for any a, b, c. Similarly for b and c.

6. If Ẽ_1, …, Ẽ_k, p, a^a, b^b, c^c ⊢ D is derivable (p ∈ Q), then M can perform k steps, starting from ⟨p, a, b, c⟩.

Statement 6 with Ẽ_1 = …
= Ẽ_k = E yields our goal (the "if" part of Lemma 1).

Rules ∨L and ·L are invertible (this can be established using cut), so we can suppose that in our derivations they are always applied immediately. Next, we reorganize the derivation so that no right rule (∨R, ·R, or ∗R) appears below a left rule (∧L, ∨L, ⊸L, ·L). Such a reorganization is possible since ⊸R is never applied (there are no formulae of the form F ⊸ G in succedents). For example, ·R and ∨L are exchanged in the following way:

Π′, E, Π″ ⊢ A    Π′, F, Π″ ⊢ A
Π′, E ∨ F, Π″ ⊢ A (∨L)    ∆ ⊢ B
Π′, E ∨ F, Π″, ∆ ⊢ A · B (·R)

is transformed into

Π′, E, Π″ ⊢ A    ∆ ⊢ B
Π′, E, Π″, ∆ ⊢ A · B (·R)    Π′, F, Π″ ⊢ A    ∆ ⊢ B
Π′, F, Π″, ∆ ⊢ A · B (·R)
Π′, E ∨ F, Π″, ∆ ⊢ A · B (∨L)

Transformations in other cases are similar.

Now let us prove our statements by joint induction on k. The base cases (k = 0) are considered as follows. For statements 1 and 2, we have a "lonely" t in the succedent, which could not be matched with another t to form an axiom. Thus, the sequents are not derivable. In statement 3, when deriving z_a, a^a, b^b, c^c ⊢ D, we have to choose b∗ · c∗ · z_a from D (otherwise z_a does not have a match). Therefore, a = 0, since there are no occurrences of a in the succedent. In statement 4, the sequent should be of the form p ⊢ p, that is, q = p and a′ = b′ = c′ = 0. The 0-step move from ⟨p, a, b, c⟩ to ⟨p, a, b, c⟩ is trivial. Similarly, for statement 5, we have exactly p, a ⊢ p · a, thus, q = p, a′ = 1, and b′ = c′ = 0, and a 0-step move from ⟨p, a+1, b, c⟩ to ⟨p, a+1, b, c⟩. Finally, the base case for statement 6 is obvious, since performing 0 steps is always possible.

Now let k ≠ 0. If the lowermost rule applied in our derivation is ∧L, then it just changes one of the Ẽ_i's to a formula of the same form (formally, here we use a nested induction on derivation height).
Thus, the interesting case is ⊸L, when one of the Ẽ_i's is of the form F ⊸ G, and it gets decomposed. (Notice that from A_jzdec(p,r,q₀,q₁) we have already taken, or "chosen," only one conjunct.)

Statement 1.
We have Ẽ_i = F ⊸ G, and the left premise of ⊸L is again of the form Ẽ_1, …, Ẽ_{k′}, a^{a′}, b^{b′}, c^{c′} ⊢ F, where k′ < k and F is either t′ or t′ · r′, t′ ∈ Q ∪ Z, r′ ∈ R. The latter is due to the form of conjuncts in E. Since k′ < k, we can apply the induction hypothesis and conclude that the left premise is not derivable.

Statement 2.
The occurrence of z_r should go to the left premise of ⊸L; otherwise we face a contradiction with statement 1. Consider two cases. If Ẽ_i = F ⊸ G = z_r ⊸ z_r (with the same r), then the right premise of ⊸L is again of the same form as the goal sequent, but with a smaller k, and we proceed by induction. Otherwise, F is of the form t′ or t′ · r″, where t′ ∈ Q ∪ Z_r̄ and r″ ∈ R. In this case we apply the induction hypothesis to the left premise.

Statement 3.
Again, by statement 1, z_a should go to the left premise. If F ≠ z_a, then derivability of the left premise contradicts statement 2. Thus, Ẽ_i = z_a ⊸ z_a, and we apply the induction hypothesis to the right premise.

Statement 4.
Again, q should go to the left premise. If Ẽ_i = z_r ⊸ z_r, then the right premise is of the form Ẽ_{i+1}, …, Ẽ_k, z_r, a^{a″}, b^{b″}, c^{c″} ⊢ p and could not be derivable by statement 2. Thus, three cases remain possible; for simplicity, let r = a (the cases of r = b and r = c are handled in the same way).

• Ẽ_i = A_inc(p′,r,q′) = p′ ⊸ (q′ · a). We have the following application of ⊸L and an immediate application of ·L:

Ẽ_1, …, Ẽ_{i−1}, q, a^{a₁}, b^{b₁}, c^{c₁} ⊢ p′    Ẽ_{i+1}, …, Ẽ_k, q′, a^{a₂+1}, b^{b₂}, c^{c₂} ⊢ p
Ẽ_{i+1}, …, Ẽ_k, q′ · a, a^{a₂}, b^{b₂}, c^{c₂} ⊢ p (·L)
Ẽ_1, …, Ẽ_k, q, a^{a′}, b^{b′}, c^{c′} ⊢ p (⊸L)

Here a₁ + a₂ = a′, b₁ + b₂ = b′, c₁ + c₂ = c′. By the induction hypothesis, M can move from ⟨q, a₁+a₂+a, b₁+b₂+b, c₁+c₂+c⟩ to ⟨p′, a₂+a, b₂+b, c₂+c⟩ in i − 1 steps. Then M applies inc(p′, r, q′) and changes the configuration to ⟨q′, a₂+1+a, b₂+b, c₂+c⟩. Finally, we move to ⟨p, a, b, c⟩ in k − i steps, again by the induction hypothesis. The total number of steps is (i − 1) + 1 + (k − i) = k.

• Ẽ_i = (p′ · a) ⊸ q₁, the first part of A_jzdec(p′,r,q₀,q₁).

Ẽ_1, …, Ẽ_{i−1}, q, a^{a₁}, b^{b₁}, c^{c₁} ⊢ p′ · a    Ẽ_{i+1}, …, Ẽ_k, q₁, a^{a₂}, b^{b₂}, c^{c₂} ⊢ p
Ẽ_1, …, Ẽ_k, q, a^{a′}, b^{b′}, c^{c′} ⊢ p (⊸L)

Applying statement 5 to the left premise yields that M can move from ⟨q, a₁+a₂+a, b₁+b₂+b, c₁+c₂+c⟩ to ⟨p′, a₂+a+1, b₂+b, c₂+c⟩ in i − 1 steps. Since a₂+a+1 > 0, applying jzdec(p′, r, q₀, q₁) changes the configuration to ⟨q₁, a₂+a, b₂+b, c₂+c⟩. Finally, by the induction hypothesis (statement 4) applied to the right premise, we reach ⟨p, a, b, c⟩ in k − i steps.

• Ẽ_i = p′ ⊸ (q₀ ∨ z_a), the second part of A_jzdec(p′,r,q₀,q₁). Taking r = a, we get the following application of ⊸L, preceded by an immediate application of ∨L:

Ẽ_1, …, Ẽ_{i−1}, q, a^{a₁}, b^{b₁}, c^{c₁} ⊢ p′    Ẽ_{i+1}, …, Ẽ_k, z_a, a^{a₂}, b^{b₂}, c^{c₂} ⊢ p    Ẽ_{i+1}, …, Ẽ_k, q₀, a^{a₂}, b^{b₂}, c^{c₂} ⊢ p
Ẽ_{i+1}, …, Ẽ_k, q₀ ∨ z_a, a^{a₂}, b^{b₂}, c^{c₂} ⊢ p (∨L)
Ẽ_1, …, Ẽ_k, q, a^{a′}, b^{b′}, c^{c′} ⊢ p (⊸L)

One of the premises of ∨L, namely, Ẽ_{i+1}, …, Ẽ_k, z_a, a^{a₂}, b^{b₂}, c^{c₂} ⊢ p, is not derivable by statement 2 (induction hypothesis), so this case is impossible.

Statement 5 is established in the same way as statement 4, with a routine change of p to p · a and a to a + 1.

Statement 6.
The difference from the previous two statements is that here, on the main thread of the derivation, the second case of jzdec can be realized. As for the previous statements, q should go to the left premise, and Ẽ_i could not be z_r ⊸ z_r (due to statements 1 and 2). Let r = a and consider the three possible cases.

• Ẽ_i = A_inc(p′,r,q′) = p′ ⊸ (q′ · a).

Ẽ_1, …, Ẽ_{i−1}, q, a^{a₁}, b^{b₁}, c^{c₁} ⊢ p′    Ẽ_{i+1}, …, Ẽ_k, q′, a^{a₂+1}, b^{b₂}, c^{c₂} ⊢ D
Ẽ_{i+1}, …, Ẽ_k, q′ · a, a^{a₂}, b^{b₂}, c^{c₂} ⊢ D (·L)
Ẽ_1, …, Ẽ_k, q, a^a, b^b, c^c ⊢ D (⊸L)
By the induction hypothesis, statement 4, M can move from ⟨q, a, b, c⟩, which is ⟨q, a₁+a₂, b₁+b₂, c₁+c₂⟩, to ⟨p′, a₂, b₂, c₂⟩ in i − 1 steps. Then inc(p′, r, q′) yields ⟨q′, a₂+1, b₂, c₂⟩. Finally, by the induction hypothesis, statement 6, applied to the right premise, M can perform k − i more steps. The total number of steps performed starting from ⟨q, a, b, c⟩ equals k.

• Ẽ_i = (p′ · a) ⊸ q₁, the first part of A_jzdec(p′, a, q₀, q₁).

Ẽ_1, …, Ẽ_{i−1}, q, a^{a₁}, b^{b₁}, c^{c₁} ⊢ p′ · a    Ẽ_{i+1}, …, Ẽ_k, q₁, a^{a₂}, b^{b₂}, c^{c₂} ⊢ D
Ẽ_1, …, Ẽ_k, q, a^a, b^b, c^c ⊢ D (⊸L)

By the induction hypothesis, statement 5, M can move from ⟨q, a, b, c⟩ = ⟨q, a₁+a₂, b₁+b₂, c₁+c₂⟩ to ⟨p′, a₂+1, b₂, c₂⟩ in i − 1 steps. Since a₂+1 > 0, applying jzdec(p′, a, q₀, q₁) changes ⟨p′, a₂+1, b₂, c₂⟩ to ⟨q₁, a₂, b₂, c₂⟩. Finally, applying statement 6 (induction hypothesis), we perform the remaining k − i steps.

• Ẽ_i = p′ ⊸ (q₀ ∨ z_a), the second part of A_jzdec(p′, a, q₀, q₁). The goal sequent Ẽ_1, …, Ẽ_k, q, a^a, b^b, c^c ⊢ D is derived, using ⊸L, from sequents Ẽ_1, …, Ẽ_{i−1}, q, a^{a₁}, b^{b₁}, c^{c₁} ⊢ p′ and Ẽ_{i+1}, …, Ẽ_k, q₀ ∨ z_a, a^{a₂}, b^{b₂}, c^{c₂} ⊢ D, where the latter is derived by an immediate application of ∨L:

Ẽ_{i+1}, …, Ẽ_k, q₀, a^{a₂}, b^{b₂}, c^{c₂} ⊢ D    Ẽ_{i+1}, …, Ẽ_k, z_a, a^{a₂}, b^{b₂}, c^{c₂} ⊢ D
Ẽ_{i+1}, …, Ẽ_k, q₀ ∨ z_a, a^{a₂}, b^{b₂}, c^{c₂} ⊢ D (∨L)

Here, again, a = a₁ + a₂, b = b₁ + b₂, c = c₁ + c₂. Moreover, derivability of Ẽ_{i+1}, …, Ẽ_k, z_a, a^{a₂}, b^{b₂}, c^{c₂} ⊢ D implies a₂ = 0 by statement 3 (induction hypothesis). Now by statement 4 (induction hypothesis) applied to the left premise we conclude that M can move from ⟨q, a, b, c⟩ = ⟨q, a₁, b₁+b₂, c₁+c₂⟩ to ⟨p′, 0, b₂, c₂⟩ in i − 1 steps. In ⟨p′, 0, b₂, c₂⟩ the value of a is zero, so applying jzdec(p′, a, q₀, q₁) changes the configuration to ⟨q₀, 0, b₂, c₂⟩. Finally, applying statement 6 (induction hypothesis) to Ẽ_{i+1},
…, Ẽ_k, q₀, b^{b₂}, c^{c₂} ⊢ D shows that M can perform the remaining k − i steps, starting from ⟨q₀, 0, b₂, c₂⟩. This finishes the proof of Lemma 1.

Σ⁰₁-Completeness of CommACT

We start with a reformulation of infinitary commutative action logic,
CommACTω, as a calculus with non-well-founded derivations (instead of the ω-rule), a commutative variant of the system introduced by Das and Pous [3]. This new calculus, denoted by CommACT∞, is obtained from CommACTω by replacing ∗Lω and ∗Rn with the following rules:

Γ ⊢ C    Γ, A∗, A ⊢ C
Γ, A∗ ⊢ C (∗L)

⊢ A∗ (∗R)    Γ ⊢ A    ∆ ⊢ A∗
Γ, ∆ ⊢ A∗ (∗R)

Unlike
CommACTω, proofs in CommACT∞ can be non-well-founded, that is, include infinite branches. These infinite branches should obey the following correctness condition: on each such branch there should be a trace of a formula of the form A∗ which undergoes ∗L infinitely many times.

Equivalence between CommACT∞ and CommACTω can be proved in the same way as in the non-commutative case [3]. We omit this proof, because CommACT∞ is not formally used in our complexity arguments; we rather use it to clarify the ideas behind them. Reformulation of CommACTω as CommACT∞ makes the construction of the previous section more straightforward: an infinite execution of M is represented by an infinite derivation of (∗).

Example 2. Let M include the following instruction: inc(q_S, a, q_S). This machine runs infinitely on any input x, and this is represented by the following infinite derivation of (∗) in CommACT∞:

q_S, a^x ⊢ D    q_S ⊢ q_S    …    E∗, q_S, a^{x+2} ⊢ D    …
E∗, q_S, a^{x+1} ⊢ D
E∗, q_S · a, a^x ⊢ D (·L)
E∗, q_S ⊸ (q_S · a), q_S, a^x ⊢ D (⊸L)
E∗, E, q_S, a^x ⊢ D (∧L)
E∗, q_S, a^x ⊢ D (∗L)

Example 3. The zero-check in jzdec instantiates an auxiliary infinite branch each time it gets invoked. For example, the infinite run of a machine with jzdec(q_S, a, q_S, q_S) on input 0 induces the following derivation of (∗) with infinitely many infinite branches:

q_S ⊢ D    q_S ⊢ q_S    z_a ⊢ D    z_a ⊢ z_a    …    E∗, z_a ⊢ D
E∗, z_a ⊸ z_a, z_a ⊢ D (⊸L)
E∗, E, z_a ⊢ D (∧L)
E∗, z_a ⊢ D (∗L)    …    E∗, z_a ⊢ D    …    E∗, q_S ⊢ D
E∗, q_S ∨ z_a ⊢ D (∨L)    …    E∗, q_S ⊢ D
E∗, q_S ∨ z_a ⊢ D (∨L)
E∗, q_S ⊸ (q_S ∨ z_a), q_S ⊢ D (⊸L)
E∗, E, q_S ⊢ D (∧L)
E∗, q_S ⊢ D (∗L)

Among such infinite runs, we shall be especially interested in circular ones.

Definition.
Minsky machine M runs circularly on input x, if its execution visits one configuration ⟨p, a, b, c⟩ twice (and, due to determinism, infinitely many times, since the sequence of configurations becomes periodic).

The key idea is that circular behaviour is represented by circular proofs of (∗) in CommACT∞:

q_S, a^x ⊢ D    p, a^a, b^b, c^c ⊢ D
E∗, p, a^a, b^b, c^c ⊢ D
…
E∗, E, p, a^a, b^b, c^c ⊢ D
E∗, p, a^a, b^b, c^c ⊢ D (∗L)
…
E∗, E, q_S, a^x ⊢ D
E∗, q_S, a^x ⊢ D (∗L)

Here the main infinite branch of our derivation returns to the same sequent, E∗, p, a^a, b^b, c^c ⊢ D, and we may replace further development of the infinite branch by a backlink to the earlier occurrence of this sequent. Using this backlink, the circular proof can be unravelled into an infinite one. The correctness condition is guaranteed by the ∗L rule just above E∗, p, a^a, b^b, c^c ⊢ D: after unravelling, E∗ will undergo ∗L infinitely often.

In fact, circular proofs in CommACT∞ yield sequents derivable in the narrower logic CommACT (as defined in Section 2). This can be shown by a commutative modification of the corresponding argument from [3]. In this article, however, we perform this translation explicitly for concrete circular derivations used for encoding circular computations. (This is done for simplicity, in order to avoid considering complicated circular proofs with entangled backlinks.)

A similar idea was used to prove undecidability in the non-commutative case, for
ACT [9]. In the commutative situation it is even more straightforward,since Minsky computation here is represented directly, without a detour throughtotality of context-free grammars [1, 9].Unfortunately, the translation from circular computations to circular proofsworks only in one direction.
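To make the circular-run condition concrete, the following is a small simulator sketch, not part of the paper's construction. The dictionary encoding of `inc`/`jzdec` and the step bound are assumptions made for illustration; circularity is detected by remembering visited configurations, which is sound because Minsky machines are deterministic.

```python
def run(prog, state, counters, max_steps=10_000):
    """Simulate a 3-counter Minsky machine; report 'halts', 'circular', or 'unknown'.

    prog maps a state to one instruction (hypothetical encoding):
      ('inc', r, q)         -- increment counter r, go to state q
      ('jzdec', r, q0, q1)  -- if r == 0 go to q0, else decrement r and go to q1
    A state with no instruction is final.
    """
    a, b, c = counters
    seen = set()
    cfg = (state, a, b, c)
    for _ in range(max_steps):
        if cfg in seen:
            return "circular"       # one configuration visited twice
        seen.add(cfg)
        p, a, b, c = cfg
        if p not in prog:
            return "halts"          # final state reached
        ins = prog[p]
        regs = {"a": a, "b": b, "c": c}
        if ins[0] == "inc":
            _, r, q = ins
            regs[r] += 1
            cfg = (q, regs["a"], regs["b"], regs["c"])
        else:
            _, r, q0, q1 = ins
            if regs[r] == 0:
                cfg = (q0, regs["a"], regs["b"], regs["c"])
            else:
                regs[r] -= 1
                cfg = (q1, regs["a"], regs["b"], regs["c"])
    return "unknown"                # bound exhausted; real runs are unbounded

# The machine from Example 1: inc(p, a, q) and jzdec(q, a, p, p)
prog = {"p": ("inc", "a", "q"), "q": ("jzdec", "a", "p", "p")}
print(run(prog, "p", (0, 0, 0)))   # → circular
```

Note that the bound makes this only a semi-decision sketch: a genuine run may be neither halting nor circular (the counters can grow forever), which is exactly why circularity is a Σ⁰₁-type property rather than a decidable one.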
Example 4. Let M include the following instruction: inc(q_S, a, q_S). Then the infinite run of M, though not a circular one, can be represented by a circular proof of (∗) in CommACT∞. The "canonical" proof (see Example 2), indeed, is not circular, but there exists an alternative circular one:

q_S ⊢ a∗ · q_S    q_S ⊢ q_S    E∗, q_S ⊢ a∗ · q_S    a ⊢ a
E∗, q_S, a ⊢ a · (a∗ · q_S) (·R)    a · (a∗ · q_S) ⊢ a∗ · q_S
E∗, q_S, a ⊢ a∗ · q_S (Cut)
E∗, q_S ⊸ (q_S · a), q_S ⊢ a∗ · q_S (·L, ⊸L)
E∗, E, q_S ⊢ a∗ · q_S (∧L)
E∗, q_S ⊢ a∗ · q_S (∗L)    a∗ · q_S ⊢ D
E∗, q_S ⊢ D (Cut)

where derivations of q_S ⊢ a∗ · q_S, a∗ · q_S ⊢ D, and a · (a∗ · q_S) ⊢ a∗ · q_S are obvious.

Thus, we cannot just say "M runs circularly on x if and only if (∗) is derivable in CommACT," and therefore
"CommACT is undecidable." The "if" direction fails. We prove Σ⁰₁-completeness (and, in particular, undecidability) of CommACT using an indirect technique of effective inseparability, which we develop in the next subsection. This technique is basically the same as in the non-commutative case [11], but the use of Minsky machines instead of Turing ones requires some minor modifications.
The material of this subsection is not new, but rather classical. We use the same techniques as in the non-commutative case [11]; see also [20, 23]. However, we accurately reproduce these techniques here, because 3-counter machines are quite a restrictive computational model, so we have to ensure that all the constructions work for them as well as for more elaborate computational models, such as Turing machines.

Let C = {⟨M, x⟩ | M runs circularly on x}; H = {⟨M, x⟩ | M halts on x}; H̄ = {⟨M, x⟩ | M does not halt on x}. Obviously,
C ⊂ H̄ and
H ∩ H̄ = ∅. In this subsection we are going to show that C and H are inseparable: there is no decidable set K such that C ⊆ K ⊆ H̄. Moreover, we shall establish a stronger property of effective inseparability, from which it will follow, by Myhill's theorem [17], that if K is r.e. and C ⊆ K ⊆ H̄, then K is Σ⁰₁-complete. We shall use this result for the set K(CommACT) = {⟨M, x⟩ | (∗) is derivable in CommACT} in order to prove that CommACT is Σ⁰₁-complete.

We tacitly suppose that each Minsky machine M is encoded by a natural number ⌜M⌝ in an injective and computable way. We also fix the following encoding of pairs of natural numbers, bijective and computable:

[x, y] = (x + y)(x + y + 1)/2 + x.

Under this encoding, C = {[⌜M⌝, x] | M runs circularly on x} is a set of natural numbers, and similarly for H and H̄.
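As a quick sanity check of this pairing (plain arithmetic, nothing specific to Minsky machines), one can compute it and its inverse; the function and its bijectivity are exactly the standard Cantor pairing:

```python
def pair(x, y):
    """Cantor pairing: [x, y] = (x + y)(x + y + 1)/2 + x, a bijection N x N -> N."""
    return (x + y) * (x + y + 1) // 2 + x

def unpair(n):
    """Inverse of pair, recovering (x, y) from n."""
    s = 0
    while (s + 1) * (s + 2) // 2 <= n:   # find the diagonal s = x + y containing n
        s += 1
    x = n - s * (s + 1) // 2
    return x, s - x

print(pair(2, 3))          # → 17
print(unpair(pair(2, 3)))  # → (2, 3)
```

Both directions are clearly computable, which is all that the constructions below require.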
Two disjoint sets A and B of natural numbers are effectively inseparable, if there exists a computable function f : N × N → N such that if W_u ⊇ A, W_v ⊇ B, and W_u ∩ W_v = ∅, then f(u, v) is defined and f(u, v) ∉ W_u ∪ W_v.

Effective inseparability yields recursive inseparability, in the following sense:

Proposition 2. If A and B are effectively inseparable, then there is no decidable K such that A ⊆ K and K ∩ B = ∅.

Proof. Indeed, if K is decidable, then both K and K̄ = N − K are r.e. Therefore, K = W_u and K̄ = W_v for some u, v. We have W_u ⊇ A, W_v ⊇ B, and W_u ∩ W_v = ∅, but f(u, v) ∉ W_u ∪ W_v is impossible, since W_u ∪ W_v = N.

Effective inseparability is closely related to the notion of creative sets, for which Myhill's theorem holds.

Definition.
A set K of natural numbers is creative, if K is r.e. and there exists a computable function h such that if W_u is disjoint with K, then h(u) is defined and h(u) ∉ W_u ∪ K.

Theorem 7 (J. Myhill 1955). If K is creative, then any r.e. set D is m-reducible to K. In other words, any creative set is Σ⁰₁-complete. [17]

Using Myhill's theorem, we prove the result we shall need:

Theorem 8.
Let A, B, and K be three r.e. sets of natural numbers such that A and B are effectively inseparable, A ⊆ K, and K ∩ B = ∅. Then K is Σ⁰₁-complete.

Proof. We show that K is creative, and then use Myhill's theorem. Since K is r.e., K = W_v for some v. For any r.e. set W_u which is disjoint with K, the set W_u ∪ B is also r.e. and disjoint with K. Moreover, this transformation is effective, i.e., there exists a total computable function g such that W_u ∪ B = W_{g(u)}. Let us define h(u) = f(v, g(u)), where f is taken from the definition of effective inseparability of A and B. Since W_v = K ⊇ A, W_{g(u)} = W_u ∪ B ⊇ B, and W_v ∩ W_{g(u)} = (K ∩ W_u) ∪ (K ∩ B) = ∅, we have h(u) = f(v, g(u)) ∉ W_v ∪ W_{g(u)} = K ∪ W_u ∪ B. Hence, h(u) ∉ W_u ∪ K. This means that K is creative, and therefore it is Σ⁰₁-complete by Myhill's theorem.

Finally, we show effective inseparability of circular behaviour and halting for Minsky machines:

Theorem 9. C and H are effectively inseparable.

In order to prove Theorem 9, we shall need the following technical hardcoding lemma:
Lemma 2.
For any Minsky machine M and any natural number x there exists another Minsky machine M_x such that if M computes w ↦ f(w), then M_x computes y ↦ f([x, y]). Moreover, the function x ↦ ⌜M_x⌝ is computable.

Proof. The new machine M_x is constructed as follows. First we apply the necessary number of inc's in order to transform y to x + y. Now we are in some state q₁ with x + y in a. Second, we include a concrete Minsky machine which computes z ↦ z(z + 1)/2. Now we are in another state q₂ (the final state of this machine) with (x + y)(x + y + 1)/2 in a. Now we again apply inc's to add x, yielding [x, y] = (x + y)(x + y + 1)/2 + x in a. Finally, we start M.

The dependence on x here is simple: just the number of inc's in two places in the instruction set. This is clearly computable.

Proof of Theorem 9.
The pairing function is bijective, so we can suppose that any natural number is of the form [[u, v], w]. Let us construct a computable function F, defined on "triples" of the form [[u, v], w]. This function will have the following properties, provided W_u ∩ W_v = ∅. (If this prerequisite does not hold, the behaviour of F can be arbitrary.)

1. If [w, w] ∈ W_u, then F([[u, v], w]) is defined and equal to 0.
2. If [w, w] ∈ W_v, then F([[u, v], w]) is defined and equal to 1.

The informal description of the algorithm for F is as follows. It tries (using the universal algorithm) to execute Minsky machines with codes u and v in parallel, on the same input [w, w]. If one of them halts, then [w, w] belongs to W_u or W_v respectively, and we yield the corresponding answer. (Otherwise our algorithm runs forever, and F is undefined. Also, if u or v fails to be a valid code of a Minsky machine, we suppose that its "execution" also never stops, thus yielding emptiness of W_u or W_v respectively.)

Since F is computable, it is computed by some Minsky machine (Proposition 1), with the final state q_F. Let us extend this Minsky machine with the following instructions: jzdec(q_F, a, q_F′, p) and jzdec(p, a, p, p), where states p and q_F′ are new and q_F′ is the new final state. Denote the new machine by N. Informally, N does the following: if [w, w] ∈ W_u, then N halts on w; if [w, w] ∈ W_v, then N runs circularly on w (it gets stuck in state p).

By Lemma 2 we can hardcode the first component of the input, [u, v], and obtain a computable function g : (u, v) ↦ ⌜N_[u,v]⌝. Finally, let f(u, v) = [g(u, v), g(u, v)] = [⌜N_[u,v]⌝, ⌜N_[u,v]⌝]. This f is clearly computable. Now let us show that if W_u ⊇ C, W_v ⊇ H, and W_u ∩ W_v = ∅, then f(u, v) ∉ W_u ∪ W_v. Indeed, if f(u, v) = [⌜N_[u,v]⌝, ⌜N_[u,v]⌝] ∈ W_u, then N_[u,v] halts on input ⌜N_[u,v]⌝.
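The parallel execution underlying F can be sketched as lockstep interleaving of two step functions. The `step` interface below (one configuration transition per call, returning `None` on halting) and the step bound are illustrative assumptions standing in for the paper's universal algorithm, which searches without any bound:

```python
def F(step_u, step_v, cu, cv, bound=10_000):
    """Dovetail two machines from configurations cu and cv.

    Returns 0 if the first machine halts first, 1 if the second does, and
    None if the bound is hit (the paper's F simply diverges in that case).
    """
    for _ in range(bound):
        cu = step_u(cu)
        if cu is None:        # first machine halted: input is in W_u
            return 0
        cv = step_v(cv)
        if cv is None:        # second machine halted: input is in W_v
            return 1
    return None

# Toy step function: a 'machine' that halts after k steps.
count_down = lambda k: None if k == 0 else k - 1
print(F(count_down, count_down, 2, 5))   # → 0  (the first machine halts earlier)
```

If the two halting sets are disjoint, at most one branch can ever fire on a given input, which is why F is well-defined exactly where the paper needs it.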
This means that [⌜N_[u,v]⌝, ⌜N_[u,v]⌝] ∈ H, but H is a subset of W_v and therefore is disjoint with W_u. Dually, if f(u, v) ∈ W_v, then N_[u,v] runs circularly on ⌜N_[u,v]⌝, that is, [⌜N_[u,v]⌝, ⌜N_[u,v]⌝] ∈ C. Now C is a subset of W_u, and therefore is disjoint with W_v. Contradiction. This finishes the construction of the function needed to establish effective inseparability of C and H.

Σ⁰₁-Completeness of CommACT

Now we formally establish a one-way encoding theorem, from circular Minsky computations to derivability in
CommACT.

Theorem 10. If M runs circularly on input x, then the sequent

E∗, q_S, a^x ⊢ D (∗)

is derivable in CommACT.

Proof.
We start with deriving the "zero-checking" sequents E∗, z_a, b^b, c^c ⊢ D, and ditto for b and c. The circular derivation is as follows:

z_a, b^b, c^c ⊢ b∗ · c∗ · z_a
z_a, b^b, c^c ⊢ D (∨R)    z_a ⊢ z_a    E∗, z_a, b^b, c^c ⊢ D
E∗, z_a ⊸ z_a, z_a, b^b, c^c ⊢ D (⊸L)
E∗, E, z_a, b^b, c^c ⊢ D (∧L)
E∗, z_a, b^b, c^c ⊢ D (∗L)

and this is how it gets translated into the calculus for CommACT presented in Section 2:

z_a, b^b, c^c ⊢ b∗ · c∗ · z_a
z_a, b^b, c^c ⊢ D (∨R)
⊢ (z_a · b^b · c^c) ⊸ D (·L, ⊸R)    z_a ⊢ z_a    z_a, b^b, c^c ⊢ z_a · b^b · c^c
z_a ⊸ z_a, z_a, b^b, c^c ⊢ z_a · b^b · c^c (⊸L)    D ⊢ D
z_a ⊸ z_a, z_a, b^b, c^c, (z_a · b^b · c^c) ⊸ D ⊢ D (⊸L)
E, z_a, b^b, c^c, (z_a · b^b · c^c) ⊸ D ⊢ D (∧L)
E, (z_a · b^b · c^c) ⊸ D ⊢ (z_a · b^b · c^c) ⊸ D (·L, ⊸R)
E∗ ⊢ (z_a · b^b · c^c) ⊸ D (∗L_ind)
E∗, z_a, b^b, c^c ⊢ D (⊸R_inv)

(Here and further, ⊸R_inv is the inversion of a series of ⊸R applications, which is established by cut with z_a, b^b, c^c, (z_a · b^b · c^c) ⊸ D ⊢ D.)

Next, we produce a circular derivation of (∗) in the following way. Each step of M's execution, ⟨p, a, b, c⟩ → ⟨q, a′, b′, c′⟩, can be represented as a subderivation with E∗, p, a^a, b^b, c^c ⊢ D as the goal and E∗, q, a^{a′}, b^{b′}, c^{c′} ⊢ D as a hypothesis. Moreover, the lowermost rule in this derivation is ∗L, which makes the correctness condition valid. The construction is basically the same as in the proof of Lemma 1.
For inc(p, a, q) we have

p, a^a, b^b, c^c ⊢ D    p ⊢ p    E∗, q, a^{a+1}, b^b, c^c ⊢ D
E∗, q · a, a^a, b^b, c^c ⊢ D (·L)
E∗, p ⊸ (q · a), p, a^a, b^b, c^c ⊢ D (⊸L)
E∗, E, p, a^a, b^b, c^c ⊢ D (∧L)
E∗, p, a^a, b^b, c^c ⊢ D (∗L)

For jzdec(p, a, q₀, q₁) and a ≠ 0,

p, a^a, b^b, c^c ⊢ D    p ⊢ p    a ⊢ a
p, a ⊢ p · a (·R)    E∗, q₁, a^{a−1}, b^b, c^c ⊢ D
E∗, (p · a) ⊸ q₁, p, a^a, b^b, c^c ⊢ D (⊸L)
E∗, E, p, a^a, b^b, c^c ⊢ D (∧L)
E∗, p, a^a, b^b, c^c ⊢ D (∗L)

Finally, for jzdec(p, a, q₀, q₁) and a = 0 we have

p, b^b, c^c ⊢ D    p ⊢ p    E∗, q₀, b^b, c^c ⊢ D    E∗, z_a, b^b, c^c ⊢ D
E∗, q₀ ∨ z_a, b^b, c^c ⊢ D (∨L)
E∗, p ⊸ (q₀ ∨ z_a), p, b^b, c^c ⊢ D (⊸L)
E∗, E, p, b^b, c^c ⊢ D (∧L)
E∗, p, b^b, c^c ⊢ D (∗L)

Here a^a, b^b, c^c, p ⊢ D (in particular, b^b, c^c, p ⊢ D) is easily derivable using ∗R, ·R, and ∨R, and derivability of E∗, z_a, b^b, c^c ⊢ D in CommACT was established earlier.

Next, we connect these subderivations in order to represent the infinite run of M. Since this run is circular, a sequent of the form E∗, p, a^a, b^b, c^c ⊢ D gets repeated, and we arrive at a circular proof:

q_S, a^x ⊢ D    p, a^a, b^b, c^c ⊢ D
E∗, p, a^a, b^b, c^c ⊢ D
…
E∗, E, p, a^a, b^b, c^c ⊢ D
E∗, p, a^a, b^b, c^c ⊢ D (∗L)
…
E∗, E, q_S, a^x ⊢ D
E∗, q_S, a^x ⊢ D (∗L)

Now we translate this circular proof into
CommACT. Let us first consider the upper part of this proof,

p, a^a, b^b, c^c ⊢ D    E∗, p, a^a, b^b, c^c ⊢ D
…
E∗, E, p, a^a, b^b, c^c ⊢ D
E∗, p, a^a, b^b, c^c ⊢ D (∗L)

and translate it into a proof in CommACT using ∗L_ind. Let k be the number of ∗L applications on the main branch; k ≥
1. First,from this circular derivation we can extract derivations of E i , p, a a , b b , c c ⊢ D for 0 ≤ i < k . Indeed, we replace E ∗ with E i in the goal and then proceedupwards, choosing the right branch of the first i applications of ∗ L (at each step i gets decreased by 1) and the left branch at the ( i + 1)-st one.Second, we derive E k , ( p · a a · b b · c c ) ⊸ D, p, a a , b b , c c ⊢ D . Here we replace E ∗ with E k , ( p · a a · b b · c c ) ⊸ D in the goal sequent, and then always choose theright branch, and in the end k reduces to 0, and we enjoy a derivable sequent( p · a a · b b · c c ) ⊸ D, p, a a , b b , c c ⊢ D instead of the backlinked E ∗ , p, a a , b b , c c ⊢ D .Now we glue everything up. The desired sequent E ∗ , p, a a , b b , c c ⊢ D isobtained by Cut from E ∗ ⊢ ( W k − i =0 E i ) · ( E k ) ∗ and ( W k − i =0 E i ) · ( E k ) ∗ , p, a a , b b , c c ⊢ D . The former is a well-known principle of Kleene algebra, which is derivable in ACT and therefore in
CommACT. The derivation of the latter is presented below, where, for brevity, Γ = p, a^a, b^b, c^c and •Γ = p · a^a · b^b · c^c:

…    E^i, Γ ⊢ D    …
⋁_{i=0}^{k−1} E^i, Γ ⊢ D (∨L)
⊢ (⋁_{i=0}^{k−1} E^i) ⊸ (•Γ ⊸ D) (·L, ⊸R)    ⋁_{i=0}^{k−1} E^i ⊢ ⋁_{i=0}^{k−1} E^i    E^k, Γ, •Γ ⊸ D ⊢ D
E^k, Γ, ⋁_{i=0}^{k−1} E^i, (⋁_{i=0}^{k−1} E^i) ⊸ (•Γ ⊸ D) ⊢ D (⊸L)
E^k, (⋁_{i=0}^{k−1} E^i) ⊸ (•Γ ⊸ D) ⊢ (⋁_{i=0}^{k−1} E^i) ⊸ (•Γ ⊸ D) (·L, ⊸R)
(E^k)∗ ⊢ (⋁_{i=0}^{k−1} E^i) ⊸ (•Γ ⊸ D) (∗L_ind)
⋁_{i=0}^{k−1} E^i, (E^k)∗, Γ ⊢ D (⊸R_inv)
(⋁_{i=0}^{k−1} E^i) · (E^k)∗, Γ ⊢ D (·L)

Now we have a finite, non-circular derivation of our main goal E∗, q_S ⊢ D in a combined system, which includes both axioms and rules of CommACT (as defined in Section 2) and the ∗L rule of CommACT∞. We finish our argument by showing that ∗L is derivable in CommACT:

A∗ ⊢ 1 ∨ (A · A∗)    Γ ⊢ C
Γ, 1 ⊢ C (1L)    Γ, A∗, A ⊢ C
Γ, A · A∗ ⊢ C (·L)
Γ, 1 ∨ (A · A∗) ⊢ C (∨L)
Γ, A∗ ⊢ C (Cut)
Here A∗ ⊢ 1 ∨ (A · A∗) is again a principle of Kleene algebra, which is derivable in ACT and therefore in
CommACT.

Now we proceed exactly as in the non-commutative case [11]. Let

K(CommACTω) = {M | (∗) is derivable in CommACTω};
K(CommACT) = {M | (∗) is derivable in CommACT};
C = {M | M runs circularly};
H̄ = {M | M does not halt}.

By Theorem 6 and Theorem 10 we have
C ⊂ K(CommACT) ⊂ K(CommACTω) = H̄. Now Theorem 9 and Theorem 8 immediately yield Σ⁰₁-completeness of K(CommACT) and, therefore, of
CommACT itself:
Theorem 11.
The derivability problem for
CommACT is Σ⁰₁-complete.

Conclusion

In this article, we have established Π⁰₁-completeness of CommACTω and Σ⁰₁-completeness of CommACT. The former is a commutative counterpart of results by Buszkowski and Palka [1, 18]. The latter is a commutative counterpart of an earlier result by the author [9, 11]. In fact, as in the non-commutative case, we have established Σ⁰₁-completeness not only for CommACT, but for a range of logics: namely, any r.e. logic between
CommACT and
CommACTω is Σ⁰₁-complete.

Several questions are left for further research:

1. The complexity question for CommACT without additive connectives (∧ and ∨) is open. Notice that additives are crucial for encoding the jzdec instruction; in [15], there is no jzdec, but there are parallel computations, also simulated using ∨.
2. It is an open question whether the same complexity results hold for the variants of CommACTω and CommACT with distributivity of ∨ over ∧ added.
3. The complexity of the Horn theory for commutative action lattices or even commutative Kleene algebras is, to the best of the author's knowledge, unknown. Comparing with Kozen's result for non-commutative Kleene algebras [8], we conjecture Π⁰₁-completeness for the *-continuous case, and the proof should again use Minsky machines instead of Turing ones.
4. It is also interesting to look at the non-associative, but commutative, version of infinitary action logic. In the non-associative case, it is problematic to define iteration, and it gets replaced with so-called iterative division, that is, compound connectives of the form A∗ ⊸ B and B ⊸ A∗. The interesting phenomenon here is that the corresponding non-commutative system happens to be algorithmically decidable, at least with the distributivity axiom added [22]. On the other hand, as shown in [12], in the associative and non-commutative case iterative division is sufficient for Π⁰₁-hardness.

Acknowledgments

The author is grateful to the participants of the DaLí 2020 online meeting for fruitful discussions, especially on directions of further research.
Financial Support.
The work was supported by the Russian Science Foundation, in cooperation with the Austrian Science Fund, under grant RSF–FWF 20-41-05002.
References [1] W. Buszkowski. On action logic: equational theories of action algebras.
Journal of Logic and Computation, 17(1):199–217, 2007. [2] W. Buszkowski and E. Palka. Infinitary action logic: complexity, models and grammars.
Studia Logica , 89(1):1–18, 2008.[3] A. Das and D. Pous. Non-wellfounded proof theory for (Kleene+action)(algebras+lattices). In , volume 119 of
Leibniz International Proceedings in Informatics (LIPIcs), pages 19:1–19:18, Dagstuhl, Germany, 2018. Schloss Dagstuhl–Leibniz-Zentrum für Informatik. [4] J.-Y. Girard. Linear logic.
Theoretical Computer Science , 50(1):1–102,1987.[5] T. Hoare, B. M¨oller, G. Struth, and I. Wehrman. Concurrent Kleene al-gebra and its foundations.
Journal of Logic and Algebraic Programming ,80:266–296, 2011.[6] M. Kanazawa. The Lambek calculus enriched with additional connectives.
Journal of Logic, Language, and Information , 1(2):141–171, 1992.[7] D. Kozen. On action algebras. In J. van Eijck and A. Visser, editors,
Logic and Information Flow, pages 78–88. MIT Press, 1994. [8] D. Kozen. On the complexity of reasoning in Kleene algebra.
Information and Computation, 179:152–162, 2002. [9] S. Kuznetsov. The logic of action lattices is undecidable. In
Proceedings of the 34th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS 2019). IEEE, 2019. [10] S. Kuznetsov. Complexity of commutative infinitary action logic. In
DaLí 2020: Dynamic Logic. New Trends and Applications, volume 12569 of
Lecture Notes in Computer Science, pages 155–169. Springer, 2020. [11] S. Kuznetsov. Action logic is undecidable.
ACM Transactions on Computational Logic, 2021. To appear. [12] S. L. Kuznetsov and N. S. Ryzhkova. A restricted fragment of the Lambek calculus with iteration and intersection operations.
Algebra and Logic, 59(2):190–241, 2020. [13] S. L. Kuznetsov and S. O. Speranski. Infinitary action logic with exponentiation, 2020. arXiv preprint 2001.06863. [14] J. Lambek. The mathematics of sentence structure.
American Mathematical Monthly, 65:154–170, 1958. [15] P. Lincoln, J. Mitchell, A. Scedrov, and N. Shankar. Decision problems for propositional linear logic.
Annals of Pure and Applied Logic, 56(1–3):239–311, 1992. [16] M. L. Minsky. Recursive unsolvability of Post's problem of "Tag" and other topics in theory of Turing machines.
Annals of Mathematics , 74(3):437–455, 1961.[17] J. Myhill. Creative sets.
Zeitschrift für mathematische Logik und Grundlagen der Mathematik, 1:97–108, 1955. [18] E. Palka. An infinitary sequent system for the equational theory of *-continuous action lattices.
Fundamenta Informaticae , 78(2):295–309, 2007.[19] V. Pratt. Action logic and pure induction. In
JELIA 1990: Logics in AI, volume 478 of
Lecture Notes in Artificial Intelligence, pages 97–120. Springer, 1991. [20] H. Rogers.
Theory of Recursive Functions and Effective Computability. MIT Press, 1987. [21] R. Schroeppel. A two counter machine cannot calculate 2^N. Massachusetts Institute of Technology A.I. Laboratory, Artificial Intelligence Memo. DaLí 2019: Dynamic Logic. New Trends and Applications, volume 12005 of