arXiv [math.LO]

Complexity of majorants
Alexander Shen∗

April 7, 2020
Abstract
The minimal Kolmogorov complexity of a total computable function that exceeds everywhere all total computable functions of complexity at most n is 2^{n+O(1)}. If "everywhere" is replaced by "for all sufficiently large inputs", the answer is n + O(1).

∗ LIRMM, CNRS & University of Montpellier, [email protected]. Supported by RaCAF ANR-15-CE40-0016-01 grant.

The notion of Kolmogorov complexity of a computable function was first considered by Schnorr [1]. The (plain) complexity of a computable function is the minimal length of a program that computes this function. As usual, we require that the programming language is optimal, i.e., leads to a minimal complexity up to O(1). One can also define the plain complexity of a function as the minimal complexity of its programs. In this case we may use any Gödel numbering of programs.

Consider all total computable functions that have complexity at most n. Alexey Milovanov asked the following question: what is the minimal complexity of a total computable function that exceeds all of them? The words "f exceeds g" can be understood in different ways. Here are two possibilities:

• f exceeds g if f(n) > g(n) for all n;
• f weakly exceeds g if f(n) > g(n) for all sufficiently large n.

The following simple result answers both questions.

Theorem.
• The minimal complexity of a total computable function that weakly exceeds all total computable functions of complexity at most n is n + O(1).
• The minimal complexity of a total computable function that exceeds all total computable functions of complexity at most n is 2^{n+O(1)}.

Proof. The first part is easy. Obviously, a function g that weakly exceeds all total computable functions of complexity at most n should have complexity greater than n (since it does not weakly exceed itself). On the other hand, assume that we know the number T_n of programs of total functions that have length at most n. This number can be represented as an (n + 1)-bit string.
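A quick arithmetic check of this count (a toy Python sketch; the helper name `num_strings_up_to` is ours): there are 2^{n+1} − 1 binary strings of length at most n, so T_n < 2^{n+1} indeed fits into an (n + 1)-bit string.

```python
# Count binary strings of length 0..n: 1 + 2 + 4 + ... + 2^n.
def num_strings_up_to(n):
    return sum(2 ** i for i in range(n + 1))

# The geometric sum equals 2^(n+1) - 1, so any count of total programs
# among these strings fits into n + 1 bits.
for n in range(1, 12):
    assert num_strings_up_to(n) == 2 ** (n + 1) - 1
    assert num_strings_up_to(n) < 2 ** (n + 1)
```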
Knowing this string (and therefore knowing n), we can compute the following function g:

to compute g(k), enumerate programs of length at most n that terminate at all inputs 0, 1, . . . , k; as soon as T_n programs with this property are found, output the maximal value of these programs on k, plus 1.

The function g is total since there are T_n total programs. It weakly exceeds all total computable functions of complexity at most n. Indeed, if k is large enough, the only programs of length at most n that terminate on 0, 1, . . . , k are the total ones, so our computation process would discover exactly the total programs and return a number that exceeds all their values at k. Since T_n is encoded by a string of length n + O(1), the complexity of the function g is at most n + O(1). The first part is proven.

For the second part the upper bound is also easy. Indeed, let T_n now be the string of length 2^{n+1} that encodes information about all programs of length at most n, saying which of them are total. Knowing T_n (and therefore knowing n), we compute g(k) as the maximal value of all total programs of length at most n on the input k, plus 1. This function is total, exceeds all total computable functions of complexity at most n (by construction), and has complexity at most 2^{n+1} + O(1) = 2^{n+O(1)}.

The lower bound is a bit more difficult, and it is convenient to use a game argument. Consider the following full information game with two positive integer parameters a and b. Each of the two players, Alice and Bob, constructs sequences of natural numbers: Alice constructs a sequences and Bob constructs b sequences. Initially all the sequences are empty. At any moment Alice may extend any of her sequences by adding one more term (a natural number). Bob may do the same for his sequences. The game is infinite; in the limit Alice has a sequences of natural numbers (finite or infinite) and Bob has b sequences of natural numbers (finite or infinite).
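Returning for a moment to the first construction in the proof: it can be simulated in a toy finite model. In the sketch below (our own illustration, not the paper's formalism) a "program" is a triple (length, halt_bound, func) that halts on input k iff halt_bound is None (a total program) or k < halt_bound; the list PROGRAMS and all names are hypothetical.

```python
# Toy model: a "program" is (length, halt_bound, func); it halts on
# input k iff halt_bound is None (a total program) or k < halt_bound.
PROGRAMS = [
    (1, None, lambda k: k + 3),    # total, length 1
    (2, None, lambda k: 2 * k),    # total, length 2
    (2, 5,    lambda k: 10 ** 6),  # halts only on inputs 0..4
    (3, None, lambda k: k * k),    # total, length 3
]

def g(k, n, T_n):
    """Collect programs of length <= n that halt on 0, ..., k; take the
    first T_n of them and return 1 + the maximum of their values at k.
    (A real implementation would discover them by dovetailing; here the
    list order stands in for the order of discovery.)"""
    found = [f for (length, hb, f) in PROGRAMS
             if length <= n and (hb is None or k < hb)]
    return 1 + max(f(k) for f in found[:T_n])
```

For n = 2 there are T_n = 2 total programs of length at most 2; once k ≥ 5 the non-total program has dropped out of the enumeration, so g(k) = 2k + 1, which weakly exceeds both total functions k + 3 and 2k.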
The winning player is determined by this limit as follows: Bob wins if one of his infinite sequences exceeds all infinite sequences of Alice. (Finite sequences do not matter.)

Since the winner is determined by the limit position, the order of moves does not matter. One may assume, for example, that Alice and Bob make their moves in turn and that each player may skip her/his turn. Indeed, a player loses nothing if (s)he postpones a move.

Increasing b, we make the game easier for Bob, and increasing a we make it easier for Alice. The following lemma solves this game.

Lemma.
• If b ≥ 2^a, Bob has a winning strategy.
• If b < 2^a, Alice has a winning strategy.

Proof of the lemma. The first part of the lemma essentially repeats the argument for the upper bound, and is not used in the sequel. But the strategy is easy, and it is instructive to compare it with the argument given above.

Alice constructs a sequences indexed by a labels. For each subset X of labels (on Alice's side) we allocate one of Bob's sequences; since b ≥ 2^a, this is possible. Bob's sequence that corresponds to a set X is constructed in a trivial way: its k-th term is the maximum of the k-th terms of all Alice's sequences with labels in X, plus 1; the length of Bob's sequence is the minimal length of Alice's sequences with labels in X. (For the empty set X, Bob's sequence may consist of ones; it is infinite.) For the limit state, take the set X of all labels of infinite Alice's sequences. Bob's sequence that corresponds to this X is infinite and exceeds all of them.

The second part of the lemma can be proved by induction. For a = 1 we need to consider only the case b = 1. Then Alice's strategy is to copy Bob's sequence: to win, Bob has to construct an infinite sequence according to the game rules; Alice's sequence will then be the same, and Bob's sequence does not exceed it.

Now the induction step. Imagine that the second claim is true for a = k − 1 and b < 2^{k−1} (equivalently, for b = 2^{k−1} − 1, since decreasing b makes Bob's task only harder). Consider a winning strategy for Alice in this game. We want to use it to construct Alice's strategy for a = k and b < 2^k. This new strategy will consist of two stages.

At the first stage Alice reserves one of her k sequences and does not touch it, so it remains empty, and Alice is in the situation where a = k − 1. There is a winning strategy against Bob with b = 2^{k−1} − 1, and now Bob has more sequences. Still this strategy can be used, mutatis mutandis, up to the moment when Bob has 2^{k−1} or more non-empty sequences. Indeed, the labeling does not matter, so Alice may imagine that she plays only against Bob's non-empty sequences, and there are at most 2^{k−1} − 1 of them (what else could the bound b = 2^{k−1} − 1 mean in this setting?). There are two possibilities.

• Bob never has 2^{k−1} or more non-empty sequences. Then Alice applies her strategy throughout the entire infinite game and wins (according to the induction assumption).

• At some moment Bob creates 2^{k−1} or more non-empty sequences. Then Alice recalls that she has a reserved sequence that is still empty, and writes a large number as the first term of this reserved sequence. The number should be greater than all numbers that appear at the first places of the non-empty Bob sequences. Then Alice extends this reserved sequence to an infinite sequence in an arbitrary way. More precisely, since Alice cannot add infinitely many terms in one move, this decision is essentially a commitment to add these terms one by one, no matter what. After that, the 2^{k−1} (or more) non-empty sequences of Bob become useless, since they do not exceed one of Alice's infinite sequences (the reserved one). Starting from this point, Alice ignores these useless Bob's sequences and plays with the remaining ones. There are fewer than 2^{k−1} of them, so Alice can use the induction assumption and win.

However, there is a problem: the winning strategy that exists by the induction assumption is for the initial position of the game, and now Alice has already put some numbers in her k − 1 sequences. Let N be the maximal length of all her sequences in this position. Alice may add arbitrary numbers to her sequences up to length N, then forget about the first N terms in all sequences and start the game anew. If she wins in this modified game, she wins in the original game, because the first N terms could only make Bob's task harder.

This argument finishes the proof of the lemma.

It remains to explain how the lemma is used to prove the theorem. This is a standard reasoning for game arguments in algorithmic information theory (see [2]) and goes as follows.
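(As an aside, Bob's subset strategy from the first part of the lemma is easy to simulate on finite prefixes. In the sketch below — our toy Python, where finite lists stand in for the players' sequences and the name `bob_sequences` is ours — Bob keeps one sequence per non-empty subset X of Alice's labels; the empty set is omitted since it plays no role on finite prefixes.)

```python
from itertools import combinations

def bob_sequences(alice_seqs):
    """For every non-empty subset X of Alice's labels, build Bob's
    sequence for X: its k-th term is 1 + the maximum of the k-th terms
    of Alice's sequences with labels in X; its length is the minimal
    length over X."""
    a = len(alice_seqs)
    result = {}
    for r in range(1, a + 1):
        for X in combinations(range(a), r):
            length = min(len(alice_seqs[i]) for i in X)
            result[X] = [1 + max(alice_seqs[i][k] for i in X)
                         for k in range(length)]
    return result

# If X is the set of labels of Alice's longest sequences, Bob's sequence
# for X is as long as the shortest of them and exceeds each termwise.
seqs = bob_sequences([[3, 1, 4], [2, 7]])
assert seqs[(0, 1)] == [4, 8]
```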
Consider the game for b = 2^a − 1 and a = 2^n, and consider a "blind" strategy for Bob where he uses all programs of length less than a (there are 2^a − 1 of them) to generate his sequences (applying each program to 0, 1, . . . sequentially). Since b < 2^a, Alice applies her winning strategy from the lemma, constructs a = 2^n sequences and wins. These 2^n Alice's sequences are computable, since both Alice's and Bob's strategies are computable. To compute each of these sequences it is enough to know its ordinal number, which can be encoded by an n-bit string. Note that this string determines n, so we do not need to provide n separately. Therefore, these 2^n sequences all have complexity n + O(1), and no computable total function of complexity less than a = 2^n exceeds all infinite sequences (= total functions) among them. Shifting n by a constant, we conclude that a total computable function that exceeds all total computable functions of complexity at most n has complexity at least 2^{n−O(1)}. This finishes the proof of the lower bound.

The author is grateful to all members of the LIRMM ESCAPE team (Montpellier) and the MSU/HSE Kolmogorov seminar (Moscow) who asked the question and discussed the answer.

References

[1] C.P. Schnorr, Optimal Enumerations and Optimal Gödel Numberings, Mathematical Systems Theory (Theory of Computing Systems), (2), 182–191.

[2] An.A. Muchnik, I. Mezhirov, A. Shen, N. Vereshchagin, Game interpretation of Kolmogorov complexity, https://arxiv.org/abs/1003.4712