Separations of non-monotonic randomness notions
(Preliminary version, 7 July 2009)

Laurent Bienvenu, Rupert Hölzl, Thorsten Kräling, and Wolfgang Merkle
Institut für Informatik, Ruprecht-Karls-Universität, Heidelberg, Germany
Abstract.
In the theory of algorithmic randomness, several notions of random sequence are defined via a game-theoretic approach, and the notions that received most attention are perhaps Martin-Löf randomness and computable randomness. The latter notion was introduced by Schnorr and is rather natural: an infinite binary sequence is computably random if no total computable strategy succeeds on it by betting on bits in order. However, computably random sequences can have properties that one may consider to be incompatible with being random, in particular, there are computably random sequences that are highly compressible. The concept of Martin-Löf randomness is much better behaved in this and other respects, on the other hand its definition in terms of martingales is considerably less natural. Muchnik, elaborating on ideas of Kolmogorov and Loveland, refined Schnorr's model by also allowing non-monotonic strategies, i.e., strategies that do not bet on bits in order. The resulting "non-monotonic" notion of randomness, now called Kolmogorov-Loveland randomness, has been shown to be quite close to Martin-Löf randomness, but whether these two classes coincide remains a fundamental open question. In order to get a better understanding of non-monotonic randomness notions, Miller and Nies introduced some interesting intermediate concepts, where one only allows non-adaptive strategies, i.e., strategies that can still bet non-monotonically, but such that the sequence of betting positions is known in advance (and computable). Recently, these notions were shown by Kastermans and Lempp to differ from Martin-Löf randomness. We continue the study of the non-monotonic randomness notions introduced by Miller and Nies and obtain results about the Kolmogorov complexities of initial segments that may and may not occur for such sequences, where these results then imply a complete classification of these randomness notions by order of strength.
Random sequences are the central object of study in algorithmic randomness and have been investigated intensively over the last decade, which led to a wealth of interesting results clarifying the relations between the various notions of randomness and revealing interesting interactions with notions such as computational power [2, 5, 11]. Intuitively speaking, a binary sequence is random if the bits of the sequence do not have effectively detectable regularities. This idea can be formalized in terms of betting strategies, that is, a sequence will be called random in case the capital gained by successive bets on the bits of the sequence according to a fixed betting strategy must remain bounded, with fair payoff and a fixed set of admissible betting strategies understood.

The notions of random sequences that have received most attention are Martin-Löf randomness and computable randomness. Here a sequence is called computably random if no total computable betting strategy can achieve unbounded capital by betting on the bits of the sequence in the natural order, a definition that indeed is natural and suggests itself. However, computably random sequences may lack certain properties associated with the intuitive understanding of randomness, for example there are such sequences that are highly compressible, i.e., show a large amount of redundancy, see Theorem 4 below. Martin-Löf randomness behaves much better in this and other respects. Indeed, the Martin-Löf random sequences can be characterized as the sequences that are incompressible in the sense that all their initial segments have essentially maximal Kolmogorov complexity, and in fact this holds for several versions of Kolmogorov complexity according to celebrated results by Schnorr, by Levin and, recently, by Miller and Yu [2].
On the other hand, it has been held against the concept of Martin-Löf randomness that its definition involves effective approximations, i.e., a very powerful, hence rather unnatural model of computation, and indeed the usual definition of Martin-Löf randomness in terms of left-computable martingales, that is, in terms of betting strategies where the gained capital can be effectively approximated from below, is not very intuitive.

It can be shown that Martin-Löf randomness strictly implies computable randomness. According to the preceding discussion the latter notion is too inclusive while the former may be considered unnatural. Ideally, we would therefore like to find a more natural characterization of ML-randomness; or, if that is impossible, we are alternatively interested in a notion that is close in strength to ML-randomness, but has a more natural definition. One promising way of achieving such a more natural characterization or definition could be to use computable betting strategies that are more powerful than those used to define computable randomness.

Muchnik [10] proposed to consider computable betting strategies that are non-monotonic in the sense that the bets on the bits need not be done in the natural order, but such that the bit to bet on next can be computed from the already scanned bits. The corresponding notion of randomness is called Kolmogorov-Loveland randomness because Kolmogorov and Loveland independently had proposed concepts of randomness defined via non-monotonic selection of bits. Kolmogorov-Loveland randomness is implied by and in fact is quite close to Martin-Löf randomness, see Theorem 15 below, but whether the two notions are distinct is one of the major open problems of algorithmic randomness.
In order to get a better understanding of this open problem and of non-monotonic randomness in general, Miller and Nies [9] introduced restricted variants of Kolmogorov-Loveland randomness, where the sequence of betting positions must be non-adaptive, i.e., can be computed in advance without knowing the sequence on which one bets.

The randomness notions mentioned so far are determined by two parameters that correspond to the columns and rows, respectively, of the table in Figure 1. First, the sequence of places that are scanned and on which bets may be placed, while always being given effectively, can just be monotonic, can be equal to π(0), π(1), ... for a permutation or an injection π from ℕ to ℕ, or can be adaptive, i.e., the next bit depends on the bits already scanned. Second, once the sequence of scanned bits is determined, betting on these bits can be according to a betting strategy where the corresponding martingale is total or partial computable, or is left-computable. The known inclusions between the corresponding classes of random sequences are shown in Figure 1, see Section 2 for technical details and for the definitions of the class acronyms that occur in the figure.

                     monotonic    permutation    injection    adaptive
    total              TMR     =     TPR      ⊇     TIR     ⊇    KLR
                        ⊆             ⊆              ⊆            =
    partial            PMR     ⊇     PPR      ⊇     PIR     ⊇    KLR
                        ⊆             ⊆              ⊆            ⊆
    left-computable    MLR     =     MLR      =     MLR     =    MLR

  Fig. 1. Known class inclusions

The classes in the last row all coincide with the class of Martin-Löf random sequences, since for every k one can effectively enumerate an open cover of measure at most 1/k for all the sequences on which some universal left-computable martingale exceeds k. Furthermore, the classes in the first and second row of the last column coincide with the class of Kolmogorov-Loveland random sequences, because it can be shown that total and partial adaptive betting strategies yield the same concept of random sequence [6]. Finally, it follows easily from results of Buhrman et al. [1] that the class TMR of computably random sequences coincides with the class
TPR of sequences that are random with respect to total permutation martingales, i.e., the ability to scan the bits of a sequence according to a computable permutation does not increase the power of total martingales.

Concerning non-inclusions, it is well-known that
KLR ⊊ PMR ⊊ TMR. Furthermore, Kastermans and Lempp [3] have recently shown that the Martin-Löf random sequences form a proper subclass of the class PIR of partial injective random sequences, i.e., MLR ⊊ PIR.

Apart from trivial consequences of the definitions and the results just mentioned, nothing has been known about the relations of the randomness notions between computable randomness and Martin-Löf randomness in Figure 1. In what follows, we investigate the six randomness notions that are shown in Figure 1 in the range between
PIR and TMR, i.e., between partial injective randomness as introduced below and computable randomness. We obtain a complete picture of the inclusion structure of these notions; more precisely, we show that the notions are mutually distinct and indeed are mutually incomparable with respect to set-theoretical inclusion, except for the inclusion relations that follow trivially by definition and by the known relation TMR ⊆ TPR, see Figure 2 at the end of this paper. Interestingly, these separation results are obtained by investigating the possible values of the Kolmogorov complexity of initial segments of random sequences for the different strategy types, and for some randomness notions we obtain essentially sharp bounds on how low these complexities can be.

Notation. We conclude the introduction by fixing some notation. The set of finite strings (or finite binary sequences, or words) is denoted by 2^{<ω}, ε being the empty word. We denote the set of infinite binary sequences by 2^ω. Given two finite strings w, w′, we write w ⊑ w′ if w is a prefix of w′. Given an element x of 2^ω or 2^{<ω}, x(i) denotes the i-th bit of x (where by convention there is a 0-th bit and x(i) is undefined if x is a word of length less than i + 1). If A ∈ 2^ω and X = {x_0 < x_1 < x_2 < ...} is a subset of ℕ, then A↾X is the finite or infinite binary sequence A(x_0)A(x_1)... We abbreviate A↾{0, ..., n − 1} by A↾n (i.e., the prefix of A of length n). C and K denote plain and prefix-free Kolmogorov complexity, respectively [2, 5]. The function log designates the logarithm of base 2. An order is a function h: ℕ → ℕ that is non-decreasing and tends to infinity.

We now review the concepts of martingale and betting strategy that are central for the unpredictability approach to defining notions of an infinite random sequence.
Definition 1. A martingale is a nonnegative, possibly partial, function d: 2^{<ω} → ℚ such that for all w ∈ 2^{<ω}, d(w0) is defined if and only if d(w1) is, and if these are defined, then so is d(w), and the relation

  d(w) = (d(w0) + d(w1))/2

holds. A martingale succeeds on a sequence A ∈ 2^ω if d(A↾n) is defined for all n, and lim sup_{n→+∞} d(A↾n) = +∞. We denote by Succ(d) the success set of d, i.e., the set of sequences on which d succeeds.

Intuitively, a martingale represents the capital of a player who bets on the bits of a sequence A ∈ 2^ω in order, where at every round she bets some amount of money on the value of the next bit of A. If her guess is correct, she doubles her stake. If not, she loses her stake. The quantity d(w), with w a string of length n, represents the capital of the player before the n-th round of the game (by convention there is a 0-th round) when the first n bits revealed so far are those of w.

We say that a sequence A is computably random if no total computable martingale succeeds on it. One can extend this in a natural way to partial computable martingales: a sequence A is partial computably random if no partial computable martingale succeeds on it. No matter whether we consider partial or total computable martingales, this game model can be seen as too restrictive by the discussion in the introduction. Indeed, one could allow the player to bet on bits in any order she likes (as long as she can visit each bit at most once). This leads us to extend the notion of martingale to the notion of strategy.

Definition 2. A betting strategy is a pair b = (d, σ) where d is a martingale and σ: 2^{<ω} → ℕ is a function.

For a strategy b = (d, σ), the term σ is called the scan rule. For a string w, σ(w) represents the position of the next bit to be visited if the player has read the sequence of bits w during the previous moves. And as before, d specifies how much money is bet at each move. Formally, given an A ∈ 2^ω, we define by induction a sequence of positions n_0, n_1, ... by

  n_0 = σ(ε),   n_{k+1} = σ(A(n_0)A(n_1)...A(n_k)) for all k ≥ 0.

The strategy b = (d, σ) succeeds on A if the n_i are all defined and pairwise distinct (i.e., no bit is visited twice) and

  lim sup_{k→+∞} d(A(n_0)...A(n_k)) = +∞.

Here again, a betting strategy b = (d, σ) can be total or partial. In fact, its partiality can be due either to the partiality of d or to the partiality of σ. We say that a sequence is Kolmogorov-Loveland random if no total computable betting strategy succeeds on it. As noted in [8], the concept of Kolmogorov-Loveland randomness remains the same if one replaces "total computable" by "partial computable" in the definition. Kolmogorov-Loveland randomness is implied by Martin-Löf randomness, and whether the two notions can be separated is one of the most important open problems on algorithmic randomness. As we discussed above, Miller and Nies [9] proposed to look at intermediate notions of randomness, where the power of non-monotonic betting strategies is limited. In the definition of a betting strategy, the scan rule is adaptive, i.e., the position of the next visited bit depends on the bits previously seen. It is interesting to look at non-adaptive games.
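To make the game of Definition 2 concrete, here is a small Python sketch (our own illustration, not part of the paper) that plays a betting strategy against a sequence; the function names and the example strategy are hypothetical.

```python
# A minimal sketch (our illustration, not from the paper) of the game in
# Definition 2.  The martingale d is represented implicitly: `stake(history)`
# returns the amount wagered and the guessed bit, and the capital evolves
# with fair payoff (a correct guess wins the stake, a wrong one loses it).

def run_strategy(A, sigma, stake, capital=1.0, moves=20):
    """Play `moves` rounds of the betting strategy b = (d, sigma) on the
    sequence A (a function from positions to bits).  `sigma(history)` is
    the scan rule: the position of the next bit to visit."""
    history = ""              # bits seen so far, in visiting order
    visited = set()           # positions already visited
    for _ in range(moves):
        n = sigma(history)
        if n in visited:      # no bit may be visited twice
            raise ValueError("scan rule revisited a position")
        visited.add(n)
        amount, guess = stake(history)
        bit = A(n)
        capital += amount if bit == guess else -amount
        history += str(bit)
    return capital

# Hypothetical example: a non-monotonic scan visiting positions 0, 2, 4, ...
# of the sequence 010101..., always staking 0.5 on the bit being 0.
A = lambda n: 0 if n % 2 == 0 else 1
sigma = lambda h: 2 * len(h)
stake = lambda h: (0.5, 0)
print(run_strategy(A, sigma, stake))   # every guess is correct: 11.0
```

On this sequence every guess is correct, so the capital grows without bound as the number of moves increases; a strategy of this kind is exactly what the randomness notions below must defeat.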
Definition 3.
In the above definition of a strategy, when σ(w) only depends on the length of w for all w (i.e., the decision of which bit should be chosen at each move is independent of the values of the bits seen in previous moves), we identify σ with the (injective) function π: ℕ → ℕ, where for all n, π(n) is the value of σ on words of length n (π(n) indicates the position of the bit visited during the n-th move), and we say that b = (d, π) is an injection strategy. If moreover π is bijective, we say that b is a permutation strategy. If π is the identity, the strategy b = (d, π) is said to be monotonic, and can clearly be identified with the martingale d.

All this gives a number of possible non-adaptive, non-monotonic randomness notions: one can consider either monotonic, permutation, or injection strategies, and either total computable or partial computable ones. This gives a total of six randomness classes, which we denote by

  TMR, TPR, TIR, PMR, PPR, and PIR,   (1)

where the first letter indicates whether we consider total (T) or partial (P) strategies, and the second indicates whether we look at monotonic (M), permutation (P) or injection (I) strategies. For example, the class TMR is the class of computably random sequences, while the class PIR is the class of sequences A such that no partial injection strategy succeeds on A. Recall in this connection that the known inclusions between the six classes in (1) and the classes KLR and MLR of Kolmogorov-Loveland random and Martin-Löf random sequences have been shown in Figure 1 above.
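As a quick illustration (ours, not from the paper), the three non-adaptive strategy types differ only in the fixed function π that replaces the scan rule; the example injections below are hypothetical.

```python
# A hypothetical illustration (ours): in the non-adaptive case the scan rule
# sigma(w) depends only on |w|, so it is just a fixed injection pi: N -> N.

def make_nonadaptive_scan(pi):
    """Turn an injection pi into a scan rule sigma(history) = pi(|history|)."""
    return lambda history: pi(len(history))

pi_injection   = lambda n: 3 * n    # injection strategy: visits 0, 3, 6, ...
pi_permutation = lambda n: n ^ 1    # permutation strategy: swaps 2k and 2k+1
pi_monotonic   = lambda n: n        # monotonic strategy: the identity

sigma = make_nonadaptive_scan(pi_permutation)
# the positions visited do not depend on the bits seen so far:
print([sigma("0" * k) for k in range(6)])   # [1, 0, 3, 2, 5, 4]
```

Note that `pi_injection` is injective but not onto ℕ, while `pi_permutation` is a bijection; this is precisely the distinction between the I and P columns of Figure 1.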
We begin our study with the randomness notions arising from the game model where strategies are total computable. As we will see, in this model it is possible to construct sequences that are random and yet have very low Kolmogorov complexity (i.e., all their initial segments are of low Kolmogorov complexity). We will see in the next section that this is no longer the case when we allow partial computable strategies in the model.
The following theorem is a first illustration of the phenomenon we just described.
Theorem 4 (Lathrop and Lutz [4], Muchnik [10]).
For every computable order h, there is a sequence A ∈ TMR such that, for all n ∈ ℕ, C(A↾n | n) ≤ h(n) + O(1).

Proof (Idea). Defeating one total computable martingale is easy and can be done computably, i.e., for every total computable martingale d there exists a sequence A, uniformly computable in d, such that A ∉ Succ(d). Indeed, given a martingale d and any string w, one has either d(w0) ≤ d(w) or d(w1) ≤ d(w). Thus, one can easily construct a computable sequence A by setting A↾0 = ε and, by induction, having defined A↾n, choosing A↾(n+1) = (A↾n)i where i ∈ {0, 1} is such that d((A↾n)i) ≤ d(A↾n). This can of course be done computably since d is total computable, and by construction of A, d(A↾n) is non-increasing, meaning in particular that d does not succeed against A.

Defeating a finite number of total computable martingales is equally easy. Indeed, given a finite number d_1, ..., d_k of such martingales, their sum D = d_1 + ... + d_k is itself a total computable martingale (this follows directly from the definition). Thus, we can construct as above a computable sequence A that defeats D. And since D ≥ d_i for all 1 ≤ i ≤ k, this implies that A defeats all the d_i. Note that this argument would work just as well if we had taken D to be any weighted sum α_1 d_1 + ... + α_k d_k, with positive rational constants α_i.

We now need to deal with the general case where we have to defeat all total computable martingales simultaneously. We will again proceed using a diagonalization technique. Of course, this diagonalization cannot be carried out effectively, since there are infinitely many such martingales and since we do not even know whether any one given partial computable martingale is total. The first problem can easily be overcome by introducing the martingales to diagonalize against one by one instead of all at the beginning. So at first, for a number of stages we will only take into account the first computable martingale d_1. Then (maybe after a long time) we may introduce the second martingale d_2, with a small coefficient α_2 (to ensure that introducing d_2 does not cost us too much), and then consider the martingale d_1 + α_2 d_2. Much later we can introduce the third martingale d_3 with an even smaller coefficient α_3, and diagonalize against d_1 + α_2 d_2 + α_3 d_3, and so on. So in each step of the construction we have to consider just a finite number of martingales.

The non-effectivity of the construction arises from the second problem, deciding which of our partial computable martingales are total. However, once we are supplied with this additional information, we can effectively carry out the construction of A. And since for each step we need to consider only finitely many potentially total martingales, the information we need to construct the first n bits of A for some fixed n is finite, too. Say, for example, that for the first n stages of the construction – i.e., to define A↾n – we decided on only considering k martingales d_1, ..., d_k. Then we need no more than k bits, carrying the information which martingales among d_1, ..., d_k are total, to describe A↾n. That way, we get C(A↾n | n) ≤ k + O(1).

As can be seen from the above example, the complexity of descriptions of prefixes of A depends on how fast we introduce the martingales. This is where our orders come into play. Fix a fast-growing computable function f with f(0) = 0, to be specified later. We will introduce a new martingale at every position of type f(k), that is, between positions f(k) and f(k+1), we will only diagonalize against k + 1 martingales, hence by the above discussion, for every n ∈ [f(k), f(k+1)), we have

  C(A↾n | n) ≤ k + O(1).

Thus, if the function f grows faster than the inverse function h⁻¹ of a given order h, we get

  C(A↾n | n) ≤ h(n) + O(1)

for all n. □
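The basic step of the proof, defeating a single total martingale by always moving to a child where the capital does not increase, can be sketched as follows (our own illustration; the example martingale is hypothetical).

```python
# A sketch (ours) of the basic diagonalization step in the proof: given a
# total martingale d with d(w) = (d(w0) + d(w1)) / 2, at least one child
# does not increase the capital, so following such children computably
# yields a sequence along which d is non-increasing, hence fails.

def diagonalize(d, length):
    """Return a prefix of a computable sequence defeating the martingale d."""
    w = ""
    for _ in range(length):
        w += "0" if d(w + "0") <= d(w) else "1"
    return w

# Hypothetical example martingale: bet everything on 0 at every round
# (doubles on each 0, goes broke on the first 1).
def d(w):
    c = 1.0
    for bit in w:
        c = 2 * c if bit == "0" else 0.0
    return c

A = diagonalize(d, 8)
print(A, d(A))   # 10000000 0.0 -- d never gains along A
```

To defeat finitely many martingales one would diagonalize against their (weighted) sum in the same way, which is exactly how the full construction proceeds stage by stage.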
It turns out that, perhaps surprisingly, the classes TMR and TPR coincide. This fact was stated explicitly in Merkle et al. [8], but is easily derived from the ideas introduced in Buhrman et al. [1]. We present the main ideas of their proof as we will later need them. We shall prove:
Theorem 5.
Let b = (d, π) be a total computable permutation strategy. There exists a total computable martingale d′ such that Succ(b) ⊆ Succ(d′).

This theorem states that total permutation strategies are no more powerful than total monotonic strategies, which obviously entails TMR = TPR. Before we can prove it, we first need a definition.
Definition 6.
Let b = (d, π) be a total injective strategy and let w ∈ 2^{<ω}. We can run the strategy b on w as if it were an element of 2^ω, stopping the game when b asks to bet on a bit in a position outside w. This game is of course finite (for a given w) since at most |w| bets can be made. We define ˆb(w) to be the capital of b at the end of this game. Formally:

  ˆb(w) = d(w(π(0)) ... w(π(N−1)))

where N is the smallest integer such that π(N) ≥ |w|.

Note that if b = (d, π) is a total computable injection strategy, then ˆb is total computable. If ˆb were itself a monotonic martingale, Theorem 5 would be proven. This is however not the case in general: suppose d(ε) = 1, d(0) = 2, d(1) = 0, and π(0) = 1, π(1) = 5 (i.e., b first visits the bit in position 1, betting everything on the value 0, then visits the bit in position 5). We then have ˆb(0) = 1 and ˆb(1) = 1, but ˆb(00) = 2, ˆb(01) = 2, ˆb(10) = 0 and ˆb(11) = 0, which shows that ˆb is not a martingale.

The trick is, given a betting strategy b and a word w, to look at the expected value of b on w, i.e., to look at the mathematical expectation of ˆb(w′) for large enough extensions w′ of w. Specifically, given a total betting strategy b = (d, π) and a word w of length n, we take an integer M large enough to have π([0, ..., M − 1]) ∩ [0, ..., n − 1] = π(ℕ) ∩ [0, ..., n − 1] (i.e., b will never bet on a bit in a position less than n after the M-th move), and define:

  Av_b(w) = 2^{−(M−n)} · Σ_{w ⊑ w′, |w′| = M} ˆb(w′)

Proposition 7 (Buhrman et al. [1], Kastermans-Lempp [3]).
(i) The quantity Av_b(w) (defined above) is well-defined, i.e., does not depend on M as long as M satisfies the required condition.
(ii) For a total injective strategy b, Av_b is a martingale.
(iii) For a given injective strategy b and a given word w of length n, Av_b(w) can be computed if we know the set π(ℕ) ∩ [0, ..., n − 1]. In particular, if b is a total computable permutation strategy, then Av_b is total computable.

As Buhrman et al. [1] explained, it is not true in general that if a total computable injective strategy b succeeds against a sequence A, then Av_b also succeeds on A. However, this can be dealt with using the well-known "saving trick". Suppose we are given a martingale d with initial capital, say, 1. Consider the variant d′ of d that does the following: when run on a given sequence A, d′ initially plays exactly as d. If at some stage of the game d′ reaches a capital of 2 or more, it then puts half of its capital on a "bank account", which will never be used again. From that point on, d′ bets half of what d does, i.e., starts behaving like d/2. If its capital again doubles, it saves half of it once more and starts behaving like d/4, and so on.

For every martingale d′ that behaves as above (i.e., saves half of its capital as soon as it exceeds twice its starting capital), we say that d′ has the "saving property". It is clear from the definition that if d is computable, then so is d′, and moreover d′ can be uniformly computed given an index for d. Moreover, if for some sequence A one has

  lim sup_{n→+∞} d(A↾n) = +∞

then

  lim_{n→+∞} d′(A↾n) = +∞,

which in particular implies Succ(d) ⊆ Succ(d′) (it is easy to see that it is in fact an equality). Thus, whenever one considers a martingale d, one can assume without loss of generality that it has the saving property (as long as we are only interested in the success set of martingales, not in the growth rate of their capital). The key property (for our purposes) of saving martingales is the following.

Lemma 8.
Let b = (d, π) be a total injective strategy such that d has the saving property. Let d′ = Av_b. Then Succ(b) ⊆ Succ(d′).

Proof. Suppose that b = (d, π) succeeds on a sequence A. Since d has the saving property, for arbitrarily large k there exists a finite prefix A↾n of A such that a capital of at least k is saved during the finite game of b against A. We then have ˆb(w′) ≥ k for all extensions w′ of A↾n (as a saved capital is never used), which by definition of Av_b implies Av_b(A↾m) ≥ k for all m ≥ n. Since k can be chosen arbitrarily large, this finishes the proof. □

Now the proof of Theorem 5 is as follows. Let b = (d, π) be a total computable permutation strategy. By the above discussion, let d′ be the saving version of d, so that Succ(d) ⊆ Succ(d′). Setting b′ = (d′, π), we have Succ(b) ⊆ Succ(b′). By Proposition 7 and Lemma 8, d′′ = Av_{b′} is a total computable martingale, and

  Succ(b) ⊆ Succ(b′) ⊆ Succ(d′′)

as wanted.

While the class of computably random sequences (i.e., the class
TMR) is closed under computable permutations of the bits, we now see that this result does not extend to computable injections. To wit, the following theorem is true.
Theorem 9.
Let A ∈ 2^ω and let {n_k}_{k∈ℕ} be a computable sequence of integers such that n_{k+1} ≥ 2n_k for all k. Suppose that A is such that

  C(A↾n_k | k) ≤ log(n_k) − 3 log(log(n_k))

for infinitely many k. Then A ∉ TIR.

Proof.
Let A be a sequence satisfying the hypothesis of the theorem. Assuming, without loss of generality, that n_0 = 0, we partition ℕ into an increasing sequence of intervals I_0, I_1, I_2, ... where I_k = [n_k, n_{k+1}). Notice that we have for all k:

  C(A↾I_k | k) ≤ C(A↾n_{k+1} | k + 1) + O(1)

By the hypothesis of the theorem, the right-hand side of the above inequality is bounded by log(n_{k+1}) − 3 log(log(n_{k+1})) + O(1) for infinitely many k. Additionally, we have |I_k| = n_{k+1} − n_k, which by hypothesis on the sequence n_k implies |I_k| ≥ n_{k+1}/2, and hence log(|I_k|) = log(n_{k+1}) + O(1) and log(log(|I_k|)) = log(log(n_{k+1})) + O(1). It follows that

  C(A↾I_k | k) ≤ log(|I_k|) − 3 log(log(|I_k|)) + O(1)

for infinitely many k, hence

  C(A↾I_k | k) ≤ log(|I_k|) − 2 log(log(|I_k|))

for infinitely many k.

Let us call S_k the set of strings w of length |I_k| such that C(w | |I_k|) ≤ log(|I_k|) − 2 log(log(|I_k|)) (to which A↾I_k belongs for infinitely many k). By the standard counting argument, there are at most

  s_k = 2^{log(|I_k|) − 2 log(log(|I_k|))} = |I_k| / log²(|I_k|)

strings in S_k. For every k, we split I_k into s_k consecutive disjoint intervals of equal length:

  I_k = J⁰_k ∪ J¹_k ∪ ... ∪ J^{s_k−1}_k

We design a betting strategy as follows. We start with a capital of 2. We then reserve for each k an amount 1/(k+1)² to be bet on the bits in positions in I_k (this way, the total amount we distribute is smaller than 2), and we split this evenly between the J^i_k, i.e., we reserve an amount 1/(s_k · (k+1)²) for every J^i_k. We then enumerate the sets S_k in parallel. Whenever the e-th element w^e_k of some S_k is enumerated, we see w^e_k as a possible candidate to be equal to A↾I_k, and we bet the reserved amount 1/(s_k · (k+1)²) on the fact that A↾I_k coincides with w^e_k on the bits whose position is in J^e_k. If we are successful (this in particular happens whenever w^e_k = A↾I_k), our reserved capital for this J^e_k is multiplied by 2^{|J^e_k|}, i.e., we now have for this J^e_k a capital of

  (1 / (s_k · (k+1)²)) · 2^{|I_k|/s_k}

Replacing s_k by its value (and remembering that |I_k| ≥ 2^{k−O(1)}), an elementary calculation shows that this quantity is greater than 1 for almost all k. Thus, our betting strategy succeeds on A. Indeed, for infinitely many k, A↾I_k is an element of S_k, hence for some e we will be successful in the above sub-strategy, making an amount of money greater than 1 for infinitely many k, hence our capital tends to infinity throughout the game. Finally, it is easy to see that this betting strategy is total: it simply is a succession of doubling strategies on an infinite c.e. set of words, and it is injective as the J^e_k form a partition of ℕ, and the order of the bits we bet on is independent of A (in fact, we see that our betting strategy succeeds on all sequences satisfying the hypothesis of the theorem). □
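As a sanity check on the final estimate (under our reading of the bounds, and with the hypothetical choice n_k = 2^k, which satisfies n_{k+1} ≥ 2n_k), one can verify numerically that the payoff of a winning sub-strategy exceeds 1 for all large k:

```python
import math

# Sanity check (ours, with the hypothetical choice n_k = 2^k): the payoff of
# a winning sub-strategy, 2^(|I_k|/s_k) / (s_k * (k+1)^2), exceeds 1 for
# almost all k, where s_k = |I_k| / (log |I_k|)^2 bounds the size of S_k.

def payoff(k):
    I = 2 ** (k + 1) - 2 ** k            # |I_k| = n_{k+1} - n_k
    s = I / math.log2(I) ** 2            # number of candidate strings s_k
    return 2 ** (I / s) / (s * (k + 1) ** 2)

print([payoff(k) > 1 for k in range(4, 10)])   # all True
```

Since |I_k|/s_k = log²(|I_k|) grows like k², the numerator 2^{log²(|I_k|)} dwarfs the polynomial-times-exponential denominator, which is the "elementary calculation" invoked in the proof.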
As an immediate corollary, we get the following.
Corollary 10.
If for a sequence A we have for all n

  C(A↾n | n) < log n − 3 log(log n) + O(1),

then A ∉ TIR.

Another interesting corollary of our construction is that the class of all computable sequences can be covered by a single total computable injective strategy.
Corollary 11.
There exists a single total computable injective strategy which succeeds against all computable elements of 2^ω.

Proof. This is because, as we explained above, the strategy we construct in the proof of Theorem 9 succeeds against every sequence A such that C(A↾n_k | k) ≤ log(n_k) − 3 log(log(n_k)) for infinitely many k. This in particular includes all computable sequences A, for which C(A↾n_k | k) = O(1). □

The lower bound on Kolmogorov complexity given in Theorem 9 is quite tight, as witnessed by the following theorem.
Theorem 12.
For every computable order h there is a sequence A ∈ TIR such that C( A (cid:22) n | n ) ≤ log( n ) + h ( n ) + O(1) . In particular, we have C( A (cid:22) n ) ≤ n ) + h ( n ) + O(1) .Proof. The proof is a modification of the proof of Theorem 4. This time, wewant to diagonalize against all non-monotonic total computable injectivebetting strategies. Like in the proof of Theorem 4, we add them one by one,discarding the partial strategies. However, to achieve the construction of A by diagonalization, we will diagonalize against the average martingales ofthe strategies we consider. As explained on page 11, we can assume thatall total computable injective strategies have the saving property, hencedefeating Av b is enough to defeat b (by Lemma 8). The proof thus goesas follows:Fix a fast growing computable function f , to be specified later. We startwith a martingale D = 1 (the constant martingale equal to 1) and w = (cid:15) .For all k we do the following. Assume we have constructed a prefix w k of A of length f ( k ), and that we are currently diagonalizing against amartingale D k , so that D k ( w k ) <
2. We then enumerate a new partialcomputable injective betting strategy b . If it is not total, we memorizethis fact using one extra bit of information, and we set D k +1 = D k .14therwise, we set d k +1 = Av b and compute a positive rational α k +1 suchthat ( D k + α k +1 d k +1 )( w k ) <
2, and finally set D k +1 = D k + α k +1 d k +1 .Then, we define w k +1 to be the extension of w k of length f ( k + 1)by the usual diagonalization against D k +1 , maintaining the inequality D k +1 ( u ) < u of w k +1 . The infinite sequence A obtainedthis way defeats all the average martingales of all total computable injec-tive strategies, hence by Lemma 8, A ∈ TIR .It remains to show that A has low Kolmogorov complexity. Suppose wewant to describe A (cid:22) n for some n ∈ [ f ( k ) , f ( k + 1)). This can be done bygiving n , the subset of { , . . . , k } (of complexity k + O (1)) correspondingto the indices of the total computable injective strategies among the first k partial computable ones, and by giving the restriction of D k +1 to wordsof length at most n . From all this, A (cid:22) n can be reconstructed followingthe above construction. It remains to evaluate the complexity of therestriction of D k +1 to words of length at most n . We already know the totalcomputable injective strategies b , . . . , b k that are being considered in thedefinition of D k +1 . For all i , let π i be the injection associated to b i . We needto compute, for all 0 ≤ i ≤ k , the martingale d i = Av b i on words of lengthat most n . By Proposition 7, this can be done knowing π i ( N ) ∩ [0 , . . . , n − ≤ i ≤ k . But if the π i are known, this set is uniformly c.e. in i, n .Hence, we can enumerate all the sets π i ( N ) ∩ [0 , . . . , n −
1] (for 0 ≤ i ≤ k )in parallel, and simply give the last couple ( i, l ) such that l is enumeratedin π i ( N ) ∩ [0 , . . . , n − ≤ i ≤ k and 0 ≤ l < n , this costs anamount of information O(log k ) + log n . To sum up, we getC ( A (cid:22) n | n ) ≤ k + O(log k ) + log n Thus, it suffices to take f growing fast enough to ensure that the term ≤ k + O(log k ) is smaller than h ( n ) + O(1). (cid:117)(cid:116) We now turn our attention to the second line of Figure 1, i.e., to those ran-domness notions that are based on partial computable betting strategies.15 .1 The class PMR: partial computable martingales arestronger than total ones
We have seen in the previous section that some sequences in TIR (and a fortiori in TPR and TMR) may be of very low complexity, namely logarithmic. This is not the case anymore when one allows partial computable strategies, even monotonic ones.
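The low-complexity sequences of the previous section, like several constructions below, are built by "the usual diagonalization" against a computable martingale D: since D(w0) + D(w1) = 2 D(w), at least one one-bit extension of w does not increase D, so an invariant such as D(u) < 2 can be maintained along every prefix u of the constructed sequence. The following is a minimal executable sketch of this single step (the function names and the toy martingale are ours, for illustration only, not part of the paper's formal machinery):

```python
from fractions import Fraction

def diagonalize(D, w, target_len):
    """Extend the word w to length target_len without ever letting the
    martingale D grow: since D(w+'0') + D(w+'1') = 2*D(w), at least one
    one-bit extension does not increase D, so an invariant such as
    D(u) < 2 holds along every prefix u of the result."""
    while len(w) < target_len:
        w += '0' if D(w + '0') <= D(w) else '1'
    return w

def D(w):
    """A toy computable martingale that always bets half its current
    capital on the next bit being 0."""
    v = Fraction(1)
    for b in w:
        v *= Fraction(3, 2) if b == '0' else Fraction(1, 2)
    return v
```

Starting from the empty word, diagonalize(D, '', 10) yields the all-ones word, on which D has shrunk to (1/2)^10; applied to a finite weighted sum D_k of martingales, the same loop maintains D_k(u) < 2, which is how the prefixes w_k are extended in the proofs of this section.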
Theorem 13 (Merkle [7]). If C(A ↾ n) = O(log n), then A ∉ PMR.

However, the next theorem, proven by An. A. Muchnik, shows that allowing slightly super-logarithmic growth of the Kolmogorov complexity is enough to construct a sequence in PMR.

Theorem 14 (Muchnik et al. [10]). For every computable order h there is a sequence A ∈ PMR such that, for all n ∈ ℕ, C(A ↾ n | n) ≤ h(n) log(n) + O(1).

Proof.
The proof is almost identical to the proof of Theorem 4. The only difference is that we insert all partial computable martingales one by one, and diagonalize against their weighted sum as before. It may happen, however, that at some stage of the construction one of the martingales becomes undefined. All we need to do then is to memorize this fact and ignore this particular martingale from that point on. Call A the sequence we obtain by this construction. We want to describe A ↾ n. To do so, we need to specify n and, out of the k partial computable martingales that are inserted before stage n, which ones have diverged, and at what stage; this is an amount of information of O(k log n) (giving the position where a particular martingale diverges costs O(log n) bits, and there are k martingales). Since we can insert martingales as slowly as we like (following some computable order), the complexity of A ↾ n given n can be taken to be smaller than h(n) log n + O(1) (where h is a computable order, fixed before the construction of A). ⊓⊔

In the case of total strategies, allowing permutations gives no real additional power, as TMR = TPR. Very surprisingly, Muchnik showed that in the case of partial computable strategies, permutation strategies are a real improvement over monotonic ones. To wit, the following theorem (quite a contrast to Theorem 14!).

Theorem 15 (Muchnik [10]). If there is a computable order h such that for all n we have K(A ↾ n) ≤ n − h(n) − O(1), then A ∉ PPR.

Note that the proof used by Muchnik in [10] works if we replace K by C in the above statement.
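For orientation, the growth-rate bounds established in Theorems 13, 14 and 15 can be lined up side by side; the display below merely restates the three theorems (h ranges over computable orders):

```latex
\begin{align*}
&\text{Thm.~13:} && \mathrm{C}(A\!\upharpoonright\! n) = O(\log n)
   \;\Longrightarrow\; A \notin \mathrm{PMR};\\
&\text{Thm.~14:} && \exists A \in \mathrm{PMR}\ \forall n:\;
   \mathrm{C}(A\!\upharpoonright\! n \mid n) \le h(n)\log(n) + O(1);\\
&\text{Thm.~15:} && \forall n:\ \mathrm{K}(A\!\upharpoonright\! n) \le n - h(n) - O(1)
   \;\Longrightarrow\; A \notin \mathrm{PPR}.
\end{align*}
```

Thus logarithmic initial-segment complexity rules out PMR, slightly super-logarithmic complexity is already attainable within PMR, and any sequence whose prefixes stay a computable order below maximal complexity is excluded from PPR.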
Theorem 16. For every computable order h there is a sequence A ∈ PPR such that there are infinitely many n where C(A ↾ n | n) < h(n). Furthermore, if we have an infinite computable set S ⊆ ℕ, we can choose the infinitely many lengths n such that they are all contained in S.

Lemma 17.
Let d be a partial computable martingale. Let C be an effectively closed subset of 2^ω. Suppose that d is total on every element of C. Then there exists a total computable martingale d′ such that Succ(d) ∩ C = Succ(d′) ∩ C.

Proof. The idea of the proof is simple: the martingale d′ will try to mimic d while enumerating the complement U of C. If at some stage a cylinder [w] is covered by U, then d′ will be passive (i.e. defined but constant) on the sequences extending w. As we do not care about the behavior of d′ on U (as long as it is defined), this will be enough to get the conclusion.

Let d, C be as above. We build the martingale d′ on words by induction. Define d′(ε) = d(ε) (here we assume without loss of generality that d(ε) is defined, otherwise there is nothing to prove). During the construction, some words will be marked as inactive; on these, d′ will be passive. Initially, no word is inactive. On active words w, we will have d(w) = d′(w).

Suppose for the sake of the induction that d′(w) is already defined. If w is marked as inactive, we mark w0 and w1 as inactive and set d′(w0) = d′(w1) = d′(w). Otherwise, by the induction hypothesis, we have d(w) = d′(w). We then run in parallel the computation of d(w0) and d(w1), and the enumeration of the complement U of C, until one of the two following events happens:

(a) d(w0) and d(w1) become defined. Then set d′(w0) = d(w0) and d′(w1) = d(w1).
(b) [w] gets covered by U. In that case, mark w0 and w1 as inactive and set d′(w0) = d′(w1) = d′(w).

Note that one of these two events must happen: indeed, if d(w0) and d(w1) are undefined (remember that, by the definition of a martingale, Definition 1, they are either both defined or both undefined), then this means that d diverges on every element of [w0] ∪ [w1] = [w]. Hence, by assumption, [w] ∩ C = ∅, i.e. [w] ⊆ U.

It remains to verify that Succ(d) ∩ C = Succ(d′) ∩ C. Let A ∈ C. Since d is total on A by assumption, during the construction of d′ along A we will always be in case (a); hence we will have d(A ↾ n) = d′(A ↾ n) for all n. The result follows immediately. ⊓⊔

Corollary 18.
Let b = (d, π) be a partial computable permutation strategy (resp. injective strategy). Let C be an effectively closed subset of 2^ω. Suppose that b is total on every element of C. Then there exists a total computable permutation strategy (resp. injective strategy) b′ such that Succ(b) ∩ C = Succ(b′) ∩ C.

Proof. This follows from the fact that the image or pre-image of an effectively closed set under a computable permutation of the bits is itself an effectively closed set. Take b = (d, π) and C as above. Let π̄ be the map induced on 2^ω by π, i.e. the map defined for all A ∈ 2^ω by

  π̄(A) = A(π(0)) A(π(1)) A(π(2)) . . .

For any given sequence A ∈ C, b succeeds on A if and only if d succeeds on π̄(A). As π̄(A) ∈ π̄(C), and π̄(C) is an effectively closed set, by Lemma 17 there exists a total martingale d′ such that Succ(d) ∩ π̄(C) = Succ(d′) ∩ π̄(C). Thus, d′ succeeds on π̄(A), or equivalently, b′ = (d′, π) succeeds on A. Thus b′ is as desired. ⊓⊔

Proof (of Theorem 16).
Again, this proof is a variant of the proof of Theorem 4: we add strategies one by one, diagonalizing, at each stage, against a finite weighted sum of total monotonic strategies (i.e. martingales). Of course, not all strategies have this property, but we can reduce to this case using the techniques presented above. Suppose that in the construction of our sequence A we have already constructed an initial segment w_k, and that up to this stage we have played against a weighted sum of k total martingales

  D_k = α_1 d_1 + · · · + α_k d_k,

where the d_i are total computable martingales, ensuring that D_k(u) < 2 for every prefix u of w_k. Suppose we want to introduce a new strategy b = (d, π). There are three cases.

Case 0: the new strategy is not valid, i.e. π is not a permutation. In this case, we just add one bit of extra information to record this and ignore b from now on, i.e. we set w_{k+1} = w_k, d_{k+1} = 0 (the zero martingale), and D_{k+1} = D_k + d_{k+1} = D_k.

Case 1: the strategy b is indeed a partial computable permutation strategy, and there exists an extension w′ of w_k such that D_k(u) < 2 for every prefix u of w′ and b diverges on w′. In this case, we simply take w′ as our new prefix of A, as it both diagonalizes against D_k and defeats b (since b diverges on w′, it will not win against any possible extension of w′). We can thus ignore b from that point on, so we set w_{k+1} = w′, d_{k+1} = 0, and D_{k+1} = D_k + d_{k+1} = D_k.

Case 2: if we are not in one of the two previous cases, this means that our strategy b = (d, π) is a partial computable permutation strategy and that b is total on the whole Π⁰₁ class

  C_k = [w_k] ∩ { X ∈ 2^ω | ∀n D_k(X ↾ n) < 2 }.

Thus, by Corollary 18, there exists a total computable permutation strategy b′ such that Succ(b) ∩ C_k = Succ(b′) ∩ C_k. And by Theorem 5, there exists a total computable martingale d′′ such that Succ(b′) ⊆ Succ(d′′). Thus, we can replace b by d′′, and defeating d′′ will be enough to defeat b as long as the sequence we construct is in C_k. We thus set d_{k+1} = d′′, w_{k+1} = w_k, and

  D_{k+1} = α_1 d_1 + · · · + α_{k+1} d_{k+1},

where α_{k+1} is sufficiently small to have D_{k+1}(w_{k+1}) < 2. We then choose an extension w′′ of w_{k+1}, ensuring that D_{k+1}(u) < 2 for every prefix u of w′′ and taking w′′ long enough to have C(w′′ | |w′′|) ≤ h(|w′′|). We then set w_{k+1} = w′′, then add a (k+2)-th strategy, and so on. Note that since w′′ can be chosen arbitrarily long, if we have fixed a computable subset S of ℕ, we can also ensure that |w′′| belongs to S if we like.

It is clear that the infinite sequence A constructed via this process satisfies C(A ↾ n | n) ≤ h(n) for infinitely many n (and, since Case 2 happens infinitely often, if we fix a given computable set S, we can ensure that infinitely many such n belong to S). To see that A belongs to PPR, we notice that since for all k we have D_{k+1} ≥ D_k and w_k ⊑ w_{k+1}, it follows that C_{k+1} ⊆ C_k and thus A ∈ ⋂_k C_k. Now, given a partial computable permutation strategy b = (d, π), it clearly does not succeed on A if it was handled in Case 0 or Case 1; otherwise, let k be the stage where b was considered and replaced by the martingale d_{k+1}. Since, by construction of A, d_{k+1} does not win against A, and by definition of d_{k+1}, Succ(b) ∩ C_k ⊆ Succ(d_{k+1}) ∩ C_k, it follows that A ∉ Succ(b). ⊓⊔

Now that we have assembled all our tools, we can easily prove the desired results.
Theorem 19. The following statements hold:
1. PPR ⊈ TIR;
2. TIR ⊈ PMR;
3. PMR ⊈ PPR.
From these results it easily follows that in Figure 2 no inclusion holds except those indicated and those implied by transitivity.

Proof.
1. Choose a computable sequence {n_k}_k fulfilling the requirements of Theorem 9 such that C(k) ≤ log log n_k for all k. The members of this sequence then form a computable set S. Use Theorem 16 to construct a sequence A ∈ PPR such that C(A ↾ n | n) < log log n at infinitely many places n in S. We then have, for infinitely many k,

  C(A ↾ n_k | k) ≤ C(A ↾ n_k) + O(1) ≤ C(A ↾ n_k | n_k) + 2 log log n_k + O(1) ≤ 3 log log n_k + O(1),

so A cannot be in TIR according to Theorem 9.
2. Follows immediately from Theorems 12 and 13.
3. Follows immediately from Theorems 14 and 15. ⊓⊔

            monotonic       permutation       injection
  total     TMR      =      TPR        ⊋      TIR
             ⊋               ⊋                 ⊋
  partial   PMR      ⊋      PPR        ⊋      PIR

Fig. 2. Assembled class inclusion results (each total class strictly contains the partial class below it)
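As a closing illustration, the totalization argument of Lemma 17 (reused in Case 2 of the proof of Theorem 16) is effective enough to simulate directly. The toy sketch below is ours, not the paper's formal construction: d_steps and covered_by_U are hypothetical step-bounded stand-ins for the computation of the partial martingale d and the enumeration of the open complement U of C. The toy d diverges exactly on words w with the factor 11 occurring in w[:-1], so C is the set of sequences avoiding 11, and siblings w0, w1 are both defined or both undefined, as Definition 1 requires.

```python
from fractions import Fraction

def totalize(d_steps, covered_by_U, word):
    """Value d'(word) of the total martingale d' of Lemma 17: mimic d
    while enumerating U, and become passive (constant) once the current
    cylinder is covered by U.
    d_steps(w, t): value of d on w after t computation steps, or None.
    covered_by_U(w, t): whether [w] is seen to be covered by U in t steps.
    Since d converges on both children of w or on neither, querying the
    single needed child suffices; if it diverges, [w] must be covered by
    U, so the dovetailing loop terminates."""
    value, prefix, inactive = d_steps('', 1), '', False  # assume d(ε) defined
    for bit in word:
        if not inactive:
            t = 1
            while True:
                v = d_steps(prefix + bit, t)   # event (a): d converges
                if v is not None:
                    value = v
                    break
                if covered_by_U(prefix, t):    # event (b): go passive
                    inactive = True
                    break
                t += 1
        prefix += bit
    return value

def d_steps(w, t):
    """Toy partial martingale: defined iff '11' does not occur in w[:-1];
    on its domain it bets half of its capital on the next bit being 0."""
    if '11' in w[:-1]:
        return None                            # divergence
    v = Fraction(1)
    for b in w:
        v *= Fraction(3, 2) if b == '0' else Fraction(1, 2)
    return v

def covered_by_U(w, t):
    return '11' in w                           # then [w] is disjoint from C
```

On a word inside C, such as 0101, totalize reproduces d exactly (always event (a)); past an occurrence of 11 it freezes its capital, which is precisely the passivity required of d′.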
References
1. Harry Buhrman, Dieter van Melkebeek, Kenneth Regan, D. Sivakumar, and Martin Strauss. A generalization of resource-bounded measure, with application to the BPP vs. EXP problem. SIAM Journal on Computing, 30(2):576–601, 2000.
2. Rod Downey and Denis Hirschfeldt. Algorithmic Randomness and Complexity. Springer, to appear.
3. Bart Kastermans and Steffen Lempp. Comparing notions of randomness. Manuscript, 2008.
4. James Lathrop and Jack Lutz. Recursive computational depth. Information and Computation, 153(1):139–172, 1999.
5. Ming Li and Paul Vitányi. An Introduction to Kolmogorov Complexity and Its Applications. Springer, 2008.
6. Wolfgang Merkle. The Kolmogorov-Loveland stochastic sequences are not closed under selecting subsequences. Journal of Symbolic Logic, 68:1362–1376, 2003.
7. Wolfgang Merkle. The complexity of stochastic sequences. Journal of Computer and System Sciences, 74(3):350–357, 2008.
8. Wolfgang Merkle, Joseph S. Miller, André Nies, Jan Reimann, and Frank Stephan. Kolmogorov-Loveland randomness and stochasticity. Annals of Pure and Applied Logic, 138(1-3):183–210, 2006.
9. Joseph Miller and André Nies. Randomness and computability: open questions. Bulletin of Symbolic Logic, 12(3):390–410, 2006.
10. Andrei A. Muchnik, Alexei Semenov, and Vladimir Uspensky. Mathematical metaphysics of randomness. Theoretical Computer Science, 207(2):263–317, 1998.
11. André Nies. Computability and Randomness. Oxford University Press, 2009.