arXiv:[math.LO]

STATISTICAL PROPERTIES OF MARTIN-LÖF RANDOM SEQUENCES
MATTHEW PANCIA
Abstract.
We study the statistical properties of random numbers under the Martin-Löf definition of randomness, proving that random numbers obey analogues of the Strong Law of Large Numbers and the Law of the Iterated Logarithm, and that they are normal. We also show that weakly (1-)random numbers do not share these properties.

1. Introduction
The study of random phenomena is an important and interesting field of mathematical research. Despite the pervasiveness of such phenomena throughout mathematics, however, there is as yet no definitive and wholly convincing definition of precisely what randomness is. This is not for lack of trying: there are several competing definitions of randomness, but the question remains: how do we decide which one is the "right" definition? One way to gauge the efficacy of these definitions is to look at the properties of random objects that arise when different definitions are used. We concern ourselves in particular with the statistical behavior of random sequences of 0's and 1's under different choices of what it means to be random. We would, of course, expect a proper definition of randomness to yield the sort of statistical properties in the digits of random sequences that coincide with what we normally consider random behavior. That is, the values of a random sequence should obey the Strong Law of Large Numbers, the Law of the Iterated Logarithm, and all of the other properties that are enjoyed by independent, identically distributed (i.i.d.) random variables acting on almost every element of the space of infinite sequences of (fair) coin tosses. The aim of this paper is to explore and prove these properties for Martin-Löf random sequences and to do some basic comparison with other definitions of randomness. Similar results can be found in previously published papers such as [1] and [7], but we hope that the reader will find our exposition to be more unified and accessible.

1.1. Acknowledgements.
This paper is the result of work done during the MASS Program at Penn State University. The author would like to thank Stephen G. Simpson for his guidance and support with both the research and the writing of this paper.
Mathematics Subject Classification.
Primary: 03D99; Secondary: 03D32.
Key words and phrases.
Martin-Löf randomness, random numbers.

2. Preliminaries and Notation
We now set notation and provide background for the rest of the paper. Readers familiar with this material may skip to Section 3, where we prove our main results.

2.1. Computability Theory Basics.
We let $2^{\mathbb{N}}$ denote the Cantor space, i.e. the set of infinite sequences of 0's and 1's, or total functions $X : \mathbb{N} \to \{0,1\}$. We will use $X(i)$ to refer to the $(i+1)$-st value of the sequence $X$ (starting the indexing at 0) or, treating $X$ as a binary expansion of a number in $[0,1]$, its $(i+1)$-st digit. $S_n$ will generally refer to the sum of the first $n$ digits of $X \in 2^{\mathbb{N}}$. For a subset $V$ of $2^{\mathbb{N}}$, we will denote its complement $2^{\mathbb{N}} \setminus V$ by $\overline{V}$.

$2^{<\mathbb{N}}$ will be the set of bitstrings, or finite sequences of 0's and 1's. We will denote the empty string by $\langle\rangle$. If $\sigma, \tau$ are bitstrings, we denote their concatenation by $\sigma^\frown\tau$, and the length of $\sigma$ by $|\sigma|$.

We denote the bitstring generated by restricting $X \in 2^{\mathbb{N}}$ to an initial segment of length $n$ by $X \upharpoonright n$. Let $N_\sigma$ be the neighborhood generated by a bitstring $\sigma$, defined as $\{X \in 2^{\mathbb{N}} : X \upharpoonright |\sigma| = \sigma\}$. That is, $N_\sigma$ consists of all $X$ that have $\sigma$ as an initial segment. We let $\mu$ denote the unique Borel measure on $2^{\mathbb{N}}$ such that $\mu(N_\sigma) = 2^{-|\sigma|}$. Note that $2^{\mathbb{N}}$ with this probability measure is the same as the space of infinite fair coin tosses, letting 1 denote heads and 0 denote tails.

We define $\varphi_e(x)$ to be the output of the program with Gödel number $e$ running with input $x$ (if it halts). Letting $X$ be an element of $2^{\mathbb{N}}$, we define $\varphi^X_e(x)$ as above, except that $X$ is used as an oracle in the computation. The following is a standard result, proven in [4].

Theorem 2.1 (Parameterization Theorem). Given a 2-place partial recursive function $\psi(w,x)$, we can find a 1-place recursive function $h(w)$ such that $\varphi_{h(w)}(x) \simeq \psi(w,x)$ for all $w, x$.

A set $A \subseteq 2^{\mathbb{N}}$ is called $\Sigma^0_1$ if, for some recursive predicate $R$,
$$X \in A \equiv \exists n \in \mathbb{N} : R(X, n) \text{ holds}.$$
A set $A$ is $\Pi^0_1$ if, for some recursive predicate $R$,
$$X \in A \equiv \forall n \in \mathbb{N} : R(X, n) \text{ holds}.$$
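The identity $\mu(N_\sigma) = 2^{-|\sigma|}$ and the coin-tossing picture can be illustrated concretely. The following Python sketch (our illustration, not part of the paper; the function names are invented for the example) compares the exact cylinder measure with a Monte Carlo estimate obtained by tossing fair coins:

```python
import random

def cylinder_measure(sigma: str) -> float:
    # Exact measure of the neighborhood N_sigma: mu(N_sigma) = 2^(-|sigma|).
    return 2.0 ** (-len(sigma))

def empirical_measure(sigma: str, trials: int = 200_000, seed: int = 0) -> float:
    # Estimate mu(N_sigma) by sampling fair-coin prefixes of length |sigma|:
    # a sampled sequence lies in N_sigma exactly when its prefix equals sigma.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        prefix = "".join(str(rng.getrandbits(1)) for _ in range(len(sigma)))
        hits += prefix == sigma
    return hits / trials

print(cylinder_measure("010"), empirical_measure("010"))
```

With 200,000 samples the estimate for $\sigma = 010$ lands close to the exact value $1/8$, reflecting the identification of $(2^{\mathbb{N}}, \mu)$ with the space of fair coin tosses.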
We will use the standard enumeration of $\Pi^0_1$ and $\Sigma^0_1$ sets in the Cantor space, given by
$$U_e = \{X \in 2^{\mathbb{N}} : \varphi^X_e(0)\uparrow\}, \qquad S_e = \{X \in 2^{\mathbb{N}} : \varphi^X_e(0)\downarrow\}$$
respectively, where $\uparrow$ means that the expression is undefined and $\downarrow$ means that it is defined.

We say a sequence $V_n$ of $\Sigma^0_1$ sets is effectively open or uniformly $\Sigma^0_1$ if $V_n = S_{f(n)}$, where $f$ is a total recursive function. A set $S$ is effectively null if $S \subseteq \bigcap_{n=0}^{\infty} V_n$, where the $V_n$ are uniformly $\Sigma^0_1$ and $\mu(V_n) \le 2^{-n}$.
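As a toy illustration of these definitions (again ours, not the paper's): for any computable point $X$, the cylinders $V_n = N_{X \upharpoonright n}$ form a uniformly $\Sigma^0_1$ sequence with $\mu(V_n) = 2^{-n}$ whose intersection is $\{X\}$, so $\{X\}$ is effectively null and no computable sequence can be Martin-Löf random. In Python, with the all-zeros sequence standing in for $X$:

```python
def X(i: int) -> int:
    # A computable point of Cantor space; here, the all-zeros sequence.
    return 0

def V(n: int) -> str:
    # The n-th covering set is the cylinder N_{X|n}, coded by the bitstring X|n.
    return "".join(str(X(i)) for i in range(n))

def measure_V(n: int) -> float:
    # mu(N_sigma) = 2^(-|sigma|), so mu(V_n) = 2^(-n), as effective nullity requires.
    return 2.0 ** -n

def lies_in_V(prefix: str, n: int) -> bool:
    # Membership of a sequence (given via a sufficiently long prefix) in V_n.
    return len(prefix) >= n and prefix.startswith(V(n))

print([measure_V(n) for n in range(4)], lies_in_V("0" * 16, 8))
```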
2.2. Definitions of Randomness.
There are three definitions of randomness that will be considered in this paper.

A point $X \in 2^{\mathbb{N}}$ is called weakly random if it does not belong to any $\Pi^0_1$ set of measure 0.

A point $X \in 2^{\mathbb{N}}$ is said to be random (in the sense of Martin-Löf) if it does not lie in any effectively null set. Equivalently, the singleton set $\{X\}$ is not effectively null.

A point $X \in 2^{\mathbb{N}}$ is called strongly random if it does not belong to any $\Pi^0_2$ set of measure 0.

The names assigned to these different types of randomness are justified, as the following shows. The result follows from the fact that all null $\Pi^0_1$ sets are effectively null and all effectively null sets are contained in $\Pi^0_2$ null sets.

Theorem 2.2.
Let $X \in 2^{\mathbb{N}}$.
(1) $X$ is random $\Rightarrow$ $X$ is weakly random.
(2) $X$ is strongly random $\Rightarrow$ $X$ is random.

This tells us that (Martin-Löf) randomness is intermediate between weak randomness and strong randomness. As will be shown later, weak randomness is genuinely weaker than randomness, as there are certain statistical properties that random elements have that weakly random elements do not. Similarly, strong randomness is stronger than randomness, as there are relations involving Turing reducibility that hold for strongly random elements that do not hold for random elements. For example, we have that if $\mathbf{a}$ represents the Turing degree of a strongly random sequence $X$ and $\mathbf{0}'$ represents the Turing degree of the halting problem, then $\inf(\mathbf{a}, \mathbf{0}')$ is recursive. The same result does not hold with the hypothesis that $X$ be random, however.

One of the important consequences of the Martin-Löf definition of randomness is that a computability-theoretic analogue of the Borel-Cantelli lemma from probability theory holds, giving us a powerful tool for proving statements about random sequences. A proof (of a statement stronger than what will be stated here) is given in [1], cast in the light of complexity theory, but we present proofs that are more consistent with our measure-theoretic approach to randomness.

Lemma 2.1 (Solovay's Lemma). Let $V_1, V_2, \ldots$ be a sequence of uniformly $\Sigma^0_1$ sets. Then:
(1) If $X$ is random and $\sum_{n=1}^{\infty} \mu(V_n) < \infty$, then $X$ lies in only finitely many $V_n$.
(2) If $X$ is weakly random, $\sum_{n=1}^{\infty} \mu(V_n)$ diverges, and the $V_n$ represent mutually independent events, then for each $m$ there exists an $N$ such that for some $m \le i \le N$, $X \in V_i$.

A proof of (1) is contained in [1], so we will prove (2).
Proof of (2).
Choose an $m$ and consider the set
$$Q_m = \bigcup_{n=m}^{\infty} V_n.$$
We have that $Q_m$, being a union of $\Sigma^0_1$ sets, is $\Sigma^0_1$. By De Morgan's laws and the fact that the $V_n$ are mutually independent, we have that
$$\overline{Q_m} = \bigcap_{n=m}^{\infty} \overline{V_n} \quad\text{and}\quad \mu(\overline{Q_m}) = \prod_{n=m}^{\infty} \bigl(1 - \mu(V_n)\bigr).$$
We then will have, noting the above and the fact that the sum of the measures of the $V_n$ diverges, that $\mu(Q_m) = 1$ for all $m$. This is because (as in Lemma 5.11 of [6]), for a sequence $a_i \in (0,1)$,
$$\sum_{m=n}^{\infty} a_m = \infty \quad\text{if and only if}\quad \prod_{m=n}^{\infty} (1 - a_m) = 0.$$
Letting $a_n = \mu(V_n)$, we see that $\mu(\overline{Q_m}) = 0$ and so $\mu(Q_m) = 1$. For $X$ to be weakly random, it must lie in $Q_m$ for all $m$ (the complement of $Q_m$ being a $\Pi^0_1$ set of measure 0), and therefore $X$ is in $V_n$ for infinitely many $n$. □

2.3. Results From Probability Theory.
In order to prove some later results, we will need some results from probability theory, taken from [3]. We state them in the case where the probability space in question is $2^{\mathbb{N}}$ with $\mu$ defined as above. $E$ will refer to the expectation of a random variable with respect to $\mu$. Finite sequences of coin tosses correspond to initial segments of sequences in $2^{\mathbb{N}}$.

Given $X \in 2^{\mathbb{N}}$ with $S_n = \sum_{i=1}^{n} X(i)$, define the reduced number of heads in $n$ tosses by the quantity
$$S^*_n = \frac{S_n - np}{\sqrt{npq}}$$
(in our fair-coin setting $p = q = \tfrac{1}{2}$, so $S^*_n = (S_n - \tfrac{n}{2})/\tfrac{\sqrt{n}}{2}$).

Lemma 2.2.
Let $x$ be fixed and let $A$ be the event that, for at least one $k$ with $k \le n$,
$$S_k - \frac{k}{2} > x.$$
Then there exists a constant $c$, independent of $x$, such that for all $n$,
$$\mu(A) \le c\,\mu\left(S_n - \frac{n}{2} > x\right).$$

Theorem 2.3. If $n \to \infty$, $x \to \infty$ and $x = o\bigl(\sqrt[6]{npq}\bigr)$, then
$$\mu(S^*_n > x) \sim \frac{e^{-x^2/2}}{x\sqrt{2\pi}}.$$

Lemma 2.3 (Hoeffding's Inequality). Given a sequence of i.i.d. random variables $X_i$ on $2^{\mathbb{N}}$ that take on values in $[a,b]$ almost surely, for their sum $S_n = X_1 + \cdots + X_n$ and $\epsilon > 0$ we have
$$\mu\left(\left\{\omega \in 2^{\mathbb{N}} : \frac{|S_n - E(S_n)|}{n} \ge \epsilon\right\}\right) \le 2\exp\left(-\frac{2n\epsilon^2}{(b-a)^2}\right).$$

We also state a version of Hoeffding's Inequality to be used specifically for the proof of Theorem 3.2.
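Hoeffding's Inequality (Lemma 2.3) can be probed numerically. The sketch below (our illustration; for coin tosses $[a,b] = [0,1]$) compares a Monte Carlo estimate of the deviation probability with the bound $2\exp(-2n\epsilon^2)$:

```python
import math
import random

def hoeffding_bound(n: int, eps: float) -> float:
    # Lemma 2.3 with [a, b] = [0, 1]: mu(|S_n/n - 1/2| >= eps) <= 2 exp(-2 n eps^2).
    return 2.0 * math.exp(-2.0 * n * eps * eps)

def empirical_tail(n: int, eps: float, trials: int = 20_000, seed: int = 0) -> float:
    # Monte Carlo estimate of the left-hand side for n fair coin tosses.
    rng = random.Random(seed)
    bad = 0
    for _ in range(trials):
        s = sum(rng.getrandbits(1) for _ in range(n))
        if abs(s / n - 0.5) >= eps:
            bad += 1
    return bad / trials

n, eps = 200, 0.1
print(empirical_tail(n, eps), "<=", hoeffding_bound(n, eps))
```

The empirical frequency sits well below the bound; the proofs in Section 3 only need an exponentially decaying upper bound, not a tight one.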
Lemma 2.4 (Hoeffding's Inequality - Fair Coin Case). For $X \in 2^{\mathbb{N}}$, all $\epsilon > 0$ and $S_n = \sum_{i=0}^{n-1} X(i)$, we have that
$$\mu\left(\left|\frac{S_n}{n} - \frac{1}{2}\right| > \epsilon\right) < 2\exp(-2n\epsilon^2).$$

3. Statistical Properties
3.1. The Strong Law of Large Numbers.
The Strong Law of Large Numbers (SLLN) is a basic result about i.i.d. random variables that describes the deviation of their average values from their common mean. We state it as applied to the Cantor space for the standard example where we take the random variables to be the values of a sequence.
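Before stating the theorem, here is a quick simulation (not part of the development) of the behavior it formalizes: the running averages $S_n/n$ of a pseudorandom coin-toss sequence settle toward $\frac{1}{2}$:

```python
import random

def running_average(bits):
    # Yields S_n / n for n = 1, 2, ..., len(bits).
    s = 0
    for n, b in enumerate(bits, start=1):
        s += b
        yield s / n

rng = random.Random(0)
bits = [rng.getrandbits(1) for _ in range(100_000)]
averages = list(running_average(bits))
print(averages[99], averages[9_999], averages[99_999])  # drifting toward 1/2
```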
Theorem 3.1 (Strong Law of Large Numbers: Standard Form). Choose an $X \in 2^{\mathbb{N}}$; then for the partial sums $S_n = \sum_{i=1}^{n} X(i)$ we have
$$\mu\left[\frac{S_n}{n} \to \frac{1}{2}\right] = 1.$$

We would, of course, hope that a random element of $2^{\mathbb{N}}$ would correspond to the ubiquitous set of measure 1 on which i.i.d. random variables obey statistical laws such as the SLLN. Using the Martin-Löf definition, this is the case, as the following shows.

Theorem 3.2 (Strong Law of Large Numbers: Computable Form). Suppose that $X \in 2^{\mathbb{N}}$ is random. Then the values of $X$ obey the Strong Law of Large Numbers. That is, for the sum $S_n$,
$$\frac{S_n}{n} \to \frac{1}{2}.$$

Proof.
Suppose that $X$ does not obey the SLLN. Then we have that $\exists m\,\forall N\,\exists n > N$ such that
$$\left|\frac{\sum_{i<n} X(i)}{n} - \frac{1}{2}\right| > \frac{1}{m}.$$
Consider the set $V_m$, to which $X$ will belong:
$$V_m = \left\{\omega \in 2^{\mathbb{N}} : \forall N\,\exists (n > N)\ \left|\frac{\sum_{i<n} \omega(i)}{n} - \frac{1}{2}\right| > \frac{1}{m}\right\}.$$
Note that $V_m = \bigcap_{N=0}^{\infty} V_{m,N}$, where
$$V_{m,N} = \left\{\omega \in 2^{\mathbb{N}} : \exists (K > N)\ \left|\frac{\sum_{i<K} \omega(i)}{K} - \frac{1}{2}\right| > \frac{1}{m}\right\} = \bigcup_{K>N} Q_K, \qquad Q_K = \left\{\omega \in 2^{\mathbb{N}} : \left|\frac{\sum_{i<K} \omega(i)}{K} - \frac{1}{2}\right| > \frac{1}{m}\right\}.$$
We have that $V_{m,N}$ is a uniform sequence of $\Sigma^0_1$ sets, as $V_{m,N}$ is the domain of the partial recursive functional
$$P(\omega, N, k) \simeq \text{the least } n > N \text{ such that } \left|\frac{\sum_{i<n} \omega(i)}{n} - \frac{1}{2}\right| > \frac{1}{m}.$$
By the Parameterization Theorem (Theorem 2.1), there is a total recursive function $h(N)$ such that $P(\omega, N, k) = \varphi^{(1),\omega}_{h(N)}(k) = \varphi^{(1),\omega}_{h(N)}(0)$, so $V_{m,N} = S_{h(N)}$, and the sequence $V_{m,N}$ is uniform.

By Hoeffding's Inequality (Lemma 2.4) and $\sigma$-subadditivity of $\mu$, we have that
$$\mu(V_{m,N}) \le \sum_{K=N}^{\infty} \mu(Q_K) < \sum_{K=N}^{\infty} 2e^{-2K/m^2} = 2e^{-2N/m^2}\left(1 + e^{-2/m^2} + e^{-4/m^2} + \cdots\right) = 2e^{-2N/m^2}\left(\frac{1}{1 - e^{-2/m^2}}\right).$$
Because $|e^{-2/m^2}| < 1$, this series converges, and we have an upper bound for $\mu(V_{m,N})$. To show that $V_m$ is an effectively null set, we choose a subsequence $V_{m,N_k}$ of the $V_{m,N}$ such that $\mu(V_{m,N_k}) \le 2^{-k}$. This can be done in a computable fashion, as we have a computable upper bound on $\mu(V_{m,N})$ as a function of $N$.

We then have that $V_m = \bigcap_{k=0}^{\infty} V_{m,N_k}$ is effectively null. Recall that $X$ was supposed to be random, and as such cannot lie in $V_m$. This is a contradiction, and so $X$ cannot belong to $V_m$ for any $m$ and therefore must obey the SLLN. □

3.2. Normality of Random Elements.
One property that we would like random elements of $2^{\mathbb{N}}$ to have is that their digits should be normal (as binary numbers). What this means is that, given a finite sequence of 0's and 1's, the likelihood of this sequence appearing in the values of the random element should be consistent with a uniform distribution. The notion of normality was introduced in a 1909 paper by Borel [2], where he proved that almost all numbers are normal.

To state the definition precisely, we say that $X \in 2^{\mathbb{N}}$ is normal if for every $l$ and every $\sigma \in 2^l$, the limit as $n \to \infty$ of the fraction of $i$'s less than $n$ such that
$$\langle X(i), X(i+1), \ldots, X(i+l-1)\rangle = \sigma$$
is $2^{-l}$. That is, the probability of an arbitrary bitstring of length $l$ appearing in the values of $X$ has limiting value $2^{-l}$.

Note that this is a stronger statement than the Strong Law of Large Numbers, which is the special case in which we only consider bitstrings of length 1. The following theorem establishes that Martin-Löf random elements of $2^{\mathbb{N}}$ share this property.

Theorem 3.3.
Suppose that $X \in 2^{\mathbb{N}}$ is random; then the values of $X$ are normal.

Proof. Letting $\sigma \in 2^k$, define functions $M^{(k)}_i : 2^{\mathbb{N}} \to \mathbb{R}$ as follows:
$$M^{(k)}_i(\omega) = \begin{cases} 1 & \text{if } \sigma \text{ occurs starting at } \omega(i), \\ 0 & \text{otherwise.} \end{cases}$$
Considering these functions as random variables on the Cantor space, we will have that the $M^{(k)}_{ki}$ are independent and identically distributed for $i = 0, 1, \ldots$. We also have that $E(M^{(k)}_{ki}) = 2^{-k}$ for any choice of $\sigma$. The $M^{(k)}_{ki}$ we have defined are thus i.i.d.'s with common expectation $2^{-k}$ that take on values in $[0,1]$.

(An explanation of normal numbers, as well as a proof of Borel's result, can be found in the article [5].)
By Hoeffding's Inequality (Lemma 2.3),
$$\mu\left(\left\{\omega \in 2^{\mathbb{N}} : \left|\frac{\sum_{i=0}^{n-1} M^{(k)}_{ki}(\omega)}{n} - 2^{-k}\right| \ge \epsilon\right\}\right) \le 2\exp(-2n\epsilon^2).$$
Choose $\epsilon > 0$ in $\mathbb{Q}$ and consider the sets $V^{(k)}_n$, defined as follows:
$$V^{(k)}_n = \left\{\omega \in 2^{\mathbb{N}} : \left|\frac{\sum_{i=0}^{n-1} M^{(k)}_{ki}(\omega)}{n} - 2^{-k}\right| > \epsilon\right\}.$$
By the above, we have that $\mu(V^{(k)}_n) \le 2\exp(-2n\epsilon^2)$, and therefore
$$0 \le \sum_{n=1}^{\infty} \mu(V^{(k)}_n) \le \sum_{n=1}^{\infty} 2\exp(-2n\epsilon^2) < \infty.$$
We also have that the $V^{(k)}_n$ are uniformly $\Sigma^0_1$ (by a similar argument as in the proof of the SLLN), so by Solovay's Lemma a random point $X$ can lie in only finitely many of the $V^{(k)}_n$. As $\epsilon$ was arbitrary (in $\mathbb{Q}$), we then have that
$$\forall \epsilon > 0 \text{ in } \mathbb{Q}\ \exists N\ \forall n \ge N : \left|\frac{\sum_{i=0}^{n-1} M^{(k)}_{ki}(X)}{n} - 2^{-k}\right| < \epsilon.$$
This means exactly that
$$\lim_{n \to \infty} \frac{\sum_{i=0}^{n-1} M^{(k)}_{ki}(X)}{n} = 2^{-k},$$
which says that blocks of the string $\sigma$, in the limit, appear in the values of $X$ with probability $2^{-k}$.

Because $\sigma$ was arbitrary, this tells us that the probability of an arbitrary string starting at multiples of its length in the values of $X$ is consistent with normality. However, we need that the string occur with the same probability at all offsets, not just in blocks. This is easily shown, though, as we can consider random variables of the form $M^{(k)}_{ik+n}$ for $n = 0, 1, \ldots, k-1$. We can perform the same argument for these random variables as we did for the original ones, and we will arrive at the conclusion that the probability of an arbitrary string starting at any offset in the values of $X$ is also consistent with the expected value, telling us that $X$ is normal. □

When we consider elements of $2^{\mathbb{N}}$ that are just weakly random, however, we do not have the same result. In fact, we do not even have that weakly random numbers obey the SLLN, which, as was mentioned, is a weaker condition than normality.

Lemma 3.1.
There exist $X \in 2^{\mathbb{N}}$ that are weakly random and do not obey the Strong Law of Large Numbers.

Remark. The technique that will be used to construct a counterexample is that of finite approximation.
The idea is to describe the bitstrings that are initial segments of an element of $2^{\mathbb{N}}$ inductively, fulfilling some condition at every step and letting the element be the union of the bitstrings described in the construction process.

Proof.
We proceed by finite approximation, constructing a sequence that does not belong to any null $\Pi^0_1$ set and also has its values weighted so that their average does not approach $\frac{1}{2}$.

Stage 0: Let $\sigma_0 = \langle\rangle$.
Stage $2n+1$:

Case 1: Check whether there exists $\sigma \supset \sigma_{2n}$ such that $\varphi^{(1),\sigma}_n(0)\downarrow$. In this case, let $\sigma_{2n+1} = \sigma$.

Case 2: Not Case 1. Let $\sigma_{2n+1} = \sigma_{2n}$.

Stage $2n+2$:

Case 1: Check whether $\frac{1}{|\sigma_{2n+1}|}\sum_{i=0}^{|\sigma_{2n+1}|-1}\sigma_{2n+1}(i) < \frac{3}{4}$. If so, extend $\sigma_{2n+1}$ with a string of 1's long enough to obtain a $\sigma_{2n+2}$ with $\frac{1}{|\sigma_{2n+2}|}\sum_{i=0}^{|\sigma_{2n+2}|-1}\sigma_{2n+2}(i) \ge \frac{3}{4}$. (Any fixed threshold strictly greater than $\frac{1}{2}$ would do here.)

Case 2: Not Case 1. Let $\sigma_{2n+2} = \sigma_{2n+1}{}^\frown\langle 0\rangle$.

Let $X = \bigcup_{n=0}^{\infty}\sigma_n$. We then have that $X$ does not obey the SLLN by construction: its partial averages are at least $\frac{3}{4}$ infinitely often, so they cannot converge to $\frac{1}{2}$. $X$ is also weakly random. To see why, consider what happened at stage $2n+1$ of the construction. If an extension existed that made $\varphi^{(1),\sigma}_n(0)$ defined, we used it, and so $X$ will not belong to the $\Pi^0_1$ set determined by the index $n$, namely $U_n$. If such an extension did not exist, then we must know that $U_n$ is not a null set: $\varphi^{(1),\sigma}_n(0)$ is undefined for every $\sigma \supset \sigma_{2n}$, and hence $N_{\sigma_{2n}} \subseteq U_n$. We then have that $X$ does not belong to any $\Pi^0_1$ null set and is therefore weakly random. □

Corollary.
There exist $X \in 2^{\mathbb{N}}$ that are weakly random and whose digits are not normal.

3.3. The Law of the Iterated Logarithm.
In this section we will prove the Law of the Iterated Logarithm, which gives a convergence rate for the average value of a sequence of i.i.d. random variables. We first state it in its standard form as it applies to the Cantor space.
Theorem 3.4 (Law of the Iterated Logarithm: Standard Form). Let $X \in 2^{\mathbb{N}}$; then for the sum $S_n$ we have
$$\limsup_{n \to \infty} \frac{\left|S_n - \frac{n}{2}\right|}{\sqrt{\frac{n}{2}\log\log n}} = 1$$
with probability 1.

This is clearly a strong statistical property that we would hope would follow from a proper definition of randomness. In the case where we are using Martin-Löf randomness, this is true, as the following shows. The proof follows a standard approach in probability theory (taken from [3]), adapted to a computability-theoretic context (a sketch of which is contained in [1]).
Theorem 3.5 (Law of the Iterated Logarithm: Effective Form). Suppose $X \in 2^{\mathbb{N}}$ is random. Letting $S_n = \sum_{i=0}^{n-1} X(i)$, we have that
$$\limsup_{n \to \infty} \frac{S_n - \frac{n}{2}}{\sqrt{\frac{n}{2}\log\log n}} = 1.$$
That is:

(1) For all $\lambda > 1$ there exists an $N$ such that for all $n > N$,
$$S_n \le \frac{n}{2} + \lambda\sqrt{\frac{n}{2}\log\log n}. \tag{1}$$

(2) For all $\lambda < 1$ there exist infinitely many $n$ such that
$$S_n > \frac{n}{2} + \lambda\sqrt{\frac{n}{2}\log\log n}. \tag{2}$$
We have that if $X$ is random, then $\neg X$ is random ($\neg X$ obtained by flipping all the values of $X$). Then, by symmetry, the theorem implies that
$$\liminf_{n \to \infty} \frac{S_n - \frac{n}{2}}{\sqrt{\frac{n}{2}\log\log n}} = -1 \tag{3}$$
and therefore
$$\limsup_{n \to \infty} \frac{\left|S_n - \frac{n}{2}\right|}{\sqrt{\frac{n}{2}\log\log n}} = 1. \tag{4}$$

Note.
In the following proof we have to take care to only deal with computablenumbers, but this is not a worry as computable numbers are dense and so we canchoose an arbitrarily good computable approximation to a non-computable numberif we are given one.
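The normalization in the theorem can also be watched numerically. The sketch below (our illustration with simulated fair coin tosses; at any finite $n$ it can only suggest, not certify, the value of the limsup) tracks $(S_n - \frac{n}{2})/\sqrt{\frac{n}{2}\log\log n}$ along a single sample path:

```python
import math
import random

def lil_ratio(s: int, n: int) -> float:
    # The Law of the Iterated Logarithm normalization:
    # (S_n - n/2) / sqrt((n/2) * log log n), for a partial sum s = S_n.
    return (s - n / 2) / math.sqrt((n / 2) * math.log(math.log(n)))

rng = random.Random(7)
checkpoints = {10**3, 10**4, 10**5, 10**6}
s, ratios = 0, {}
for n in range(1, 10**6 + 1):
    s += rng.getrandbits(1)
    if n in checkpoints:
        ratios[n] = lil_ratio(s, n)
print(ratios)  # typically of order 1, consistent with limsup = 1
```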
Proof of (1).
Fix a computable $\lambda > 1$, let $\gamma$ be a computable number between 1 and $\lambda$, and let $n_r$ be the integer nearest to $\gamma^r$. Define $S_n$ as before and let $B_r$ be the set
$$B_r = \left\{X \in 2^{\mathbb{N}} : S_n - \frac{n}{2} > \lambda\sqrt{\frac{n_r}{2}\log\log n_r} \text{ for some } n_r \le n \le n_{r+1}\right\}.$$
$B_r$ is clearly a uniform sequence of $\Sigma^0_1$ sets, and by Solovay's Lemma it suffices that $\sum_{r=0}^{\infty} \mu(B_r)$ converges in order to prove the claim. We have, by Lemma 2.2, that
$$\mu(B_r) \le c\,\mu\left(\left\{X \in 2^{\mathbb{N}} : S_{n_{r+1}} - \frac{n_{r+1}}{2} > \lambda\sqrt{\frac{n_r}{2}\log\log n_r}\right\}\right) = c\,\mu\left(\left\{X \in 2^{\mathbb{N}} : S^*_{n_{r+1}} > \lambda\sqrt{2\,\frac{n_r}{n_{r+1}}\log\log n_r}\right\}\right).$$
Because $n_{r+1}/n_r \sim \gamma < \lambda$, we can choose $r$ large enough so that
$$\mu(B_r) \le c\,\mu\left(\left\{X \in 2^{\mathbb{N}} : S^*_{n_{r+1}} > \sqrt{2\lambda\log\log n_r}\right\}\right).$$
And so, by Theorem 2.3, we can choose $r$ large enough so that
$$\mu(B_r) \le c\exp(-\lambda\log\log n_r) = \frac{c}{(\log n_r)^{\lambda}}.$$
We have that $\frac{c}{(\log n_r)^{\lambda}}$ is arbitrarily close to $\frac{c}{(r\log\gamma)^{\lambda}}$, and so for any given $k$ we can choose an $s$ such that $\sum_{r=s}^{\infty} \mu(B_r) < 2^{-k}$; thus $\sum_{r=0}^{\infty} \mu(B_r)$ not only converges but is a computable number. The result follows, then, from the first part of Solovay's Lemma. □

Proof of (2).
As before, let $\lambda < 1$, and choose $\eta$ with $\lambda < \eta < 1$ such that
$$1 - \eta < \left(\frac{\eta - \lambda}{2}\right)^2, \tag{5}$$
and choose $\gamma \in \mathbb{N}$ such that
$$\frac{\gamma - 1}{\gamma} > \eta > \lambda.$$
Set $n_r = \gamma^r$ and let
$$D_r = S_{n_r} - S_{n_{r-1}}. \tag{6}$$
We have that $D_r$ represents the value of $S_n$, but only considered in blocks, so $\frac{D_r}{n_r - n_{r-1}}$ represents the average value of $S_n$ in the block of trials between $n_{r-1}$ and $n_r$. It is clear that the following sets $A_r$ will correspond to independent events (from a probabilistic standpoint), as what occurs in $D_r$ is not affected by $D_k$ for $k \ne r$:
$$A_r = \left\{X \in 2^{\mathbb{N}} : D_r - \frac{n_r - n_{r-1}}{2} > \eta\sqrt{\frac{n_r}{2}\log\log n_r}\right\}. \tag{7}$$
We have that
$$\mu(A_r) = \mu\left(\left\{X \in 2^{\mathbb{N}} : \frac{D_r - \frac{1}{2}(n_r - n_{r-1})}{\frac{1}{2}\sqrt{n_r - n_{r-1}}} > \eta\sqrt{2\,\frac{n_r}{n_r - n_{r-1}}\log\log n_r}\right\}\right).$$
Note that $n_r/(n_r - n_{r-1}) = \gamma/(\gamma - 1) < \eta^{-1}$, so
$$\mu(A_r) \ge \mu\left(\left\{X \in 2^{\mathbb{N}} : D^*_r > \sqrt{2\eta\log\log n_r}\right\}\right),$$
where $D^*_r$ denotes the reduced version of $D_r$. Again, by Theorem 2.3, we can choose $r$ great enough so that
$$\mu(A_r) > \frac{1}{(\log\log n_r)(\log n_r)^{\eta}}.$$
Because $n_r = \gamma^r$ and $\eta < 1$, we can choose $r$ large enough so that $\mu(A_r) > \frac{1}{r}$, and so the sum $\sum_r \mu(A_r)$ will diverge. From the second part of Solovay's Lemma, then, we have that the theorem holds when we consider $D_n$ instead of $S_n$. It then remains to show that the $S_{n_{r-1}}$ term in (6) can be neglected, so that we can get the right form of the law.

From the first part of the theorem (choosing $\lambda = 2$), we have that there is an $N$ such that for $r > N$ and all random $X$,
$$\left|S_{n_{r-1}} - \frac{n_{r-1}}{2}\right| < 2\sqrt{\frac{n_{r-1}}{2}\log\log n_{r-1}}. \tag{8}$$
Choose $\eta$ as in (5); then we will have
$$n_{r-1} = \frac{n_r}{\gamma} < n_r\left(\frac{\eta - \lambda}{2}\right)^2,$$
and so by (8),
$$S_{n_{r-1}} - \frac{n_{r-1}}{2} > -(\eta - \lambda)\sqrt{\frac{n_r}{2}\log\log n_r}. \tag{9}$$
Adding (9) to the condition on the set in (7) (which we know holds infinitely often for all random $X$), we obtain
$$S_{n_r} - \frac{n_r}{2} = \left(D_r - \frac{n_r - n_{r-1}}{2}\right) + \left(S_{n_{r-1}} - \frac{n_{r-1}}{2}\right) > \bigl(\eta - (\eta - \lambda)\bigr)\sqrt{\frac{n_r}{2}\log\log n_r} = \lambda\sqrt{\frac{n_r}{2}\log\log n_r},$$
which is (2) with $n = n_r$. It follows that for all random $X$ the inequality holds for infinitely many $r$, and so the second part of the theorem is established. □

4. Concluding Remarks
As the above results show, there are several powerful advantages to the Martin-Löf definition of randomness.

Firstly, we have Solovay's Lemma, which enables us to prove a great many results from probability theory that generally hold for i.i.d. random variables. The fact that a proof from probability theory can be copied nearly verbatim (as in the proof of the Law of the Iterated Logarithm above) and applied to Martin-Löf randomness is a convincing argument for its usefulness. Because of this correspondence, it seems likely that other results from probability theory, such as the Central Limit Theorem, could be adapted to a computability-theoretic context and proven.

Secondly, we have that, by a theorem of Schnorr (see [6]), the Martin-Löf definition is precisely equivalent to requiring that random elements be "incompressible" in terms of their informational content. This is not only an intuitively satisfying property but a useful one, giving us more tools to work with random elements. While it was not discussed here, it is good to note that there are other such equivalences involving randomness in the context of LR/LK reducibility.

Finally, we see that Martin-Löf randomness is (at least within the hierarchy of arithmetical randomness we presented) the weakest notion of randomness that "gets the job done" of satisfying our intuitive requirements of randomness. We would, of course, like to use the weakest definition necessary to do this. As we have seen, weakly random elements do not have the nice properties that random elements do, while strongly random elements will. This shows us that weak randomness is insufficient, while strong randomness is, in a sense, overkill.

It must be admitted that there are some properties of Martin-Löf random numbers that do not coincide with the intuitive expectations of randomness. For example, we can exhibit random sequences that are low (their Turing jumps are Turing-equivalent to the Halting Problem) as well as random sequences that are hyperimmune-free (every function Turing reducible to the original sequence is bounded by a recursive function). Despite this, though, the other properties of Martin-Löf random numbers provide a strong argument for this definition of randomness being a good one.
References

[1] G. Davie, The Borel-Cantelli Lemmas, Probability Laws and Kolmogorov Complexity, The Annals of Probability (2001).
[2] M. Émile Borel, Les probabilités dénombrables et leurs applications arithmétiques, Rendiconti del Circolo Matematico di Palermo (1884-1940) 27 (1909), 247-271.
[3] W. Feller, An Introduction to Probability Theory and Its Applications, John Wiley & Sons, Inc., New York, NY (1950).
[4] S. Homer and A. Selman, Computability and Complexity Theory, Springer (2001).
[5] D. Khoshnevisan, Normal numbers are normal, Clay Mathematics Institute Annual Report 15 (2006), 27-31.
[6] S. G. Simpson, Almost everywhere domination and superhighness, Mathematical Logic Quarterly 53 (2007), 462-482.
[7] V. G. Vovk, The law of the iterated logarithm for random Kolmogorov, or chaotic, sequences, Theory of Probability & Its Applications 32 (1988), 413-425.
Department of Mathematics, University of Texas at Austin, Austin TX, 78712
E-mail address: [email protected]