Generalization of van Lambalgen's theorem and blind randomness for conditional probabilities
Hayato Takahashi∗

February 27, 2018
Abstract
A generalization of van Lambalgen's theorem is studied with the notion of Hippocratic (blind) randomness, without assuming computability of conditional probabilities. In [Bauwens 2014], a counterexample to the generalization of van Lambalgen's theorem is shown when the conditional probability is not computable. In this paper, we show (i) finiteness of the martingale for blind randomness, (ii) a classification of two notions of blind randomness by a likelihood ratio test, (iii) sufficient conditions for the generalization of van Lambalgen's theorem, and (iv) an example that satisfies van Lambalgen's theorem although the conditional probabilities are not computable for all random parameters.
van Lambalgen's theorem (1987) [9] says that a pair of sequences (x^∞, y^∞) ∈ Ω² is Martin-Löf (ML) random w.r.t. the product of uniform measures iff x^∞ is ML-random and y^∞ is ML-random relative to x^∞, where Ω is the set of infinite binary sequences. In this paper we study a generalized form of van Lambalgen's theorem using the notion of blind (Hippocratic) randomness [2, 4].

Let S be the set of finite binary strings and Δ(s) := {sx^∞ | x^∞ ∈ Ω} for s ∈ S, where sx^∞ is the concatenation of s and x^∞. Let |s| be the length of s ∈ S. Let Δ(x, y) := Δ(x) × Δ(y) for x, y ∈ S. In this paper, we study probabilities on (Ω, B) or (Ω², B_{Ω²}), where B and B_{Ω²} are the Borel σ-algebras generated from Δ(s), s ∈ S, and Δ(x, y), (x, y) ∈ S², respectively.

∗ Tokyo Denki University, Department of Electrical and Electronic Engineering, 5 Senju Asahi-cho, Adachi-ku, Tokyo 120-8551, Japan. Present address: Gifu University. [email protected]
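As a small illustration of the cylinder sets Δ(s) above (a sketch of my own, using the uniform measure on Ω): the measure of a finite union of cylinders can be computed exactly by first reducing the index set to a prefix-free one, so that the remaining cylinders are disjoint.

```python
from fractions import Fraction

# Illustration (not from the paper): Delta(s) is the set of infinite binary
# sequences extending s; under the uniform measure, P(Delta(s)) = 2^{-|s|}.

def cylinder_union_measure(strings: set[str]) -> Fraction:
    """Uniform measure of the union of Delta(s) for s in `strings`.
    Keep only prefix-minimal strings; the surviving cylinders are disjoint."""
    minimal = {s for s in strings
               if not any(t != s and s.startswith(t) for t in strings)}
    return sum((Fraction(1, 2 ** len(s)) for s in minimal), Fraction(0))

# Delta(01) is contained in Delta(0), so only Delta(0) counts:
assert cylinder_union_measure({"0", "01"}) == Fraction(1, 2)
# Disjoint cylinders add up: 1/4 + 1/8 = 3/8.
assert cylinder_union_measure({"00", "010"}) == Fraction(3, 8)
```

The same prefix-free reduction is what makes level-by-level measure bounds on unions of cylinders, such as the test conditions used later, exactly checkable for finite levels.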
In the following we omit B or B_{Ω²} and write, e.g., P on Ω or Ω² when it is obvious from the context. For a probability P on Ω, we write P(s) := P(Δ(s)) for s ∈ S. For a probability P on X × Y, X = Y = Ω, let P(x, y) := P(Δ(x, y)) for x, y ∈ S. Let P_X, P_Y be the marginal distributions on X and Y, respectively, i.e., ∀x, y  P_X(x) := P(x, λ) and P_Y(y) := P(λ, y). For A ⊆ S and B ⊆ S², set Ã := ∪_{s∈A} Δ(s) and B̃ := ∪_{(x,y)∈B} Δ(x, y), respectively. For x, y ∈ S, we write x ⊑ y if x is a prefix of y. Let ℕ be the set of natural numbers. Let R^P be the set of ML-random sequences w.r.t. P when P is computable, and let R^{P(·|y^∞), y^∞} be the set of ML-random sequences w.r.t. P(·|y^∞) relative to y^∞ when P(·|y^∞) is computable relative to y^∞. For A ⊆ Ω² and y^∞ ∈ Ω, set A_{y^∞} := {x^∞ | (x^∞, y^∞) ∈ A}. For example, if P is a computable probability on Ω², we write R^P_{y^∞} := {x^∞ | (x^∞, y^∞) ∈ R^P} for y^∞ ∈ Ω.

Vovk and V'yugin (1993) [10] generalized van Lambalgen's theorem as follows (actually they show a different form of the following theorem with parametric models; however, the form below is easily derived from theirs).
Theorem 1.1 (Vovk and V'yugin [10])
Let P be a computable probability on X × Y, X = Y = Ω. Assume that
(i) conditional probabilities exist for all parameters, and
(ii) they are uniformly computable for all parameters.
Then R^P_{y^∞} = R^{P(·|y^∞), y^∞} for all y^∞ ∈ R^{P_Y}.

Here the conditional probability P(·|·) is called uniformly computable for all parameters if
(i) there is a partial computable function A such that ∀s ∈ S, y^∞ ∈ Ω, k ∈ ℕ ∃y ⊏ y^∞  |P(s|y^∞) − A(s, y, k)| < 2^{-k}, and
(ii) if A(s, y, k) is defined then A(s, y, k) = A(s, y', k) for all y' ⊒ y.
It is known that there is a computable probability whose conditional probabilities are not uniformly computable (Roy 2011 [5]).

In [7], it is shown that conditional probabilities exist for all random parameters, i.e.,

∀x ∈ S, y^∞ ∈ R^{P_Y}  P(x|y^∞) := lim_{y→y^∞} P(x|y) (the right-hand side exists)  (1)

and P(·|y^∞) is a probability on (Ω, B) for each y^∞ ∈ R^{P_Y}.

For any fixed y^∞ ∈ Ω, P(·|y^∞) is called computable relative to y^∞ if
(i) there is a partial computable function A such that ∀s ∈ S, k ∈ ℕ ∃y ⊏ y^∞  |P(s|y^∞) − A(s, y, k)| < 2^{-k}, and
(ii) if A(s, y, k) is defined then A(s, y, k) = A(s, y', k) for all y' ⊒ y.
Note that in this definition A may depend on y^∞; in the definition of uniform computability of the conditional probability, by contrast, we require that there is a single A that satisfies the conditions for all y^∞.

The next theorem shows that the generalized van Lambalgen theorem holds if the conditional probability is computable relative to the given parameter.

Theorem 1.2 ([7, 8])
Let P be a computable probability on X × Y, X = Y = Ω. Fix y^∞ ∈ R^{P_Y} and assume that the conditional probability P(·|y^∞) is computable relative to y^∞. Then

R^P_{y^∞} = R^{P(·|y^∞), y^∞}.  (2)

Conditional probabilities always exist for all random parameters; however, they may not be computable, see Theorems 2.4 and 2.5 below. In this paper, we introduce the notion of blind (Hippocratic) randomness and study the generalization of (2) when the conditional probability is not computable. Here blind randomness is defined as follows. Let P be a probability on Ω. An r.e. set U ⊆ ℕ × S is called a blind test w.r.t. P if U_n ⊇ U_{n+1} and P(Ũ_n) < 2^{-n}, where U_n := {x | (n, x) ∈ U}, for all n. The set of blind random sequences w.r.t. P (in the following we denote it by H^P) is the set that is not covered by the limit of any blind test, i.e., H^P := (∪_{U: blind test} ∩_n Ũ_n)^c [2, 4]. Similarly, a blind test U^{y^∞} w.r.t. P(·|y^∞) relative to y^∞ is an r.e. set relative to y^∞ such that U^{y^∞}_n ⊇ U^{y^∞}_{n+1} and P(Ũ^{y^∞}_n | y^∞) < 2^{-n}, where U^{y^∞}_n := {x | (n, x) ∈ U^{y^∞}}, for all n. Let H^{P(·|y^∞), y^∞} be the set of blind random sequences w.r.t. P(·|y^∞) relative to y^∞, i.e., H^{P(·|y^∞), y^∞} is the set that is not covered by the limit of any blind test w.r.t. the conditional probability relative to y^∞. If a probability is not computable, the existence of a universal test is not assured; however, the definitions above are still well defined. If P is computable, we have R^P = H^P, and if P(·|y^∞) is computable relative to y^∞, we have R^{P(·|y^∞), y^∞} = H^{P(·|y^∞), y^∞}. In the definitions above, we can replace P(Ũ_n) < 2^{-n} and P(Ũ^{y^∞}_n | y^∞) < 2^{-n} with P(Ũ_n) < f(n) and P(Ũ^{y^∞}_n | y^∞) < f(n), respectively, where f is a computable decreasing function.

In [7], in the proof of the ⊇ part of Theorem 1.2, computability of the conditional probability is not assumed.

Corollary 1.1 ([7])
Let P be a computable probability on Ω². Then

R^P_{y^∞} ⊇ H^{P(·|y^∞), y^∞} for all y^∞ ∈ R^{P_Y}.  (3)

2 Results
First we show a sufficient condition for the equality in (3). In the following we set 1/a := ∞ if a = 0 and 1/a := 0 if a = ∞.

Theorem 2.1
Let P be a computable probability on Ω². Fix a pair of sequences (x^∞, y^∞) ∈ Ω². Assume that there are a computable probability Q on Ω² and a partial computable function with oracle y^∞, f^{y^∞}: S → ℚ, such that
(i) y^∞ ∈ R^{P_Y} ∩ R^{Q_Y},
(ii) Q(·|y^∞) is computable relative to y^∞,
(iii) ∀x ⊏ x^∞  P(x|y^∞) > 0,
(iv) there is an infinite subset A ⊆ {x | x ⊏ x^∞} such that sup_{x∈A} f^{y^∞}(x) < ∞ and A ⊆ {x | f^{y^∞}(x) is defined} ⊆ {x | Q(x|y^∞)/P(x|y^∞) < f^{y^∞}(x) < ∞}, and
(v) 0 < inf_{x→x^∞} Q(x|y^∞)/P(x|y^∞).
Then x^∞ ∈ R^P_{y^∞} ⇔ x^∞ ∈ H^{P(·|y^∞), y^∞}.

Proof) The proof is almost the same as that of Theorem 3.3 in [8]. Fix (x^∞, y^∞) satisfying the conditions of the theorem. As in the proof of Theorem 3.3 in [8], we expand a test w.r.t. P(·|y^∞) to a global test w.r.t. P. The problem here is that we do not assume the computability of P(·|y^∞). However, from the conditions of the theorem, we can approximate the conditional probability with a computable function as follows. From (iv) and (v), let c₁ and c₂ be rational constants such that

0 < c₁ < inf_{x→x^∞} Q(x|y^∞)/P(x|y^∞) and sup_{x∈A} f^{y^∞}(x) < c₂ < ∞.  (4)

For each n ∈ ℕ, let U ⊆ S be an r.e. set relative to y^∞ such that P(Ũ|y^∞) < 2^{-n} c₂^{-1} and x^∞ ∈ Ũ.
Let V := {x | ∃z ∈ U  z ⊑ x, f^{y^∞}(x) < c₂} and V' := {x | ∃z ∈ U  z ⊑ x, Q(x|y^∞)/P(x|y^∞) < c₂}. Then x^∞ ∈ Ṽ, V is r.e. relative to y^∞, and Ṽ ⊆ Ṽ'. From (iv) and (4), we have Q(Ṽ|y^∞) ≤ Q(Ṽ'|y^∞) < c₂ P(Ũ|y^∞) < 2^{-n}. From Theorem 3.3 in [8]¹, there is an r.e. set W ⊆ S × S such that Q(W̃) < 2^{-n} and W̃_{y^∞} = Ṽ. Let W' := {(x, y) ∈ W | P(x|y) < c₁^{-1} Q(x|y)}. Then P(W̃') < 2^{-n} c₁^{-1} and x^∞ ∈ W̃'_{y^∞}. Therefore, if x^∞ is covered by a test w.r.t. P(·|y^∞) then (x^∞, y^∞) is covered by a test w.r.t. P, which shows the only-if part of the theorem. The if part follows from Corollary 1.1. □

Next we show a classification of blind randomness for two different probabilities. For similar results for ML-randomness, see [3, 8]. Let P and Q be probabilities on Ω. From the martingale convergence theorem, we have

lim_{x→x^∞} Q(x)/P(x) < ∞, a.s.-P.

In [8], it is shown that the martingale convergence theorem holds for individual ML-random sequences for computable P, and the above inequality holds for them. In order to explore similar results for blind randomness without assuming computability of the probabilities, we introduce a notion of approximation.

Let ℤ be the set of integers and ℝ the set of real numbers, respectively. Let F_n be the algebra generated from {Δ(x) | |x| = n}; then we have B = σ(∪_n F_n). Let r_n: Ω → ℝ be a measurable function w.r.t. F_n. Since r_n(x^∞) takes a constant value on Δ(x) for |x| = n, x ⊏ x^∞, we write

∀n ∈ ℕ, x ∈ S, x^∞ ∈ Ω  r_n(x) := r_n(x^∞) if |x| = n, x ⊏ x^∞.  (5)

¹ There are typos in the proof of Theorem 3.3 in [8]: (i) k ≥ n + |x| should be changed to k ≥ n + |x| + 1 in equation (7), p. 188 line 1, and p. 189 line 3, and (ii) the inequality < should be changed to ≤ in equation (15), p. 188 line 1, and p. 188 line 7 from the bottom.

Let g: ℝ → ℝ be a strictly increasing function, i.e., ∀x, y ∈ ℝ  g(x) < g(y) if x < y. We say that {r_n}_{n∈ℕ} is strongly-effectively-approximable if there are a computable strictly increasing g and a computable function f: S × ℕ → ℤ, with f(x, n) depending only on the first n bits of x in the sense of (5), such that

∃c ∀x, n  f(x, n) ≤ g(r_n(x)) ≤ f(x, n) + c.  (6)

Theorem 2.2 Let P be a probability on Ω. Let {r_n}_{n∈ℕ} be a non-negative submartingale w.r.t. P.
If sup_i E(|r_i|) < ∞ and {r_n}_{n∈ℕ} is strongly-effectively-approximable, then H^P ⊆ {x^∞ | sup_n r_n(x^∞) < ∞}.

Proof) Let
U_{m,n} := {x | m < sup_{i≤n} g(r_i(x))},
U'_{m,n} := {x | m < g(sup_{i≤n} r_i(x))},
V_{m,n} := {x | m < sup_{i≤n} f(x, i)}, and
M_{m,n} := {x | m < sup_{i≤n} r_i(x)}
for all m, n ∈ ℕ. Let V_{g(m)} := ∪_n V_{g(m),n} = {x | g(m) < sup_{i∈ℕ} f(x, i)}. Since f and g are computable, we see that {V_{g(m)}}_{m∈ℕ} is a uniformly r.e. set. Since g is increasing, from (6), we have

∃c ∀x, n  sup_{i≤n} f(x, i) ≤ sup_{i≤n} g(r_i(x)) = g(sup_{i≤n} r_i(x)) ≤ sup_{i≤n} f(x, i) + c, and  (7)

∃c ∀x  sup_{i∈ℕ} f(x, i) ≤ g(sup_{i∈ℕ} r_i(x)) ≤ sup_{i∈ℕ} f(x, i) + c.  (8)

Then

P(Ṽ_{g(m),n}) ≤ P(Ũ_{g(m),n})  (9)
= P(Ũ'_{g(m),n})  (10)
= P(M̃_{m,n})  (11)
≤ E(|r_n|)/m,  (12)

where (9) follows from (7), (10) and (11) follow from the fact that g is strictly increasing, and (12) follows from Doob's submartingale inequality (for example, see [11], p. 137). Thus we have P(Ṽ_{g(m)}) = lim_n P(Ṽ_{g(m),n}) ≤ sup_n E(|r_n|)/m. Since sup_n E(|r_n|) < ∞, we see that {V_{g(m)}}_{m∈ℕ} is a test. Let V'_m := {x | m < sup_{i∈ℕ} r_i(x)}. Since g is strictly increasing, from (8), we have ∩_m Ṽ_{g(m)} = ∩_m Ṽ'_m ⊆ (H^P)^c. □

For example, Q/P is log-effectively-approximable if there is a computable function f: S → ℤ such that ∃c ∀x  f(x) < log Q(x)/P(x) < f(x) + c.

Corollary 2.1 Let P and Q be probabilities on Ω. Suppose that Q/P is strongly-effectively-approximable. Then H^P ⊆ {x^∞ | sup_{x→x^∞} Q(x)/P(x) < ∞}.

Proof) Since E(Q/P) ≤ 1, where the expectation is taken w.r.t. P, from Theorem 2.2 we have the corollary. □

Let L := {x^∞ | inf_{x→x^∞} Q(x)/P(x) > 0}.
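As a concrete illustration of the set L (a toy example of my own, not from the paper): take P the uniform (Bernoulli(1/2)) measure and Q the Bernoulli(1/3) measure. Along a P-typical sequence the likelihood ratio Q(x)/P(x) tends to 0, so such a sequence lies outside L. The sketch below uses a seeded pseudo-random sequence as a stand-in for a P-random one.

```python
from fractions import Fraction
import random

# Toy illustration of L = {x^inf | inf_{x -> x^inf} Q(x)/P(x) > 0}:
# P = Bernoulli(1/2) (uniform), Q = Bernoulli(1/3) with Q("1") = 1/3.

def P(x: str) -> Fraction:
    return Fraction(1, 2 ** len(x))

def Q(x: str) -> Fraction:
    q = Fraction(1, 1)
    for b in x:
        q *= Fraction(1, 3) if b == "1" else Fraction(2, 3)
    return q

random.seed(0)  # pseudo-random stand-in for a P-random sequence
x = "".join(random.choice("01") for _ in range(2000))

# Per bit the ratio Q/P gains a factor 2/3 (on "1") or 4/3 (on "0");
# for a fair-coin sequence the average log-factor is log(8/9)/2 < 0,
# so the ratio decays exponentially and the sequence is NOT in L.
assert Q(x) / P(x) < Fraction(1, 10 ** 6)
```

Conversely, when Q/P is bounded above and below by positive constants, every sequence belongs to L, which is the situation of part (c) of the lemma that follows.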
Since inf_{x→x^∞} Q(x)/P(x) > 0 ⇔ sup_{x→x^∞} P(x)/Q(x) < ∞, we see that there is a decreasing function h: ℕ → ℚ, i.e., ∀n < m  h(n) ≥ h(m), such that

P(L ∩ {P/Q > k}) < h(k) → 0 (k → ∞).  (13)

We say that P/Q is effectively bounded in probability if there is a computable h in (13).

Lemma 2.1 Let P and Q be probabilities on Ω.
(a) If Q/P is effectively-approximable and P/Q is effectively bounded in probability then H^P ⊆ H^Q ∪ L^c.
(b) If P(L) = 1 and P/Q is effectively bounded in probability, we have H^P ⊆ H^Q.
(c) ∃c, c' ∀x  0 < c < Q(x)/P(x) < c' < ∞ ⇒ H^P = H^Q.

Proof) (a) Let {V_n} be a test w.r.t. Q with Q(Ṽ_n) < n^{-2} for all n. For n, m ∈ ℕ, let L_{m,n} := {x | P(x)/Q(x) < m, |x| = n} and T_{m,n} := {x | f(x, n) < m, |x| = n}. From (6), there are a constant c and a strictly increasing g such that ∀n, m  T_{m−c,n} ⊆ L_{g^{-1}(m),n} ⊆ T_{m,n}. Thus we have ∪_m ∩_n L̃_{m,n} = ∪_m ∩_n T̃_{m,n} = L. Let V^m_n := V_n ∩ T_{m,n}; then

P(Ṽ^m_n) = P(Ṽ^m_n ∩ {P/Q > k}) + P(Ṽ^m_n ∩ {P/Q ≤ k}) ≤ h(k) + kQ(Ṽ_n) < h(n) + n^{-1},

where k = n in the last inequality. Since h is computable, there is a computable l: ℕ → ℕ such that Σ_n (h(l(n)) + l(n)^{-1}) < ∞. Since f is computable, {V^m_{l(n)}}_{n∈ℕ} is a uniformly r.e. set, and from Solovay's theorem (see [6]) we have lim sup_n Ṽ^m_{l(n)} ⊆ (H^P)^c. Thus

∩_n Ṽ_n ∩ (∩_n L̃_{m,n}) ⊆ lim sup_n Ṽ^m_n ⊆ lim sup_n Ṽ^m_{l(n)} ⊆ (H^P)^c.

Since the above holds for all m ∈ ℕ and for any test V w.r.t. Q, we have

(H^Q)^c ∩ L = ∪_m ∪_{V: test w.r.t. Q} (∩_n Ṽ_n ∩ (∩_n L̃_{m,n})) ⊆ (H^P)^c.

(b) Let {V_n}_{n∈ℕ} be a uniformly r.e. set such that Q(Ṽ_n) < n^{-2} for all n. Since P(L) = 1, we have

P(Ṽ_n) ≤ P(Ṽ_n ∩ {P/Q > k}) + P(Ṽ_n ∩ {P/Q ≤ k}) < h(k) + kQ(Ṽ_n) = h(n) + n^{-1},

where k = n. Since h is computable, we see that if {V_n}_{n∈ℕ} is a blind test w.r.t. Q, it is a blind test w.r.t. P.
(c) This follows immediately from (b). □

Theorem 2.3 Let P and Q be probabilities on Ω. If Q/P is effectively-approximable and P/Q is effectively bounded in probability then

H^P ∩ H^Q = H^P ∩ {x^∞ | 0 < inf_{x→x^∞} Q(x)/P(x)}, and
H^P ∩ (H^Q)^c = H^P ∩ {x^∞ | inf_{x→x^∞} Q(x)/P(x) = 0}.

Proof) Let L := {x^∞ | inf_{x→x^∞} Q(x)/P(x) > 0}. Then

H^P ∩ H^Q ⊆ H^P ∩ L  (14)
⊆ (H^P ∩ L) ∩ (H^Q ∪ L^c)  (15)
= H^P ∩ H^Q ∩ L ⊆ H^P ∩ H^Q,

where (14) follows from Corollary 2.1 and (15) follows from Lemma 2.1 (a). The second equation follows from the first one. □

Note that if P and Q are computable, we have R^P ∩ R^Q = R^P ∩ {x^∞ | 0 < inf_{x⊏x^∞} Q(x)/P(x)}, see [3, 8]. We can relativize the results above, i.e., for any y^∞ ∈ Ω, we can replace H^P and H^Q with H^{P,y^∞} and H^{Q,y^∞} in Corollary 2.1 and Theorem 2.3, respectively. If the two conditions of Theorem 2.3 are satisfied for the conditional probabilities P(·|y^∞) and Q(·|y^∞), we can replace condition (vi) in Theorem 2.1 with x^∞ ∈ H^{P(·|y^∞),y^∞} ∩ R^{Q(·|y^∞),y^∞}.

Next we show an example for which equality holds in (3) even though the conditional probability is not computable for all random parameters.

Theorem 2.4 There is a computable probability P on X × Y, X = Y = Ω, such that for all y^∞ ∈ R^{P_Y},
(a) P(·|y^∞) is not computable, and
(b) R^P_{y^∞} = H^{P(·|y^∞), y^∞}.

Proof) We construct a computable probability P on Ω² such that R^P = R^Q, where Q is the product of uniform probabilities, i.e., Q(x, y) = Q_X(x)Q_Y(y) = 2^{-(|x|+|y|)} for all x, y ∈ S, and P_X = Q_X, P_Y = Q_Y. Let e₁, e₂, ... be an enumeration of the partial computable functions. Let Δ₁ = Δ(0), Δ₂ = Δ(10), Δ₃ = Δ(110), ..., so that Ω \ {1^∞} = ∪_n Δ_n. We construct P such that for all n and y^∞ ∈ R^{P_Y},

P(Δ_n | y^∞) ≠ P_X(Δ_n) ⇔ e_n halts with oracle y^∞.  (16)

Observe that there are a partial computable function U and a total computable function f such that for all n and y^∞,

e_n halts with oracle y^∞ ⇔ ∃y ⊏ y^∞  U(n, y) halts ⇔ ∃y ⊏ y^∞ ∃t  f(n, y, t) = 0 ⇔ ∃y ⊏ y^∞  f(n, y, |y|) = 0.

Here if U(n, y) halts for some n, y then U(n, z) halts for every extension z of y, and if f(n, y, t) = 0 for some n, y, t then f(n, z, l) = 0 for every extension z of y and every l ≥ t. Intuitively, the argument t of f(n, y, t) is the number of steps of the computation of U(n, y).

Let P(x|λ) = P_X(x) = Q_X(x) = 2^{-|x|} and P_Y(y) = Q_Y(y) = 2^{-|y|} for all x, y ∈ S. Let 0 < ε < 1 and

P(Δ_n | y0) := P(Δ_n | y)(1 − ε) if f(n, y, |y|) = 0 and f(n, y', |y'|) ≠ 0, and P(Δ_n | y0) := P(Δ_n | y) otherwise;
P(Δ_n | y1) := P(Δ_n | y)(1 + ε) if f(n, y, |y|) = 0 and f(n, y', |y'|) ≠ 0, and P(Δ_n | y1) := P(Δ_n | y) otherwise.

Here |y| is the length of y and y' = y₁ ... y_{|y|−1}. By induction, we see that P(Δ_n, y) = P(Δ_n | y0)P(y0) + P(Δ_n | y1)P(y1) for all n, y. If Δ(x) ⊆ Δ_n and |x| ≥ n then let P(x|y) = 2^{n−|x|} P(Δ_n | y). Then P is consistently defined, i.e., P(x, y) = Σ_{i,j∈{0,1}} P(xi, yj) for all x, y ∈ S, and P_X(x) = Q_X(x), P_Y(y) = Q_Y(y). Since f is computable, we see that P is computable. By construction we have (16), and the conditional probabilities are not computable for all y^∞ ∈ R^{P_Y}. We have

∀x, y  (1 − ε) ≤ P(x, y)/Q(x, y) ≤ (1 + ε),  (17)
∀x  (1 − ε) ≤ P(x | y^∞)/Q(x | y^∞) ≤ (1 + ε).  (18)

Hence

(x^∞, y^∞) ∈ R^P ⇔ (x^∞, y^∞) ∈ R^Q  (19)
⇔ y^∞ ∈ R^{Q_Y}, x^∞ ∈ R^{Q(·|y^∞), y^∞}  (20)
⇔ y^∞ ∈ R^{P_Y}, x^∞ ∈ H^{P(·|y^∞), y^∞},  (21)

where (19) follows from Lemma 2.1 (c) and (17), (20) follows from van Lambalgen's theorem, and (21) follows from the relativized version of Lemma 2.1 (c) and (18). □

The following theorem shows an example for which equality does not hold in (3).
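Before turning to that counterexample, the bookkeeping in the proof of Theorem 2.4 can be sketched numerically. This is a toy instance of my own: the halting predicate f(n, y, |y|) = 0 is replaced by a trivially decidable stand-in, so the sketch only illustrates the tilt by (1 ∓ ε) at the first prefix where the predicate fires and the consistency of the resulting P, not the non-computability.

```python
from fractions import Fraction

# Toy instance of the construction in the proof of Theorem 2.4.
# Stand-in predicate (decidable, unlike the real halting predicate):
# "f(n, y, |y|) = 0" iff y contains at least n ones.

eps = Fraction(1, 4)                       # 0 < eps < 1

def halts(n: int, y: str) -> bool:
    return y.count("1") >= n

def cond(n: int, y: str) -> Fraction:
    """P(Delta_n | y), starting from P(Delta_n | lambda) = 2^{-n}.
    Both children of the FIRST prefix where the predicate fires are tilted:
    the 0-child by (1 - eps), the 1-child by (1 + eps)."""
    if y == "":
        return Fraction(1, 2 ** n)
    prev, bit = y[:-1], y[-1]
    base = cond(n, prev)
    first = halts(n, prev) and (prev == "" or not halts(n, prev[:-1]))
    if first:
        return base * (1 - eps) if bit == "0" else base * (1 + eps)
    return base

n = 2
# Consistency: with the uniform marginal P(yb) = P(y)/2, the conditional
# averages out, i.e. P(Delta_n|y) = (P(Delta_n|y0) + P(Delta_n|y1)) / 2.
for y in ["", "0", "1", "01", "10", "11", "011", "110"]:
    assert cond(n, y) == (cond(n, y + "0") + cond(n, y + "1")) / 2

# Once the tilt has happened the conditional is frozen, and it differs
# from P_X(Delta_2) = 1/4 exactly when the predicate fired along y:
assert cond(n, "111") == cond(n, "1111") == Fraction(5, 16)
assert cond(n, "000") == Fraction(1, 4)    # predicate never fires
```

In the real construction the tilt is triggered by a universal machine, so deciding whether the limiting conditional equals P_X(Δ_n) would decide the halting problem relative to y^∞; that is what makes P(·|y^∞) non-computable while P itself stays computable.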
Theorem 2.5 (Bauwens [1]) There is a computable probability P on X × Y, X = Y = Ω, and y^∞ ∈ R^{P_Y} such that
(a) P(·|y^∞) is not computable, and
(b) R^P_{y^∞} ≠ H^{P(·|y^∞), y^∞}.

Finally, we give another sufficient condition for the equality in (3), which requires a condition related to the convergence rate of the conditional probability; however, it does not explicitly require the existence of another computable conditional probability as in Theorem 2.1.

Lemma 2.2 Let y^∞ ∈ R^{P_Y}. If A is r.e. relative to y^∞ and P(Ã | y^∞) < ε then there are uniformly r.e. sets U₁, U₂, ... ⊆ S × S such that

(∪_n (Ũ_n \ Ũ_{n+1})) ∩ lim inf_n Ũ_n = ∅,  (22)
(Ũ_n \ Ũ_{n+1}) ∩ (Ũ_m \ Ũ_{m+1}) = ∅ for n ≠ m,  (23)
∪_n Ũ_n = (∪_n (Ũ_n \ Ũ_{n+1})) ∪ lim inf_n Ũ_n, and  (24)
Ã = (lim inf_n Ũ_n)_{y^∞}.  (25)

Proof) Since A is r.e. relative to y^∞, there is a partial computable B: ℕ × S → S such that (i) if B(i, y) is defined then B(i, y) = B(i, y') for all y' ⊒ y, and (ii) A = {x | ∃i ∈ ℕ, y ⊏ y^∞  B(i, y) = x}. Let T := {(x, y) | ∃i  B(i, y) = x}. Since T is r.e., there is a non-overlapping r.e. set W such that T̃ = W̃. Here W is called non-overlapping if Δ(x, y) ∩ Δ(x', y') = ∅ for (x, y), (x', y') ∈ W with (x, y) ≠ (x', y'), see [8]. Let W := {(x₁, y₁), (x₂, y₂), ...} be a recursive enumeration, W_n := {(x₁, y₁), ..., (x_n, y_n)}, and V_n := {y ∈ S | Δ(y) ⊆ ∩_{1≤i≤n} C_i, C_i ∈ {Δ(y_i), Δ(y_i)^c}, 1 ≤ i ≤ n}; V_n is the partition generated from Δ(y_i), i = 1, ..., n. For A ⊆ S × S, let A_y := {x | ∃z  (x, z) ∈ A, z ⊑ y}; for example, W_{n,y} = {x | ∃z  (x, z) ∈ W_n, z ⊑ y}. Set

U_n := {(x, y) | x ∈ W_{n,y}, Σ_{x'∈W_{n,y}} P(x'|y) < ε, y ∈ V_n}.  (26)

Let x ∈ A \ B ⇔ x ∈ A ∧ x ∉ B for sets A and B.
Since lim inf_n Ũ_n = ∪_m ∩_{n≥m} Ũ_n, we have (22) and (24). Next we show that for n, m ∈ ℕ,

∃ open set O_n  Ũ_n = Ũ_n ∩ (Ω × O_n),  (27)
Ũ_n \ Ũ_{n+1} = Ũ_n ∩ (Ω × D_n), where D_n := O_n \ O_{n+1}, and  (28)
D_n ∩ D_m = ∅ for n ≠ m.  (29)

Let O_n := {y^∞ | Ũ_{n,y^∞} ≠ ∅} for n ∈ ℕ; then from (26) and the fact that P(·|y) converges as y → y^∞ ∈ R^{P_Y} (see [7]), we see that O_n is open. Since W_n ⊆ W_m for n < m, we have

y^∞ ∈ O_n ∩ O_m ⇔ Ũ_{m,y^∞} ⊇ Ũ_{n,y^∞} ≠ ∅,  (30)

and (27) and (28) hold. Suppose that D_n ∩ D_m ≠ ∅ for n < m, and let y^∞ ∈ D_n ∩ D_m ⊆ O_n ∩ O_m. Then ∅ ≠ W_{n,y} ⊆ W_{n+1,y} ⊆ W_{m,y}. Thus we have y^∞ ∈ O_{n+1} and y^∞ ∉ D_n, which is a contradiction, and we have (29) and (23).

Since lim_{y→y^∞} P(B̃|y) = P(B̃|y^∞) for finite sets B and y^∞ ∈ R^{P_Y}, if P(B̃|y^∞) < ε then there is y ⊏ y^∞ such that P(B̃|y) < ε. Thus we have Ã ⊆ (lim inf_n Ũ_n)_{y^∞} ⊆ (∪_n Ũ_n)_{y^∞} ⊆ W̃_{y^∞} = Ã, and we have (25). □

From (23), we see that there is f: ℚ → ℕ such that

∀ε' > 0  P(∪_{n≥f(ε')} (Ũ_n \ Ũ_{n+1})) < ε'.  (31)

Theorem 2.6 Fix y^∞ ∈ R^{P_Y}. Assume that there is a computable f that satisfies (31) for any ε' > 0 and any A r.e. relative to y^∞ with P(Ã|y^∞) < ε' in Lemma 2.2. Then R^P_{y^∞} = H^{P(·|y^∞), y^∞}.

Proof) If A is r.e. relative to y^∞ with P(Ã|y^∞) < ε' then ∪_{n≥f(ε')} U_n is r.e. and P(∪_{n≥f(ε')} Ũ_n) = P((∪_{n≥f(ε')} (Ũ_n \ Ũ_{n+1})) ∪ lim inf_n Ũ_n) < ε'. Thus we see that if A is a test w.r.t. P(·|y^∞) then it is covered by a test w.r.t. P. □

Acknowledgement
This paper is based on work done while the author visited LIRMM, Montpellier, France. The author thanks Prof. A. Shen (LIRMM, France), Prof. A. Romashchenko (LIRMM, France), Prof. Bruno Bauwens (Nancy, France), Prof. Teturo Kamae (Osaka City Univ.), Prof. Hiroshi Sugita (Osaka Univ.), and Prof. Akio Fujiwara (Osaka Univ.) for discussions and comments. A part of this work was supported by JSPS KAKENHI Grant Number 24540153.
References

[1] B. Bauwens. 2014. Preprint.
[2] L. Bienvenu, P. Gács, M. Hoyrup, C. Rojas, and A. Shen. Algorithmic tests and randomness with respect to a class of measures. 2011. arXiv:1103.1529v2.
[3] L. Bienvenu and W. Merkle. Effective randomness for computable probability measures. Electron. Notes Theor. Comput. Sci., 167:117–130, 2007.
[4] B. Kjos-Hanssen. The probability distribution as a computational resource for randomness testing. Journal of Logic and Analysis, 2(10):1–13, 2010.
[5] D. M. Roy. Computability, inference and modeling in probabilistic programming. PhD thesis, MIT, 2011.
[6] A. Kh. Shen. On relations between different algorithmic definitions of randomness. Soviet Math. Dokl., 38(2):316–319, 1989.
[7] H. Takahashi. On a definition of random sequences with respect to conditional probability. Inform. and Comput., 206:1375–1382, 2008.
[8] H. Takahashi. Algorithmic randomness and monotone complexity on product space. Inform. and Comput., 209:183–197, 2011.
[9] M. van Lambalgen. Random Sequences. PhD thesis, Universiteit van Amsterdam, 1987.
[10] V. G. Vovk and V. V. V'yugin. On the empirical validity of the Bayesian method. J. R. Stat. Soc. B, 55(1):253–266, 1993.
[11] D. Williams. Probability with Martingales. Cambridge University Press, Cambridge, 1991.