Coarse Reducibility and Algorithmic Randomness
Denis R. Hirschfeldt, Carl G. Jockusch Jr., Rutger Kuyper, Paul E. Schupp
Abstract. A coarse description of a set A ⊆ ω is a set D ⊆ ω such that the symmetric difference of A and D has asymptotic density 0. We study the extent to which noncomputable information can be effectively recovered from all coarse descriptions of a given set A, especially when A is effectively random in some sense. We show that if A is 1-random and B is computable from every coarse description D of A, then B is K-trivial, which implies that if A is in fact weakly 2-random then B is computable. Our main tool is a kind of compactness theorem for cone-avoiding descriptions, which also allows us to prove the same result for 1-genericity in place of weak 2-randomness. In the other direction, we show that if A ≤_T ∅′ is a 1-random set, then there is a noncomputable c.e. set computable from every coarse description of A, but that not all K-trivial sets are computable from every coarse description of some 1-random set. We study both uniform and nonuniform notions of coarse reducibility. A set Y is uniformly coarsely reducible to X if there is a Turing functional Φ such that if D is a coarse description of X, then Φ^D is a coarse description of Y. A set B is nonuniformly coarsely reducible to A if every coarse description of A computes a coarse description of B. We show that a certain natural embedding of the Turing degrees into the coarse degrees (both uniform and nonuniform) is not surjective. We also show that if two sets are mutually weakly 3-random, then their coarse degrees form a minimal pair, in both the uniform and nonuniform cases, but that the same is not true of every pair of relatively 2-random sets, at least in the nonuniform coarse degrees.

Date: August 27, 2018.

2010 Mathematics Subject Classification. Primary 03D30; Secondary 03D28, 03D32.

Key words and phrases. Coarse reducibility, algorithmic randomness, K-triviality.

Hirschfeldt was partially supported by grant DMS-1101458 from the National Science Foundation of the United States. Kuyper's research was supported by NWO/DIAMANT grant 613.009.011 and by John Templeton Foundation grant 15619: "Mind, Mechanism and Mathematics: Turing Centenary Research Project".

1. Introduction
There are many natural problems with high worst-case complexity that are nevertheless easy to solve in most instances. The notion of "generic-case complexity" was introduced by Kapovich, Myasnikov, Schupp, and Shpilrain [14] as a notion that is more tractable than average-case complexity but still allows a somewhat nuanced analysis of such problems. That paper also introduced the idea of generic computability, which captures the idea of having a partial algorithm that correctly computes A(n) for "almost all" n, while never giving an incorrect answer. Jockusch and Schupp [13] began the general computability-theoretic investigation of generic computability and also defined the idea of coarse computability, which captures the idea of having a total algorithm that always answers and may make mistakes, but correctly computes A(n) for "almost all" n. We are here concerned with this latter concept. We first need a good notion of "almost all" natural numbers.

Definition 1.1.
Let A ⊆ ω. The density of A below n, denoted by ρ_n(A), is |A ↾ n| / n. The upper density ρ̄(A) of A is lim sup_n ρ_n(A). The lower density ρ̲(A) of A is lim inf_n ρ_n(A). If ρ̄(A) = ρ̲(A) then we call this quantity the density of A, and denote it by ρ(A).

We say that D is a coarse description of X if ρ(D △ X) = 0, where △ denotes symmetric difference. A set X is coarsely computable if it has a computable coarse description.

This idea leads to natural notions of reducibility.

Definition 1.2.
We say that Y is uniformly coarsely reducible to X, and write Y ≤_uc X, if there is a Turing functional Φ such that if D is a coarse description of X, then Φ^D is a coarse description of Y. This reducibility induces an equivalence relation ≡_uc on 2^ω. We call the equivalence class of X under this relation the uniform coarse degree of X.

Uniform coarse reducibility, generic reducibility (defined in [13]), and several related reducibilities have been termed notions of robust information coding by Dzhafarov and Igusa [7]. Work on such notions has mainly focused on their uniform versions. (One exception is a result on nonuniform ii-reducibility in Hirschfeldt and Jockusch [9].) However, their nonuniform versions also seem to be of interest. In particular, we will work with the following nonuniform version of coarse reducibility.

Definition 1.3.
We say that Y is nonuniformly coarsely reducible to X, and write Y ≤_nc X, if every coarse description of X computes a coarse description of Y. This reducibility induces an equivalence relation ≡_nc on 2^ω. We call the equivalence class of X under this relation the nonuniform coarse degree of X.
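As a concrete illustration of these definitions, here is a small Python sketch (our own, not part of the paper; all names are ours) that computes the densities ρ_n(X △ D) on finite prefixes. For a genuine coarse description these values tend to 0; here D disagrees with X exactly on the perfect squares, a set of density 0:

```python
def density_below(A, n):
    # ρ_n(A) = |A ∩ {0, ..., n-1}| / n, for A given as a set of naturals
    return sum(1 for k in A if k < n) / n

def sym_diff(A, B):
    # A △ B, the symmetric difference
    return (A - B) | (B - A)

# X is the even numbers; D disagrees with X exactly on the perfect
# squares, so the disagreement set has density 0.
X = set(range(0, 10_000, 2))
squares = {k * k for k in range(100)}
D = sym_diff(X, squares)
densities = [density_below(sym_diff(X, D), n) for n in (100, 2_500, 10_000)]
```

Here sym_diff(X, D) recovers exactly the set of squares, and the computed densities 0.1, 0.02, 0.01 decrease toward 0 along the prefix, as the definition of a coarse description requires in the limit.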
Note that the coarsely computable sets form the least degree in both the uniform and nonuniform coarse degrees. Uniform coarse reducibility clearly implies nonuniform coarse reducibility. We will show in the next section that, as one might expect, the converse fails. The development of the theory of notions of robust information coding and related concepts has led to interactions with computability theory (as in Jockusch and Schupp [13]; Downey, Jockusch, and Schupp [4]; Downey, Jockusch, McNicholl, and Schupp [5]; and Hirschfeldt, Jockusch, McNicholl, and Schupp [10]), reverse mathematics (as in Dzhafarov and Igusa [7] and Hirschfeldt and Jockusch [9]), and algorithmic randomness (as in Astor [1]).

In this paper, we investigate connections between coarse reducibility and algorithmic randomness. In Section 2, we describe natural embeddings of the Turing degrees into the uniform and nonuniform coarse degrees, and discuss some of their basic properties. In Section 3, we show that no weakly 2-random set can be in the images of these embeddings by showing that if X is weakly 2-random and A is noncomputable, then there is some coarse description of X that does not compute A. More generally, we show that if X is 1-random and A is computable from every coarse description of X, then A is K-trivial. Our main tool is a kind of compactness theorem for cone-avoiding descriptions. We also show that there do exist noncomputable sets computable from every coarse description of some 1-random set, but that not all K-trivial sets have this property. In Section 4, we give further examples of classes of sets that cannot be in the images of our embeddings. In Section 5, we show that if two sets are relatively weakly 3-random then their coarse degrees form a minimal pair, in both the uniform and nonuniform cases, but that, at least for the nonuniform coarse degrees, the same is not true of every pair of relatively 2-random sets.
These results are analogous to the fact that, for the Turing degrees, two relatively weakly 2-random sets always form a minimal pair, but two relatively 1-random sets may not. In Section 6, we conclude with a few open questions.

We assume familiarity with basic notions of computability theory (as in [22]) and algorithmic randomness (as in [3] or [19]). For S ⊆ 2^{<ω}, we write ⟦S⟧ for the open subset of 2^ω generated by S; that is, ⟦S⟧ = {X : ∃n (X ↾ n ∈ S)}. We denote the uniform measure on 2^ω by µ.

2. Coarsenings and embeddings of the Turing degrees
We can embed the Turing degrees into both the uniform and nonuniform coarse degrees, and our first connection between coarse computability and algorithmic randomness comes from considering such embeddings. While there may be several ways to define such embeddings, a natural way to proceed is to define a map C : 2^ω → 2^ω such that C(A) contains the same information as A, but coded in a "coarsely robust" way. That is, we would like C(A) to be computable from A, and A to be computable from any coarse description of C(A).

In the case of the uniform coarse degrees, one might think that the latter reduction should be uniform, but that condition would be too strong: If Γ^D = A for every coarse description D of C(A), then Γ^σ(n)↓ ⇒ Γ^σ(n) = A(n) (since every string can be extended to a coarse description of C(A)), which, together with the fact that for each n there is a σ such that Γ^σ(n)↓, implies that A is computable. Thus we relax the uniformity condition slightly in the following definition.

Definition 2.1.
A map C : 2^ω → 2^ω is a coarsening if for each A we have C(A) ≤_T A, and for each coarse description D of C(A), we have A ≤_T D. A coarsening C is uniform if there is a binary Turing functional Γ with the following properties for every coarse description D of C(A):

1. Γ^D is total.
2. Let A_s(n) = Γ^D(n, s). Then A_s = A for cofinitely many s.

Proposition 2.2.
Let C and F be coarsenings and let A and B be sets. Then:

1. B ≤_T A if and only if C(B) ≤_nc C(A).
2. If C is uniform, then B ≤_T A if and only if C(B) ≤_uc C(A).
3. C(A) ≡_nc F(A).
4. If C and F are both uniform, then C(A) ≡_uc F(A).

Proof.
1. Suppose that C(B) ≤_nc C(A). Then C(A) computes a coarse description D of C(B). Thus B ≤_T D ≤_T C(A) ≤_T A.

Now suppose that B ≤_T A and let D be a coarse description of C(A). Then C(B) ≤_T B ≤_T A ≤_T D. Thus C(B) ≤_nc C(A).

2. Suppose that C is uniform and that B ≤_T A. Let D be a coarse description of C(A), and let A_s be as in Definition 2.1. Then C(B) ≤_T B ≤_T A, so let Φ be such that Φ^A = C(B). Let X ≤_T D be defined as follows. Given n, search for an s > n such that Φ^{A_s}(n)↓ and let X(n) = Φ^{A_s}(n). (Note that such an s must exist.) Then X(n) = Φ^A(n) = C(B)(n) for almost all n, so X is a coarse description of C(B). Since X is obtained uniformly from D, we have C(B) ≤_uc C(A). The converse follows immediately from 1.

3. Let D be a coarse description of F(A). Then C(A) ≤_T A ≤_T D. Thus C(A) ≤_nc F(A). By symmetry, C(A) ≡_nc F(A).

4. If F is uniform then the same argument as in the proof of 2 shows that we can obtain a coarse description of C(A) uniformly from D, whence C(A) ≤_uc F(A). If C is also uniform then C(A) ≡_uc F(A) by symmetry. □

Thus uniform coarsenings all induce the same natural embeddings. It remains to show that uniform coarsenings exist. One example is given by Dzhafarov and Igusa [7]. We give a similar example. Let I_n = [n!, (n+1)!) and let I(A) = ∪_{n∈A} I_n; this map first appeared in Jockusch and Schupp [13]. Clearly I(A) ≤_T A, and it is easy to check that if D is a coarse description of I(A) then D computes A. Thus I is a coarsening.

To construct a uniform coarsening, let H(A) = {⟨n, i⟩ : n ∈ A ∧ i ∈ ω} and define E(A) = I(H(A)). The notation E denotes this particular coarsening throughout the paper.

Proposition 2.3.
The map E is a uniform coarsening.

Proof. Clearly E(A) ≤_T A. Now let D be a coarse description of E(A). Let G = {m : |D ∩ I_m| > |I_m| / 2} and let A_s = {n : ⟨n, s⟩ ∈ G}. Then G =* H(A), so A_s = A for all but finitely many s, and the A_s are obtained uniformly from D. □

A first natural question is whether uniform coarse reducibility and nonuniform coarse reducibility are indeed different. We give a positive answer by showing that, unlike in the nonuniform case, the mappings E and I are not equivalent up to uniform coarse reducibility. Recall that a set X is autoreducible if there exists a Turing functional Φ such that for every n ∈ ω we have Φ^{X\{n}}(n) = X(n). Equivalently, we could require that Φ not ask whether its input belongs to its oracle. We now introduce a ∆⁰₂-version of this notion.

Definition 2.4.
A set X is jump-autoreducible if there exists a Turing functional Φ such that for every n ∈ ω we have Φ^{(X\{n})′}(n) = X(n).

Proposition 2.5.
Let X be such that E(X) ≤_uc I(X). Then X is jump-autoreducible.

Proof. We must give a procedure for computing X(n) from (X \ {n})′ that is uniform in X. Given an oracle for X \ {n}, we can uniformly compute I(X \ {n}). Now I(X \ {n}) =* I(X), so I(X \ {n}) is a coarse description of I(X). Since E(X) ≤_uc I(X) by assumption, from I(X \ {n}) we can uniformly compute a coarse description D of E(X). Since E is a uniform coarsening by Proposition 2.3, from D we can uniformly obtain sets A_0, A_1, ... with A_s = X for all sufficiently large s. Composing these various reductions, from X \ {n} we can uniformly compute sets A_0, A_1, ... with A_s = X for all sufficiently large s. Then from (X \ {n})′ we can uniformly compute lim_s A_s(n) = X(n), as needed. □

We will now show that 2-generic sets are not jump-autoreducible, which will give us a first example separating uniform coarse reducibility and nonuniform coarse reducibility. For this we first show that no 1-generic set is autoreducible, which is an easy exercise.
Proposition 2.6. If X is 1-generic, then X is not autoreducible.

Proof. Suppose for the sake of a contradiction that X is 1-generic and is autoreducible via Φ. For a string σ, let σ⁻¹(i) be the set of n such that σ(n) = i. If τ is a binary string, let τ \ {n} be the unique binary string µ of the same length such that µ⁻¹(1) = τ⁻¹(1) \ {n}. Let S be the set of strings τ such that Φ^{τ\{n}}(n)↓ ≠ τ(n) for some n. Then S is a c.e. set of strings and X does not meet S. Since X is 1-generic, there is a string σ ≺ X that has no extension in S. Let n = |σ|, and let τ ≻ σ be a string such that Φ^{τ\{n}}(n)↓. Such a string τ exists because σ ≺ X and Φ witnesses that X is autoreducible. Furthermore, we may assume that τ(n) ≠ Φ^{τ\{n}}(n), since changing the value of τ(n) does not affect any of the other conditions in the choice of τ. Hence τ is an extension of σ and τ ∈ S, which is the desired contradiction. □

Proposition 2.7. If X is 2-generic, then X is not jump-autoreducible.

Proof. Since X is 2-generic, X is 1-generic relative to ∅′. Hence, by relativizing the proof of the previous proposition to ∅′, we see that X is not autoreducible relative to ∅′. However, the class of 1-generic sets is uniformly GL₁, i.e., there exists a single Turing functional Ψ such that for every 1-generic X we have Ψ^{X⊕∅′} = X′, as can be verified by inspecting the usual proof that every 1-generic set is GL₁ (see [12, Lemma 2.6]). Of course, if X is 1-generic, then X \ {n} is also 1-generic for every n. Thus from an oracle for (X \ {n}) ⊕ ∅′ we can uniformly compute (X \ {n})′. Now, if X is jump-autoreducible, then from (X \ {n})′ we can uniformly compute X(n). Composing these reductions shows that X(n) is uniformly computable from (X \ {n}) ⊕ ∅′, which contradicts our previous remark that X is not autoreducible relative to ∅′. □

Corollary 2.8.
If X is 2-generic, then E(X) ≡_nc I(X) but E(X) ≰_uc I(X).

Proof. We know that E(X) ≡_nc I(X) from Proposition 2.2. The fact that E(X) ≰_uc I(X) follows from Propositions 2.5 and 2.7. □

It is natural to ask whether the same result holds for 2-random sets. In the proof above we used the fact that the 2-generic sets are uniformly GL₁. For 2-random sets this fact is almost true, as expressed by the following lemma. The proof is adapted from Monin [18], where a generalization for higher levels of randomness is proved. Let U_0, U_1, ... be a fixed universal Martin-Löf test relative to ∅′. The 2-randomness deficiency of a 2-random X is the least c such that X ∉ U_c.

Lemma 2.9.
There is a Turing functional Θ such that, for a 2-random X and an upper bound b on the 2-randomness deficiency of X, we have Θ^{X⊕∅′, b} = X′.

Proof. Let V_e = {Z : e ∈ Z′}. The V_e are uniformly Σ⁰₁ classes, so we can define a function f ≤_T ∅′ such that µ(V_e \ V_e[f(e, i)]) < 2^{−i} for all e and i. Then each sequence V_e \ V_e[f(e, 0)], V_e \ V_e[f(e, 1)], ... is an ∅′-Martin-Löf test, and from b we can compute a number m such that if X is 2-random and b bounds the 2-randomness deficiency of X, then X ∉ V_e \ V_e[f(e, m)]. Then X ∈ V_e if and only if X ∈ V_e[f(e, m)], which we can verify (X ⊕ ∅′)-computably. □

Proposition 2.10. If X is 2-random, then X is not jump-autoreducible.

Proof. Because X is 2-random, it is not autoreducible relative to ∅′, as can be seen by relativizing the proof of Figueira, Miller, and Nies [8] that no 1-random set is autoreducible. To obtain a contradiction, assume that X is jump-autoreducible via some functional Φ. It can be directly verified that there is a computable function f such that f(n) bounds the 2-randomness deficiency of X \ {n}. Now let Ψ^{Y⊕∅′}(n) = Φ^{Θ^{Y⊕∅′, f(n)}}(n). Then X is autoreducible relative to ∅′ via Ψ, a contradiction. □

Corollary 2.11. If X is 2-random, then E(X) ≡_nc I(X) but E(X) ≰_uc I(X).

Although we will not discuss generic reducibility after this section, it is worth noting that our maps E and I also allow us to distinguish generic reducibility from its nonuniform analog. Let us briefly review the relevant definitions from [13]. A generic description of a set A is a partial function that agrees with A where defined, and whose domain has density 1. A set A is generically reducible to a set B, written A ≤_g B, if there is an enumeration operator W such that if Φ is a generic description of B, then W^{graph(Φ)} is the graph of a generic description of A.
We can define the notion of nonuniform generic reducibility in a similar way: A ≤_ng B if for every generic description Φ of B, there is a generic description Ψ of A such that graph(Ψ) is enumeration reducible to graph(Φ).

It is easy to see that E(X) ≤_ng I(X) for all X. On the other hand, we have the following fact.

Proposition 2.12. If E(X) ≤_g I(X) then X is autoreducible.

Proof. Let I_n be as in the definition of I. Suppose that W witnesses that E(X) ≤_g I(X). We can assume that W^Z is the graph of a partial function for every oracle Z. Define a Turing functional Θ as follows. Given an oracle Y and an input n, let Φ(k) = Y(m) if k ∈ I_m and m ≠ n, and let Φ(k)↑ if k ∈ I_n. Let Ψ be the partial function with graph W^{graph(Φ)}. Search for an i and a k ∈ I_{⟨n,i⟩} such that Ψ(k)↓. If such numbers are found then let Θ^Y(n) = Ψ(k). If Y = X \ {n} then Φ is a generic description of I(X), so Ψ is a generic description of E(X), and hence Θ^Y(n)↓ = X(n). Thus X is autoreducible. □

We finish this section by showing that, for both the uniform and the nonuniform coarse degrees, coarsenings of the appropriate type preserve joins but do not always preserve existing meets.
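Before turning to joins and meets, the interval coding I that underlies the arguments of this section can be experimented with directly. The following Python sketch (ours, purely illustrative) encodes a finite set via I_n = [n!, (n+1)!) and decodes by majority vote, as in the proof of Proposition 2.3; because |I_n| = n · n! grows quickly, corrupting a density-0 set of bit positions (here, the perfect squares) flips only a minority of each large interval:

```python
from math import factorial, isqrt

def interval(n):
    # I_n = [n!, (n+1)!)
    return range(factorial(n), factorial(n + 1))

def encode(A, N):
    # Characteristic sequence of I(A) up to N!, for A ⊆ {0, ..., N-1}
    bits = [0] * factorial(N)
    for n in A:
        for k in interval(n):
            bits[k] = 1
    return bits

def decode(bits, N):
    # Recover {m : |D ∩ I_m| > |I_m| / 2} by majority vote on each interval
    out = set()
    for m in range(N):
        block = [bits[k] for k in interval(m)]
        if block and sum(block) > len(block) / 2:
            out.add(m)
    return out

A, N = {2, 4}, 6
bits = encode(A, N)
for k in range(2, len(bits)):          # flip a density-0 set of positions
    if isqrt(k) ** 2 == k:
        bits[k] = 1 - bits[k]
recovered = decode(bits, N)            # majority vote undoes the corruption
```

On this prefix, recovered equals {2, 4} again. (The very first intervals are tiny, and I_0 is in fact empty, so the robustness only takes hold for larger n, which is all that matters for density.)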
Proposition 2.13.
Let C be a coarsening. Then C(A ⊕ B) is the least upper bound of C(A) and C(B) in the nonuniform coarse degrees. The same holds for the uniform coarse degrees if C is a uniform coarsening.

Proof. By Proposition 2.2 we know that C(A ⊕ B) is an upper bound for C(A) and C(B) in both the uniform and nonuniform coarse degrees. Let us show that it is the least upper bound. If C(A), C(B) ≤_nc G then every coarse description D of G computes both A and B, so D ≥_T A ⊕ B ≥_T C(A ⊕ B). Thus G ≥_nc C(A ⊕ B).

Finally, assume that C is a uniform coarsening and let C(A), C(B) ≤_uc G. Let Φ be a Turing functional such that Φ^{A⊕B} = C(A ⊕ B). Every coarse description H of G uniformly computes coarse descriptions D_0 of C(A) and D_1 of C(B). Since C is uniform, there are Turing functionals Γ and ∆ such that, letting A_s(n) = Γ^{D_0}(n, s) and B_s(n) = ∆^{D_1}(n, s), we have that A ⊕ B = A_s ⊕ B_s for all sufficiently large s. Let E be defined as follows. Given n, search for an s > n such that Φ^{A_s ⊕ B_s}(n)↓, and let E(n) = Φ^{A_s ⊕ B_s}(n). If n is sufficiently large, then E(n) = Φ^{A⊕B}(n) = C(A ⊕ B)(n), so E is a coarse description of C(A ⊕ B). Since E is obtained uniformly from H, we have that C(A ⊕ B) ≤_uc G. □

Lemma 2.14.
Let C be a uniform coarsening and let Y ≤_T X. Then Y ≤_uc C(X).

Proof. Let Φ be a Turing functional such that Φ^X = Y. Let D be a coarse description of C(X) and let A_s be as in Definition 2.1. Now define G(n) to be the value of Φ^{A_s}(n) for the least pair ⟨s, t⟩ such that s > n and Φ^{A_s}(n)[t]↓. Then G =* Y, so G is a coarse description of Y. □

Proposition 2.15.
Let C be a coarsening. Then C does not always preserve existing meets in the nonuniform coarse degrees. The same holds for the uniform coarse degrees if C is a uniform coarsening.

Proof. Let
X, Y be relatively 2-random and ∆⁰₃. Then X and Y form a minimal pair in the Turing degrees, while X and Y do not form a minimal pair in the nonuniform coarse degrees by Theorem 5.6 below. Since every coarse description of C(X) computes X, we see that C(X) ≥_nc X and C(Y) ≥_nc Y. Therefore C(X) and C(Y) also do not form a minimal pair in the nonuniform coarse degrees.

Next, let C be a uniform coarsening. We have seen above that there exists some A ≤_nc C(X), C(Y) that is not coarsely computable. Then A ≤_T X, Y, so A ≤_uc C(X), C(Y) by the previous lemma. Thus C(X) and C(Y) do not form a minimal pair in the uniform coarse degrees. □

3. Randomness, K-triviality, and robust information coding

It is reasonable to expect that the embeddings induced by E (or equivalently, by any uniform coarsening) are not surjective. Indeed, if E(A) ≤_uc X then the information represented by A is coded into X in a fairly redundant way. If A is noncomputable, it should follow that X cannot be random. As we will see, we can make this intuition precise.

Definition 3.1.
Let X_c be the set of all A such that A is computable from every coarse description of X.

We will show that if X is weakly 2-random then every element of X_c is computable, and hence E(A) ≰_nc X for all noncomputable A (since every coarse description of E(A) computes A). Since no 1-random set can be coarsely computable, it will follow that X ≢_nc E(B) and X ≢_uc E(B) for all B. We will first prove the following theorem. Let 𝒦 be the class of K-trivial sets. (See [3] or [19] for more on K-triviality.)

Theorem 3.2. If X is 1-random then X_c ⊆ 𝒦.

By Downey, Nies, Weber, and Yu [6], if X is weakly 2-random then it cannot compute any noncomputable ∆⁰₂ set. Since 𝒦 ⊂ ∆⁰₂, our desired result follows from Theorem 3.2.

Corollary 3.3. If X is weakly 2-random then every element of X_c is computable, and hence E(A) ≰_nc X for all noncomputable A. In particular, in both the uniform and nonuniform coarse degrees, the degree of X is not in the image of the embedding induced by E.

To prove Theorem 3.2, we use the fact, established by Hirschfeldt, Nies, and Stephan [11], that A is K-trivial if and only if A is a base for 1-randomness, that is, A is computable in a set that is 1-random relative to A. The basic idea is to show that if X is 1-random and A ∈ X_c, then for each k > 1 we can split X into k many "slices" X_0, ..., X_{k−1} such that for each i < k, we have A ≤_T X_0 ⊕ ··· ⊕ X_{i−1} ⊕ X_{i+1} ⊕ ··· ⊕ X_{k−1} (where the right-hand side of this inequality denotes X_1 ⊕ ··· ⊕ X_{k−1} when i = 0 and X_0 ⊕ ··· ⊕ X_{k−2} when i = k − 1). By van Lambalgen's Theorem, each X_i is then 1-random relative to X_0 ⊕ ··· ⊕ X_{i−1} ⊕ X_{i+1} ⊕ ··· ⊕ X_{k−1} ⊕ A, and hence, again by van Lambalgen's Theorem, X is 1-random relative to A. Since A ∈ X_c implies that A ≤_T X, we will conclude that A is a base for 1-randomness, and hence is K-trivial. We begin with some notation for certain partitions of X.

Definition 3.4.
Let X ⊆ ω. For an infinite subset Z = {z_0 < z_1 < ···} of ω, let X ↾ Z = {n : z_n ∈ X}. For k > 1 and i < k, define X^k_i = X ↾ {n : n ≡ i mod k} and X^k_{≠i} = X ↾ {n : n ≢ i mod k}. Note that X^k_{≠i} ≡_T X \ {n : n ≡ i mod k} and ρ̄(X △ (X \ {n : n ≡ i mod k})) ≤ 1/k.

Van Lambalgen's Theorem [23] states that Y ⊕ Z is 1-random if and only if Y and Z are relatively 1-random. The proof of this theorem shows, more generally, that if Z is computable, infinite, and coinfinite, then X is 1-random if and only if X ↾ Z and X ↾ Z̄ are relatively 1-random. Relativizing this fact and applying induction, we get the following version of van Lambalgen's Theorem.

Theorem 3.5 (van Lambalgen [23]). The following are equivalent for all sets X and A, and all k > 1:

1. X is 1-random relative to A.
2. For each i < k, the set X^k_i is 1-random relative to X^k_{≠i} ⊕ A.

The last ingredient we need for the proof of Theorem 3.2 is a kind of compactness principle, which will also be used to yield further results in the next section, and is of independent interest given its connection with the following concept defined in [10].
Definition 3.6.
Let r ∈ [0, 1]. A set X is coarsely computable at density r if there is a computable set C such that ρ̄(X △ C) ≤ 1 − r. The coarse computability bound of X is

γ(X) = sup{r : X is coarsely computable at density r}.

As noted in [10], there are sets X such that γ(X) = 1 but X is not coarsely computable. In other words, there is no principle of "compactness of computable coarse descriptions". (Miller (see [10, Theorem 5.8]) showed that one can in fact recover such a principle by adding a further effectivity condition to the requirement that γ(X) = 1.) The following theorem shows that if we replace "computable" by "cone-avoiding", the situation is different.

Theorem 3.7.
Let A and X be arbitrary sets. Suppose that for each ε > 0 there is a set D_ε such that ρ̄(X △ D_ε) ≤ ε and A ≰_T D_ε. Then there is a coarse description D of X such that A ≰_T D.

Proof. The basic idea is that, given a Turing functional Φ and a string σ that is "close to" X, we can extend σ to a string τ that is "close to" X such that Φ^D ≠ A for all D extending τ that are "close to" X. We can take τ to be any string "close to" X such that, for some n, either Φ^τ(n)↓ ≠ A(n) or Φ^γ(n)↑ for all γ extending τ that are "close to" X. If no such τ exists, we can obtain a contradiction by arguing that A ≤_T D_ε for sufficiently small ε, since with an oracle for D_ε we have access to many strings that are "close to" D_ε and hence to X, by the triangle inequality for Hamming distance. In the above discussion the meaning of "close to" is different in different contexts, but the precise version will be given below. Further, as the construction proceeds, the meaning of "close to" becomes so stringent that we guarantee that ρ(X △ D) = 0. We now specify the formal details.
We obtain D as ∪_e σ_e, where σ_e ∈ 2^{<ω} and σ_0 ≺ σ_1 ≺ ···. In order to ensure that ρ(X △ D) = 0, we require that for all e and all m in the interval [|σ_e|, |σ_{e+1}|], either D and X agree on the interval [|σ_e|, m) or ρ_m(X △ D) ≤ 2^{−|σ_e|}, with the latter true for m = |σ_{e+1}|. This condition implies that ρ_m(X △ D) ≤ 2^{−|σ_e|} for all m ∈ [|σ_{e+1}|, |σ_{e+2}|], and hence that ρ(X △ D) = 0.

Let σ and τ be strings and let ε be a positive real number. Call τ an ε-good extension of σ if τ properly extends σ and for all m ∈ [|σ|, |τ|], either X and τ agree on [|σ|, m) or ρ_m(τ △ X) ≤ ε, with the latter true for m = |τ|. In line with the previous paragraph, we require that σ_{e+1} be a 2^{−|σ_e|}-good extension of σ_e for all e.

At stage 0, let σ_0 be the empty string. At stage e + 1, we are given σ_e and choose σ_{e+1} as follows so as to force that A ≠ Φ_e^D. Let ε = 2^{−|σ_e|}.

Case 1. There are a number n and a string τ that is an ε-good extension of σ_e such that Φ_e^τ(n)↓ ≠ A(n). Let σ_{e+1} be such a τ.

Case 2. Case 1 does not hold and there are a number n and a string β that is an ε/4-good extension of σ_e such that |β| > |σ_e| + 2 and Φ_e^τ(n)↑ for all ε/4-good extensions τ of β. Let σ_{e+1} be such a β.

We claim that either Case 1 or Case 2 applies. Suppose not. Let D_{ε/4} be as in the hypothesis of the theorem (applied with ε/4 in place of ε), so that ρ̄(X △ D_{ε/4}) ≤ ε/4 and A ≰_T D_{ε/4}. Let c > |σ_e| + 2 be sufficiently large that ρ_m(X △ D_{ε/4}) ≤ ε/4 for all m ≥ c and σ_e has an ε/4-good extension β of length c. Note that the string obtained from σ_e by appending a sufficiently long segment of X starting with X(|σ_e|) is an ε/4-good extension of σ_e, so such a β exists, and we assume it is obtained in this manner.

We now obtain a contradiction by showing that A ≤_T D_{ε/4}. To calculate A(n), search for a string γ extending β such that Φ_e^γ(n)↓, say with use u (we may assume that |γ| = u), and ρ_m(D_{ε/4} △ γ) ≤ ε/2 for all m ∈ [c, u).

We first check that such a string γ exists. Since Case 2 does not hold, there is a string τ that is an ε/4-good extension of β such that Φ_e^τ(n)↓. We claim that τ meets the criteria to serve as γ. We need only check that ρ_m(D_{ε/4} △ τ) ≤ ε/2 for all m ∈ [c, u). Fix m ∈ [c, u). Then

ρ_m(D_{ε/4} △ τ) ≤ ρ_m(D_{ε/4} △ X) + ρ_m(X △ τ) ≤ ε/4 + ε/4 = ε/2.

Next we claim that γ is an ε-good extension of σ_e. The string γ extends σ_e since it extends β, and β extends σ_e. Let m ∈ [|σ_e|, |γ|] be given. If m < c, then γ and X agree on the interval [|σ_e|, m) because β and X agree on this interval and γ extends β. Now suppose that m ≥ c. Then

ρ_m(γ △ X) ≤ ρ_m(γ △ D_{ε/4}) + ρ_m(D_{ε/4} △ X) ≤ ε/2 + ε/4 < ε.

Since γ is an ε-good extension of σ_e for which Φ_e^γ(n)↓, and Case 1 does not hold, we conclude that Φ_e^γ(n) = A(n). The search for γ can be carried out computably in D_{ε/4}, so we conclude that A ≤_T D_{ε/4}, contradicting our choice of D_{ε/4}. (Although β cannot be computed from D_{ε/4}, we may use it in our computation of A(n) since it is a fixed string which does not depend on n.) This contradiction shows that Case 1 or Case 2 must apply.

Let D = ∪_e σ_e. Then ρ(D △ X) = 0, and A ≰_T D since Case 1 or Case 2 applies at every stage. □
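The notion of an ε-good extension used in the proof above is completely finitary, and can be written out as a small Python checker (our own sketch, not from the paper; bit lists stand in for binary strings, and a long finite prefix stands in for X):

```python
def rho(tau, X, m):
    # ρ_m(τ △ X): the fraction of positions below m where τ and X differ
    return sum(tau[j] != X[j] for j in range(m)) / m

def is_good_extension(sigma, tau, X, eps):
    # τ is an ε-good extension of σ if τ properly extends σ and, for every
    # m in [|σ|, |τ|], either τ agrees with X on [|σ|, m) or ρ_m(τ △ X) ≤ ε,
    # with the density bound required to hold at m = |τ|.
    s, t = len(sigma), len(tau)
    if t <= s or tau[:s] != sigma:
        return False
    for m in range(s, t + 1):
        agrees = all(tau[j] == X[j] for j in range(s, m))
        if not (agrees or rho(tau, X, m) <= eps):
            return False
    return rho(tau, X, t) <= eps

X = [n % 2 for n in range(12)]         # a finite prefix standing in for X
sigma = X[:4]
tau = X[:8]
tau[5] = 1 - tau[5]                    # one disagreement with X, at position 5
```

Here tau is a 1/4-good extension of sigma (the single disagreement has relative density at most 1/6 at every relevant m) but not a 1/10-good one, which is the kind of distinction the construction exploits when tightening ε stage by stage.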
Let A ∈ X_c. By Theorem 3.7, there is an ε > 0 such that A ≤_T D whenever ρ̄(X △ D) ≤ ε. Let k be an integer such that k > 1/ε. As noted in Definition 3.4, X^k_{≠i} is Turing equivalent to such a D for each i < k, so we have A ≤_T X^k_{≠i} for all i < k. By the unrelativized form of Theorem 3.5, each X^k_i is 1-random relative to X^k_{≠i}, and hence relative to X^k_{≠i} ⊕ A ≡_T X^k_{≠i}. Again by Theorem 3.5, X is 1-random relative to A. But A ≤_T X, so A is a base for 1-randomness, and hence is K-trivial. □

Weak 2-randomness is exactly the level of randomness necessary to obtain Corollary 3.3 directly from Theorem 3.2, because, as shown in [6], if a 1-random set is not weakly 2-random, then it computes a noncomputable c.e. set. The corollary itself does hold of some 1-random sets that are not weakly 2-random, because if it holds of X then it also holds of any Y such that ρ(Y △ X) = 0. (For example, let X be 2-random and let Y be obtained from X by letting Y(2ⁿ) = Ω(n) (where Ω is Chaitin's halting probability) for all n and letting Y(k) = X(k) for all other k. By van Lambalgen's Theorem, Y is 1-random, but it computes Ω, and hence is not weakly 2-random.)

Nevertheless, Corollary 3.3 does not hold of all 1-random sets, as we now show.
Let W_0, W_1, ... be an effective listing of the c.e. sets. A set A is promptly simple if it is c.e. and coinfinite, and there exist a computable function f and a computable enumeration A[0], A[1], ... of A such that for each e, if W_e is infinite then there are n and s for which n ∈ W_e[s] \ W_e[s −
1] and n ∈ A[f(s)]. Note that every promptly simple set is noncomputable.

We will show that if X ≤_T ∅′ is 1-random then X_c contains a promptly simple set, and there is a promptly simple set A such that E(A) ≤_nc X. (We do not know whether we can improve the last statement to E(A) ≤_uc X.) In fact, we will obtain a considerably stronger result by first proving a generalization of the fact, due to Hirschfeldt and Miller (see [3, Theorem 7.2.11]), that if T is a Σ⁰₃ class of measure 0, then there is a noncomputable c.e. set that is computable from each 1-random element of T.

For a binary relation P(Y, Z) between elements of 2^ω, let P(Y) = {Z : P(Y, Z)}.

Theorem 3.9.
Let S_0, S_1, ... be uniformly Π^0_2 classes of measure 0, and let P_0(Y, Z), P_1(Y, Z), ... be uniformly Π^0_1 relations. Let D be the class of all Y for which there are numbers k, m and a 1-random set Z such that Z ∈ P_k(Y) ⊆ S_m. Then there is a promptly simple set A such that A ≤_T Y for every Y ∈ D.

Proof. Let (V_{mn})_{m,n ∈ ω} be uniformly Σ^0_1 classes such that S_m = ⋂_n V_{mn}. We may assume that V_{m0} ⊇ V_{m1} ⊇ ··· for all m. For each m, we have μ(⋂_n V_{mn}) = μ(S_m) = 0, so lim_n μ(V_{mn}) = 0 for each m. Let Θ be a computable relation such that P_k(Y, Z) ≡ ∀l Θ(k, Y ↾ l, Z ↾ l).

Define A as follows. At each stage s, if there is an e < s such that no numbers have entered A for the sake of e yet, and an n > 2e such that n ∈ W_e[s] \ W_e[s − 1] and μ(V_{mn}[s]) < 2^{−e} for all m < e, then for the least such e, put the least corresponding n into A. We say that n enters A for the sake of e.

Clearly, A is c.e. and coinfinite, since at most e many numbers less than 2e ever enter A. Suppose that W_e is infinite. Let t > e be a stage such that all numbers that will ever enter A for the sake of any i < e are in A[t]. There must be an s > t and an n > 2e such that n ∈ W_e[s] \ W_e[s − 1] and μ(V_{mn}[s]) < 2^{−e} for all m < e. Then the least such n enters A for the sake of e at stage s unless another number has already entered A for the sake of e. It follows that A is promptly simple.

Now suppose that Y ∈ D. Let the numbers k, m and the 1-random set Z be such that Z ∈ P_k(Y) ⊆ S_m. Let B ≤_T Y be defined as follows. Given n, let

D_{ns} = {X : (∀l ≤ s) Θ(k, Y ↾ l, X ↾ l)} \ V_{mn}[s].

Then D_{n0} ⊇ D_{n1} ⊇ ···. Furthermore, if X ∈ ⋂_s D_{ns} then P_k(Y, X) and X ∉ V_{mn}. Since P_k(Y) ⊆ S_m ⊆ V_{mn}, it follows that X ∉ P_k(Y), which is a contradiction. Thus ⋂_s D_{ns} = ∅. Since the D_{ns} are nested closed sets, it follows that there is an s such that D_{ns} = ∅. Let s_n be the least such s (which we can find using Y) and let B(n) = A(n)[s_n]. Note that B ⊆ A.

Let T = {V_{mn}[s] : n enters A at stage s}. We can think of T as a uniform singly-indexed sequence of Σ^0_1 sets, since m is fixed and for each n there is at most one s such that V_{mn}[s] ∈ T. For each e, there is at most one n that enters A for the sake of e, and the sum of the measures of the V_{mn}[s] such that n enters A at stage s for the sake of some e > m is bounded by Σ_{e > m} 2^{−e}, which is finite. Thus T is a Solovay test, and hence Z is in only finitely many elements of T. So for all but finitely many n, if n enters A at stage s then Z ∉ V_{mn}[s]. Then Z ∈ D_{ns}, so s_n > s. Hence, for all such n, we have that B(n) = A(n)[s_n] = 1. Thus B =* A, so A ≡_T B ≤_T Y. □

Note that the result of Hirschfeldt and Miller mentioned above follows from this theorem by starting with a Σ^0_3 class S = ⋃_m S_m of measure 0 and letting each P_k be the identity relation.

Corollary 3.10.
Let X ≤_T ∅′ be 1-random. There is a promptly simple set A such that if ρ(D △ X) < 1/4 then A ≤_T D. In particular, X_c contains a promptly simple set, and there is a promptly simple set A such that E(A) ≤_nc X.

Proof. Say that sets Y and Z are r-close from m on if whenever m < n, the Hamming distance between Y ↾ n and Z ↾ n (i.e., the number of bits on which these two strings differ) is at most rn.

Let S_m be the class of all Z such that X and Z are 1/2-close from m on. Since X is ∆^0_2, the S_m are uniformly Π^0_2 classes. Furthermore, if X and Z are 1/2-close from m on for some m, then Z cannot be 1-random relative to X (by the same argument that shows that if C is 1-random then there must be infinitely many n such that C ↾ n has more 1's than 0's), so μ(S_m) = 0 for all m. Let P_m(Y, Z) hold if and only if Y and Z are 1/4-close from m on. The P_m are clearly uniformly Π^0_1 relations.

Thus the hypotheses of Theorem 3.9 are satisfied. Let A be as in that theorem. Suppose that ρ(D △ X) < 1/4. Then there is an m such that D and X are 1/4-close from m on. If D and Z are 1/4-close from m on, then by the triangle inequality for Hamming distance, X and Z are 1/2-close from m on. Thus X ∈ P_m(D) ⊆ S_m, so A ≤_T D. □

After learning about Corollary 3.10, Nies [20] gave a different but closely connected proof of this result, which works even for X of positive effective Hausdorff dimension, as long as we sufficiently decrease the bound. However, even for X of effective Hausdorff dimension 1, his bound is much smaller.

Maass, Shore, and Stob [17, Corollary 1.6] showed that if A and B are promptly simple then there is a promptly simple set G such that G ≤_T A and G ≤_T B. Thus we have the following extension of Kučera's result [15] that any two 1-random sets X, Y ≤_T ∅′ compute a common noncomputable set.

Corollary 3.11. Let X_0, X_1 ≤_T ∅′ be 1-random. There is a promptly simple set A such that if ρ(D △ X_i) < 1/4 for some i ∈ {0, 1} then A ≤_T D.

It is easy to adapt the proof of Corollary 3.10 to give a direct proof of Corollary 3.11, and indeed of the fact that for any uniformly ∅′-computable family X_0, X_1, ... of 1-random sets, there is a promptly simple set A such that if ρ(D △ X_i) < 1/4 for some i then A ≤_T D. (We let S_{⟨i,m⟩} be the class of all Z such that X_i and Z are 1/2-close from m on, and the rest of the proof is essentially as before.)

Given the many (and often surprising) characterizations of K-triviality, it is natural to ask whether there is a converse to Theorem 3.2 stating that if A is K-trivial then A ∈ X_c for some 1-random X. We now show that this is not the case, using a recent result of Bienvenu, Greenberg, Kučera, Nies, and Turetsky [2]. There are many notions of randomness tests in the theory of algorithmic randomness. Some, like Martin-Löf tests, correspond to significant levels of algorithmic randomness, while other, less obviously natural ones have nevertheless become important tools in the development of this theory. Balanced tests belong to the latter class.
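As a concrete aside, the triangle-inequality step for r-closeness used in the proofs above can be checked mechanically on finite strings. The following Python sketch is purely illustrative (the helper names and the particular parameters are ad hoc, not from the text): it verifies that if X and D are r-close from m on, and D and Z are r-close from m on, then X and Z are 2r-close from m on.

```python
import random

def hamming(y, z, n):
    """Number of bits on which the length-n prefixes of y and z differ."""
    return sum(1 for i in range(n) if y[i] != z[i])

def r_close_from(y, z, r, m, length):
    """y and z are r-close from m on: for every n with m < n <= length,
    the Hamming distance of the length-n prefixes is at most r*n."""
    return all(hamming(y, z, n) <= r * n for n in range(m + 1, length + 1))

random.seed(0)
N = 200
x = [random.randint(0, 1) for _ in range(N)]

# d plays the role of a coarse description of x: it differs from x
# only on a sparse set of positions (every 25th bit flipped).
d = list(x)
for i in range(0, N, 25):
    d[i] ^= 1

# z differs from d on another sparse set of positions.
z = list(d)
for i in range(0, N, 40):
    z[i] ^= 1

r, m = 1 / 8, 16
assert r_close_from(x, d, r, m, N)
assert r_close_from(d, z, r, m, N)
# Triangle inequality for Hamming distance: x and z are 2r-close from m on.
assert r_close_from(x, z, 2 * r, m, N)
```

Here r = 1/8 and m = 16 are arbitrary choices; the proof of Corollary 3.10 runs the same argument with fixed closeness constants.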
Definition 3.12.
Let W_0, W_1, ... ⊆ 2^ω be an effective list of all Σ^0_1 classes. A balanced test is a sequence (U_n)_{n ∈ ω} of Σ^0_1 classes such that there is a computable binary function f with the following properties.
1. |{s : f(n, s + 1) ≠ f(n, s)}| = O(2^n),
2. ∀n U_n = W_{lim_s f(n,s)}, and
3. ∀n ∀s μ(W_{f(n,s)}) ≤ 2^{−n}.

For σ ∈ 2^{<ω} and X ∈ 2^ω, we write σX for the element of 2^ω obtained by concatenating σ and X.

Theorem 3.13 (Bienvenu, Greenberg, Kučera, Nies, and Turetsky [2]). There are a K-trivial set A and a balanced test (U_n)_{n ∈ ω} such that if A ≤_T X then there is a string σ with σX ∈ ⋂_n U_n.

We will also use the following measure-theoretic fact.
Theorem 3.14 (Loomis and Whitney [16]). Let S ⊆ 2^ω be open, and let k ∈ ω. For i < k, let π_i(S) = {Y^{k≠i} : Y ∈ S}. Then μ(S)^{k−1} ≤ μ(π_0(S)) ··· μ(π_{k−1}(S)).

Our result will follow from the following lemma.
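The inequality above has a well-known finite combinatorial analog: for a finite set S of k-tuples, |S|^{k−1} ≤ ∏_{i<k} |π_i(S)|, where π_i deletes the i-th coordinate. As a purely illustrative sanity check (this snippet is an aside, not part of the development), the finite form can be verified numerically:

```python
import random
from itertools import product

def loomis_whitney_holds(S, k):
    """Discrete Loomis-Whitney inequality: |S|**(k-1) is at most the
    product of the sizes of the k coordinate-deleting projections."""
    prod = 1
    for i in range(k):
        proj = {t[:i] + t[i + 1:] for t in S}  # delete coordinate i
        prod *= len(proj)
    return len(S) ** (k - 1) <= prod

random.seed(1)
cube = list(product(range(3), repeat=3))  # the grid {0,1,2}^3
for _ in range(100):
    S = set(random.sample(cube, random.randint(1, len(cube))))
    assert loomis_whitney_holds(S, 3)
```

The continuous statement of Theorem 3.14 follows the same pattern, with cardinalities replaced by measures.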
Lemma 3.15.
Let X be 1-random, let k > 1, and let (U_n)_{n ∈ ω} be a balanced test. There is an i < k such that X^{k≠i} ∉ ⋂_n U_n.

Proof. Assume for a contradiction that X^{k≠i} ∈ ⋂_n U_n for all i < k. Let S_{n,s} = {Y : ∀i < k (Y^{k≠i} ∈ U_n[s])} and let S_n = ⋃_s S_{n,s}. By Theorem 3.14,

μ(S_{n,s})^{k−1} ≤ μ(U_n[s])^k,

so μ(S_n) ≤ O(2^n) · 2^{−nk/(k−1)} = O(2^{−n/(k−1)}), and hence Σ_n μ(S_n) < ∞. Thus {S_n : n ∈ ω} is a Solovay test. However, X ∈ ⋂_n S_n, so we have a contradiction. □

Theorem 3.16.
There is a K-trivial set A such that A ∉ X_c for all 1-random X.

Proof. Let A and (U_n)_{n ∈ ω} be as in Theorem 3.13. Let X be 1-random. By Theorem 3.7, it is enough to fix a k > 1 and find an i < k such that A ≰_T X^{k≠i}. Assume for a contradiction that A ≤_T X^{k≠i} for all i < k. Then there are σ_0, ..., σ_{k−1} such that σ_i X^{k≠i} ∈ ⋂_n U_n for all i < k. Let m = max_i |σ_i|. A contradiction now follows by applying Lemma 3.15. □

We can use Theorem 3.7 to give an analog to Corollary 3.3 for effective genericity. In this case, 1-genericity is sufficient, as it is straightforward to show that if X is 1-generic relative to A and A is noncomputable, then A ≰_T X (i.e., unlike the case for 1-randomness, there are no noncomputable bases for 1-genericity), and that no 1-generic set can be coarsely computable. The other ingredient we need to replicate the argument we gave in the case of effective randomness is a version of van Lambalgen's Theorem for 1-genericity. This result was established by Yu [24, Proposition 2.2]. Relativizing his theorem and applying induction as in the case of Theorem 3.5, we obtain the following fact.

Theorem 4.1 (Yu [24]). The following are equivalent for all sets X and A, and all k > 1.
1. X is 1-generic relative to A.
2. For each i < k, the set X^{ki} is 1-generic relative to X^{k≠i} ⊕ A.

Now we can establish the following analog to Corollary 3.3.

Theorem 4.2. If X is 1-generic then every element of X_c is computable, and hence E(A) ≰_nc X for all noncomputable A. In particular, in both the uniform and nonuniform coarse degrees, the degree of X is not in the image of the embedding induced by E.

Proof. Let A ∈ X_c. As in the proof of Theorem 3.2, there is a k such that A ≤_T X^{k≠i} for all i < k. By the unrelativized form of Theorem 4.1, each X^{ki} is 1-generic relative to X^{k≠i}, and hence relative to X^{k≠i} ⊕ A ≡_T X^{k≠i}. Again by Theorem 4.1, X is 1-generic relative to A. But A ≤_T X, so A is computable. □

Igusa (personal communication) has also found the following application of Theorem 3.7.
We say that X is generically computable if there is a partial computable function ϕ such that ϕ(n) = X(n) for all n in the domain of ϕ, and the domain of ϕ has density 1. Jockusch and Schupp [13, Theorem 2.26] showed that there are generically computable sets that are not coarsely computable, but by Lemma 1.7 in [10], if X is generically computable then γ(X) = 1, where γ is the coarse computability bound from Definition 3.6.

Theorem 4.3 (Igusa, personal communication). If γ(X) = 1 then every element of X_c is computable, and hence E(A) ≰_nc X for all noncomputable A. Thus, if γ(X) = 1 and X is not coarsely computable then in both the uniform and nonuniform coarse degrees, the degree of X is not in the image of the embedding induced by E. In particular, the above holds when X is generically computable but not coarsely computable.

Proof. Suppose that γ(X) = 1 and A is not computable. For each ε > 0 there is a computable set C such that ρ(X △ C) < ε. Since C is computable, A ≰_T C. By Theorem 3.7, A ∉ X_c. □

5. Minimal pairs in the uniform and nonuniform coarse degrees

For any degree structure that acts as a measure of information content, it is reasonable to expect that if two sets are sufficiently random relative to each other, then their degrees form a minimal pair. For the Turing degrees, it is not difficult to show that if Y is not computable and X is weakly 2-random relative to Y, then the degrees of X and Y form a minimal pair. On the other hand, Kučera [15] showed that if X, Y ≤_T ∅′ are both 1-random, then there is a noncomputable set A ≤_T X, Y, so there are relatively 1-random sets whose degrees do not form a minimal pair. As we will see, the situation for the nonuniform coarse degrees is similar, but "one jump up".

For an interval I, let ρ_I(X) = |X ∩ I| / |I|.

Lemma 5.1. Let J_k = [2^k − 1, 2^{k+1} − 1). Then ρ(X) = 0 if and only if lim_k ρ_{J_k}(X) = 0.

Proof. First suppose that lim sup_k ρ_{J_k}(X) > 0.
Since |J_k| = 2^k, we have

lim sup_n ρ_n(X) ≥ lim sup_k ρ_{2^{k+1}−1}(X) ≥ lim sup_k ρ_{J_k}(X)/2 > 0,

so ρ(X) ≠ 0. Now suppose that lim_k ρ_{J_k}(X) = 0. Fix ε > 0. If m is sufficiently large, k > m, and n ∈ J_k, then

|X ∩ [0, n)| ≤ |X ∩ [0, 2^{k+1} − 1)| ≤ Σ_{i=0}^{m−1} |J_i| + Σ_{i=m}^{k} ε|J_i|.

If k is sufficiently large then this sum is less than 3ε(2^k − 1) ≤ 3εn, so ρ_n(X) < 3ε. Thus lim sup_n ρ_n(X) ≤ 3ε. Since ε is arbitrary, lim sup_n ρ_n(X) = 0. □

Theorem 5.2. If A is not coarsely computable and X is weakly 3-random relative to A, then there is no X-computable coarse description of A. In particular, A ≰_nc X.

Proof. Suppose that Φ^X is a coarse description of A and let P = {Y : Φ^Y is a coarse description of A}. Then Y ∈ P if and only if
1. Φ^Y is total, which is a Π^0_2 property, and
2. for each k there is an m such that, for all n > m, we have ρ_n(Φ^Y △ A) < 2^{−k}, which is a Π^{0,A}_3 property.
Thus P is a Π^{0,A}_3 class, so it suffices to show that if A is not coarsely computable then μ(P) = 0.

We prove the contrapositive. Suppose that μ(P) > 0. Then, by the Lebesgue Density Theorem, there is a σ such that μ(P ∩ ⟦σ⟧) > (2/3) · 2^{−|σ|}. It is now easy to define a Turing functional Ψ such that the measure of the class of Y for which Ψ^Y is a coarse description of A is greater than 2/3.

Define a computable set D as follows. Let J_k = [2^k − 1, 2^{k+1} − 1). For each k, wait until we find a finite set of strings S_k such that μ(⟦S_k⟧) > 2/3 and Ψ^σ converges on all of J_k for each σ ∈ S_k (which must happen, by our choice of Ψ). Let n_k be largest such that there is a set R_k ⊆ S_k with μ(⟦R_k⟧) > 1/3 and ρ_{J_k}(Ψ^σ △ Ψ^τ) ≤ 2^{−n_k} for all σ, τ ∈ R_k. Let σ ∈ R_k and define D ↾ J_k = Ψ^σ ↾ J_k.

We claim that D is a coarse description of A. By Lemma 5.1, it is enough to show that lim_k ρ_{J_k}(D △ A) = 0. Fix n. Let B_k be the class of all Y such that Ψ^Y converges on all of J_k and ρ_{J_k}(Ψ^Y △ A) ≤ 2^{−n}.
If Ψ Y is a coarse description of A then, again by Lemma 5.1, ρ J k (Ψ Y △ A ) − n for all sufficiently large k , so there is an m such that µ ( B k ) > for each k > m , and hence µ ( B k ∩ J S k K ) > for each k > m . Let T k = { σ ∈ S k : ρ J k (Ψ σ △ A ) − n } . Then J T k K = B k ∩ J S k K , so µ ( J T k K ) > for each k > m . Furthermore, by the triangle inequality for Hamming distance, ρ J k (Ψ σ △ Ψ τ ) − ( n − for all σ, τ ∈ T k . It follows that, for each k > m ,we have n k > n − 1, and at least one element Y of B k is in J R k K (where R k is as in the definition of D ), which implies that ρ J k ( D △ A ) ρ J k ( D △ Ψ Y ) + ρ J k (Ψ Y △ A ) − n k + 2 − n < − n +2 . Since n is arbitrary, lim k ρ J k ( D △ A ) = 0. (cid:3) Corollary 5.3. If Y is not coarsely computable and X is weakly -randomrelative to Y , then the nonuniform coarse degrees of X and Y form aminimal pair, and hence so do their uniform coarse degrees.Proof. Let A nc X, Y . Then Y computes a coarse description D of A .We have D nc X , and X is weakly 3-random relative to D , so by thetheorem, D is coarsely computable, and hence so is A . (cid:3) For the nonuniform coarse degrees at least, this corollary does not holdof 2-randomness in place of weak 3-randomness. To establish this fact, weuse the following complementary results. The first was proved by Downey,Jockusch, and Schupp [4, Corollary 3.16] in unrelativized form, but it iseasy to check that their proof relativizes. Theorem 5.4 (Downey, Jockusch, and Schupp [4]) . If A is c.e., ρ ( A ) isdefined, and A ′ T D ′ , then D computes a coarse description of A . Theorem 5.5 (Hirschfeldt, Jockusch, McNicholl, and Schupp [10]) . Everynonlow c.e. degree contains a c.e. set A such that ρ ( A ) = and A is notcoarsely computable. OARSE REDUCIBILITY AND ALGORITHMIC RANDOMNESS 19 Theorem 5.6. Let X, Y T ∅ ′′ (which is equivalent to E ( X ) , E ( Y ) nc E ( ∅ ′′ ) ). 
If X and Y are both 2-random, then there is an A ≤_nc X, Y such that A is not coarsely computable. In particular, there is a pair of relatively 2-random sets whose nonuniform coarse degrees do not form a minimal pair.

Proof. Since X and Y are both 1-random relative to ∅′, by the relativized form of Corollary 3.11 there is a set J >_T ∅′ that is c.e. in ∅′ and such that for every coarse description D of either X or Y, we have that D ⊕ ∅′ computes J, and hence so does D′. By the Sacks Jump Inversion Theorem [21], there is a c.e. set B such that B′ ≡_T J. By Theorem 5.5, there is a c.e. set A ≡_T B such that ρ(A) = 1/2 and A is not coarsely computable. Let D be a coarse description of either X or Y. Then D′ ≥_T J ≡_T A′, so by Theorem 5.4, D computes a coarse description of A. □

We do not know whether this theorem holds for uniform coarse reducibility.

6. Open Questions

We finish with a few questions raised by our results.

Open Question 6.1. Can the bound in Corollary 3.10 be increased?

Open Question 6.2. Let X ≤_T ∅′ be 1-random. Must there be a noncomputable (c.e.) set A such that E(A) ≤_uc X? (Recall that Corollary 3.10 gives a positive answer to the nonuniform analog of this question.) If not, then is there any 1-random X for which such an A exists?

Open Question 6.3. Does Theorem 5.6 hold for uniform coarse reducibility?

References

[1] E. Astor, Asymptotic density, immunity, and randomness, to appear.
[2] L. Bienvenu, N. Greenberg, A. Kučera, A. Nies, and D. Turetsky, Coherent randomness tests and computing the K-trivial sets, to appear.
[3] R. G. Downey and D. R. Hirschfeldt, Algorithmic Randomness and Complexity, Theory and Applications of Computability, Springer, New York, 2010.
[4] R. G. Downey, C. G. Jockusch, Jr., and P. E. Schupp, Asymptotic density and computably enumerable sets, J. Math. Log. 13 (2013) 1350005, 43 pp.
[5] R. G. Downey, C. G. Jockusch, Jr., T. H. McNicholl, and P. E. Schupp, Asymptotic density and the Ershov hierarchy, Math. Log. Q., to appear.
[6] R. G. Downey, A. Nies, R. Weber, and L. Yu, Lowness and Π^0_2 null sets, J. Symbolic Logic 71 (2006) 1044–1052.
[7] D. D. Dzhafarov and G. Igusa, Notions of robust information coding, to appear.
[8] S. Figueira, J. S. Miller, and A. Nies, Indifferent sets, J. Logic Comput. 19 (2009) 425–443.
[9] D. R. Hirschfeldt and C. G. Jockusch, Jr., On notions of computability theoretic reduction between Π^1_2 principles, to appear.
[10] D. R. Hirschfeldt, C. G. Jockusch, Jr., T. McNicholl, and P. E. Schupp, Asymptotic density and the coarse computability bound, to appear.
[11] D. R. Hirschfeldt, A. Nies, and F. Stephan, Using random sets as oracles, J. London Math. Soc. 75 (2007) 610–622.
[12] C. G. Jockusch, Jr., Degrees of generic sets, in F. R. Drake and S. S. Wainer, eds., Recursion Theory: Its Generalisations and Applications, London Math. Soc. Lecture Note Series 45, Cambridge University Press, Cambridge, 1980, 110–139.
[13] C. G. Jockusch, Jr. and P. E. Schupp, Generic computability, Turing degrees, and asymptotic density, J. London Math. Soc. 85 (2012) 472–490.
[14] I. Kapovich, A. Myasnikov, P. Schupp, and V. Shpilrain, Generic-case complexity, decision problems in group theory and random walks, J. Algebra 264 (2003) 665–694.
[15] A. Kučera, An alternative priority-free solution to Post's problem, in J. Gruska, B. Rovan, and J. Wiederman, eds., Mathematical Foundations of Computer Science 1986, Lecture Notes in Comput. Sci. 233, Springer, Berlin, 1986, 493–500.
[16] L. H. Loomis and H. Whitney, An inequality related to the isoperimetric inequality, Bull. Amer. Math. Soc. 55 (1949) 961–962.
[17] W. Maass, R. A. Shore, and M. Stob, Splitting properties and jump classes, Israel J. Math. 39 (1981) 210–224.
[18] B. Monin, Higher Computability and Randomness, PhD dissertation, Université Paris Diderot–Paris 7, 2014.
[19] A. Nies, Computability and Randomness, Oxford University Press, Oxford, 2009.
[20] A. Nies, Notes on a theorem of Hirschfeldt, Jockusch, Kuyper and Schupp regarding coarse computation and K-triviality, in A. Nies, ed., Logic Blog 2013, Part 5, Section 24, available at http://arxiv.org/abs/1403.5719.
[21] G. E. Sacks, Recursive enumerability and the jump operator, Trans. Amer. Math. Soc. 108 (1963) 223–239.
[22] R. I. Soare, Recursively Enumerable Sets and Degrees, Perspectives in Mathematical Logic, Springer-Verlag, Berlin, 1987.
[23] M. van Lambalgen, The axiomatization of randomness, J. Symbolic Logic 55 (1990) 1143–1167.
[24] L. Yu, Lowness for genericity, Arch. Math. Logic 45 (2006) 233–238.

Department of Mathematics, University of Chicago

Department of Mathematics, University of Illinois at Urbana-Champaign

Department of Mathematics, Radboud University Nijmegen

Department of Mathematics, University of Illinois at Urbana-Champaign