Key developments in algorithmic randomness
JOHANNA N.Y. FRANKLIN AND CHRISTOPHER P. PORTER
Contents
1. Introduction
1.1. Notation
1.2. Computability theory
1.3. Core mathematical concepts
2. Early developments
2.1. Randomness via initial segment complexity
2.2. Martin-Löf randomness
2.3. Schnorr's contributions
3. Intermittent work: The late twentieth century
3.1. The contributions of Demuth and Kučera
3.2. The contributions of Kurtz, Kautz, and van Lambalgen
4. Rapid growth at the turn of the century
4.1. The Turing degrees of random sequences
4.2. Chaitin's Ω
4.3. Randomness-theoretic reducibilities
4.4. Other randomness notions and lowness for randomness
4.5. Effective notions of dimension
5. Recent developments
6. Acknowledgments
References
1. Introduction
Date: June 10, 2019.

The goal of this introductory survey is to present the major developments of algorithmic randomness with an eye toward its historical development. While two highly comprehensive books [26, 81] and one thorough survey article [21] have been written on the subject, our goal is to provide an introduction to algorithmic randomness that will be both useful for newcomers who want to develop a sense of the field quickly and interesting for researchers already in the field who would like to see these results presented in chronological order.

We begin in this section with a brief introduction to computability theory as well as the underlying mathematical concepts that we will later draw upon. Once these basic ideas have been presented, we will selectively survey four broad periods in which the primary developments in algorithmic randomness occurred: (1) the mid-1960s to mid-1970s, in which the main definitions of algorithmic randomness were laid out and the basic properties of random sequences were established; (2) the 1980s through the 1990s, which featured intermittent and important work from a handful of researchers; (3) the 2000s, during which there was an explosion of results as the discipline matured into a full-fledged subbranch of computability theory; and (4) the early 2010s, which we briefly discuss as a lead-in to the remaining surveys in this volume, which cover in detail many of the exciting developments in this later period.

We do not intend this to be a full reconstruction of the history of algorithmic randomness, nor are we claiming that the only significant developments in algorithmic randomness are the ones recounted here. Instead, we aim to provide readers with sufficient context for appreciating the more recent work that is described in the surveys in this volume. Moreover, we highlight those concepts and results that will be useful for our readers to be aware of as they read the later chapters in this volume.

Before we proceed with the technical material, we briefly comment upon several broader conceptual questions which may occur to the newcomer upon reading this survey: What is a definition of algorithmic randomness intended to capture? What is the aim of studying the properties of the various types of randomness? And why are there so many definitions of randomness to begin with?
It is certainly beyond the scope of this survey to answer these questions in any detail. Here we note first that more recent motivations for defining randomness and studying the properties of the resulting definitions have become unmoored from the original motivation that led to early definitions of randomness, namely, providing a foundation for probability theory (see, for example, [85]).

This original motivation led to the desire for a definition of a random sequence satisfying the standard statistical properties of almost every sequence (such as the strong law of large numbers and the law of the iterated logarithm). Martin-Löf's definition was the first to satisfy this constraint. Moreover, this definition proved to be robust, as it
was shown to be equivalent to definitions of randomness with a significantly different informal motivation: while Martin-Löf's definition was motivated by the idea that random sequences are statistically typical, later characterizations were given in terms of incompressibility and unpredictability.

With such a robust definition of randomness, one can inquire into just how stable it is: if we modify a given technical aspect of the definition, is the resulting notion equivalent to Martin-Löf randomness? As we will see below, the answer is often negative. As there are a number of such modifications, we now have a number of nonequivalent definitions of randomness. Understanding the relationships between these notions of randomness, as well as the properties of the sequences that satisfy them, is certainly an important endeavor.

One might legitimately express the concern that such work amounts to simply concocting new definitions of randomness and exploring their features. However, not every new variant of every notion of randomness has proven to be significant. Typically, attention is given to definitions of randomness that have multiple equivalent formalizations, or which interact nicely with computability-theoretic notions, or which provide insight into some broader phenomenon such as the analysis of almost sure properties that hold in classical mathematics. Many such developments are outlined in the surveys in this volume.
1.1. Notation.
Our notation will primarily follow [26] to make it easier to cross-reference these results. The set of natural numbers will be denoted by ω, and we will usually name elements of this set using lowercase Latin letters such as m and n. Subsets of ω will be denoted by capital Latin letters such as A and B. Without loss of generality, we may associate an element of 2^ω (that is, an infinite binary sequence) with the subset of ω consisting of the places at which the infinite binary sequence is equal to 1. Finite binary strings, or elements of 2^{<ω}, will be denoted by lowercase Greek letters such as σ and τ.

We will often wish to discuss the subset of 2^ω whose elements all begin with the same prefix σ; we will denote this by [σ]. We will further extend this to an arbitrary subset S of 2^{<ω}:

[S] = { A ∈ 2^ω | σ ≼ A for some σ ∈ S }.

The first n bits of a binary sequence X of length at least n, be it infinite or finite, will be denoted by X ↾ n, and the length of a finite binary string σ will be denoted by |σ|. The concatenation of two binary strings σ and τ will be denoted by στ.
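For readers who wish to experiment, this notation is easy to mirror in code. The sketch below is our own (none of these function names come from the literature); finite strings are modeled as Python strings of '0's and '1's, and an infinite sequence is represented by a long finite prefix:

```python
def is_prefix(sigma, x):
    """sigma ≼ x: sigma is an initial segment of x."""
    return x.startswith(sigma)

def restrict(x, n):
    """X ↾ n: the first n bits of x."""
    return x[:n]

def in_open_set(S, x):
    """Does x lie in [S], i.e., does some sigma in S prefix x?"""
    return any(is_prefix(sigma, x) for sigma in S)

x = '0110' * 8                 # a finite stand-in for an infinite sequence
assert is_prefix('011', x)     # x lies in the basic open set [011]
assert restrict(x, 3) == '011' and len(restrict(x, 3)) == 3
assert in_open_set({'10', '011'}, x)
```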
1.2. Computability theory.
This section is intended for researchers in other areas of mathematics who are encountering computability theory for the first time and require an introduction to the underlying concepts; others may safely skip it. While each of Chapter 2 of [26] and Chapter 1 of [81] contains all the fundamental concepts of computability theory that will be required for this volume in more detail, researchers who wish to acquire a deeper understanding of the subject are urged to consult Cooper [18], Odifreddi [83, 84], or Soare [91].

Computability theory allows us to think about mathematical functions in an effective context. While there are several ways to formalize the notions we are about to describe, including Turing machines, register machines, the λ-calculus, and µ-recursive functions, we will not fix such a formalization and will instead encourage the reader to think of the concepts we describe below in terms of the calculations that a computer with potentially unlimited memory is capable of carrying out in a finite but unbounded amount of time.

The most fundamental concept is that of a partial computable function ϕ, which can be thought of as an idealized computer program that accepts natural numbers as inputs and outputs natural numbers as well. We note that when a computer program is given an input (and thus when a partial computable function is), it may either return an answer at some finite point, or stage, or never halt. If a partial computable function halts on every natural number (that is, it is actually a total function), we simply call it a computable function.

We now provide some necessary notation. If ϕ halts on input n, we write ϕ(n)↓; if it does not, we write ϕ(n)↑. Furthermore, if ϕ halts on input n and gives the output m within s stages, we write ϕ(n)[s] = m to indicate the number of stages as well as the output.

At this point, we make three observations.
The first is that there are countably many partial computable functions: each partial computable function is associated with a computer program, and we can note that a computer program is a finite sequence of characters from a finite alphabet and that there are thus countably many such objects. The second is that we can list the partial computable functions in a computable way simply by generating a list of all of the "grammatically correct" programs, and that thus we can speak about, for instance, the kth partial computable function ϕ_k. While there are still only countably many (total) computable functions and thus we can list them as well, it can be shown that we cannot list them in a computable way, because no computer program is capable of identifying precisely the partial computable functions that halt on every natural number. The third is that there are computable bijections between the natural numbers and the set of finite binary strings and the rational numbers, so we may discuss partial computable functions from or to these sets without loss of generality.

Now we can define special kinds of subsets of ω. A computably enumerable (c.e.) set is one that is the range of a partial computable function, so we will often write W_e for the set that is the range of the eth partial computable function ϕ_e. We may think of ϕ_e as enumerating W_e as follows: we first spend one step trying to compute ϕ_e(0), then two steps trying to calculate each of ϕ_e(0) and ϕ_e(1), then three steps trying to calculate each of ϕ_e(0), ϕ_e(1), and ϕ_e(2), and so on. If the calculation of ϕ_e(n) ever halts, we will eventually discover this through this dovetailing of computations, and when we do, we will enumerate its value into our set W_e. Now we can use this idea of set enumeration to formalize the concept of approximations to a set: for any c.e.
set W_e, we say that the approximation to it at stage s is

W_e[s] = { n | (∃k ≤ s)[ n = ϕ_e(k)[s] ] }.

This gives us a sequence of approximations that converge to our c.e. set; in fact, we can write W_e = ⋃_{s∈ω} W_e[s]. We quickly observe that there are several equivalent definitions of a c.e. set; the other one that will be especially useful to us is that of a c.e. set as one that is the domain of a partial computable function.

A set that is c.e. and has a c.e. complement is called a computable set. Just as a computable function halts on every input n, we can get an answer to "Is n in A?" for a computable set A for every n: to see this, we observe that if A = W_e and the complement of A is W_i, then we can determine whether n ∈ A by enumerating W_e and W_i as described above; n must be in one of them, and we simply note which. These two procedures can again be dovetailed and performed by a single function that will give us the characteristic function of A:

χ_A(n) = 1 if n ∈ A, and χ_A(n) = 0 if n ∉ A.
We can therefore show that the characteristic function of a computable set will be a computable function. Once again, there are countably many c.e. sets and countably many computable sets.

(Each of the steps above may be said to make up a single stage of the computation mentioned above. These steps will be defined differently based on our formalization of computability theory: they may be the number of states a Turing machine has been in or the number of reduction rules applied in the λ-calculus, but, at a less formal level, we may think of "spending n steps" as "running the computer program for n seconds.")
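The dovetailing procedure just described can be sketched in a few lines. In this toy model (ours, not the authors'), a partial computable function is represented by a Python function phi(k, s) that returns the output of the computation on input k if it halts within s steps and None otherwise:

```python
def W_approx(phi, stages):
    """Yield the stage-by-stage approximations W[0], W[1], ... of the c.e. set
    enumerated by phi, by dovetailing: at stage s, run inputs 0..s for s steps."""
    W = set()
    for s in range(stages):
        for k in range(s + 1):
            v = phi(k, s)
            if v is not None:          # the computation on input k has halted
                W.add(v)
        yield set(W)

# A toy partial computable function: on input k, it halts after k steps with output 2k.
phi = lambda k, s: 2 * k if s >= k else None

approximations = list(W_approx(phi, 6))
assert approximations[-1] == {0, 2, 4, 6, 8, 10}
```

Note that the approximations only grow: once a value is enumerated into the set, it stays there, exactly as in the union W_e = ⋃_s W_e[s].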
Often, when we discuss randomness, we will talk about a sequence of uniformly c.e. sets. Instead of simply requiring that each set in the sequence be c.e., we require that there be a single computable function that generates the entire sequence: ⟨A_i⟩_{i∈ω} is uniformly c.e. if there is a computable function f such that A_i is the range of the f(i)th partial computable function. Later, we will generalize this concept to other classes of sets that have some natural indexing: given such a class of sets C, we say that we have a sequence of uniformly C sets ⟨A_i⟩_{i∈ω} if there is a computable function f such that f(i) gives the index of the ith set in the sequence.

The final topic we must consider in order to understand the concepts in algorithmic randomness we will discuss in this survey is that of oracle computation, or relativization. This requires us to consider Turing functionals, usually denoted by capital Greek letters such as Φ, which require not only a natural number n as input but also a sequence X that serves as an oracle. These functionals can make use of the standard computational methods of partial computable functions and receive answers to finitely many queries of the sort "Is k in X?" for use in their computation, and they can be indexed as Φ_0, Φ_1, … just as the partial computable functions can be indexed as ϕ_0, ϕ_1, …. When we use the sequence X as an oracle for the Turing functional Φ, we write Φ^X. Finally, we note that our notation for stages of computations using Turing functionals carries over directly from that for stages of computations using partial computable functions: we write Φ^X_e(n)[s] just as we would have written ϕ_e(n)[s].

We say that A is computable from, or Turing reducible to, B (written A ≤_T B) if there is some Turing functional that, given B as an oracle, can compute the characteristic function of A.
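As a concrete (and entirely toy) illustration of oracle computation, the functional below decides whether n belongs to the set A = {n | 2n ∈ B} by placing a single query to the oracle B; since it halts on every input for every oracle, it witnesses A ≤_T B. The modeling choices here are ours:

```python
def Phi(oracle, n):
    """A toy Turing functional: query the oracle once, on 2n, and output the answer.

    The oracle is modeled as a membership function answering "Is k in X?".
    """
    return oracle(2 * n)

B = lambda k: k % 3 == 0                  # the oracle set B: multiples of 3
A = [n for n in range(10) if Phi(B, n)]   # an initial segment of the set computed from B
assert A == [0, 3, 6, 9]                  # 2n ∈ B exactly when n is a multiple of 3
```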
We then use Turing reducibility to form equivalence classes of sets that we call the Turing degrees: A and B have the same Turing degree if A ≤_T B and B ≤_T A. This allows us to talk about properties related to a set's computational strength and not its particular members (for instance, we can talk about 0, the Turing degree of the computable sets, rather than "the Turing degree of the empty set"). The Turing degrees will be denoted by boldface lowercase Latin letters such as d.

We will also use relativization to define new sets and Turing degrees. For instance, for each set A, we define A′ to be {n | Φ^A_n(n)↓} and call it the jump of A. The jump of the empty set, ∅′, is the set of all natural numbers n such that Φ^∅_n(n)↓, or, in other words, the set of indices of those Turing functionals that halt given their own index as input and no additional information. Its Turing degree, 0′, is the degree of the famous Halting Problem (see Chapter II.2 of [83]). Since
A <_T A′ for any set A, we can develop an infinite ascending chain 0 <_T 0′ <_T 0″ <_T … of Turing degrees. We note quickly that in general, the nth jump of A is written as A^(n).

Other, stronger reducibilities and their corresponding degree structures have also been found to be useful in the study of randomness. Turing reductions are not required to converge on every input and, when they do converge, the size of the elements of the oracle queried during the computation is not necessarily bounded by any reasonable function. The first stronger reducibility is weak truth-table reducibility, or wtt-reducibility. A wtt-reduction Φ^A is a Turing reduction in which the computation of Φ^A(n), should it halt, is carried out by querying only the first f(n) bits of A for a given computable function f. Finally, the last reducibility we will discuss in this survey is truth-table reducibility, or tt-reducibility: a tt-reduction Φ^A is one that converges on every input for every oracle A. It can be seen that A ≤_tt B ⟹ A ≤_wtt B ⟹ A ≤_T B, but none of these implications reverse.

As with Turing reducibility, we can also create equivalence classes of mutually wtt- or tt-computable sets and study the wtt- and tt-degrees. We can then ask about the properties of all of these structures—the Turing degrees, for instance, form an upper semilattice—or about types of substructures within these structures, such as ideals, which are subsets that are both downward closed and closed under join, or an interval between two degrees (for instance, the interval [0, 0′] in the Turing degrees). We can also discuss relationships between individual degrees; for example, we say that two degrees c and d form a minimal pair in their degree structure if the only degree they both compute is 0.

It is often useful to characterize a subset of ω in terms of the number of unbounded quantifiers required to define it.
(While the definition of a tt-reduction given above was not the original one, it is perhaps the most intuitive. The original definition of a tt-reduction explains its name and can be found in Chapter III.3 of [83].)

A Σ_n set A has a membership relation that can be defined from a computable relation R(x_1, x_2, …, x_n, y) using n alternating quantifiers, starting with an existential one: y ∈ A if and only if

∃x_1 ∀x_2 ∃x_3 … Q_n x_n (R(x_1, x_2, …, x_n, y)).

Q_n will be an existential quantifier if n is odd and a universal quantifier if n is even. We observe that we can consider these quantifiers to be strictly alternating. For instance, if we had the membership relation

∃x_1 ∃x_2 (R(x_1, x_2, y)),

we could use a computable pairing function p : ω^2 → ω and express the same relation as

∃x_0 ∃x_1 ≤ x_0 ∃x_2 ≤ x_0 (x_0 = p(x_1, x_2) ∧ R(x_1, x_2, y))

instead; note that the second and third existential quantifiers in this formula are bounded and therefore that ∃x_1 ≤ x_0 ∃x_2 ≤ x_0 (x_0 = p(x_1, x_2) ∧ R(x_1, x_2, y)) is a computable relation.

Example 1.1. ∅′ is Σ_1: e ∈ ∅′ if and only if ∃s (Φ^∅_e(e)[s]↓).

Example 1.2.
The set of all e such that W_e is finite, Fin, is Σ_2: e belongs to Fin if and only if

∃m ∀s ∀k (k > m → k ∉ W_e[s]).

A Π_n set is defined in a similar way. It will also have n alternating quantifiers, but this time starting with a universal quantifier; we note that the complement of a Σ_n set is a Π_n set and vice versa.

Example 1.3.
The set of all e such that ϕ_e is total, Tot, is Π_2: e belongs to Tot if and only if

∀k ∃s ∃m (ϕ_e(k)[s] = m).

Finally, we have the ∆_n sets, which we define to be those sets that can be characterized in both a Σ_n and a Π_n way. Since we often refer to the class of Σ_n sets simply as Σ_n (and similarly for the classes of Π_n and ∆_n sets), we can write ∆_n = Σ_n ∩ Π_n. These classes of sets—the Σ_n, Π_n, and ∆_n sets—form the arithmetic hierarchy. We will note some fundamental facts relating these classes to the classes of sets we have already discussed (see Chapter IV.1 of [83]):

(1) Σ_0 = Π_0 = ∆_0 = ∆_1 is simply the class of computable sets.
(2) A set is Σ_1 if and only if it is c.e.
(3) A set is ∆_n exactly when it is Turing computable from ∅^(n−1).

Furthermore, this hierarchy is proper: as long as n >
0, we will always have ∆_n ⊊ Σ_n and ∆_n ⊊ Π_n.

We may describe subsets of natural numbers in ways other than the arithmetic hierarchy, too. For instance, we have the high and low sets, which are defined based on the usefulness of the sets in question as oracles:

• a low set A ⊆ ω is one such that A′ ≡_T ∅′, and
• a high set A ⊆ ω is one such that A′ ≥_T ∅″.

It is often more useful to define high sets in terms of the domination property discovered by Martin [65]: a set is high if and only if it Turing computes a function f that dominates all computable functions (that is, for each computable g, we have f(n) ≥ g(n) for all sufficiently large n).

Similarly, we may consider highness and lowness in the context of tt-reducibility: a set A is superhigh if A′ ≥_tt ∅″ and superlow if A′ ≡_tt ∅′.

Another hierarchy of classes of sets that has proven useful is the genericity hierarchy, which we can compare to a hierarchy of randomness notions that we will see later. Instead of classifying sets based directly on the complexity of their definitions, we classify them in terms of the complexity of the sets of strings they are forced to either meet or avoid.

Definition 1.4.
Let S be a set of finite binary strings. We say that an infinite binary sequence A meets S if there is some σ ∈ S that is an initial segment of A. Furthermore, we say that A avoids S if there is some initial segment of A that is not extended by any element of S.

This gives us the framework necessary to define generic sets once we have also defined a dense set of strings: a set of strings S is dense if for every τ ∈ 2^{<ω}, there is a σ ∈ S that extends it.

Definition 1.5.
A sequence A is n-generic if it either meets or avoids every Σ_n set, and weakly n-generic if it meets every dense Σ_n set.

These classes of sequences once again form a proper hierarchy: every n-generic sequence is weakly n-generic, and every weakly (n+1)-generic sequence is n-generic.

Other classes of sets whose definitions are less closely tied to the arithmetical hierarchy have also been shown to be useful. For instance, we will make use of the sets of hyperimmune degree, which are defined, once again, using a domination property [77]: a set A has hyperimmune degree if it computes a function that is not dominated by any computable function. The sets that do not have hyperimmune degree are said to be of hyperimmune-free degree; it is worth noting that there are continuum many such sets and that all of them (except the computable sets) are Turing incomparable to ∅′.

1.3. Core mathematical concepts.
Several concepts from classical mathematics will prove useful; we summarize them here. First we will recall some fundamental facts about the Cantor space, 2^ω, as a topological space and as a probability space.

In the Cantor space, our basic open sets have the form [σ] for σ ∈ 2^{<ω}: as previously mentioned, [σ] is the set of elements of 2^ω that extend σ. In fact, these sets are all clopen, and the clopen sets are the finite unions of these [σ]'s. We can now describe the complexity of the open sets that we generate in this way in terms of their generating sets. For instance, we can say that [S] is effectively open if S is c.e., and more generally, we can define the effective Borel hierarchy as we defined the arithmetic hierarchy in the previous subsection: [S] is Σ^0_1 if it is the union of a c.e. sequence of basic open sets, Π^0_1 if it is the complement of a Σ^0_1 class, Σ^0_n for n > 1 if it is the union of a uniformly c.e. sequence of Π^0_{n−1} classes, and so on. It is worth noting at this point that it is customary to refer to a subset of the Cantor space, especially one defined using this hierarchy, as a class.

We can also establish the Lebesgue measure on the Cantor space: the measure of a basic open set [σ] is µ([σ]) = 2^{−|σ|}, and the measure of any other measurable set is determined in the standard way.

We will often identify the Cantor space with the unit interval (0,1) since these spaces are measure-theoretically isomorphic. Here, we make use of the interval topology on R, and our basic open sets are intervals (a, b). We will establish the Lebesgue measure in this context as well, denoted throughout by µ once again [57]. This is the "standard" measure on R, and the Lebesgue measure of such an interval is µ((a, b)) = b − a for finite a and b.

We can also describe elements of R using concepts from classical computability theory. In general, we identify a real α in the unit interval with the element A of the Cantor space such that α = 0.A.
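This identification can be made concrete: the cylinder [σ] corresponds to the dyadic interval of reals whose binary expansions begin with σ, and its Lebesgue measure 2^{−|σ|} is exactly the length of that interval. The following sketch (our own, using exact rational arithmetic) checks this for a small example:

```python
from fractions import Fraction

def mu(sigma):
    """Lebesgue measure of the basic open set [sigma]: 2^(-|sigma|)."""
    return Fraction(1, 2 ** len(sigma))

def cylinder_interval(sigma):
    """The dyadic subinterval of [0,1) corresponding to the cylinder [sigma]."""
    left = sum(Fraction(int(b), 2 ** (i + 1)) for i, b in enumerate(sigma))
    return left, left + mu(sigma)

a, b = cylinder_interval('011')     # reals with binary expansion 0.011...
assert (a, b) == (Fraction(3, 8), Fraction(1, 2))
assert b - a == mu('011') == Fraction(1, 8)
```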
This identification allows us to say that such a real is computable if the corresponding A ∈ 2^ω is; it is equivalent to say that a real α is computable if there is a computable sequence of rationals ⟨q_n⟩ converging to it such that |q_n − α| < 2^{−n} for every n [99, 100].

Of course, we would like to extend this to computable enumerability as well: just as a c.e. set is one which we build up from ∅ by enumerating elements into it, a left-c.e. real α is one that is effectively approximable from below; that is, there is a computable, increasing sequence of rationals that converges to α. Correspondingly, a right-c.e. real is one that is effectively approximable from above. Equivalently, we could define these in terms of Dedekind cuts: a real α is left-c.e. if and only if its left cut {q ∈ Q | q < α} is a c.e. set, and right-c.e. if its right cut (defined similarly) is a c.e. set.

(While some reals may have two representations, this does not matter: such a real will be rational, and the corresponding possibilities for A are therefore computable and of the same computational strength.)

We can extend the notions of computability and computable enumerability once more, this time to functions from ω or another computable set to R. To do so, we need to be able to talk about a sequence of reals that is uniformly computable or left-c.e. These definitions are built directly from those of uniformly computable and uniformly c.e. sets:

Definition 1.6.
A uniformly computable (left-c.e.) sequence of reals is a sequence ⟨r_i⟩_{i∈ω} for which there is a computable function f : ω × ω → Q such that for each i, ⟨f(i, n)⟩_{n∈ω} is a computable (left-c.e.) approximation of r_i.

Definition 1.7.
A function from a computable set to R is computable if its values are uniformly computable reals, and it is computably enumerable if its values are uniformly left-c.e. reals.

Now we turn our attention to the general mathematical ideas we will need to study randomness properly and place them in the context of computability theory. The first concept, that of a martingale, will be useful when we discuss the predictability framework for randomness. In general, a martingale is a certain type of stochastic process, but here we need only think of it as a type of betting strategy on finite binary strings.
Definition 1.8. [59]
A function d : 2^{<ω} → R^{≥0} is a martingale if it obeys the fairness condition

d(σ) = (d(σ0) + d(σ1)) / 2

for all σ ∈ 2^{<ω}. We say that a martingale d succeeds on A ∈ 2^ω if lim sup_n d(A ↾ n) = ∞, and the success set of d, which we will write as S[d], is the set of all sequences upon which d succeeds.

We can think of d(σ) as expressing the amount of capital that we have after betting on the initial string σ using the strategy inherent in d (so d(⟨⟩) is our capital before any bets are placed), and S[d] as the set of sequences on which we can make arbitrarily much money betting if the payout is determined by d. As shown by Ville, P ⊆ 2^ω has Lebesgue measure zero if and only if there is some martingale d such that P ⊆ S[d] [102].

A computable or c.e. martingale is simply a martingale that is, respectively, a computable or c.e. function.

To define the last of the core mathematical concepts in this section, Hausdorff dimension, we must consider a variation of Lebesgue measure on the Cantor space.
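The fairness condition and the success set can be explored mechanically for a concrete betting strategy. The martingale below, a standard toy example (the code itself is our own sketch), bets all of its capital on the next bit being 0, so it doubles its money along the all-zeros sequence and is bankrupted by the first 1:

```python
from itertools import product

def d(sigma):
    """Start with capital 1 and bet everything on the next bit being 0."""
    return float(2 ** len(sigma)) if set(sigma) <= {'0'} else 0.0

# The fairness condition d(sigma) = (d(sigma0) + d(sigma1)) / 2, checked
# exhaustively on all strings of length at most 7.
for n in range(8):
    for bits in product('01', repeat=n):
        sigma = ''.join(bits)
        assert d(sigma) == (d(sigma + '0') + d(sigma + '1')) / 2

# d succeeds on the all-zeros sequence: its capital grows without bound.
assert d('0' * 20) == 2 ** 20
```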
Definition 1.9. [40]
Let 0 ≤ s ≤ 1. The s-measure of a basic open set [σ] is µ_s([σ]) = 2^{−s|σ|}. This will allow us to define dimensions of subsets of Cantor space.
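A quick computation shows how the parameter s reweights the space: summed over all 2^n strings of length n, the s-measure is 2^{n(1−s)}, so for s = 1 we recover Lebesgue measure (total 1), while for s < 1 the total grows with n, which is why smaller values of s "inflate" thin sets. A sketch (our own naming):

```python
def mu_s(sigma, s):
    """The s-measure of the basic open set [sigma]: 2^(-s|sigma|)."""
    return 2.0 ** (-s * len(sigma))

n = 10
for s in (0.25, 0.5, 1.0):
    total = (2 ** n) * mu_s('0' * n, s)          # all length-n cylinders weigh the same
    assert abs(total - 2.0 ** (n * (1 - s))) < 1e-9

assert (2 ** n) * mu_s('0' * n, 1.0) == 1.0      # s = 1 is Lebesgue measure
```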
Definition 1.10. An n-cover of a subset S of Cantor space is a set of strings C ⊆ 2^{≥n} such that S ⊆ [C]. We define

H^s_n(S) = inf { ∑_{σ∈C} µ_s([σ]) | C is an n-cover of S }.

The s-dimensional outer Hausdorff measure of S is H^s(S) = lim_n H^s_n(S), and the Hausdorff dimension of S is

dim_H(S) = inf { s | H^s(S) = 0 }.

We note that these definitions are specialized here to Cantor space. For details about these notions in more general settings, see, for instance, the monographs [86] or [27]. We further note that the effective version of Hausdorff dimension that we will discuss later, originally due to Lutz (see, e.g., [61]), is instead based on a type of betting strategy called an s-gale rather than on n-covers. In fact, Lutz provided an equivalent characterization of classical dimension in terms of s-gales and then gave an effectivization of this alternative notion.

2. Early developments
During the 1960s and early 1970s, the foundation of much of the current work in algorithmic randomness was laid. Earlier work by von Mises, Ville, Church, and Wald in the first half of the twentieth century highlighted the problem of defining the notion of an individual random sequence, but no satisfactory definition of a random sequence was provided during this time. However, several promising definitions of randomness emerged in the work of (i) Kolmogorov in the mid-1960s, (ii) Martin-Löf in the late 1960s, and (iii) Schnorr and Levin in the early 1970s.

We will not rehearse the broader philosophical concerns that motivated the search for a definition of an individual random sequence (for such an account, see [85]). For our purposes, it suffices to note the key
desiderata for a definition of randomness that came into focus during the first half of the twentieth century: (1) a random sequence is one that should not be contained in any null sets that are "nicely" definable in some way (clarified in part by the work of Wald and Ville on von Mises' original definition of randomness), and (2) one should formalize these "nicely" definable null sets in terms of effectivity (as suggested by Church's introduction of concepts of computability theory to the task of defining randomness).
2.1. Randomness via initial segment complexity.
The first significant breakthroughs in the task of offering such a definition of randomness came in the work of Kolmogorov and, independently, Solomonoff [92, 93], who provided different accounts of the initial segment complexity of sequences. We will focus here on Kolmogorov's contribution.

Kolmogorov did not set out to provide a definition of random sequences in terms of some class of effective null sets. Rather, his aim was to provide a notion of randomness for finite strings (again, for the motivation behind this aim, see [85]), which was, in turn, defined in terms of Kolmogorov complexity. Such a definition is found in Kolmogorov's 1965 paper "Three approaches to the quantitative definition of information" (see [48] for the original English translation of the article). For a fixed partial computable function M : 2^{<ω} → 2^{<ω}, often called a machine, and some τ ∈ 2^{<ω}, we can consider all strings σ ∈ 2^{<ω} such that M(σ)↓ = τ; if we consider each such string σ to contain information about how to produce the string τ (via the function M), then the shortest such string σ provides the minimal amount of information necessary for producing τ. This gives us a measure of the complexity of τ:

Definition 2.1.
The plain Kolmogorov complexity of a string τ relative to the machine M is

C_M(τ) = min { |σ| : M(σ)↓ = τ },

assuming there is some σ ∈ 2^{<ω} such that M(σ)↓ = τ; if no such σ exists, we set C_M(τ) = ∞.

The dependence of the above definition on the function M is undesirable, for clearly, different choices of M produce different values C_M(τ) for each fixed τ ∈ 2^{<ω}. Kolmogorov addressed this problem by defining his complexity measure in terms of a universal partial computable function: if ⟨M_e⟩_{e∈ω} is a fixed computable enumeration of the partial computable functions from 2^{<ω} to 2^{<ω}, then we can define a universal partial computable function U : 2^{<ω} → 2^{<ω} by setting U(1^e 0 σ) = M_e(σ) for each e ∈ ω and σ ∈ 2^{<ω} (assuming that M_e(σ)↓; otherwise, we set U(1^e 0 σ)↑). Then we define C(σ) to be C_U(σ) for each σ ∈ 2^{<ω}. (Here we do not exactly follow the details of Kolmogorov's presentation; he initially defines conditional Kolmogorov complexity in his paper.)

The value in defining complexity in terms of a universal partial computable function is seen in the following result, which is often referred to as the invariance theorem:

Theorem 2.2 (Kolmogorov, [48]). For every partial computable M : 2^{<ω} → 2^{<ω}, there is some c ∈ ω such that for every σ ∈ 2^{<ω},

C(σ) ≤ C_M(σ) + c.

Note that the invariance theorem implies that the complexity measures determined by two different choices of universal partial computable functions U and U′ differ by at most a fixed constant c: |C_U(σ) − C_{U′}(σ)| ≤ c for every σ ∈ 2^{<ω}.

From Kolmogorov's notion of complexity, how do we define the randomness of binary strings? Let us first consider two informal examples: a string σ_1 consisting of 50,000 1's and a string σ_2 of length 50,000 obtained by the tosses of a fair coin.
The shortest program needed to generate σ₁ has length considerably shorter than 50,000, as such a program only needs to specify that the symbol '1' is to be repeated 50,000 times. However, any program that generates σ₂ must, with high probability, have most, if not all, of the string σ₂ hardwired into the program, as most strings obtained by tossing a fair coin contain very few regularities and thus cannot be compressed. Following this example, the idea behind Kolmogorov's definition of randomness is to identify randomness with incompressibility. We now turn to the formal details.

Kolmogorov did not explicitly define randomness in his 1965 paper, but we find the following definition in the 1969 paper [49]. Consider those strings σ such that C(σ) ≥ |σ|. As there is no program to generate such a string σ that is shorter than σ, the most efficient way to produce such a string via U is simply to give it as input and copy it directly to output. More generally, for a fixed c ∈ ω, we can consider the set of all strings that cannot be compressed by more than c bits. Let us say of a string σ ∈ 2^{<ω} that it is c-incompressible if C(σ) ≥ |σ| − c. For n ≥ c, since the number of strings of length strictly less than n − c is equal to 1 + 2 + 4 + ⋯ + 2^{n−c−1} = 2^{n−c} −
1, there are at least 2^n(1 − 2^{−c}) c-incompressible strings of length n, thereby yielding a plethora of random strings (for sufficiently large n).
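The definition of C_M can be made concrete with a toy decompressor. The machine M below is hypothetical (it stands in for a fixed partial computable machine, not the universal U, whose complexity function is not computable): a program of the form '1' followed by an 8-bit count n outputs a run of n ones, while a program of the form '0' followed by w outputs w literally. C_M is then computed by brute-force search over programs in order of length, so a long run of ones gets a short description, while a "patternless" string can only be copied. A sketch in Python:

```python
from itertools import product

def M(program):
    """A toy decompressor (hypothetical): '1' + 8-bit count -> run of ones;
    '0' + w -> the string w itself; M diverges on all other inputs."""
    if program.startswith("1") and len(program) == 9:
        return "1" * int(program[1:], 2)
    if program.startswith("0"):
        return program[1:]
    return None  # M(program) is undefined

def C_M(target, max_len=12):
    """Plain complexity relative to M: length of the shortest program
    producing `target`, found by exhaustive search up to length max_len."""
    for n in range(max_len + 1):
        for bits in product("01", repeat=n):
            if M("".join(bits)) == target:
                return n
    return None  # no program of length <= max_len produces target

# A run of 200 ones is described by a 9-bit program, far below its length...
assert C_M("1" * 200, max_len=9) == 9
# ...while an arbitrary 4-bit string needs |w| + 1 bits: literal copying.
assert C_M("0110", max_len=9) == 5
```

The same search illustrates the counting argument above: since there are only 2^{n−c} − 1 programs of length below n − c, most strings of length n cannot be compressed by more than c bits, whatever machine is used.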
2.2. Martin-Löf randomness.
It is natural to try to extend this definition of randomness for finite binary strings to infinite binary sequences, a task that Martin-Löf sought to accomplish. Indeed, we can define a sequence A ∈ 2^ω to be incompressible if there is some c ∈ ω such that C(A↾n) ≥ n − c for every n ∈ ω. However, as shown by Martin-Löf (see [26, Theorem 3.1.4]), no sequence is incompressible in this sense. In fact, he showed that for every A ∈ 2^ω, there are infinitely many n such that C(A↾n) ≤ n − log(n) (where log(n) is the binary logarithm).

Instead of seeking to define random infinite sequences by modifying the underlying notion of Kolmogorov complexity, Martin-Löf in [66] took an alternative approach, defining randomness in terms of certain effective statistical tests. Martin-Löf aimed to formalize the notion of statistical tests used in hypothesis testing, where each such test has a critical region such that, given a sequence of observations contained in this region, we reject the null hypothesis at a certain level of significance. In Martin-Löf's formalization, such a test is given by a computable sequence of effectively open sets ⟨U_i⟩_{i∈ω}, each of which corresponds to a critical region given by a certain level of significance. Moreover, if a sequence is random, then it should not be contained in the critical regions at every level of significance; eventually, we should find a critical region at some level of significance that the random sequence avoids. Thus we have the following definition:

Definition 2.3. A Martin-Löf test is a sequence of uniformly Σ^0_1 classes ⟨U_i⟩_{i∈ω} such that µ(U_i) ≤ 2^{−i} for every i ∈ ω. A sequence A ∈ 2^ω is said to pass a Martin-Löf test ⟨U_i⟩_{i∈ω} if A ∉ ⋂_{i∈ω} U_i, and a sequence is Martin-Löf random if it passes every Martin-Löf test.
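For illustration, here is a minimal sketch (with hypothetical representation choices) of a Martin-Löf test: each component U_i is a finite union of cylinders [σ] = {A ∈ 2^ω : A extends σ}, represented by the list of the strings σ. The i-th component below is the single cylinder [0^i], so µ(U_i) = 2^{−i}, and the only sequence in every component is 000⋯, which accordingly fails the test.

```python
# A toy Martin-Löf test (a sketch): component U_i is a finite union of
# pairwise-incomparable cylinders [sigma], stored as the list of strings sigma.

def measure(component):
    """Lebesgue measure of a union of cylinders given by incomparable strings."""
    return sum(2.0 ** (-len(sigma)) for sigma in component)

def in_component(prefix_of_A, component):
    """Membership of a sequence (given by a long finite prefix) in a component."""
    return any(prefix_of_A.startswith(sigma) for sigma in component)

# U_i = [0^i] for i = 1..10, so mu(U_i) = 2^{-i}, as the definition requires.
test = [["0" * i] for i in range(1, 11)]
for i, U_i in enumerate(test, start=1):
    assert measure(U_i) <= 2.0 ** (-i)

zeros = "0" * 20   # a long prefix of 000...
ones = "1" * 20    # a long prefix of 111...
assert all(in_component(zeros, U_i) for U_i in test)  # 000... is in every U_i
assert not in_component(ones, test[0])                # 111... escapes U_1 already
```

Of course, a genuine test is an infinite uniformly Σ^0_1 family, and a sequence is Martin-Löf random only if it passes every such test, not just this one.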
Martin-Löf also proved in [66] that there is a universal Martin-Löf test: a test ⟨U_i⟩_{i∈ω} such that for all Martin-Löf tests ⟨V_i⟩_{i∈ω}, ⋂_{i∈ω} V_i ⊆ ⋂_{i∈ω} U_i. This provides an easy argument that the class of Martin-Löf random sequences has measure 1: since the sequences that are not Martin-Löf random are precisely those contained in ⋂_{i∈ω} U_i for a universal test ⟨U_i⟩_{i∈ω}, the class of Martin-Löf random sequences must be conull. Moreover, Martin-Löf showed that common laws of probability such as the strong law of large numbers and the law of the iterated logarithm are satisfied by all Martin-Löf random sequences.

Martin-Löf also investigated the extent to which one can characterize Martin-Löf randomness in terms of initial segment complexity, showing that every sequence A such that, for some c ∈ ω, C(A↾n) ≥ n − c holds for infinitely many n ∈ ω is Martin-Löf random (see [67]). He was, however, unable to establish the converse, which in fact does not hold, as shown much later and independently by J. Miller [73] and Nies, Stephan, and Terwijn [82] (see Section 4.4 below for more details).

The problem of providing an initial segment complexity characterization of randomness for infinite sequences was solved independently by Levin and Schnorr, who worked with an alternative notion of Kolmogorov complexity, namely, prefix-free Kolmogorov complexity. Recall that a set S ⊆ 2^{<ω} is prefix-free if for every σ ∈ S, if τ ≻ σ, then τ ∉ S. We can extend the notion of being prefix-free to a machine M : 2^{<ω} → 2^{<ω} by stipulating that M is prefix-free if the domain of M is a prefix-free subset of 2^{<ω}. Both Levin [58] and Chaitin [15] independently defined prefix-free Kolmogorov complexity in terms of a prefix-free universal machine. Note that one can effectively enumerate the collection of all prefix-free machines ⟨M_e⟩_{e∈ω}, and hence we can define U as we did in Subsection 2.1 above.

Definition 2.4.
The prefix-free Kolmogorov complexity of a string τ is given by K(τ) = min{|σ| : U(σ)↓ = τ}, where U is a universal prefix-free machine.

For a prefix-free machine M, we write K_M to stand for prefix-free complexity relative to the machine M. One can readily verify that results such as the invariance theorem and the existence of incompressible strings still hold for prefix-free complexity. With this modified notion of Kolmogorov complexity, Levin and Schnorr independently proved the following:

Theorem 2.5 (Levin [58], Schnorr [15]). A ∈ 2^ω is Martin-Löf random if and only if there is some c ∈ ω such that for all n ∈ ω, K(A↾n) ≥ n − c. (Schnorr is given credit for this result in Chaitin's [15].)

As part of the proof of the Levin-Schnorr theorem, it is standard to prove that for every A that is not Martin-Löf random, there is some prefix-free machine M such that for every c ∈ ω, there is an n ∈ ω such that K_M(A↾n) < n − c. More specifically, given a Martin-Löf test that A fails to pass, we explicitly construct such a machine M. This is done via what was earlier referred to as the Kraft-Chaitin theorem but is currently called the KC theorem (as in [26]) or the machine existence theorem (as in [81]). The theorem is as follows; see the footnote in Section 3.6 of [26] for an explanation of its origins.
Theorem 2.6.
Given a computable enumeration of pairs ⟨(n_i, σ_i)⟩ ⊆ ω × 2^{<ω} such that Σ_{i∈ω} 2^{−n_i} ≤ 1, there is a prefix-free machine M and a sequence of strings ⟨τ_i⟩ such that for every i, M(τ_i) = σ_i and |τ_i| = n_i.

To see the details of how this result is used in the proof of the Levin-Schnorr theorem, see [81, Theorem 3.2.9].

The Levin-Schnorr theorem thus establishes the equivalence of a characterization of randomness in terms of statistical properties with a characterization of randomness in terms of incompressibility. In fact, the proof of the Levin-Schnorr theorem shows that (i) if a sequence is compressible, then there is a statistical test such that its critical regions contain the sequence at all levels of significance (each level corresponding to how compressible the sequence is), and (ii) if a sequence is statistically atypical, that is, contained in the critical regions at every level of significance of some statistical test, then we can define a machine that compresses this sequence (and, in fact, every sequence contained in all such critical regions).
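The combinatorial core of Theorem 2.6 is Kraft's inequality. The sketch below implements only the easy offline version: given requested lengths n_i with Σ 2^{−n_i} ≤ 1, it assigns code words τ_i with |τ_i| = n_i forming a prefix-free set (the machine M of the theorem would then map each τ_i to σ_i). The actual KC theorem is stronger, since it must handle the requests online as they are enumerated; sorting the requests by length first makes the offline construction immediate.

```python
from fractions import Fraction

def kraft_code(lengths):
    """Offline Kraft construction (a sketch of the idea behind the
    machine existence theorem): given lengths n_i with sum 2^{-n_i} <= 1,
    return prefix-free code words tau_i with |tau_i| = n_i.
    Processing requests in order of increasing length, tau_i is the
    n_i-bit binary expansion of the running sum c of the weights used so
    far, so the dyadic intervals [c, c + 2^{-n_i}) are disjoint."""
    assert sum(Fraction(1, 2**n) for n in lengths) <= 1, "Kraft inequality fails"
    order = sorted(range(len(lengths)), key=lambda i: lengths[i])
    codes = [None] * len(lengths)
    c = Fraction(0)
    for i in order:
        n = lengths[i]
        bits, x = [], c
        for _ in range(n):          # write c as an n-bit binary fraction
            x *= 2
            if x >= 1:
                bits.append("1")
                x -= 1
            else:
                bits.append("0")
        codes[i] = "".join(bits)
        c += Fraction(1, 2**n)
    return codes

codes = kraft_code([2, 2, 3, 3, 3])
# the resulting code words are prefix-free and have the requested lengths
for a in codes:
    for b in codes:
        assert a == b or not b.startswith(a)
assert [len(w) for w in codes] == [2, 2, 3, 3, 3]
```

In the full theorem the machine M is built stage by stage as the pairs (n_i, σ_i) are enumerated, which requires managing the unused dyadic intervals more carefully than this sorted version does.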
2.3. Schnorr's contributions.
Shortly after the publication of Martin-Löf's definition of randomness, Schnorr developed an alternative approach to algorithmic randomness using martingales as originally defined by Ville. This approach constitutes the third major framework for algorithmic randomness: the unpredictability framework. Not only did Schnorr provide a characterization of Martin-Löf randomness in terms of unpredictability, but he also introduced two alternative definitions of randomness, computable randomness and Schnorr randomness, in [88] and [89]; these definitions make use of computable and c.e. martingales as described in Section 1.3.
Theorem 2.7 (Schnorr, [88, 89]). A sequence is Martin-Löf random if and only if no c.e. martingale succeeds on it.
In other words, a sequence is Martin-Löf random exactly when no c.e. martingale can predict it sufficiently well to make an arbitrarily large amount of money betting on it. We quickly note that the standard proof of this theorem involves interpreting a Martin-Löf test as a c.e. martingale and that, therefore, since there is a universal Martin-Löf test, there is also a universal c.e. martingale.

As noted above, in the same works we find two new randomness notions, namely, computable randomness and the notion now known as Schnorr randomness. The definition of Schnorr randomness requires the definition of an order function, a nondecreasing and unbounded computable function from ω to ω.

Definition 2.8.
A sequence is computably random if no computable martingale succeeds on it.
Definition 2.9.
A sequence A is Schnorr random if, for every computable martingale d and order function p, limsup_n d(A↾n)/p(n) < ∞.

In other words, a sequence is computably random if no computable martingale can predict it well enough to make an arbitrarily large amount of money betting on it, and a sequence is Schnorr random if, while a computable martingale may be able to predict it well enough to make arbitrarily much betting on it, we can bound the rate at which this occurs.

In [89], Schnorr also showed that Schnorr randomness can be characterized using a more effective form of Martin-Löf tests: rather than requiring the measure of each test component to be effectively approximable from below, one should require it to be approximable from both below and above.
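The martingale definitions above can be made concrete with a small, hypothetical betting strategy. The martingale below satisfies the fairness condition d(σ) = (d(σ0) + d(σ1))/2 by wagering a fixed fraction of its current capital that the next bit repeats the previous one; its capital grows exponentially on a constant sequence but dwindles on an alternating one. A sketch:

```python
# A computable martingale (a sketch): bet a fixed fraction of the current
# capital that the next bit equals the previous one. A winning bet of b
# pays back 2b, a losing bet forfeits b, so the fairness condition
# d(sigma) = (d(sigma0) + d(sigma1)) / 2 holds at every string sigma.

def run_martingale(bits, stake=0.5):
    capital = 1.0
    for i, b in enumerate(bits):
        guess = bits[i - 1] if i > 0 else "0"   # first guess is arbitrary
        bet = stake * capital
        capital += bet if b == guess else -bet
    return capital

repetitive = run_martingale("0" * 30)    # every guess wins: capital 1.5^30
alternating = run_martingale("01" * 15)  # after the first bit, every guess loses
assert repetitive > 10**4
assert alternating < 1.0
```

Success in Schnorr's sense means such capital growth along a sequence is unbounded; for Schnorr randomness one additionally demands, via the order function p, that any unbounded growth be computably detectable.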
Definition 2.10.
A Schnorr test is a Martin-Löf test ⟨V_i⟩_{i∈ω} such that the measure of the i-th test component, µ(V_i), is exactly 2^{−i}.

We note here that it is actually only necessary that the measures of the test components be uniformly computable; however, it is customary to use 2^{−i} as the measure of the i-th component. (It is usual at the time of writing to require only that an order function be increasing and unbounded; for instance, this is the definition used in Downey and Hirschfeldt [26]. However, when Schnorr originally defined order functions, he required that they be computable as well, and since we will not consider noncomputable order functions in this survey, we prefer to use his original definition for simplicity's sake.)

The primary difference between Martin-Löf randomness and the two alternative definitions that Schnorr proposed is that the latter two are more constructive than the former. First, Schnorr objected to the notion of a c.e. martingale, questioning why one should only require that the values of a martingale be computably approximable from below but not from above. Thus, in the move from c.e. martingales to computable martingales, we arrive at the definition of computable randomness. Second, Schnorr rejected the standard notion of martingale success, pointing out that the winnings of a gambler could increase so slowly that this success goes undetected. A better approach, on
Schnorr's view, is to require a computable lower bound on the success of a martingale, so that, at least in principle, the gambler can recognize that her winnings are growing without bound. In the move from the standard notion of success to this more constructive alternative, we arrive at the definition of Schnorr randomness.

While Schnorr randomness and computable randomness were the first variations on Martin-Löf randomness to be considered, many others have been presented since then. Several will be discussed in later sections, but two general approaches will not appear in this survey. One is higher randomness, which concerns randomness notions defined in terms of effective descriptive set theory and is discussed in Monin's survey. Another is resource-bounded randomness, which concerns randomness notions whose definitions involve time and space bounds and is discussed in Stull's survey.
3. Intermittent work: The late twentieth century
After the initial flurry of results established primarily by Martin-Löf, Schnorr, and Levin in the late 1960s and early 1970s, intermittent work on algorithmic randomness was carried out in the mid-1970s, the 1980s, and the early 1990s. We highlight some of these developments, starting with the work of Demuth and Kučera.
3.1. The contributions of Demuth and Kučera.
Working in the tradition of the Markov school of constructive analysis, the Czech mathematician Osvald Demuth made a number of important contributions to the study of algorithmic randomness. As laid out in the survey [53], not only did Demuth independently discover the notions of Martin-Löf randomness and computable randomness, but he also developed several notions of randomness that have proven to be fruitful in the decades since then. For a survey of Demuth's use of notions of randomness in constructive analysis, particularly the notions now called Demuth randomness and weak Demuth randomness, see [53]. For more on Demuth's contributions to computable analysis, see both the Rute survey and the Porter survey in this volume.

Kučera, who collaborated with Demuth on a number of projects in computable analysis and effective notions of genericity, also proved a number of key results in algorithmic randomness in the mid-1980s. We focus primarily on the paper [51], which contains several results that are now considered to be classical theorems in algorithmic randomness.
Theorem 3.1 (Kučera [51]). For every Π^0_1 class P with µ(P) > 0 and every Martin-Löf random A ∈ 2^ω, there is some tail B of A, that is, a B such that A = σB for some σ ∈ 2^{<ω}, such that B ∈ P. In particular, the collection of Turing degrees of members of P includes every degree containing a Martin-Löf random sequence.

As noted in [9], another way to formulate the first part of this theorem is that for any Π^0_1 class P of positive measure and any Martin-Löf random sequence X, we can apply the shift operator (each application of which drops the initial bit of a sequence) to X a finite number of times to eventually obtain an element of P. Seen in this light, this result relates ergodic theory and algorithmic randomness, a topic that has been fruitfully explored, as laid out in Towsner's survey in this volume.

Next, recall that a sequence A has diagonally noncomputable degree (or DNC degree) if there is some f ≤_T A such that f(i) ≠ ϕ_i(i) for every i ∈ ω. Sequences of DNC degree are closely related to completions of Peano arithmetic, which are studied in the context of the Gödel incompleteness phenomenon. In this same paper [51], Kučera proved the following (in a slightly different form):

Theorem 3.2.
Every Martin-Löf random sequence has DNC degree.
Lastly, we also find the following in [51]:
Theorem 3.3.
For every Turing degree a ≥ 0′, there is some Martin-Löf random A ∈ a.

Implicit in the proof of this last result is the theorem that for every B ∈ 2^ω, there is some Martin-Löf random A ∈ 2^ω such that B ≤_T A. This result is now referred to as the Kučera-Gács theorem, as it was independently established by Gács [36], who actually showed that there is a computable bound on the use of the computation of B from A; in other words, B ≤_wtt A.

Kučera's result is counterintuitive: How can every sequence that computes the halting problem be Turing equivalent to a Martin-Löf random sequence? It turns out that this phenomenon is the exception rather than the rule, for as Sacks' theorem states [87], for every noncomputable A ∈ 2^ω, there are only measure zero many sequences X ∈ 2^ω such that A ≤_T X. Thus, only measure zero many Martin-Löf random sequences are computationally strong enough to compute the halting problem. Moreover, as we will see in Section 4.1 below, this phenomenon is incompatible with stronger notions of randomness.

3.2. The contributions of Kurtz, Kautz, and van Lambalgen.
Now we turn to the work of Kurtz, Kautz, and van Lambalgen. While Kučera focused on situating the Martin-Löf random sequences within the Turing degrees, Kurtz created true hierarchies of randomness notions for the first time in his 1981 dissertation [55]: the n-random and weakly n-random sequences.

Definition 3.4. A ∈ 2^ω is n-random if for every sequence ⟨S_i⟩_{i∈ω} of uniformly Σ^0_n subsets of 2^ω such that Σ_{i∈ω} µ(S_i) is finite, A belongs to only finitely many S_i's. A is weakly n-random if it is not contained in any Π^0_n class of Lebesgue measure zero, or equivalently, if it belongs to every conull Σ^0_n subset of 2^ω.

The relationships between these hierarchies that we would expect to hold based on their names actually do: every n-random set is weakly n-random, every weakly (n + 1)-random set is n-random, and these inclusions are strict for all n ≥ 1. (Kurtz himself only proved that there are n-random degrees that are not weakly (n + 1)-random; the other separation was proven later by Kautz [44].) We can also note that a result of Solovay shows us that the 1-random sets are precisely the Martin-Löf random sets [94], giving us another type of test characterization for Martin-Löf randomness. Furthermore, weak 1-randomness has been extensively studied in its own right and is now known as Kurtz randomness. For instance, Kurtz showed that every hyperimmune degree contains a Kurtz random sequence. This notion of randomness is therefore very weak, and it can reasonably be claimed that it is not a "proper" randomness notion. (We note that randomness notions that result in large classes of random sequences are often called weak, even though the tests used to describe such notions are necessarily more restricted: it is easier to pass every test in a smaller class of tests. Similarly, a randomness notion describing a relatively small class of random sequences is called strong.)

Ten years later, Kautz continued Kurtz's investigations of n-randomness and weak n-randomness, as well as his own measure-theoretic pursuits, in his dissertation [44]. In particular, he showed that n-randomness can be characterized using more traditional methods such as tests of the same form as Martin-Löf tests, Kolmogorov complexity, and martingales. We define a Σ^0_n test to be a sequence ⟨V_i⟩_{i∈ω} of uniformly Σ^0_n classes such that µ(V_i) ≤ 2^{−i} for all i.

We also consider randomness relative to an oracle for the first time in this survey. When we say that A is random relative to B (for a given randomness notion), we mean in general that access to B does not allow us to derandomize A. For n-randomness, this is straightforward: we simply use ∅^(n−1) as an oracle for the universal machine, the universal martingale, or the components of the universal test that we are using to determine the randomness of A. However, for other randomness notions, it is far less straightforward; see Franklin's survey in this volume for a discussion of the issues that arise.

We will write K^A(σ) for the prefix-free Kolmogorov complexity of σ relative to A.

Theorem 3.5 (Kautz, [44]). For A ∈ 2^ω, the following are equivalent:

(1) A is n-random.
(2) For every Σ^0_n test ⟨V_i⟩, A ∉ ⋂_i V_i.
(3) A is 1-random relative to ∅^(n−1).
(4) For some c ∈ ω, K^{∅^(n−1)}(A↾m) ≥ m − c for all m.
(5) No Σ^0_n martingale succeeds on A.

Kurtz also studied the Turing jumps of n-randoms; for instance, he showed that the class {A | A^(n−1) ≥_T ∅^(n)} is null and does not contain any (n + 1)-random sets.

Finally, Kautz considered randomness with respect to computable measures in general and not just the Lebesgue measure, allowing him to define n-randomness with respect to an arbitrary computable measure ν. He further examined the extent to which 1-randomness is preserved under effective transformations and how to translate between randomness with respect to various computable measures. Porter continues this discussion of randomness with respect to different measures, both computable and noncomputable, in his survey.

Kautz also demonstrated that any computable subset of an n-random sequence is itself n-random and that, in particular, a computable subset of an n-random sequence cannot be computed from "the rest" of the sequence and is in fact n-random relative to it [44]. In contrast, he also showed that for every n ≥
1, there is a weakly n-random set A ⊕ B such that A is not weakly n-random relative to B. This work parallels that of van Lambalgen, who, in [101], precisely characterized the circumstances under which the join of two Martin-Löf random sequences is itself random:

Theorem 3.6 (van Lambalgen, [101]). For A, B ∈ 2^ω, the following are equivalent:

(1) A ⊕ B is n-random.
(2) A is n-random and B is n-random relative to A.

As an immediate consequence, we get that a sequence A is n-random relative to an n-random sequence B if and only if B is n-random relative to A. This symmetry of relative randomness has proven to be an extremely useful tool in the study of Martin-Löf randomness. A more detailed discussion of this theorem, specifically in the context of alternative notions of randomness, can be found in Franklin's survey in this volume.
4. Rapid growth at the turn of the century
During the early 2000s, interest in algorithmic randomness grew considerably in the computability theory community. Here we highlight some of the more significant results.
4.1. The Turing degrees of random sequences.
As discussed in Section 2, Schnorr introduced two alternatives to Martin-Löf randomness, namely Schnorr randomness and computable randomness. Separations of these notions were initially established by Schnorr [88] and Wang [104, 105]. These results were extended by Nies, Stephan, and Terwijn [82], who proved that the notions can be separated in precisely the high degrees.
Theorem 4.1 (Nies, Stephan, and Terwijn [82]). The following are equivalent for A ∈ 2^ω:

(i) A has high Turing degree (that is, A′ ≥_T ∅′′).
(ii) There is some B ≡_T A that is computably random but not Martin-Löf random.
(iii) There is some C ≡_T A that is Schnorr random but not computably random.

Nies, Stephan, and Terwijn also proved that this theorem holds when we consider only left-c.e. reals, so we can separate these notions even in that more limited context.

Another degree-theoretic result established in this period, due to Stephan, characterizes the Martin-Löf random sequences that are Turing complete.
Theorem 4.2 (Stephan [96]). For every Martin-Löf random sequence A ∈ 2^ω, A ≥_T ∅′ if and only if A has PA degree, that is, A computes a consistent completion of Peano arithmetic.

Recall from the previous section that Kučera proved that every Turing degree above 0′ contains a Martin-Löf random sequence. By Stephan's result, these are precisely the PA degrees that contain a Martin-Löf random sequence. Later, Franklin and Ng characterized these sequences using a new randomness notion, difference randomness [32].

Lastly, a key result on the Turing degrees of Martin-Löf random sequences relative to various oracles, known as the XYZ theorem, is the following:
Theorem 4.3 (J. Miller and Yu [76]). For a Martin-Löf random sequence X, if X ≤_T Y, where Y is Martin-Löf random with respect to some Z ∈ 2^ω, then X is also Martin-Löf random with respect to Z.

The phenomenon described in the
XYZ theorem can be seen as a kind of preservation of randomness: if we map a random sequence Y to a random sequence X, this map preserves relative randomness, in the sense that if an oracle Z does not detect Y as nonrandom, then it does not detect X as nonrandom. It immediately follows from this result that every Martin-Löf random sequence Turing reducible to an n-random sequence, for n ≥ 1, is itself n-random.

4.2. Chaitin's Ω. In [15], Chaitin introduced his celebrated number Ω, the halting probability, which is defined to be

Ω = Σ_{U(σ)↓} 2^{−|σ|},

where U is a fixed universal prefix-free machine. That is, Ω is the probability that U halts on some initial segment of an infinite input sequence.

Given that this definition depends on the choice of universal machine U, it is more accurate to define the family of Ω-numbers Ω_U for every universal prefix-free U. However, we will still refer to a fixed Ω-number as Ω, with the understanding that any property that we attribute to this Ω-number will hold of all Ω-numbers.

Clearly Ω is left-c.e., since it is the limit of a computable sequence of partial sums determined by running U for only finitely many steps. More significantly, Chaitin proved that Ω is both Turing complete and Martin-Löf random. That is, Ω encodes all of the information of the halting problem, and yet the bits of Ω are arranged in such a way as to pass all Martin-Löf tests.

Solovay, in unpublished work in the 1970s [94], introduced a new reducibility for left-c.e. reals that is now known as Solovay reducibility. Given left-c.e. reals α and β, α is Solovay reducible to β, written α ≤_S β, if there is a c ∈ ω and a partial computable function f : Q → Q such that for every q ∈ Q with q < β, f(q) is defined, f(q) < α, and α − f(q) < c(β − q). That is, from any rational less than β that is sufficiently close to β, we can compute a rational less than α that is sufficiently close to α.
One consequence of this definition, the formal details of which we do not consider here, is that any computable sequence of rationals converging to β from below can be effectively transformed into a computable sequence of rationals converging to α from below at the same rate (up to a multiplicative constant).

Solovay proved that for every left-c.e. real α, α ≤_S Ω. Calude, Hertling, Khoussainov, and Wang [13] later studied Ω-like left-c.e. reals, where a left-c.e. real β is Ω-like if for every left-c.e. real α, α ≤_S β. The main result in [13] is that if β is Ω-like, then there is some universal prefix-free machine U such that β = Ω_U. Thus, the collection of Ω-numbers and the collection of Ω-like reals coincide.

The final piece in understanding the relationship between left-c.e. reals and Solovay reducibility was provided by Kučera and Slaman, who proved the following:

Theorem 4.4 (Kučera and Slaman [54]). If α is left-c.e. and Martin-Löf random, then α is Ω-like.

Summing up, we have:
Corollary 4.5.
For a left-c.e. real α, the following are equivalent:

(i) α = Ω_U for some universal prefix-free machine U.
(ii) β ≤_S α for every left-c.e. real β; that is, α is Ω-like.
(iii) α is Martin-Löf random.

For a survey of more recent work on Ω, see Barmpalias's article in this volume.
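The left-c.e. character of Ω is easy to visualize with a toy machine. The finite, prefix-free domain below is hypothetical, standing in for the domain of U, so the resulting number is rational rather than a true Ω (which requires a universal machine and is Martin-Löf random); enumerating the domain one program per stage yields the nondecreasing sequence of partial sums Ω_s. A sketch:

```python
from fractions import Fraction

# A toy "halting probability" (a sketch, not a real Omega): enumerate the
# domain of a tiny prefix-free machine one program per stage, and record the
# left-c.e. approximation Omega_s = sum of 2^{-|p|} over the programs seen.

domain_enumeration = ["10", "0", "110"]   # a prefix-free set, listed in stages

omega_s = Fraction(0)
approximations = []
for program in domain_enumeration:
    omega_s += Fraction(1, 2 ** len(program))
    approximations.append(omega_s)

# The approximation only ever increases, as for the genuine Omega.
assert approximations == sorted(approximations)
assert approximations == [Fraction(1, 4), Fraction(3, 4), Fraction(7, 8)]
```

For a genuine Ω, no computable function can bound how long one must wait for the approximation to settle on a given bit; this is one way to see why Ω is both left-c.e. and noncomputable.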
4.3. Randomness-theoretic reducibilities.
In addition to Solovay reducibility, a number of other reducibilities defined in terms of algorithmic randomness were introduced in the period under consideration. We briefly survey four of them.

First, two reducibilities originally introduced by Solovay [94], given in terms of the comparison of the initial segment complexities of the sequences, are defined as follows:

A ≤_K B ⇔ for some constant c and every n, K(A↾n) ≤ K(B↾n) + c

and

A ≤_C B ⇔ for some constant c and every n, C(A↾n) ≤ C(B↾n) + c.

Thus, for each of these reducibilities, the more random a sequence is, the higher it will be with respect to the orderings ≤_K and ≤_C. The resulting degree structures, the K-degrees and the C-degrees, respectively, have been studied in, for instance, [23], [72], [76], and [74]. Some key results on the structures of both the K-degrees and the C-degrees follow:

• For α, β ∈ 2^ω, α ≤_C β implies α ≤_T β, but α ≤_K β does not imply α ≤_T β (the former is attributed to Stephan in [26, Theorem 9.7.1]).
• Both the C-degrees and the K-degrees of left-c.e. reals form upper semilattices with the join operation given by addition [23].
• There is an uncountable K-degree (attributed to J. Miller in [4]).
• There is a minimal pair in the K-degrees [19], [72], [7].
• There is a minimal C-degree [72].
• There is a pair of K-degrees with no upper bound [76].

Two additional reducibilities, which compare sequences' strengths as oracles, are:

A ≤_LR B ⇔ MLR^B ⊆ MLR^A,

where, for X ∈ 2^ω, MLR^X stands for the collection of Martin-Löf random sequences relative to X, and

A ≤_LK B ⇔ ∃c ∀σ (K^B(σ) ≤ K^A(σ) + c).

Both ≤_LR and ≤_LK were introduced by Nies in [80].
Informally, the idea behind these reducibilities is that if A is sufficiently powerful to detect that some object (a sequence X ∈ 2^ω in the case of ≤_LR and a string σ ∈ 2^{<ω} in the case of ≤_LK) is not random, then B also detects that this object is not random.

Although ≤_LR and ≤_LK concern the relative power of oracles for determining the randomness of different kinds of objects (sequences and strings), remarkably, these reducibilities coincide.

Theorem 4.6 (Kjos-Hanssen, J. Miller, and Solomon [45]). For
A, B ∈ 2^ω, A ≤_LR B if and only if A ≤_LK B.

Other significant results on the LR-degrees are:

• The set {A | A ≤_LR ∅′} is uncountable [6].
• The set {A | A ≤_LR B} is countable if and only if Ω is Martin-Löf random with respect to B [5].
• There is some
A <_T ∅′ such that ∅′ ≤_LR A [17] (see also [90, Theorem 6.7]).

These reducibilities allow us, as we will see in the next section, to calibrate sequences in terms of how far they are from being random. For a thorough discussion of the above reducibilities, as well as others, see [4].
4.4. Other randomness notions and lowness for randomness.
As the Turing degrees of Martin-Löf random sequences became better understood, there were newfound emphases on sequences whose properties are antithetical to those of Martin-Löf random sequences and on understanding other randomness notions more completely. The first such property is based on the reducibility ≤_K discussed above and states that a sequence is far from random if it has low prefix-free Kolmogorov complexity:

Definition 4.7.
A sequence A is K-trivial if there is a constant c such that for all n ∈ ω, K(A↾n) ≤ K(0^n) + c.

In other words, the K-trivials are the elements of the least K-degree. The second property states that a sequence is far from random if it is not capable of significantly reducing the Kolmogorov complexity of any string, making it a lowness notion:

Definition 4.8 (Muchnik, unpublished). A sequence A is low for K if there is a constant c such that for all strings σ, K(σ) ≤ K^A(σ) + c.

We observe that A is low for K exactly when A ≤_LK ∅. Lowness was also defined in two other ways that are more generalizable to other randomness notions.

Definition 4.9.
Let R be a randomness notion; in the second part of this definition, we assume that R has a test definition.

(1) A sequence A is low for R-randomness if every R-random set is still R-random relative to A (in other words, if A ≤_LR ∅).
(2) A sequence A is low for R-tests if, for every R-test ⟨V_i^A⟩_{i∈ω} relative to A, there is an unrelativized R-test ⟨U_i⟩ that covers it; that is, ⋂_{i∈ω} V_i^A ⊆ ⋂_{i∈ω} U_i.

We note that, since there is a universal prefix-free machine and a universal Martin-Löf test, we automatically have that lowness for Martin-Löf randomness is equivalent to lowness for Martin-Löf tests.

The last standard way in which a sequence is considered weak with respect to a randomness notion is that of being a base. This definition of Kučera, which first appeared in [52], is also easily stated in a general way. It is motivated by Sacks' theorem [87], which, as mentioned in Section 3.1, states that if A is not computable, then µ({X | X ≥_T A}) = 0. Here, instead, Kučera's idea is that a set should be considered far from random if something that computes it can be random relative to it.

Definition 4.10.
Let R be a randomness notion. A sequence A is a base for R-randomness if there is some X ≥_T A such that X is R-random relative to A.

Most of these lowness notions were proven to coincide by Nies in the case of Martin-Löf randomness in [80], when he proved that a sequence is low for Martin-Löf randomness exactly when it is low for K. In the same paper, Nies proved that K-triviality and lowness for Martin-Löf randomness are equivalent; the proof that K-triviality implies lowness for K is joint with Hirschfeldt. Kučera showed in [52] that every sequence that is low for Martin-Löf randomness is a base for Martin-Löf randomness; the characterization was completed by Hirschfeldt, Nies, and Stephan in [41]. We summarize these results here:

Theorem 4.11 ([41, 52, 80]). The following are equivalent for a sequence A.
(1) A is low for Martin-Löf randomness.
(2) A is low for Martin-Löf tests.
(3) A is low for K.
(4) A is K-trivial.
(5) A is a base for Martin-Löf randomness.

The Turing degrees of these sequences have also been studied in great depth, beginning with Solovay's construction of a noncomputable K-trivial set in the 1970s [94]. Chaitin [16] and then Zambella [107] had shown that all K-trivial sequences are ∆⁰₂; in fact, Zambella used techniques based on Solovay's to show that there is a noncomputable K-trivial c.e. set. In 1999, Kučera and Terwijn proved that there is a noncomputable c.e. set that is low for Martin-Löf randomness [56], and Muchnik proved that there is a noncomputable c.e. set that is low for K (unpublished). Later, Nies showed that the c.e. K-trivial Turing degrees form a Σ⁰₃ ideal in the Turing degrees and that the K-trivial degrees form a Turing ideal generated by the c.e. K-trivial degrees; in other words, the smallest ideal containing the c.e. K-trivial degrees consists of precisely the K-trivial degrees [80].
In fact, he also showed that the K-trivial sets form a proper subclass of the superlow sets [80].

Of course, these definitions can be discussed in the context of other randomness notions. Work on Schnorr lowness was contemporary with the earliest work on lowness for Martin-Löf randomness. In [1], Ambos-Spies and Kučera asked whether lowness for Schnorr randomness and lowness for Schnorr tests coincided; since there is no universal Schnorr test, the answer is not obviously yes. The first step in solving this question was taken by Terwijn and Zambella in [98], who characterized the Turing degrees that are low for Schnorr tests in terms of a concept they defined, computable traceability. This concept requires us to fix a standard way to list the finite sets computably. While there are several ways to do so, here we will say that the nth canonical finite set D_n contains precisely the natural numbers i such that there is a 1 in the 2^i's place in the binary representation of n.

Definition 4.12.
A Turing degree d is computably traceable if there is some order function p such that for each function f ≤_T d, there is a computable function r such that for all n,
(1) f(n) ∈ D_{r(n)} and
(2) |D_{r(n)}| ≤ p(n),
where D_n is the nth canonical finite set. If we replace D_{r(n)} with W_{r(n)}, we have a c.e. traceable degree.

Now we can state Terwijn and Zambella's result:
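The canonical indexing of finite sets described above is easily computed directly from the binary expansion of n; the following sketch (the helper name is our own, not from the survey) recovers D_n:

```python
def canonical_finite_set(n):
    """Return D_n: i belongs to D_n exactly when the 2^i's place
    of the binary representation of n is 1."""
    return {i for i in range(n.bit_length()) if (n >> i) & 1}
```

For example, D_5 = {0, 2} since 5 = 2^2 + 2^0, and D_0 is the empty set.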
Theorem 4.13 (Terwijn and Zambella [98]). A Turing degree is low for Schnorr tests if and only if it is computably traceable.
The corresponding result on lowness for Schnorr randomness was proven via contributions from a large number of people. We note here that every sequence that is low for Schnorr tests must be low for Schnorr randomness as well. Bedregal and Nies showed that any degree that is low for Schnorr randomness or for computable randomness must be hyperimmune-free [8]; Kjos-Hanssen, Nies, and Stephan showed that a sequence A is c.e. traceable exactly when every Schnorr null set relative to A is Martin-Löf null as well and then that any hyperimmune-free c.e. traceable degree must be computably traceable [46]. We can combine these results and see the following:

Theorem 4.14 (Kjos-Hanssen, Nies, and Stephan [46]). The following are equivalent for a sequence A.
(1) A is low for Schnorr randomness.
(2) A is low for Schnorr tests.
(3) A is computably traceable.

Before Schnorr triviality could be considered, it was necessary to characterize Schnorr randomness in terms of Kolmogorov complexity. This required a new type of machine that was first defined by Downey and Griffiths in [24].
Definition 4.15. A computable measure machine is a prefix-free machine M such that the measure of [dom(M)] is a computable real.

Downey and Griffiths then used computable measure machines to characterize Schnorr randomness; the characterization is directly parallel to that for Martin-Löf randomness. However, we note that it is inherently more complicated since there is no universal computable measure machine.
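For a prefix-free machine whose domain is finite, the measure of [dom(M)] is a finite dyadic sum and hence trivially computable; the subtlety in Definition 4.15 lies entirely in infinite domains. As a sanity check on the finite case (the helper name is our own illustration):

```python
from fractions import Fraction

def cylinder_measure(strings):
    """mu([S]) for a finite prefix-free set S of binary strings,
    i.e. the sum of 2^{-|sigma|} over sigma in S."""
    # Verify prefix-freeness: no string in S extends another.
    assert all(not s.startswith(t) for s in strings for t in strings if s != t)
    return sum(Fraction(1, 2 ** len(s)) for s in strings)
```

For instance, cylinder_measure({"0", "10", "11"}) returns 1, the largest value allowed by the Kraft inequality for a prefix-free domain.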
Theorem 4.16 (Downey and Griffiths [24]). A sequence A is Schnorr random if and only if for every computable measure machine M, there is a constant c such that for all n, K_M(A ↾ n) ≥ n − c.

Now we can discuss Schnorr triviality. In [24], Downey and Griffiths also defined a reducibility parallel to ≤_K that is based on computable measure machines:

Definition 4.17.
Let A and B be sequences. A is Schnorr reducible to B (written A ≤_Sch B) if for every computable measure machine M, there is a computable measure machine N and a constant c such that for all n, K_N(A ↾ n) ≤ K_M(B ↾ n) + c.

In a parallel with K-triviality, Downey and Griffiths defined a sequence A to be Schnorr trivial if A ≤_Sch 0^ω. Schnorr triviality and lowness for Schnorr randomness were soon seen not to coincide: almost immediately after Schnorr triviality was defined, Downey, Griffiths, and LaForte proved that there is a Turing complete Schnorr trivial sequence [20]. Later, Franklin proved that, in fact, every high degree contains a Schnorr trivial sequence [30]. This lets us see immediately that the Schnorr trivial sequences and the Schnorr low sequences do not coincide; however, Franklin showed that the hyperimmune-free Schnorr trivial degrees are precisely the degrees that are low for Schnorr randomness [29].

We may also remark that the Schnorr trivial degrees do not form an ideal in the Turing degrees: Downey, Griffiths, and LaForte also showed that there is a c.e. degree that contains no Schnorr trivial (or K-trivial) sets [20], so the Schnorr trivial Turing degrees are not closed downward. However, they did show that their truth-table degrees are closed downward, and since Franklin and Stephan showed that the Schnorr trivial sequences are closed under join [33], we can at least say that they form an ideal in the truth-table degrees. For a more detailed discussion of the necessity of choosing the proper reducibility in randomness, see Franklin's survey.
The bases for Schnorr randomness form a different class yet again: Franklin, Stephan, and Yu proved that a sequence is a base for Schnorr randomness exactly when it cannot compute ∅′ [34].

Just as Schnorr randomness was newly characterized in terms of Kolmogorov complexity in this period, computable randomness was newly characterized in terms of tests in two different ways. Downey, Griffiths, and LaForte developed the concept of a computably graded test [20]:

Definition 4.18. A computably graded test is a pair (⟨V_i⟩_{i∈ω}, f) where ⟨V_i⟩_{i∈ω} is a Martin-Löf test and f : 2^{<ω} × ω → R is a computable function such that the following three conditions hold for all n ∈ ω, all σ ∈ 2^{<ω}, and any finite prefix-free set of strings {σ_i} such that ⋃[σ_i] ⊆ [σ]:
(1) µ(V_n ∩ [σ]) ≤ f(σ, n),
(2) Σ_i f(σ_i, n) ≤ 2^{−n}, and
(3) Σ_i f(σ_i, n) ≤ f(σ, n).
A sequence passes a computably graded test if it passes the Martin-Löf test component.

Merkle, Mihailović, and Slaman developed another test notion they used to characterize computable randomness, a bounded Martin-Löf test [70]. They began by defining a computable rational probability distribution: a computable function ν : 2^{<ω} → Q such that ν(⟨⟩) = 1 and ν(σ) = ν(σ0) + ν(σ1) for every σ ∈ 2^{<ω}.

Definition 4.19. A bounded Martin-Löf test is a Martin-Löf test ⟨V_i⟩_{i∈ω} such that there is a computable rational probability distribution ν such that for all n and σ, µ(V_n ∩ [σ]) ≤ 2^{−n} ν(σ).

Theorem 4.20 (Downey, Griffiths, and LaForte [20], Merkle, Mihailović, and Slaman [70]). The following are equivalent for a sequence A.
(1) A is computably random.
(2) A passes all computably graded tests.
(3) A passes all bounded Martin-Löf tests.

There is also a machine characterization of computable randomness due to Mihailović; while his work is unpublished, the following definition and theorem appear in Section 7.1.5 of [26].
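The computable rational probability distributions used in these definitions can be illustrated by the simplest case, the uniform distribution ν(σ) = 2^{-|σ|} (a sketch of ours, not from [70]; exact rational arithmetic keeps the values in Q):

```python
from fractions import Fraction

def nu(sigma):
    """The uniform computable rational probability distribution on
    binary strings: nu(sigma) = 2^{-|sigma|}."""
    return Fraction(1, 2 ** len(sigma))

# nu assigns 1 to the empty string and splits fairly:
# nu(sigma) = nu(sigma + "0") + nu(sigma + "1").
```

Any computable assignment of rationals satisfying these two conditions, uniform or not, qualifies as a computable rational probability distribution in the sense above.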
Definition 4.21. A bounded machine is a prefix-free machine M such that there is a computable rational probability distribution ν such that for all n and all σ, µ([{τ ⪰ σ | K_M(τ) ≤ |τ| − n}]) ≤ 2^{−n} ν(σ).

Theorem 4.22.
A sequence A is computably random if and only if for every bounded machine M, there is a constant c such that for all n, K_M(A ↾ n) ≥ n − c.

Now we can turn our attention to weakness with respect to computable randomness. The first concept to be addressed was that of lowness, and Nies proved that, once again, we get a different class of sequences.
Theorem 4.23 (Nies [80]). The sets that are low for computable randomness are precisely the computable sets.
Lowness for tests for computable randomness has not been studied, nor has any notion of "computable triviality." However, there is a partial characterization of the bases for computable randomness. Hirschfeldt, Nies, and Stephan have shown that every ∆⁰₂ sequence of non-DNC degree is a base for computable randomness but that no sequence of PA degree is [41].

Now we turn our attention to randomness notions that are either stronger than Martin-Löf randomness or weaker than Schnorr randomness. In his 1996 dissertation, Wang developed a Martin-Löf-style test characterization and a martingale characterization of Kurtz randomness [104]; this was followed almost a decade later by a similar characterization in terms of Kolmogorov complexity by Downey, Griffiths, and Reid [25].

Definition 4.24 ([104]). A Kurtz null test is a Martin-Löf test ⟨V_i⟩_{i∈ω} such that for some computable function f : ω → (2^{<ω})^{<ω}, V_n = [f(n)] for all n.

Definition 4.25 ([25]).
A prefix-free machine M is a computably layered machine if there is a computable function f : ω → (2^{<ω})^{<ω} such that the following three conditions hold:
(1) ⋃_{i∈ω} f(i) = dom(M),
(2) if σ ∈ f(i + 1), then there is some τ ∈ f(i) such that M(τ) ⪯ M(σ), and
(3) if σ ∈ f(i), then |M(σ)| = |σ| + i + 1.

The equivalence of (1) and (2) in the following theorem is inherent in Kautz's thesis [44] but explicitly stated in [104]; the equivalence of (4) to the others appears in [25].
Theorem 4.26 (Wang [104], Downey, Griffiths, and Reid [25]). Let A be a sequence. The following are equivalent.
(1) A is Kurtz random.
(2) A passes all Kurtz null tests.
(3) For every computable martingale d and every order function h, there is some n such that d(A ↾ n) ≤ h(n).
(4) For every computably layered machine M, there is a constant c such that for all n, K_M(A ↾ n) ≥ n − c.

A comment must be made about lowness for Kurtz tests. Downey, Griffiths, and Reid proved that the degrees that are low for Kurtz tests form a superset of the computably traceable degrees and a subset of the hyperimmune-free degrees [25]; the final characterization would come later [97, 39].

Now we turn our attention to weak 2-randomness. While weak 2-randomness had been briefly studied in the early 1980s by Gaifman and Snir [38] and even earlier by Solovay, who proved that no weakly 2-random sequence is ∆⁰₂ [94], it was placed into the randomness hierarchies we know today by Kurtz [55], who defined weak n-randomness for all n ≥ 2. Later, weak 2-randomness was considered by Kautz [44] and Wang [104], who independently developed another characterization of it using a test definition similar to that of Martin-Löf randomness.
Definition 4.27. A generalized Martin-Löf test is a sequence ⟨V_i⟩_{i∈ω} of uniformly c.e. subsets of 2^ω such that lim_{n→∞} µ(V_n) = 0.

Theorem 4.28 (Kautz [44], Wang [104]). A sequence is weakly 2-random if and only if it passes all generalized Martin-Löf tests.
However, a further study of the properties of weakly 2-random sequences was not conducted until 2006, when Downey, Nies, Weber, and Yu [22] undertook a systematic study of them. Their main result is the following:
Theorem 4.29 (Downey, Nies, Weber, and Yu [22]). The following are equivalent for a Martin-Löf random sequence A.
(1) A is weakly 2-random.
(2) The degree of A and ∅′ form a minimal pair.
(3) A does not compute any noncomputable c.e. set.

This gives us a characterization of the subclass of Martin-Löf random sequences that do not compute a noncomputable c.e. set in terms of a randomness notion, just as difference randomness characterizes the subclass of Martin-Löf random sequences that do not compute a PA degree [32].

Downey, Nies, Weber, and Yu also began the study of lowness for weak 2-randomness by proving that every sequence that is low for weak 2-randomness is K-trivial [22]; Kjos-Hanssen, J. Miller, and Solomon proved the converse in [45].

Now that we have discussed the Turing degrees of all of the main randomness notions, we can make an observation about randomness in the hyperimmune-free Turing degrees: all the main notions of randomness that we have seen thus far coincide in this setting.

Theorem 4.30 (Nies, Stephan, and Terwijn [82], Yu (unpublished)). If A has hyperimmune-free degree, then A is Kurtz random if and only if A is weakly 2-random.

One additional result, also due to Nies, Stephan, and Terwijn [82] and, independently, Miller [73], is a characterization of 2-randomness in terms of plain Kolmogorov complexity. As stated in Section 2.2, one cannot provide a definition of randomness by requiring all initial segments to be incompressible with respect to plain Kolmogorov complexity. Martin-Löf observed that almost every sequence A satisfies the condition that there is some c such that C(A ↾ n) ≥ n − c for infinitely many n; let us say that such a sequence A is infinitely often C-incompressible. The following result identifies precisely the notion of randomness that corresponds to this condition:

Theorem 4.31 (Nies, Stephan, and Terwijn [82], Miller [73]). For A ∈ 2^ω, A is 2-random if and only if A is infinitely often C-incompressible.
Miller [73] further characterized 2-randomness in terms of the property of being infinitely often K-incompressible, which is satisfied by a sequence A if there is some c such that K(A ↾ n) ≥ n + K(n) − c for infinitely many n.

In this period, a randomness notion that was introduced in the late 1990s in [79], and which is still not entirely understood, began to receive interest: Kolmogorov-Loveland randomness. This notion is defined using a martingale characterization, and the key feature of this characterization is that the bits of the sequence do not need to be bet on in order. This idea has its roots in the notion of Kolmogorov-Loveland stochasticity [47, 50, 60] and was studied by Muchnik, Semenov, and Uspensky [79] and Merkle, J. Miller, Nies, Reimann, and Stephan [71]; as in Downey and Hirschfeldt [26], we use the latter group's notation.
Let us say that a finite assignment is a sequence of elements (r_i, a_i) from ω × {0, 1} such that the r_i's are all distinct: the r_i's are the places on which bets have been made, and the a_i's are the bets made at those places, so this gives a "history" of how much has been bet and at which places. If we then define a scan rule, which is a function that determines the next place to bet on given this "history," and a stake function that tells us how much to bet there, we have a nonmonotonic betting strategy. We say that a nonmonotonic betting strategy succeeds on a sequence A if the limsup of the amount of capital one has after betting on A following the strategy defined by the scan rule and stake function is infinite.

Definition 4.32 ([79]).
A sequence is Kolmogorov-Loveland random if no partial computable nonmonotonic betting strategy succeeds on it.
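To make the scan-rule and stake-function formalism concrete, here is a minimal simulation of a (finite run of a) nonmonotonic betting strategy; the function names and the toy strategy below are our own illustration, not taken from [79]:

```python
from fractions import Fraction

def run_strategy(A, scan, stake, rounds):
    """Bet on the sequence A (a map from places to bits) for `rounds` steps.
    `scan` maps the history of (place, bit) pairs to the next place to bet on;
    `stake` maps the history to a pair (guessed bit, fraction of capital wagered)."""
    capital, history = Fraction(1), ()
    for _ in range(rounds):
        place = scan(history)
        assert place not in {p for p, _ in history}  # places must be distinct
        guess, frac = stake(history)
        wager = capital * frac
        bit = A[place]
        # Fair payoff: a correct guess doubles the wager, a wrong one forfeits it.
        capital += wager if bit == guess else -wager
        history += ((place, bit),)
    return capital

# A nonmonotonic strategy on 0101010101...: inspect the odd positions first,
# always guess 1, and wager half the current capital, so each round wins
# and multiplies the capital by 3/2.
A = {i: i % 2 for i in range(10)}
capital = run_strategy(A, scan=lambda h: 2 * len(h) + 1,
                       stake=lambda h: (1, Fraction(1, 2)), rounds=5)
```

Success in the sense of Definition 4.32 concerns the limsup of the capital over an infinite run; this sketch only shows how a single history-driven round of betting unfolds.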
Merkle would later prove that one could get the same class of random sequences using total computable nonmonotonic betting strategies [69], so it is clear that every Kolmogorov-Loveland random sequence is computably random; Muchnik, Semenov, and Uspensky showed that every Martin-Löf random sequence is Kolmogorov-Loveland random [79]. It has been conjectured that the Kolmogorov-Loveland random sequences are precisely the Martin-Löf random sequences. While quite some time and energy has been put into this question [43], it remains open.

4.5. Effective notions of dimension.
Another significant strand of research in algorithmic randomness that emerged in the early 2000s involved effective notions of dimension. Lutz first provided a definition of effective Hausdorff dimension in [61], given in terms of certain betting strategies called s-gales. First, Lutz gave a betting characterization of classical Hausdorff dimension in terms of these strategies. Derived from the definition of a martingale given above, an s-gale is a function d : 2^{<ω} → R^{≥0} satisfying d(σ) = 2^{−s}(d(σ0) + d(σ1)) for every σ ∈ 2^{<ω}. The set of all sequences on which an s-gale d succeeds (that is, those sequences A such that d(A ↾ n) is unbounded in n) is written S[d], as it is for a standard martingale. Then Lutz proved the following:

Theorem 4.33 (Lutz [61]). For A ⊆ 2^ω, dim_H(A) = inf{s ∈ Q | A ⊆ S[d] for some s-gale d}.

For additional characterizations of dimension in terms of s-gales, see [61]. To obtain effective Hausdorff dimension, we simply restrict our attention to c.e. s-gales, that is, those s-gales that take on values that are uniformly approximable from below. We thus define

dim(A) = inf{s ∈ Q | A ⊆ S[d] for some c.e. s-gale d}

to be the effective Hausdorff dimension of A ⊆ 2^ω. One can equivalently define effective Hausdorff dimension in terms of covers; see, for instance, the treatment in [26, Section 13.5] for details.

One key difference between classical Hausdorff dimension and effective Hausdorff dimension is that individual sequences can have positive effective dimension, while singleton sets of sequences always have classical dimension zero. Perhaps the most useful characterization of effective Hausdorff dimension for individual sequences was provided by Mayordomo, who proved the following:

Theorem 4.34 (Mayordomo [68]). For A ∈ 2^ω, dim(A) = lim inf_n K(A ↾ n)/n = lim inf_n C(A ↾ n)/n.

Another fundamental early result about effective Hausdorff dimension is that for every computable real number r ∈ [0, 1], there is a sequence A ∈ 2^ω such that dim(A) = r, a fact established by Lutz in [61].

A similar characterization of an effective version of the classical notion of packing dimension can be given in terms of c.e. s-gales and initial segment complexity [2]. For the sake of brevity, we only highlight the latter. As shown by Athreya, Hitchcock, Lutz, and Mayordomo [2], the effective packing dimension of a sequence A ∈ 2^ω, written Dim(A), can be characterized as

Dim(A) = lim sup_n K(A ↾ n)/n = lim sup_n C(A ↾ n)/n.
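As a worked illustration of Theorem 4.34 (a standard example of our own, not taken from the survey): let A be Martin-Löf random and let B = A(0) 0 A(1) 0 A(2) 0 ⋯ be the "dilution" of A with zeroes in the odd positions. The first 2n bits of B carry exactly the information in A ↾ n, up to O(log n) bits of overhead, so

```latex
\dim(B) \;=\; \liminf_{n\to\infty} \frac{K(B \upharpoonright 2n)}{2n}
        \;=\; \liminf_{n\to\infty} \frac{K(A \upharpoonright n) + O(\log n)}{2n}
        \;=\; \frac{1}{2},
```

using the facts that K(A ↾ n) ≥ n − O(1) for Martin-Löf random A and K(A ↾ n) ≤ n + O(log n) in general; restricting to even lengths is harmless since K(B ↾ m) changes by only O(log m) between consecutive lengths. A similar calculation with limsup gives Dim(B) = 1/2.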
A number of computability-theoretic questions about these notions of effective dimension have been investigated since the definitions first appeared, particularly in the context of the Turing degrees. The most significant work in this respect has been on the broken dimension problem: Given α ∈ (0, 1), is there a sequence A such that dim(A) = α and, for every B ≤_T A, dim(B) ≤ α? This question was answered in the affirmative by J. Miller, who showed that such a sequence A exists and that it can even be chosen to be ∆⁰₂ [75]. Another important result due to Staiger [95] and, independently, Hitchcock [42], the correspondence principle for effective dimension, establishes conditions under which effective Hausdorff dimension and classical Hausdorff dimension agree: For any countable union U of Π⁰₁ classes, dim_H(U) = dim(U).

Lastly, we mention several recent uses of effective dimension in establishing new results on classical Hausdorff dimension. These applications make use of the so-called point-to-set principle due to J. Lutz and N. Lutz [62], which allows one to calculate the classical Hausdorff dimension of a subset E of Euclidean space in terms of the relativized effective dimensions of the members of E. That is, for E ⊆ R^n,

dim_H(E) = min_{A ∈ 2^ω} sup_{x ∈ E} dim^A(x),

where, for A ∈ 2^ω and x ∈ R^n, dim^A(x) is the effective dimension of x relative to the oracle A (which can be defined in terms of K^A, prefix-free Kolmogorov complexity relative to A). Using the point-to-set principle, N. Lutz and Stull [64] provided a new lower bound for the classical dimension of generalized Furstenberg sets, and N. Lutz [63] used the principle to extend an inequality bounding the dimension of certain intersections of sets, known to hold for Borel subsets of Euclidean space, to all subsets of Euclidean space.

For more results on effective notions of dimension, see [26, Chapter 13].

5. Recent developments
Since the publication of the two monographs due to Nies [81] and Downey and Hirschfeldt [26], a majority of research in algorithmic randomness (at least outside of the setting of resource-bounded randomness) has involved notions of randomness that fall between Kurtz randomness and 2-randomness when measured in terms of strength. One exception is UD-randomness, introduced by Avigad [3] and further developed by Calvert and Franklin [14]. Based on the concept of uniform distribution studied by Weyl [106], UD-randomness is implied by Schnorr randomness but incomparable with Kurtz randomness, as shown by Avigad.

In particular, there has been a great deal of interest in notions even stronger than Martin-Löf randomness. Demuth randomness and weak Demuth randomness have already been discussed. As mentioned in Section 4.1, in 2011, Franklin and Ng introduced the notion of difference randomness and showed that the difference random sequences are precisely the Turing incomplete Martin-Löf random sequences [32]. Combined with Theorem 4.2 due to Stephan, it follows that a Martin-Löf random sequence is difference random if and only if it does not compute a completion of Peano arithmetic. Since then, a multitude of notions, including balanced randomness [28], Oberwolfach randomness [11], and density randomness [78], has been studied. Some of them have been used to characterize subclasses of the Martin-Löf random sequences in terms of computational strength, while others have proven useful for researchers who wish to relate algorithmic randomness to computable analysis [78].
One of the primary achievements of recent work on the interaction between algorithmic randomness and computable analysis has been to provide characterizations of various notions of randomness in terms of almost-everywhere behavior from classical mathematics, a research project that began in the work of Demuth but was only recently independently rediscovered. Given a theorem of the form "For almost every x, x has property P," we can often replace the property P with a computably defined analogue P* and prove a theorem of the form "x is R-random if and only if x has the property P*," where R is some notion of effective randomness. This has been done for theorems such as the Lebesgue differentiation theorem [12, 78], Birkhoff's ergodic theorem, and the Poincaré recurrence theorem [103, 37, 31, 10, 35]. Many of the most significant recent results involving computable randomness and Schnorr randomness have been achieved in this area. The survey by Hoyrup on layerwise computability, the survey by Rute on the relationship between algorithmic randomness and analysis, and the survey by Towsner on ergodic theory and randomness in this volume address the ongoing research in these areas.

6. Acknowledgments
The authors would like to thank the referees who provided extraordinarily perceptive and useful comments on this survey.
References

[1] Klaus Ambos-Spies and Antonín Kučera. Randomness in computability theory. In Computability theory and its applications (Boulder, CO, 1999), volume 257 of Contemp. Math., pages 1–14. Amer. Math. Soc., Providence, RI, 2000.
[2] Krishna B. Athreya, John M. Hitchcock, Jack H. Lutz, and Elvira Mayordomo. Effective strong dimension in algorithmic information and computational complexity. SIAM Journal on Computing, 37(3):671–705, 2007.
[3] Jeremy Avigad. Uniform distribution and algorithmic randomness. J. Symbolic Logic, 78(1):334–344, 2013.
[4] George Barmpalias. Algorithmic randomness and measures of complexity. Bulletin of Symbolic Logic, 19(3):318–350, 2013.
[5] George Barmpalias and Andrew E. M. Lewis. Chaitin's halting probability and the compression of strings using oracles. Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci., 467(2134):2912–2926, 2011.
[6] George Barmpalias, Andrew E. M. Lewis, and Mariya Soskova. Randomness, lowness and degrees. J. Symbolic Logic, 73(2):559–577, 2008.
[7] George Barmpalias and Charlotte S. Vlek. Kolmogorov complexity of initial segments of sequences and arithmetical definability. Theoretical Computer Science, 412(41):5656–5667, 2011.
[8] Benjamín René Callejas Bedregal and André Nies. Lowness properties of reals and hyper-immunity. Electronic Notes in Theoretical Computer Science, 84:73–79, 2003.
[9] Laurent Bienvenu, Adam Day, Ilya Mezhirov, and Alexander Shen. Ergodic-type characterizations of Martin-Löf randomness. In Programs, proofs, processes, volume 6158 of Lecture Notes in Comput. Sci., pages 49–58. Springer, Berlin, 2010.
[10] Laurent Bienvenu, Adam R. Day, Mathieu Hoyrup, Ilya Mezhirov, and Alexander Shen. A constructive version of Birkhoff's ergodic theorem for Martin-Löf random points. Inform. and Comput., 210:21–30, 2012.
[11] Laurent Bienvenu, Noam Greenberg, Antonín Kučera, André Nies, and Dan Turetsky. Coherent randomness tests and computing the K-trivial sets. J. Eur. Math. Soc. (JEMS), 18(4):773–812, 2016.
[12] Vasco Brattka, Joseph S. Miller, and André Nies. Randomness and differentiability. Trans. Amer. Math. Soc., 368(1):581–605, 2016.
[13] Cristian S. Calude, Peter H. Hertling, Bakhadyr Khoussainov, and Yongge Wang. Recursively enumerable reals and Chaitin Ω numbers. Theoretical Computer Science, 255(1-2):125–149, 2001.
[14] Wesley Calvert and Johanna N. Y. Franklin. Genericity and UD-random reals. J. Log. Anal., 7:Paper 4, 10, 2015.
[15] Gregory J. Chaitin. A theory of program size formally identical to information theory. J. Assoc. Comput. Mach., 22:329–340, 1975.
[16] Gregory J. Chaitin. Algorithmic information theory. IBM J. Res. Develop., 21(4):350–359, 1977.
[17] Peter Cholak, Noam Greenberg, and Joseph S. Miller. Uniform almost everywhere domination. The Journal of Symbolic Logic, 71(3):1057–1072, 2006.
[18] S. Barry Cooper. Computability theory. Chapman & Hall/CRC, Boca Raton, FL, 2004.
[19] Barbara Csima and Antonio Montalbán. A minimal pair of K-degrees. Proceedings of the American Mathematical Society, 134(5):1499–1502, 2006.
[20] Rod Downey, Evan Griffiths, and Geoffrey LaForte. On Schnorr and computable randomness, martingales, and machines. Math. Log. Q., 50(6):613–627, 2004.
[21] Rod Downey, Denis R. Hirschfeldt, André Nies, and Sebastiaan A. Terwijn. Calibrating randomness. Bull. Symbolic Logic, 12(3):411–491, 2006.
[22] Rod Downey, André Nies, Rebecca Weber, and Liang Yu. Lowness and Π⁰₂ null sets. J. Symbolic Logic, 71(3):1044–1052, 2006.
[23] Rod G. Downey, Denis R. Hirschfeldt, André Nies, and Frank Stephan. Trivial reals. In Proceedings of the 7th and 8th Asian Logic Conferences, pages 103–131, Singapore, 2003. Singapore Univ. Press.
[24] Rodney G. Downey and Evan J. Griffiths. Schnorr randomness. J. Symbolic Logic, 69(2):533–554, 2004.
[25] Rodney G. Downey, Evan J. Griffiths, and Stephanie Reid. On Kurtz randomness. Theoret. Comput. Sci., 321(2-3):249–270, 2004.
[26] Rodney G. Downey and Denis R. Hirschfeldt. Algorithmic Randomness and Complexity. Springer, 2010.
[27] Kenneth Falconer. Fractal geometry. John Wiley & Sons, Ltd., Chichester, third edition, 2014. Mathematical foundations and applications.
[28] Santiago Figueira, Denis Hirschfeldt, Joseph S. Miller, Keng Meng Ng, and André Nies. Counting the changes of random ∆⁰₂ sets. In Programs, proofs, processes, volume 6158 of Lecture Notes in Comput. Sci., pages 162–171. Springer, Berlin, 2010.
[29] Johanna N.Y. Franklin. Hyperimmune-free degrees and Schnorr triviality. J. Symbolic Logic, 73(3):999–1008, 2008.
[30] Johanna N.Y. Franklin. Schnorr trivial reals: A construction. Arch. Math. Logic, 46(7–8):665–678, 2008.
[31] Johanna N.Y. Franklin, Noam Greenberg, Joseph S. Miller, and Keng Meng Ng. Martin-Löf random points satisfy Birkhoff's ergodic theorem for effectively closed sets. Proc. Amer. Math. Soc., 140(10):3623–3628, 2012.
[32] Johanna N.Y. Franklin and Keng Meng Ng. Difference randomness. Proc. Amer. Math. Soc., 139(1):345–360, 2011.
[33] Johanna N.Y. Franklin and Frank Stephan. Schnorr trivial sets and truth-table reducibility. J. Symbolic Logic, 75(2):501–521, 2010.
[34] Johanna N.Y. Franklin, Frank Stephan, and Liang Yu. Relativizations of randomness and genericity notions. Bull. Lond. Math. Soc., 43(4):721–733, 2011.
[35] Johanna N.Y. Franklin and Henry Towsner. Randomness and non-ergodic systems. Mosc. Math. J., 14(4):711–744, 2014.
[36] Péter Gács. Every sequence is reducible to a random one. Inform. and Control, 70(2-3):186–192, 1986.
[37] Peter Gács, Mathieu Hoyrup, and Cristóbal Rojas. Randomness on computable probability spaces—a dynamical point of view. Theory Comput. Syst., 48(3):465–485, 2011.
[38] Haim Gaifman and Marc Snir. Probabilities over rich languages, testing and randomness. J. Symbolic Logic, 47(3):495–548, 1982.
[39] Noam Greenberg and Joseph S. Miller. Lowness for Kurtz randomness. J. Symbolic Logic, 74(2):665–678, 2009.
[40] Felix Hausdorff. Dimension und äußeres Maß. Math. Ann., 79(1-2):157–179, 1918.
[41] Denis R. Hirschfeldt, André Nies, and Frank Stephan. Using random sets as oracles. J. Lond. Math. Soc. (2), 75(3):610–622, 2007.
[42] John M. Hitchcock. Correspondence principles for effective dimensions. Theory of Computing Systems, 38(5):559–571, 2005.
[43] Bart Kastermans and Steffen Lempp. Comparing notions of randomness. Theoret. Comput. Sci., 411(3):602–616, 2010.
[44] Steven M. Kautz. Degrees of random sets. PhD thesis, Cornell University, 1991.
[45] Bjørn Kjos-Hanssen, Joseph S. Miller, and Reed Solomon. Lowness notions, measure and domination. J. Lond. Math. Soc. (2), 85(3):869–888, 2012.
[46] Bjørn Kjos-Hanssen, André Nies, and Frank Stephan. Lowness for the class of Schnorr random reals. SIAM J. Comput., 35(3):647–657, 2005.
[47] A. N. Kolmogorov. On tables of random numbers. Sankhyā Ser. A, 25:369–376, 1963.
[48] A. N. Kolmogorov. Three approaches to the quantitative definition of information. Internat. J. Comput. Math., 2:157–168, 1968.
EY DEVELOPMENTS IN ALGORITHMIC RANDOMNESS 41 [49] A. N. Kolmogorov. To the logical foundations of the theory of informationand probability theory. In A.N. Shiryayev, editor,
Selected Works of A.N.Kolmogorov, Vol. III , volume 27 of
Mathematics and Its Application (SovietSeries) . Springer, 1993.[50] A. N. Kolmogorov. On tables of random numbers.
Theoret. Comput. Sci. ,207(2):387–395, 1998. Reprinted from Sankhy¯a Ser. A -classes and complete extensions of PA. In Re-cursion theory week (Oberwolfach, 1984) , volume 1141 of
Lecture Notes inMath. , pages 245–259. Springer, Berlin, 1985.[52] Anton´ın Kuˇcera. On relative randomness.
[52] Antonín Kučera. On relative randomness. Ann. Pure Appl. Logic, 63(1):61–67, 1993. 9th International Congress of Logic, Methodology and Philosophy of Science (Uppsala, 1991).
[53] Antonín Kučera, André Nies, and Christopher P. Porter. Demuth's path to randomness. Bulletin of Symbolic Logic, 21(3):270–305, 2015.
[54] Antonín Kučera and Theodore A. Slaman. Randomness and recursive enumerability. SIAM J. Comput., 31(1):199–211, 2001.
[55] Stuart Alan Kurtz. Randomness and genericity in the degrees of unsolvability. PhD thesis, University of Illinois, 1981.
[56] Antonín Kučera and Sebastiaan A. Terwijn. Lowness for the class of random sets. J. Symbolic Logic, 64(4):1396–1402, 1999.
[57] Henri Lebesgue. Intégrale, longueur, aire. PhD thesis, Université de Paris, 1902.
[58] L. A. Levin. The concept of a random sequence. Dokl. Akad. Nauk SSSR, 212:548–550, 1973.
[59] Paul Lévy. Théorie de l'Addition des Variables Aléatoires. Gauthier-Villars, Paris, 1937.
[60] Donald Loveland. A new interpretation of the von Mises' concept of random sequence. Z. Math. Logik Grundlagen Math., 12:279–294, 1966.
[61] Jack H. Lutz. The dimensions of individual strings and sequences. Inform. and Comput., 187(1):49–79, 2003.
[62] Jack H. Lutz and Neil Lutz. Algorithmic information, plane Kakeya sets, and conditional dimension. ACM Transactions on Computation Theory, 10(2):7, 2018.
[63] Neil Lutz. Fractal intersections and products via algorithmic dimension. In . Schloss Dagstuhl-Leibniz-Zentrum für Informatik, 2017.
[64] Neil Lutz and Donald M. Stull. Bounding the dimension of points on a line. In International Conference on Theory and Applications of Models of Computation, pages 425–439. Springer, 2017.
[65] Donald A. Martin. Classes of recursively enumerable sets and degrees of unsolvability. Z. Math. Logik Grundlagen Math., 12:295–310, 1966.
[66] Per Martin-Löf. The definition of random sequences. Information and Control, 9:602–619, 1966.
[67] Per Martin-Löf. Complexity oscillations in infinite binary sequences. Probability Theory and Related Fields, 19(3):225–230, 1971.
[68] Elvira Mayordomo. A Kolmogorov complexity characterization of constructive Hausdorff dimension. Information Processing Letters, 84(1):1–3, 2002.
[69] Wolfgang Merkle. The Kolmogorov-Loveland stochastic sequences are not closed under selecting subsequences. J. Symbolic Logic, 68(4):1362–1376, 2003.
[70] Wolfgang Merkle, Nenad Mihailović, and Theodore A. Slaman. Some results on effective randomness. Theory Comput. Syst., 39(5):707–721, 2006.
[71] Wolfgang Merkle, Joseph S. Miller, André Nies, Jan Reimann, and Frank Stephan. Kolmogorov-Loveland randomness and stochasticity. Ann. Pure Appl. Logic, 138(1-3):183–210, 2006.
[72] Wolfgang Merkle and Frank Stephan. On C-degrees, H-degrees and T-degrees. In Twenty-Second Annual IEEE Conference on Computational Complexity (CCC'07), pages 60–69. IEEE, 2007.
[73] Joseph S. Miller. Every 2-random real is Kolmogorov random. J. Symbolic Logic, 69(3):907–913, 2004.
[74] Joseph S. Miller. The K-degrees, low for K-degrees, and weakly low for K sets. Notre Dame J. Form. Log., 50(4):381–391 (2010), 2009.
[75] Joseph S. Miller. Extracting information is hard: a Turing degree of non-integral effective Hausdorff dimension. Advances in Mathematics, 226(1):373–384, 2011.
[76] Joseph S. Miller and Liang Yu. On initial segment complexity and degrees of randomness. Trans. Amer. Math. Soc., 360(6):3193–3210, 2008.
[77] Webb Miller and D. A. Martin. The degrees of hyperimmune sets. Z. Math. Logik Grundlagen Math., 14:159–166, 1968.
[78] Kenshi Miyabe, André Nies, and Jing Zhang. Using almost-everywhere theorems from analysis to study randomness. Bull. Symb. Log., 22(3):305–331, 2016.
[79] Andrei A. Muchnik, Alexei L. Semenov, and Vladimir A. Uspensky. Mathematical metaphysics of randomness. Theoret. Comput. Sci., 207(2):263–317, 1998.
[80] André Nies. Lowness properties and randomness. Adv. Math., 197(1):274–305, 2005.
[81] André Nies. Computability and Randomness. Clarendon Press, Oxford, 2009.
[82] André Nies, Frank Stephan, and Sebastiaan A. Terwijn. Randomness, relativization and Turing degrees. J. Symbolic Logic, 70(2):515–535, 2005.
[83] Piergiorgio Odifreddi. Classical Recursion Theory. Number 125 in Studies in Logic and the Foundations of Mathematics. North-Holland, 1989.
[84] Piergiorgio Odifreddi. Classical Recursion Theory, Volume II. Number 143 in Studies in Logic and the Foundations of Mathematics. North-Holland, 1999.
[85] Christopher P. Porter. Kolmogorov on the role of randomness in probability theory. Math. Structures Comput. Sci., 24(3):e240302, 17, 2014.
[86] C. A. Rogers. Hausdorff measures. Cambridge University Press, London-New York, 1970.
[87] Gerald E. Sacks. Degrees of Unsolvability. Princeton University Press, 1963.
[88] C. P. Schnorr. A unified approach to the definition of random sequences. Math. Systems Theory, 5:246–258, 1971.
[89] C. P. Schnorr. Zufälligkeit und Wahrscheinlichkeit, volume 218 of Lecture Notes in Mathematics. Springer-Verlag, Heidelberg, 1971.
[90] Stephen G. Simpson. Almost everywhere domination and superhighness. MLQ Math. Log. Q., 53(4-5):462–482, 2007.
[91] Robert I. Soare. Turing Computability: Theory and Applications. Theory and Applications of Computability. Springer, 2016.
[92] Ray J. Solomonoff. A formal theory of inductive inference. Part I. Information and Control, 7(1):1–22, 1964.
[93] Ray J. Solomonoff. A formal theory of inductive inference. Part II. Information and Control, 7(2):224–254, 1964.
[94] Robert M. Solovay. Draft of a paper (or series of papers) on Chaitin's work. Unpublished manuscript, May 1975.
[95] Ludwig Staiger. A tight upper bound on Kolmogorov complexity and uniformly optimal prediction. Theory Comput. Syst., 31(3):215–229, 1998.
[96] Frank Stephan. Martin-Löf Random and PA-complete Sets. Technical Report 58, Mathematisches Institut, Universität Heidelberg, Heidelberg, 2002.
[97] Frank Stephan and Liang Yu. Lowness for weakly 1-generic and Kurtz-random. In Theory and applications of models of computation, volume 3959 of Lecture Notes in Comput. Sci., pages 756–764. Springer, Berlin, 2006.
[98] Sebastiaan A. Terwijn and Domenico Zambella. Computational randomness and lowness. J. Symbolic Logic, 66(3):1199–1205, 2001.
[99] A. M. Turing. On Computable Numbers, with an Application to the Entscheidungsproblem. Proc. London Math. Soc., S2-42(1):230, 1937.
[100] A. M. Turing. On Computable Numbers, with an Application to the Entscheidungsproblem. A Correction. Proc. London Math. Soc., S2-43(6):544, 1937.
[101] Michiel van Lambalgen. The axiomatization of randomness. J. Symbolic Logic, 55(3):1143–1167, 1990.
[102] Jean Ville. Étude critique de la notion de collectif. Monographies des Probabilités. Calcul des Probabilités et ses Applications. Gauthier-Villars, Paris, 1939.
[103] V. V. V′yugin. Effective convergence in probability, and an ergodic theorem for individual random sequences. Teor. Veroyatnost. i Primenen., 42(1):35–50, 1997.
[104] Yongge Wang. Randomness and complexity. PhD thesis, University of Heidelberg, 1996.
[105] Yongge Wang. A separation of two randomness concepts. Inf. Process. Lett., 69(3):115–118, 1999.
[106] Hermann Weyl. Über die Gleichverteilung von Zahlen mod. Eins.