Submitted to the Annals of Probability
A KHINTCHINE DECOMPOSITION FOR FREE PROBABILITY
By John D. Williams
Let µ be a probability measure on the real line. In this paper we prove that there exists a decomposition µ = µ_0 ⊞ µ_1 ⊞ ··· ⊞ µ_n ⊞ ··· such that µ_0 is infinitely divisible and µ_i is indecomposable for i ≥ 1. We also prove that the set of ⊞-divisors of a measure µ is compact up to translation. Analogous results are also proven in the case of multiplicative convolution.
1. Introduction.
In classical probability theory, it has long been known that the set of all convolution divisors of a random variable is compact up to translation. That is, given a family of decompositions µ = µ_{1,i} ∗ µ_{2,i} with i ∈ I, the families {µ_{j,i}}_{i∈I, j=1,2} can be translated to form sequentially compact families {µ̂_{j,i}}_{i∈I, j=1,2}, so that µ = µ̂_{1,i} ∗ µ̂_{2,i} for all i ∈ I. The proof of this result is a simple application of Lévy's Lemma (see Chapter 5 in [15] for a full account of the classical case). This compactness lemma serves as the cornerstone for the proof of the following classical result of Khintchine.

Theorem 1.1. Let µ be a probability measure. Then there exist measures µ_i with i = 0, 1, 2, ... such that µ_0 is ∗-infinitely divisible, µ_i is indecomposable for i = 1, 2, ..., and µ = µ_0 ∗ µ_1 ∗ µ_2 ∗ ···. This decomposition is not unique.

The equation µ = µ_0 ∗ µ_1 ∗ µ_2 ∗ ··· is understood in the sense that, in the weak∗ topology, lim_{n↑∞} µ_0 ∗ µ_1 ∗ ··· ∗ µ_n = µ. This type of equality will be used throughout the paper without further comment.

In free probability theory, the corresponding compactness and decomposition theorems have hitherto been absent from the literature. Partial results toward the corresponding compactness theorem are near trivialities. Indeed, consider a W∗-probability space (A, τ) and a random variable X ∈ A with mean 0 and finite variance. Let X = X_1 + X_2 be a decomposition with the X_i's freely independent and of mean 0. Then the equation τ(X²) = τ(X_1²) + τ(X_2²) would imply the necessary tightness result when applied to families of decompositions.

It is the first aim of this paper to prove the corresponding tightness results in the fullest possible generality; that is, we make no assumptions as to the finiteness of moments. It is the second aim of this paper to prove versions of Theorem 1.1 for additive and multiplicative free convolution.

This paper is organized as follows. In Section 2 we give the background and terminology of additive free convolution; in Section 3 we state and prove a number of compactness results for families of decompositions with respect to additive free convolution; in Section 4 we prove the existence of the Khintchine decomposition with respect to additive free convolution; Sections 5, 6 and 7 are the respective analogues of Sections 2, 3 and 4 with regard to multiplicative free convolution for measures supported on the positive real numbers; in Section 8 we give the background and terminology for multiplicative free convolution of measures supported on the unit circle; in Section 9 we prove the existence of the Khintchine decomposition for measures supported on the unit circle; in Section 10 we provide applications of our compactness results.

Keywords and phrases: free probability, decomposition, infinite divisibility.
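The finite-variance heuristic invoked in the introduction can be spelled out in one line (a standard computation, supplied here for completeness). For X = X_1 + X_2 with X_1, X_2 freely independent and τ(X_1) = τ(X_2) = 0:

```latex
\tau(X^2) = \tau\big((X_1+X_2)^2\big)
          = \tau(X_1^2) + \tau(X_1X_2) + \tau(X_2X_1) + \tau(X_2^2)
          = \tau(X_1^2) + \tau(X_2^2),
```

since a product of centered elements alternating between two free algebras has trace zero. Chebyshev's inequality then gives µ_i({t : |t| > M}) ≤ τ(X_i²)/M² ≤ τ(X²)/M², which is the tightness assertion for any family of such mean-zero divisors.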
2. Background and Terminology for Additive Free Convolution.
We refer to [21] for a full account of the basics of free probability theory.

Let (A, τ) be a W∗-probability space. We say that a family of unital subalgebras {A_i}_{i∈I} is freely independent if τ(x_{i_1} x_{i_2} ··· x_{i_n}) = 0 for x_{i_j} ∈ A_{i_j} whenever i_j ≠ i_{j+1} for j = 1, ..., n − 1 and τ(x_{i_k}) = 0 for k = 1, ..., n. We say that random variables x, y ∈ A are freely independent if the unital algebras that they generate in A satisfy the above definition.

Assume that A ⊂ B(H). We say that a not necessarily bounded operator x is affiliated with A (in symbols, x η A) if the spectral projections of x are elements of A. Equivalently, x η A if for every y ∈ A′ (the commutant of A) we have that yx ⊂ xy. This expanded class of random variables allows us to study measures with unbounded support.

Let x η A be a self-adjoint random variable with distribution µ, a probability measure supported on R. We associate to µ its Cauchy transform:

    G_µ(z) = ∫_R dµ(t)/(z − t) = τ((z − x)^{−1}).

Observe that zG_µ(z) → 1 as z → ∞ nontangentially. It follows that G_µ is univalent on a set of the form Γ_{α,β} = {z ∈ C⁺ : ℑ(z) > α, ℑ(z) > β|ℜ(z)|} for sufficiently large α, β > 0. Throughout this paper we shall refer to a set of this type as a Stolz angle. The set G_µ(Γ_{α,β}) contains a set of the form Λ_{α′,β′} = {z ∈ C⁻ : 0 < |ℑ(z)| ≤ α′, β′|ℜ(z)| < |ℑ(z)|}, on which we have a well defined left inverse G_µ^{−1}. The function R_µ(z) = G_µ^{−1}(z) − 1/z is called the R-transform of µ. First proved in [18], the following equality is fundamental in free probability theory:

    R_{µ⊞ν}(z) = R_µ(z) + R_ν(z).

In what follows, it will be more convenient to consider the following functions:

    F_µ(z) = 1/G_µ(z),    φ_µ(z) = F_µ^{−1}(z) − z = R_µ(1/z).

These functions are referred to as the F-transform and the Voiculescu transform, respectively. They have the following properties, which are proven in various degrees of generality in [10], [18] and [16]:

1. |F_µ(z) − z| = o(|z|) uniformly as |z| → ∞ in Γ_{α,β}, for all α, β > 0.
2. ℑ(F_µ(z)) ≥ ℑ(z) for all z ∈ C⁺.
3. F_µ has a well defined left inverse F_µ^{−1} on Γ_{α,β} for some α, β > 0.
4. φ_{µ⊞ν}(z) = φ_µ(z) + φ_ν(z) for z ∈ Γ_{α,β}, with α, β > 0 sufficiently large.
5. F_{µ⊞δ_c}(z) = F_µ(z − c) and φ_{µ⊞δ_c}(z) = c + φ_µ(z) for c ∈ R.

Given a decomposition µ = µ_1 ⊞ µ_2, it was shown in [20] and [13] that there exist analytic subordination functions ω_i : C⁺ → C⁺ such that:

1. F_µ(z) = F_{µ_i}(ω_i(z)) for z ∈ C⁺ and i = 1, 2.
2. lim_{y↑+∞} ω_i(iy)/iy = 1 for i = 1, 2.
3. ω_1(z) + ω_2(z) = z + F_µ(z).

Observe that ω_i and F_µ satisfy the same asymptotic property in (2) above. A classical result, due to Nevanlinna (a full account of which can be found in [1], Volume 2, page 7), implies that these functions have the following representations:

    ω_i(z) = r_i + z + ∫_{−∞}^{∞} (1 + tz)/(t − z) dσ_i(t),    F_µ(z) = r + z + ∫_{−∞}^{∞} (1 + tz)/(t − z) dσ(t),

where r, r_i ∈ R and σ, σ_i are positive, finite measures which are uniquely determined by ω_i and F_µ. Observe that property (3) above and uniqueness imply that r_1 + r_2 = r and σ_1 + σ_2 = σ.

We denote by F_µ(t) = µ((−∞, t]) the cumulative distribution function of µ.
This function is used to define two metrics on the space of probability measures, namely the Kolmogorov and Lévy metrics, d_∞ and d respectively. These are defined as follows:

    d_∞(µ, ν) = sup_{t∈R} |F_µ(t) − F_ν(t)|,
    d(µ, ν) = inf{ε > 0 : F_µ(t − ε) − ε ≤ F_ν(t) ≤ F_µ(t + ε) + ε for all t ∈ R}.

The Lévy metric induces the weak topology on the space of probability measures on the line, while the Kolmogorov metric induces a stronger topology which we call the Kolmogorov topology. We have the following facts, first proven in [10], which will be used throughout, often without reference:
Lemma 2.1. Let µ_n and ν_n converge to probability measures µ and ν respectively in the weak∗ (resp., Kolmogorov) topology. Then µ_n ⊞ ν_n converges to µ ⊞ ν in the weak∗ (resp., Kolmogorov) topology.

The proof of this lemma relies on the following inequalities, which will be used in what follows:

    d(µ ⊞ ν, µ′ ⊞ ν′) ≤ d(µ, µ′) + d(ν, ν′),
    d_∞(µ ⊞ ν, µ′ ⊞ ν′) ≤ d_∞(µ, µ′) + d_∞(ν, ν′).

The next two lemmas were first proven in Section 5 of [10].

Lemma 2.2. Let {µ_n}_{n∈N} be a tight sequence of measures. Then there exists a Stolz angle Γ_{α,β} such that |F_{µ_n}(z) − z| = o(|z|) uniformly as |z| → ∞ in this set. In particular, the functions F_{µ_n}^{−1} exist on a common domain for all n.

Lemma 2.3. Let {µ_n}_{n∈N} be a sequence of probability measures on R. The following assertions are equivalent:

1. The sequence {µ_n}_{n∈N} converges in the weak∗ topology to a probability measure µ.
2. There exist α, β > 0 such that the functions {φ_{µ_n}}_{n∈N} are defined and converge uniformly on compact subsets of Γ_{α,β} to a function φ, and φ_{µ_n}(z) = o(|z|) uniformly in n as |z| → ∞, z ∈ Γ_{α,β}.

Moreover, if (1) and (2) are satisfied, we have that φ = φ_µ in Γ_{α,β}.

Definition 2.4. A probability measure µ on the real line is said to be ⊞-infinitely divisible if for every n ∈ N there exists a measure µ_{1/n} such that µ = µ_{1/n} ⊞ ··· ⊞ µ_{1/n}, where the measure on the right is the n-fold free convolution.

In dealing with infinitely divisible measures, the following characterization, first proven in [8], will prove invaluable.
Theorem 2.5. Let {µ_{i,j}}_{i∈N, j=1,...,k_i} be an array of Borel probability measures on R and let {c_i}_{i∈N} be a sequence of real numbers. Assume that lim_{i→∞} max_{j=1,...,k_i} µ_{i,j}({t : |t| > ε}) = 0 for all ε > 0 and that the measures δ_{c_i} ⊞ µ_{i,1} ⊞ ··· ⊞ µ_{i,k_i} converge to a probability measure µ in the weak∗ topology. Then µ is ⊞-infinitely divisible.

Definition 2.6. Let µ be a probability measure. A decomposition µ = ν ⊞ ρ is said to be nontrivial if neither ν nor ρ is a Dirac mass. We say that a measure µ is indecomposable if it has no nontrivial decomposition.

Such measures were studied extensively in [3], [2] and [12]. We close with a theorem, first proven in [10] and [3], from which we derive a corollary that will play a key role in the proof of Theorem 4.4.
Theorem 2.7. Let µ and ν be two Borel probability measures on R, neither of them a Dirac mass. Then:

1. The point a ∈ R is an atom of the measure µ ⊞ ν if and only if there exist points b, c ∈ R such that a = b + c and µ({b}) + ν({c}) > 1. Moreover, (µ ⊞ ν)({a}) = µ({b}) + ν({c}) − 1.
2. The absolutely continuous part of µ ⊞ ν is always nonzero and its density is analytic wherever positive and finite. More precisely, there exists an open set U ⊆ R so that the density f(x) = d(µ ⊞ ν)_{ac}(x)/dx with respect to Lebesgue measure is locally analytic on U and (µ ⊞ ν)_{ac}(R) = ∫_U f(x) dx.
3. The singular continuous part of µ ⊞ ν is zero.

Corollary 2.8. Let µ and ν be as above. There exists a point s ∈ R such that the cumulative distribution function F_{µ⊞ν} is continuous and increasing in a neighborhood of s.

Proof.
First observe that (1) implies that µ ⊞ ν has only finitely many point masses. To see this, assume that a = a_1 + a_2 and b = b_1 + b_2 are point masses of µ ⊞ ν, where

    (µ ⊞ ν)({a}) = µ({a_1}) + ν({a_2}) − 1,    (µ ⊞ ν)({b}) = µ({b_1}) + ν({b_2}) − 1,

and a ≠ b. If a_1 ≠ b_1, then µ({a_1}) + µ({b_1}) ≤ 1; since summing the two displayed identities yields µ({a_1}) + µ({b_1}) + ν({a_2}) + ν({b_2}) > 2, this forces ν({a_2}) + ν({b_2}) > 1, so that a_2 = b_2. Thus, after possibly exchanging the roles of µ and ν, all atoms of µ ⊞ ν share a common second coordinate y, and each atom x satisfies µ({x − y}) > 1 − ν({y}). Since the points x − y are distinct, there are at most (1 − ν({y}))^{−1} point masses of µ ⊞ ν.

Next, note that the nonatomic part of µ ⊞ ν has mass strictly greater than 0. To see this, let {x_i}_{i=1}^n be the set of point masses of µ ⊞ ν. Let y and {z_i}_{i=1}^n satisfy y + z_i = x_i and ν({y}) + µ({z_i}) − 1 = (µ ⊞ ν)({x_i}) for i = 1, 2, ..., n, where these points arise as in the previous paragraph. Summing over both sides of this equation and recalling that ν({y}) < 1, we have that

    Σ_{i=1}^n (µ ⊞ ν)({x_i}) = nν({y}) − n + Σ_{i=1}^n µ({z_i}) < n − n + µ(R) = 1.

Thus, for U as in the previous theorem, we may pick an open subset V ⊆ U that contains no point masses. On V the distribution function F_{µ⊞ν} is continuous and strictly increasing wherever the density is positive, so any such point satisfies our claim.
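As a numerical aside (an illustration added to this text, not part of the original argument): the subordination relations of Section 2 make free convolutions computable. Writing H_i(w) = F_{µ_i}(w) − w, the relations F_µ(z) = F_{µ_i}(ω_i(z)) and ω_1(z) + ω_2(z) = z + F_µ(z) show that ω_2(z) solves w = z + H_1(z + H_2(w)), and this fixed point can be found by iteration from w = z (a fact we use here without proof). For µ = ½(δ_{−1} + δ_1), the free convolution µ ⊞ µ is the arcsine law on [−2, 2], with G_{µ⊞µ}(z) = (z² − 4)^{−1/2}; in line with the regularity theorem above, it has no atoms (½ + ½ is not > 1) and is purely absolutely continuous. A minimal sketch:

```python
import cmath

def G_mu(z):
    # Cauchy transform of mu = (delta_{-1} + delta_{+1}) / 2
    return 0.5 * (1 / (z + 1) + 1 / (z - 1))

def H(w):
    # H(w) = F_mu(w) - w; here F_mu(w) = w - 1/w, so H(w) = -1/w
    return 1 / G_mu(w) - w

def omega2(z, steps=200):
    # fixed-point iteration for the subordination function omega_2:
    # omega_2(z) solves w = z + H(z + H(w))
    w = z
    for _ in range(steps):
        w = z + H(z + H(w))
    return w

def G_free_square(z):
    # G_{mu [+] mu}(z) = G_mu(omega_1(z)), where omega_1(z) = z + H(omega2(z))
    return G_mu(z + H(omega2(z)))

z = 2j
arcsine = 1 / cmath.sqrt(z * z - 4)    # closed form for the arcsine law
assert abs(G_free_square(z) - arcsine) < 1e-9
assert G_free_square(z).imag < 0       # Cauchy transforms map C+ into C-
```

The same iteration, with two different F-transforms in the two slots, computes µ_1 ⊞ µ_2 for any pair of measures whose Cauchy transforms can be evaluated.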
3. Compactness Results for Additive Free Convolution.
We begin our investigation with a technical lemma.
Lemma 3.1. Let µ be a probability measure on R and let Ω denote a Stolz angle on which F_µ^{−1} is defined. If µ = µ_1 ⊞ µ_2 is any decomposition of µ, then φ_{µ_1} and φ_{µ_2} have analytic extensions to Ω. These extensions satisfy ℑ(φ_{µ_1}(z)) ≤ 0 and ℑ(φ_{µ_2}(z)) ≤ 0 for all z ∈ Ω.

Proof.
By assumption, φ_µ exists and is analytic on all of Ω and, since F_µ increases the imaginary part, ℑ(φ_µ(z)) ≤ 0 for z ∈ Ω.

Turning our attention to µ_2, consider the subordination function ω satisfying F_µ(z) = F_{µ_2}(ω(z)) for z ∈ C⁺. Recall that

    lim_{y↑∞} F_µ(iy)/iy = lim_{y↑∞} F_{µ_2}(iy)/iy = lim_{y↑∞} ω(iy)/iy = 1.

These facts imply that on a sufficiently small Stolz angle all three functions are invertible, and we have the following:

    ω ∘ F_µ^{−1} = F_{µ_2}^{−1}.

Since the left hand side is defined on Ω, the right hand side must also extend to Ω. This implies that the Voiculescu transform of µ_2 extends to Ω and, by abuse of notation, we continue to call this extension φ_{µ_2}.

With regard to the negativity of the imaginary part of our analytic extension, note that on a large enough Stolz angle F_{µ_2} acts as a left inverse for F_{µ_2}^{−1} and ω ∘ F_µ^{−1} = F_{µ_2}^{−1}, so that F_{µ_2}(ω(F_µ^{−1}(z))) = z. As the left hand side of this equation is defined and analytic for all z ∈ Ω, by analytic continuation the same equality holds for all z ∈ Ω. Thus,

    φ_{µ_2}(z) = ω(F_µ^{−1}(z)) − z = ω(F_µ^{−1}(z)) − F_{µ_2}(ω(F_µ^{−1}(z)))

for all z ∈ Ω. As F_{µ_2} increases the imaginary part, our result holds; the same argument applies to µ_1.

With this preliminary result out of the way, we now begin proving tightness results. The diameter of a subset σ ⊂ R is defined as usual: diam(σ) = sup_{x,y∈σ} |x − y|. The support of a measure µ (in symbols, supp(µ)) is the complement of the largest open µ-null set.

Theorem 3.2. Let µ be a probability measure with compact support and consider a decomposition µ = µ_1 ⊞ µ_2. Then diam(supp(µ_i)) ≤ diam(supp(µ)), with equality if and only if one of the µ_i is a Dirac mass.

Proof.
Consider the subordination functions ω_i satisfying F_µ(z) = F_{µ_i}(ω_i(z)), with Nevanlinna representations

    ω_i(z) = r_i + z + ∫ (1 + tz)/(t − z) dσ_i(t),    F_µ(z) = r + z + ∫ (1 + tz)/(t − z) dσ(t).

Let α, β ∈ R satisfy supp(µ) ⊆ [α, β], where the interval on the right side is the smallest for which this containment holds. Observe that G_µ has a nonzero real analytic continuation across (−∞, α), so that the same must hold for F_µ. This implies that σ((−∞, α)) = 0. Since σ = σ_1 + σ_2, we also have that σ_i((−∞, α)) = 0 so that, by the Schwarz reflection principle, ω_i admits an analytic continuation across (−∞, α). Furthermore, ω_i is increasing on (−∞, α), so that F_{µ_i} = F_µ ∘ ω_i^{−1} has an analytic continuation to ω_i((−∞, α)). This tells us that supp(µ_i) ⊂ R \ ω_i((−∞, α)).

Now, observe that ω_i(x) − x → r_i − m_i as x → −∞, where m_i is the first moment of σ_i. Differentiating the Nevanlinna representation of ω_i, it is clear that ω_i′(x) ≥ 1 for x < α. Thus,

    ω_i(α − ε) = ∫_x^{α−ε} ω_i′(t) dt + ω_i(x) ≥ ∫_x^{α−ε} dt + x + (ω_i(x) − x) → α − ε + r_i − m_i

as x → −∞. It follows that (−∞, α + r_i − m_i) ⊆ ω_i((−∞, α)). Similarly, (β + r_i − m_i, ∞) ⊆ ω_i((β, ∞)). These two observations imply that supp(µ_i) ⊆ [α + r_i − m_i, β + r_i − m_i]. Hence, we have that diam(supp(µ_i)) ≤ diam(supp(µ)).

With regard to the equality claim, observe that σ_1 = 0 implies that µ_1 is a translation of µ, which in turn implies that µ_2 is a Dirac mass. Thus, by assuming that neither µ_1 nor µ_2 is a Dirac mass, we have that σ_i ≠ 0 for i = 1, 2. This implies that ω_i′(t) > 1 for t < α. It follows that supp(µ_i) ⊊ [α + r_i − m_i, β + r_i − m_i], and our claim follows.

In what follows, for O ⊂ R, we let conv(O) be the smallest interval containing the set O.

Lemma 3.3. Let µ_1 and µ_2 be probability measures with compact support. Then supp(µ_1 ⊞ µ_2) ⊆ conv(supp(µ_1) + supp(µ_2)).

Proof.
Let x_1 and x_2 be freely independent random variables in a W∗-probability space (A, τ) with respective distributions µ_1 and µ_2. Let c_i = inf{t ∈ σ(x_i)} and d_i = sup{t ∈ σ(x_i)}. It is precisely the content of Theorem 4.16 in [10] that x_1 − c_1 I + x_2 − c_2 I is a positive random variable. Thus, its spectrum is contained in the positive real numbers. Since the spectrum of a self-adjoint operator contains the support of its distribution, we have that the distribution of x_1 − c_1 I + x_2 − c_2 I is supported in the positive reals. Similarly, the distribution of x_1 − d_1 I + x_2 − d_2 I is supported in the negative reals. Thus, supp(µ_1 ⊞ µ_2) ⊆ [c_1 + c_2, d_1 + d_2], which is equivalent to our claim.

We now extend the above theorem to measures with unbounded support. For a measure µ, recall that F_µ denotes its cumulative distribution function. We shall let Ω_ε(µ) = {t ∈ R : ε < F_µ(t) < 1 − ε}.

Theorem 3.4. Let µ = µ_1 ⊞ µ_2. For ε > 0 we have that Ω_ε(µ) ⊆ conv(Ω_{ε/4}(µ_1) + Ω_{ε/4}(µ_2)).

Proof.
Let a_i and b_i denote the left and right endpoints of Ω_{ε/4}(µ_i). Consider the probability measures µ_{i,ε/4} defined as follows:

    µ_{i,ε/4}(σ) = µ_i(σ ∩ Ω_{ε/4}(µ_i)) + ((1 − µ_i(Ω_{ε/4}(µ_i)))/2) δ_{a_i}(σ) + ((1 − µ_i(Ω_{ε/4}(µ_i)))/2) δ_{b_i}(σ).

Observe that d_∞(µ_i, µ_{i,ε/4}) ≤ ε/4, where d_∞ denotes the Kolmogorov metric. Further observe that supp(µ_{i,ε/4}) ⊆ [a_i, b_i] = conv(Ω_{ε/4}(µ_i)). It follows that

    d_∞(µ, µ_{1,ε/4} ⊞ µ_{2,ε/4}) ≤ d_∞(µ_1, µ_{1,ε/4}) + d_∞(µ_2, µ_{2,ε/4}) ≤ ε/2.

Observe that F_µ(t) ∈ (ε, 1 − ε) implies that F_{µ_{1,ε/4} ⊞ µ_{2,ε/4}}(t) ∈ [ε/2, 1 − ε/2], so that Ω_ε(µ) ⊆ conv(supp(µ_{1,ε/4} ⊞ µ_{2,ε/4})). By Lemma 3.3, we have that

    conv(supp(µ_{1,ε/4} ⊞ µ_{2,ε/4})) ⊆ conv(supp(µ_{1,ε/4}) + supp(µ_{2,ε/4})) = conv(Ω_{ε/4}(µ_1) + Ω_{ε/4}(µ_2)).

We close with the main result of the section. Observe that this theorem lacks the quantitative information found in Theorem 3.4. The hope was to extend Theorem 3.2 in a similar manner, but such an approach proved elusive. We have found no negative results in this direction, so we conjecture that Ω_ε(µ_i) ⊆ x + Ω_{ε/4}(µ) for some x ∈ R. However, the theorem below provides us with tightness and will suffice for the applications that follow.

Let ν be a measure satisfying 0 < ν(R) ≤ 1. We extend the definition of the Cauchy and F-transforms by letting G_ν(z) = ∫_R (z − t)^{−1} dν(t) and F_ν(z) = 1/G_ν(z). Observe that for λ = ν(R), the measure ν̂ = λ^{−1}ν is in fact a probability measure. This provides us with the following inequality, which we shall exploit in what follows:

    ℑF_ν(z) = λ^{−1} ℑF_{ν̂}(z) ≥ λ^{−1} ℑ(z).

Theorem 3.5. Let µ = µ_{1,k} ⊞ µ_{2,k} for all k ∈ N. Then there exist translations {µ̂_{i,k}} so that µ = µ̂_{1,k} ⊞ µ̂_{2,k} and the family of measures {µ̂_{i,k}} is tight for i = 1, 2.

Before embarking on the proof, we remark that there are two ways for tightness to fail. The first is to take an otherwise tight sequence of measures and translate their supports to ±∞. The second is if the mass of the measures becomes more spread out. Since our theorem assumes away the former case, the idea of the proof is to show that the latter cannot happen. We quantify the latter case as follows: a sequence of measures {µ_k}_{k∈N} cannot be translated to tightness if and only if there exists a γ ∈ [0, 1) such that lim inf_k sup_{t∈R} µ_k((t − a, t + a)) < γ for all a ∈ R⁺.
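The "spreading out" obstruction in this criterion can be made concrete with a toy computation (an illustration added here; the measures below are not from the original text). For µ_k uniform on [−k, k], a window of fixed half-width a carries mass at most a/k no matter where it is centered, so no translations of the µ_k form a tight family:

```python
def max_window_mass(k, a):
    # sup over x of mu_k((x - a, x + a)) for mu_k uniform on [-k, k]:
    # a window of length 2a captures at most min(2a, 2k) / (2k) of the mass
    return min(2 * a, 2 * k) / (2 * k)

a = 5.0
masses = [max_window_mass(k, a) for k in (10, 100, 1000)]
assert masses == [0.5, 0.05, 0.005]   # tends to 0, so any gamma in (0, 1) witnesses failure
```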
Proof.
Assume that {µ_{1,k}}_{k∈N} is tight, which is equivalent to sequential precompactness in the weak∗ topology. As we established in Lemma 3.1, F_µ^{−1}, F_{µ_{1,k}}^{−1} and F_{µ_{2,k}}^{−1} extend to a common domain for all k, which we shall denote by Ω in what follows, on which they satisfy F_µ^{−1}(z) = F_{µ_{1,k}}^{−1}(z) + F_{µ_{2,k}}^{−1}(z) − z. Recall that, according to Lemma 2.3, weak convergence along a subsequence is equivalent to the uniform convergence of the functions F_{µ_{1,k}}^{−1} on compact subsets of a Stolz angle Γ_{α,β} to a function F satisfying F(iy)/iy → 1 as y → ∞. The equation above implies that F_{µ_{2,k}}^{−1} is similarly behaved on Γ_{α,β}, so that {µ_{2,k}} is also weakly convergent along this subsequence. Thus, if either family can be translated to tightness, so can the other, with translations chosen so that µ = µ̂_{1,k} ⊞ µ̂_{2,k}.

With that in mind, we may assume, for the sake of contradiction, that the family {µ_{2,k}} cannot be translated to form a tight family of measures along any subsequence. This implies that there exists a γ ∈ (0, 1) such that lim inf_k sup_{x∈R} µ_{2,k}((−a + x, a + x)) < γ < 1 for all a ∈ R⁺. Passing to subsequences and possibly renumbering our measures, we may assume that sup_{x∈R} µ_{2,k}((x − k, x + k)) < γ.

Now, pick ε > 0 such that 1 − ε > γ. Let w = ib, where b ∈ R⁺ is chosen so that w ∈ Ω and |F_µ^{−1}(w) − w| ≤ ε|w| = εb. Observe that F_µ^{−1}(w) = F_{µ_{1,k}}^{−1}(w) + F_{µ_{2,k}}^{−1}(w) − w implies that

    ℑF_{µ_{1,k}}^{−1}(w) + ℑF_{µ_{2,k}}^{−1}(w) ≥ b(2 − ε).

In Lemma 3.1, we showed that F_{µ_{i,k}}^{−1} decreases the imaginary part, so that

    ℑF_{µ_{2,k}}^{−1}(ib) ≥ b(1 − ε).

Further observe that analytic continuation implies that F_{µ_{2,k}}(F_{µ_{2,k}}^{−1}(z)) = z for all z ∈ Ω so that, in particular, ib = F_{µ_{2,k}}(F_{µ_{2,k}}^{−1}(ib)).

Now, let z_k = F_{µ_{2,k}}^{−1}(ib) and denote by t_k the real part of this number (the real part can vary as wildly as you would like, but we will show that this is not a problem). We decompose µ_{2,k} as µ_{2,k} = ν_{2,k} + ρ_{2,k}, where ν_{2,k}(R) = λ_k < γ and ρ_{2,k}([t_k − k, t_k + k]) = 0. A decomposition with these properties exists because sup_{x∈R} µ_{2,k}((x − k, x + k)) < γ. We will use the last of these properties to show that |F_{µ_{2,k}}(z_k) − F_{ν_{2,k}}(z_k)| → 0. We will then use the fact that F_{ν_{2,k}} increases the imaginary part in proportion to λ_k^{−1} to derive a contradiction.

Observe that

    F_{µ_{2,k}}(z_k) = 1/(G_{ν_{2,k}}(z_k) + G_{ρ_{2,k}}(z_k))

and that

    |G_{ρ_{2,k}}(z_k)| = |∫_{R\(t_k−k, t_k+k)} (z_k − t)^{−1} dρ_{2,k}(t)| → 0 as k → ∞.

This second fact is clear since ρ_{2,k} is a subprobability measure and, since ℜ(z_k) = t_k, the above integrand converges to 0 uniformly on the domain of integration as k ↑ ∞. Now, if lim inf_k |G_{ν_{2,k}}(z_k)| = 0 then lim sup_k |F_{µ_{2,k}}(z_k)| = ∞, which would contradict the fact that F_{µ_{2,k}}(z_k) ≡ ib. Thus, we may assume that |G_{ν_{2,k}}(z_k)| ≥ c > 0. This implies that

    |F_{µ_{2,k}}(z_k) − F_{ν_{2,k}}(z_k)| = |(G_{ν_{2,k}}(z_k) − G_{µ_{2,k}}(z_k)) (G_{µ_{2,k}}(z_k) G_{ν_{2,k}}(z_k))^{−1}|.

Observe that the numerator of the right hand side goes to zero, since G_{µ_{2,k}} − G_{ν_{2,k}} = G_{ρ_{2,k}}, and the denominator is bounded away from zero, since |G_{ν_{2,k}}(z_k)| ≥ c > 0 and |G_{µ_{2,k}}(z_k)| ≡ b^{−1} > 0. Thus, |F_{µ_{2,k}}(z_k) − F_{ν_{2,k}}(z_k)| → 0 as k ↑ ∞.

Recalling the remarks preceding this theorem, we consider the probability measure ν̂_{2,k} = λ_k^{−1} ν_{2,k}, so that F_{ν_{2,k}}(z_k) = λ_k^{−1} F_{ν̂_{2,k}}(z_k). We then have

    b = ℑF_{µ_{2,k}}(z_k) = lim_{k↑∞} ℑF_{ν_{2,k}}(z_k) = lim_{k↑∞} λ_k^{−1} ℑF_{ν̂_{2,k}}(z_k) ≥ lim sup_{k↑∞} λ_k^{−1} ℑ(z_k) ≥ γ^{−1} b(1 − ε) > b.

This contradiction completes our proof.

We end with a few remarks and corollaries. We single out the following fact from the last theorem for easy reference.
Corollary 3.6. Let µ = µ_{1,k} ⊞ µ_{2,k} be a family of decompositions. Assume that {µ_{1,k}}_{k∈N} is tight. Then {µ_{2,k}}_{k∈N} is tight.

As we stated before the proof of Theorem 3.5, a family of measures can fail to be tight either by being translated to ±∞ or by becoming more spread out. For t ∈ (0, 1), we say that µ is t-centered if F_µ(s) < t for s < 0 and F_µ(s) ≥ t for s ≥ 0. Right continuity of the distribution function implies that a measure µ has a unique t-centered translation. Observe that when t = 1/2, being t-centered is simply the more familiar property of having median 0. Now, if we assume that we have a family of decompositions µ = µ_{1,k} ⊞ µ_{2,k} where the {µ_{1,k}}_{k∈N} are assumed to be t-centered, then the supports of these measures are not being sent to ∞. By Theorem 3.5, we have the following corollary.
Corollary 3.7. Let µ = µ_{1,k} ⊞ µ_{2,k}, where the {µ_{1,k}} are t-centered and t is allowed to range over a compact subset of (0, 1). Then {µ_{i,k}}_{k∈N} forms a tight family for i = 1, 2.

The following variation will prove useful in what follows.
Corollary 3.8. Let {µ_n}_{n∈N} be a tight sequence of measures. Assume that to each member of this family we associate a family of decompositions µ_n = ν_{n,k} ⊞ ρ_{n,k} for k ∈ N. Then we may translate our measures so as to form tight families {ν̂_{n,k}}_{n,k∈N} and {ρ̂_{n,k}}_{n,k∈N} with the property that µ_n = ν̂_{n,k} ⊞ ρ̂_{n,k} for all n, k ∈ N.

Proof.
We assume that each ν_{n,k} has median 0. Assume that {ν_{n(i),k(i)}}_{i∈N} has no convergent subsequence. Let µ be a cluster point of {µ_{n(i)}}_{i∈N}. By Lemmas 2.2 and 3.1, there exists a truncated cone Γ_{α,β} so that for i large enough, F_µ^{−1}, F_{µ_{n(i)}}^{−1}, F_{ν_{n(i),k(i)}}^{−1} and F_{ρ_{n(i),k(i)}}^{−1} are all defined and satisfy

    F_{ν_{n(i),k(i)}}^{−1}(z) + F_{ρ_{n(i),k(i)}}^{−1}(z) − z = F_{µ_{n(i)}}^{−1}(z) → F_µ^{−1}(z)

uniformly over compact subsets of Γ_{α,β}.

Now, since we have centered our measures ν_{n(i),k(i)} by assuming median 0, the lack of a convergent subsequence amounts to assuming that there exists a γ ∈ (0, 1) such that

    lim inf_i (sup_{t∈R} ν_{n(i),k(i)}([t − a, t + a])) < γ

for all a ∈ R⁺. At this point, one need only observe that every step of the proof of Theorem 3.5 holds under the weaker assumption that F_{ν_{n(i),k(i)}}^{−1}(z) + F_{ρ_{n(i),k(i)}}^{−1}(z) − z → F_µ^{−1}(z), as opposed to assuming outright equality. This completes our proof.
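The centering convention used throughout this section can be made concrete for atomic measures (an illustrative sketch; the helper names are ours, not the paper's): the t-centered translation of µ shifts it by the t-quantile inf{s : F_µ(s) ≥ t}, and right continuity of F_µ makes this choice unique.

```python
def t_quantile(atoms, weights, t):
    # inf{ s : F(s) >= t } for a purely atomic probability measure
    cum = 0.0
    for s, w in sorted(zip(atoms, weights)):
        cum += w
        if cum >= t:
            return s
    raise ValueError("t must lie in (0, 1]")

def t_center(atoms, weights, t):
    # translate the measure so that F(s) < t for s < 0 and F(s) >= t for s >= 0
    q = t_quantile(atoms, weights, t)
    return [s - q for s in atoms]

atoms, weights = [2.0, 5.0, 9.0], [0.2, 0.5, 0.3]
# median centering (t = 1/2) moves the atom at 5 to the origin
assert t_center(atoms, weights, 0.5) == [-3.0, 0.0, 4.0]
# any t in (0.2, 0.7] produces the same centering for this measure
assert t_quantile(atoms, weights, 0.25) == 5.0
```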
4. A Khintchine Decomposition for Additive Free Convolution.
Lemma 4.1. Let {µ_i}_{i∈I} be a tight family of probability measures. Then, for every C > 0, there exists a Stolz angle Γ_{α,β} such that |φ_{µ_i}′(z)| ≤ C|z| for all z ∈ Γ_{α,β} and i ∈ I.

Proof.
It was shown in Section 5 of [10] (see Lemma 2.2) that, for the tight family {µ_i}_{i∈I}, there exists an α > 0 such that F_{µ_i}^{−1}(z) = z(1 + o(1)) uniformly in i as |z| ↑ ∞ for z in the corresponding Stolz angle. Thus, for fixed C > 0, we may find β large enough so that |φ_{µ_i}(z)| = |F_{µ_i}^{−1}(z) − z| ≤ (C/2)|z| for z ∈ Γ_{α,β} and i ∈ I. Shrinking the Stolz angle slightly so that each disk {ζ : |ζ − z| ≤ 1} stays inside the region where this bound holds, Cauchy's formula gives

    |φ_{µ_i}′(z)| = (2π)^{−1} |∮_{|ζ−z|=1} φ_{µ_i}(ζ)(ζ − z)^{−2} dζ| ≤ sup_{|ζ−z|=1} |φ_{µ_i}(ζ)| ≤ (C/2)(|z| + 1) ≤ C|z|.

We will now define the functional that will be the main tool in the proof of our main theorems. Let µ be a probability measure and let M be the set of all median-0 probability measures ν satisfying µ = ν ⊞ ρ for some probability measure ρ. It is a consequence of Corollary 3.7 that this is a tight family of measures. Let Γ_{α,β} be a Stolz angle on which F_µ^{−1} is defined and for which Lemma 4.1 is satisfied with regard to M. Consider the set Γ′ = {z ∈ C⁺ : α + 1 > ℑ(z) > α, ℑ(z) > β|ℜ(z)|} ⊂ Γ_{α,β}, and let M_{Γ′} be the set of probability measures ν such that φ_ν has an analytic extension to Γ′ satisfying ℑφ_ν(z) ≤ 0 for z ∈ Γ′. For ν ∈ M_{Γ′}, let

    Λ(ν) := −∫_{Γ′} ℑφ_ν(z) dA(z),

where A denotes the area measure.

Observe that, by Lemma 3.1, for any decomposition µ = ρ ⊞ ν we have that ρ, ν ∈ M_{Γ′}. Furthermore, we claim the following properties for our functional Λ:

1. Λ is weakly continuous.
2. Λ(ν ⊞ ρ) = Λ(ν) + Λ(ρ) for all ν, ρ ∈ M_{Γ′}.
3. 0 ≤ Λ(ν) < ∞ for all ν ∈ M_{Γ′}, and Λ(ν) = 0 if and only if ν is a Dirac mass.
4. Λ(ν ⊞ δ_t) = Λ(ν) for all t ∈ R and ν ∈ M_{Γ′}.

The only fact that requires argument is that Λ(ν) = 0 if and only if ν is a Dirac mass. One direction is clear, since the Voiculescu transform of a Dirac mass is simply a real constant. Conversely, since −ℑ(φ_ν(z)) ≥ 0 for z ∈ Γ′, Λ(ν) = 0 implies that ℑ(φ_ν(z)) ≡ 0 for z ∈ Γ′. Analytic continuation implies that φ_ν is a real constant, which implies that ν is a Dirac mass.

Theorem 4.2. Let µ be a probability measure with the property that for every nontrivial decomposition µ = µ_1 ⊞ µ_2, neither µ_1 nor µ_2 is indecomposable. Then µ is ⊞-infinitely divisible.

Proof.
We first note that for every ε > 0, there exists a decomposition µ = µ_1 ⊞ µ_2 such that 0 < Λ(µ_1) < ε. Assume otherwise, and let α > 0 be the infimum of Λ(µ_1) over all nontrivial decompositions µ = µ_1 ⊞ µ_2 of µ. By Theorem 3.5, there exists a sequence of decompositions µ = µ_{1,k} ⊞ µ_{2,k} so that the families {µ_{i,k}}_{k=1}^∞ are tight and so that Λ(µ_{1,k}) → α. Taking weak cluster points µ_1 and µ_2, by weak continuity of both Λ and ⊞ we have that µ = µ_1 ⊞ µ_2 and Λ(µ_1) = α. By assumption, µ_1 has a nontrivial decomposition µ_1 = ν_1 ⊞ ν_2. Since neither component is a Dirac mass, we have that α > Λ(ν_i) > 0, so that µ = ν_1 ⊞ (ν_2 ⊞ µ_2) violates the minimality of α.
We now claim that for every t ∈ (0, Λ(µ)) there exists a decomposition µ = µ_1 ⊞ µ_2 such that Λ(µ_1) = t. To see this, let α be the supremum of all values of Λ(µ_1), over nontrivial decompositions µ = µ_1 ⊞ µ_2, that are ≤ t. The previous paragraph implies that α > 0. Pick decompositions µ = µ_{1,k} ⊞ µ_{2,k} so that Λ(µ_{1,k}) ↑ α, so that the cluster points µ_i satisfy µ = µ_1 ⊞ µ_2 and Λ(µ_1) = α. If α < t, by the above argument we can break a chunk of size less than t − α from µ_2 so as to attain a contradiction. Thus, Λ takes all values in (0, Λ(µ)) as its argument ranges over the divisors of µ.

By induction, for every n ∈ N we can find a decomposition µ = µ_{n,1} ⊞ ··· ⊞ µ_{n,n} ⊞ δ_{x_n} such that Λ(µ_{n,i}) = Λ(µ)/n and µ_{n,i} has median 0 for all i = 1, ..., n. The real number x_n is the shift constant that necessarily arises when centering these measures. We now claim that the array {µ_{n,j}}_{n∈N, j=1,...,n} converges to δ_0 uniformly in j as n ↑ ∞.

Observe that Corollary 3.7 implies that our array is tight. Let ν be any cluster point and let {µ_{k_n,j_n}}_{n∈N} be a subsequence converging to ν. By Lemma 2.3, φ_{µ_{k_n,j_n}}(z) → φ_ν(z) uniformly on compact subsets of a Stolz angle Γ∗. Now, observe that Γ′ and Γ∗ may be disjoint. However, there exist a, b ∈ R such that ia ∈ Γ′ and ib ∈ Γ∗.

Observe that {φ_{µ_{k_n,j_n}}} is a normal family on Γ′ ∪ i[a, b], which implies precompactness. By analytic continuation, any cluster point must agree with φ_ν on i[a, b] ∩ Γ∗. This implies that φ_ν has an analytic continuation to Γ′ that satisfies φ_ν(z) = lim_{n↑∞} φ_{µ_{k_n,j_n}}(z) for z ∈ Γ′. Now, the fact that Λ(µ_{k_n,j_n}) → 0 means that −∫_{Γ′} ℑφ_{µ_{k_n,j_n}}(z) dA(z) → 0. By Lemma 4.1, we have a bound on the derivatives of these functions so that, recalling that the imaginary parts of these functions are nonpositive, ℑφ_{µ_{k_n,j_n}}(z) → 0 for z ∈ Γ′. This implies that ℑφ_ν(z) = 0 for z ∈ Γ′. Thus, ν is a Dirac mass, and our median-0 assumption implies that ν = δ_0.

Thus, our array is tight and every convergent subsequence converges to δ_0. This implies that our array converges to δ_0 uniformly in j. By Theorem 2.5, µ is ⊞-infinitely divisible.

Lemma 4.3. Let {µ_n}_{n∈N} be a sequence of t-centered measures that converge weakly to µ. Assume that for s ∈ R such that F_µ(s) = t, we have that F_µ is continuous and strictly increasing in a neighborhood of s. Then s = 0; in other words, µ is t-centered.

Proof.
Choose ǫ > F µ is continuous on ( s − ǫ, s + 2 ǫ ). Let0 < ǫ ′ < ǫ and observe that utilizing the L´evy metric and our assumptionof weak convergence, we have the following inequality for n large enough, HINTCHINE DECOMPOSITION. independent of ǫ : F µ ( s − ǫ − ǫ ′ ) − ǫ ′ ≤ F µ n ( s − ǫ ) ≤ F µ ( s − ǫ + ǫ ′ ) + ǫ ′ By continuity of F µ at these points, it follows that F µ n ( s − ǫ ) → F µ ( s − ǫ ).Similarly F µ n ( s + ǫ ) → F µ ( s + ǫ ). Thus, for n large enough, we have that F µ n ( s − ǫ ) < t and F µ n ( s − ǫ ) > t . This implies that 0 ∈ ( s − ǫ, s + ǫ ). As ǫ was arbitrary, this implies that s = 0.It is clear from the statement of the previous lemma that it will be usedin conjunction with Corollary 2.8. Indeed, it is precisely the content of thiscorollary that measures with non-trivial decompositions satisfy the hypothe-ses in Lemma 4.3, which will play a small but key role in the proof of thefollowing theorem. Theorem . Let µ be a probability measure. Then there exist measures µ i with i = 0 , , , . . . such that µ is ⊞ -infinitely divisible, µ i is indecompos-able for i = 1 , , . . . , and µ = µ ⊞ µ ⊞ µ ⊞ · · · . This decomposition is notunique. Proof. If µ is infinitely-divisible, then we are done. If not, by Theorem4.2, µ has non-trivial divisors. Otherwise, let α = sup { Λ( ρ ) } where thesupremum is taken over all indecomposable probability measures ρ satisfying µ = ν ⊞ ρ for some probability measure ν . Let µ be chosen so that µ = µ , ⊞ µ , Λ( µ ) > α / µ is indecomposable. 
By translating our measures, µ_1 is assumed to be t-centered for a t to be chosen later (for s the real number such that µ_1 ⊞ δ_s is t-centered, we need only consider the decomposition µ = (µ_{0,1} ⊞ δ_{−s}) ⊞ (µ_1 ⊞ δ_s), and all of the relevant properties will be satisfied).

At the n-th stage of this process, we let α_{n−1} = sup{Λ(ρ)}, where the supremum is taken over all indecomposable probability measures ρ satisfying µ_{0,n−1} = ν ⊞ ρ for some measure ν (unless µ_{0,n−1} is infinitely divisible, at which point we are done). We then let µ_n be chosen such that µ_{0,n−1} = µ_{0,n} ⊞ µ_n, Λ(µ_n) > α_{n−1}/2, and µ_n is indecomposable. By translating µ_{0,n} and µ_n, we may further assume that µ_1 ⊞ · · · ⊞ µ_n is t-centered. If at any point α_n = 0, then by Theorem 4.2 we are done. We therefore assume that α_n > 0 for all n ∈ N.

In what follows, we utilize the following notation:

ν_n = µ_1 ⊞ · · · ⊞ µ_n
ν_{n,m} = µ_{m+1} ⊞ · · · ⊞ µ_n
ν_{∞,m} = lim_{n↑∞} µ_{m+1} ⊞ · · · ⊞ µ_n

where we will show at a later point that the last limit actually converges.

Note that Corollary 3.8 implies that {ν_{n,m}}_{n,m∈N} is a tight family. It follows that {ν_n}_{n∈N} is also tight. We now claim that this sequence of measures is actually convergent for an appropriate choice of t in the sense of t-centeredness.

Proceeding with our claim, observe that Λ(µ) = Λ(µ_{0,n}) + Λ(ν_n) = Λ(µ_{0,n}) + Λ(ν_m) + Λ(ν_{n,m}) for all m < n ∈ N. Observe that Λ(µ_{0,n}) is bounded and decreasing, so it necessarily converges. This implies that Λ(ν_{n,m}) represents the tail of a convergent series and must therefore go to 0 uniformly as m ↑ ∞ (note that this implies that α_n → 0). Let ν̂_{n,m} be the translation of ν_{n,m} with median 0 and observe that

Λ(ν_{n,m}) = −∫_{Γ′} ℑφ_{ν_{n,m}}(z) dA(z) = −∫_{Γ′} ℑφ_{ν̂_{n,m}}(z) dA(z) = Λ(ν̂_{n,m}).

By Lemma 4.1, φ′_{ν̂_{n,m}} is bounded on Γ′. Since Λ(ν̂_{n,m}) → 0 as m ↑ ∞, we have that −ℑφ_{ν̂_{n,m}}(z) → 0 uniformly on Γ′ as m ↑ ∞. By Lemma 2.3, any cluster point ν̂ of {ν̂_{n,m}}_{n,m∈N} must satisfy φ_{ν̂}(z) = 0 for z ∈ Γ′. This implies that ν̂ is a Dirac mass. Thus, any cluster point ν of {ν_{n,m}} as m ↑ ∞ must also be a Dirac mass.

Thus, the set of cluster points of {ν_n}_{n∈N} is of the form {ρ ⊞ δ_r}_{r∈K}, where K is a compact subset of R. Since we are assuming that α_n > 0 for all n ∈ N, we have that ρ = µ_1 ⊞ ν, where ν is some non-trivial cluster point of {ν_{n,1}}_{n∈N}. In particular, ρ has a non-trivial decomposition so that, by Corollary 2.8, there exist points s ∈ R and t ∈ (0, 1) such that F_ρ(s) = t and F_ρ is continuous and increasing in a neighborhood of s. We therefore assume that {ν_n}_{n∈N} are t-centered (we may do this retroactively, since this only translates our measures ν_n and does not affect the fact that they cluster to translations of ρ). By Lemma 4.3, all cluster points of {ν_n}_{n∈N} must be t-centered so that, by uniqueness of this property, our sequence converges to a single measure.

Now, observe that these facts together imply that {ν_{n,m}}_{n,m} must converge to the Dirac mass at 0 as m ↑ ∞. This further implies that ν_{∞,m} is the limit of a convergent sequence. We next claim that if µ_0 is any cluster point of {µ_{0,n}}_{n∈N}, then µ = lim_{n↑∞} µ_0 ⊞ ν_n.

To see this, let i_n be a subsequence along which µ_{0,i_n} converges to µ_0. Observe that

lim_{n↑∞} µ_0 ⊞ ν_n = lim_{n↑∞} µ_{0,i_n} ⊞ ν_n = lim_{n↑∞} µ_{0,i_n} ⊞ ν_{i_n} ⊞ ν_{n,i_n} = lim_{n↑∞} µ ⊞ ν_{n,i_n}.

As n → ∞, the right hand side converges to µ ⊞ δ_0 = µ, proving our claim.

We have shown that µ = lim_{n↑∞} µ_0 ⊞ ν_n, so that our theorem will be proven once we show that µ_0 is infinitely divisible. Towards this end, we claim that µ_{0,k} = µ_0 ⊞ ν_{∞,k+1} for all k ∈ N. To see this, observe that the right hand side is equal to

µ_0 ⊞ ν_{∞,k+1} = lim_{n↑∞} µ_{0,i_n} ⊞ ν_{∞,k+1} = lim_{n↑∞} µ_{0,i_n} ⊞ ν_{i_n,k+1} ⊞ ν_{∞,i_n} = lim_{n↑∞} µ_{0,k} ⊞ ν_{∞,i_n} → µ_{0,k}

as n → ∞. This proves our claim.

Now, assume that µ_0 has a decomposition µ_0 = ρ ⊞ ν, where ν is indecomposable. Assume that Λ(ν) > 0. Pick n large enough so that α_n < Λ(ν) and recall that µ_{0,n} = µ_0 ⊞ ν_{∞,n+1}. The left hand side has no indecomposable divisor whose Λ value is larger than α_n. This contradiction implies that µ_0 has no indecomposable divisors so that, by Theorem 4.2, our theorem holds. The failure of uniqueness will be addressed in Section 10.
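The bookkeeping in the proof above rests on the additivity of the functional Λ under ⊞. For the reader's convenience, the identities used repeatedly can be collected in one telescoping display (this is only a restatement of equalities from the proof, in the notation introduced there):

```latex
% Additivity of \Lambda under free convolution:
%   \Lambda(\mu \boxplus \nu) = \Lambda(\mu) + \Lambda(\nu).
\Lambda(\mu) = \Lambda(\mu_{0,n}) + \Lambda(\nu_n)
             = \Lambda(\mu_{0,n}) + \sum_{i=1}^{n} \Lambda(\mu_i),
\qquad \nu_n = \mu_1 \boxplus \cdots \boxplus \mu_n .
% The terms \Lambda(\mu_i) > \alpha_{i-1}/2 \ge 0 have partial sums bounded
% by \Lambda(\mu), so the series converges, and its tails
\Lambda(\nu_{n,m}) = \sum_{i=m+1}^{n} \Lambda(\mu_i)
\longrightarrow 0 \quad \text{uniformly in } n \text{ as } m \uparrow \infty .
```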
5. Background and Terminology for the Multiplicative Convolution of Measures Supported on the Positive Real Line.
Let x, y be positive random variables in (A, τ) with respective distributions µ and ν. We denote by µ ⊠ ν the distribution of the random variable xy. Since τ is a trace, the distribution of xy is the same as that of y^{1/2}xy^{1/2}, so that ⊠ preserves the property that the distribution is a measure supported on the positive real numbers.

Let M_{R+} denote the set of probability measures supported on R+. Observe that, with the exception of δ_0, all such measures have nonzero first moment, and we assume throughout that we are not dealing with this measure. Consider the following function:

ψ_µ(z) = ∫_0^∞ zt/(1 − zt) dµ(t)

for z ∈ C \ R+. As seen in [19] and [10], ψ_µ|_{iC+} is univalent and maps iC+ onto an open neighborhood of the interval (µ({0}) − 1, 0), with ψ_µ(iC+) ∩ R = (µ({0}) − 1, 0). Set Ω_µ = ψ_µ(iC+) and let χ_µ : Ω_µ → iC+ denote the inverse function. We refer to the S-transform as the following function:

S_µ(z) = (1 + z)χ_µ(z)/z.

These functions have the following properties, which will be used, often without reference, in what follows:

1. S_{µ1 ⊠ µ2}(z) = S_{µ1}(z)S_{µ2}(z) for all z in their common domain.
2. S_µ(z) > 0 and S′_µ(z) ≤ 0 for z ∈ (µ({0}) − 1, 0).
3. (µ1 ⊠ µ2)({0}) = max{µ1({0}), µ2({0})}.
4. χ′_µ(z) > 0 for z ∈ (µ({0}) − 1, 0).
5. χ_{µ ⊠ δc}(z) = χ_µ(z)/c and S_{µ ⊠ δc}(z) = S_µ(z)/c.

Observe that (3) above implies a multiplicative version of Lemma 3.1. That is, for any nontrivial decomposition µ = µ1 ⊠ µ2, (3) implies that the real part of the domain of χ_µ is contained in the real part of the domain of χ_{µ_i} for each i = 1, 2. We will use this fact without reference throughout.

The following results on convergence and tightness were first proven in full generality in [10].
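As a quick check on these properties, the transforms of a Dirac mass δ_c with c > 0 can be computed in closed form directly from the definitions (a routine computation recorded only for convenience):

```latex
\psi_{\delta_c}(z) = \frac{cz}{1-cz}, \qquad
\chi_{\delta_c}(w) = \frac{w}{c(1+w)}, \qquad
S_{\delta_c}(w) = \frac{(1+w)\,\chi_{\delta_c}(w)}{w} = \frac{1}{c}.
% Combined with property (1), this recovers property (5):
%   S_{\mu \boxtimes \delta_c}(z) = S_\mu(z)\, S_{\delta_c}(z) = S_\mu(z)/c.
```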
Lemma 5.1. Let {µ_n}_{n∈N} and {ν_n}_{n∈N} be sequences of probability measures on R+. Assume that these sequences converge weakly to µ and ν, respectively. Then {µ_n ⊠ ν_n}_{n∈N} converges to µ ⊠ ν in the weak∗ topology.

Lemma 5.2. Let M be a set of probability measures on R+. The following conditions are equivalent:
1. M is tight and the weak∗ closure of M does not contain δ_0.
2. There exists an α > 0 such that
(a) −α belongs to the domain of χ_µ for all µ ∈ M;
(b) sup{|χ_µ(−α)| : µ ∈ M} < ∞;
(c) inf{|χ_µ(−β)| : µ ∈ M} > 0 for all β ∈ (0, α).
3. There exists an α > 0 such that
(a) −α belongs to the domain of S_µ for all µ ∈ M;
(b) sup{|S_µ(−α)| : µ ∈ M} < ∞;
(c) inf{|S_µ(−β)| : µ ∈ M} > 0 for all β ∈ (0, α).

Lemma 5.3. Let {µ_n}_{n∈N} be a tight sequence of probability measures on R+ such that δ_0 is not in the weak∗ closure of the sequence. The following are equivalent:
1. The sequence {µ_n}_{n∈N} converges to a measure µ in the weak∗ topology.
2. There exist positive numbers β < α such that the sequence {χ_{µ_n}} converges uniformly on the interval (−α, −β) to a function χ.
3. There exist positive numbers β < α such that the sequence {S_{µ_n}} converges uniformly on the interval (−α, −β) to a function S.
Moreover, if (1) and (2) are satisfied, we have χ = χ_µ on (−α, −β).

In a manner analogous to the additive case, we have the following subordination result for multiplicative convolution. This was first proven in full generality in [13] and is proven by different means in [6].
Theorem 5.4. Let µ be a probability measure on R+ with decomposition µ = µ1 ⊠ µ2. There exist analytic subordination functions ω_i : C \ R+ → C \ R+ for i = 1, 2, such that:
1. ω_i(0−) = 0;
2. for every λ ∈ C+ we have that ω_i(λ̄) = ω_i(λ)‾, ω_i(λ) ∈ C+, and arg(ω_i(λ)) ≥ arg(λ);
3. ψ_µ(λ) = ψ_{µ_i}(ω_i(λ)) for all λ ∈ C \ R+;
4. ω_1(λ)ω_2(λ) = λψ_µ(λ)/(1 + ψ_µ(λ)).

Consider next the following result, which may be found in [5].

Theorem 5.5. Let η : C \ R+ → C \ {0} be an analytic function such that η(z̄) = η(z)‾ for all z in its domain. The following are equivalent:
1. There exists a probability measure µ ≠ δ_0 on [0, ∞) such that η = ψ_µ/(1 + ψ_µ).
2. η(0−) = 0 and arg(η(z)) ∈ [arg(z), π) for all z ∈ C+.

These two theorems may be combined to give us the following corollary. We have no direct reference for this fact, but we can be sure that it is well known, and record it only for the reader's convenience.
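As a sanity check of the characterization just stated, take µ = δ_c with c > 0; then η = ψ_{δ_c}/(1 + ψ_{δ_c}) reduces to a linear map, which visibly satisfies condition (2):

```latex
\eta(z) = \frac{\psi_{\delta_c}(z)}{1+\psi_{\delta_c}(z)}
        = \frac{cz/(1-cz)}{1/(1-cz)} = cz,
\qquad \eta(0-) = 0, \qquad
\arg(\eta(z)) = \arg(z) \in [\arg(z), \pi) \ \text{for } z \in \mathbb{C}^{+}.
```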
Corollary 5.6. Let ω_i be a subordination function arising from the decomposition µ = µ1 ⊠ µ2 as above. Then

ω_i(z) = ψ_ν(z)/(1 + ψ_ν(z))

for a probability measure ν with the property that supp(ν) ⊆ supp(µ).

Proof. The existence of such a representation is a direct consequence of the previous theorems. It remains to prove the assertion about the support of ν.

In the proof of Theorem 6.1 in the next section, we will show that ω_i has an analytic continuation and is real on R \ (supp(µ))^{−1}, where (supp(µ))^{−1} = {t^{−1} : t ∈ supp(µ)}. This implies that ℑψ_ν(t + iǫ) → 0 as ǫ → 0 for t ∉ (supp(µ))^{−1}. Since G_ν(1/z) = z(ψ_ν(z) + 1), this implies that t^{−1} ∉ supp(ν). Our claim follows.

This final result was first proven in [7] and will be used in proving a multiplicative version of Theorem 4.2.
Theorem 5.7. Consider (c_n)_{n∈N} ⊆ R+ and an array {µ_{n,j}}_{n∈N, j=1,2,...,k_n} of probability measures on (0, ∞) such that

lim_{n→∞} min_{1≤j≤k_n} µ_{n,j}((1 − ǫ, 1 + ǫ)) = 1

for every ǫ > 0. If the measures δ_{c_n} ⊠ µ_{n,1} ⊠ · · · ⊠ µ_{n,k_n} have a weak limit µ which is a probability measure, then µ is infinitely divisible.

Observe that the assumptions in this theorem may be weakened so that we need only assume that µ_{n,j}({0}) = 0 for all n ∈ N and j = 1, 2, . . . , k_n. Indeed, every element in such an array can be approximated arbitrarily well by a measure supported on (0, ∞). It is under this weakened assumption that we will later invoke this theorem.
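One way to realize the approximation alluded to in the preceding remark (a sketch; the cutoff point ǫ is arbitrary) is to push the mass lying near 0 to a single point:

```latex
% Given \mu_{n,j} on [0,\infty) with \mu_{n,j}(\{0\}) = 0, set, for \epsilon > 0,
\mu_{n,j}^{(\epsilon)}(B) = \mu_{n,j}\bigl(B \cap [\epsilon,\infty)\bigr)
   + \mu_{n,j}\bigl([0,\epsilon)\bigr)\,\delta_{\epsilon}(B).
% Each \mu_{n,j}^{(\epsilon)} is supported on (0,\infty), and
% \mu_{n,j}^{(\epsilon)} \to \mu_{n,j} weakly as \epsilon \downarrow 0.
```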
6. Compactness Results for Measures Supported on the Positive Real Half-Line. We define

logdiam(µ) := sup_{x,y ∈ supp(µ)} |log(x) − log(y)|

to be the logarithmic diameter of the measure µ.

Theorem 6.1. Let µ be a compactly supported probability measure on R+. Then for any decomposition µ = µ1 ⊠ µ2 we have that logdiam(µ_i) ≤ logdiam(µ). If µ({0}) = 0, then equality occurs if and only if one of the µ_i is a Dirac mass.

Proof. If {0} is contained in the support of µ, the theorem is trivial. Thus, we assume that [α, β] = conv(supp(µ)) and [α_1, β_1] = conv(supp(µ1)) with α, α_1 >
0. Observe that ψ_µ has analytic extension to R \ [β^{−1}, α^{−1}]. We claim that the subordination function ω_1 does also.

To see this, note that ψ_{µ1}(ω_1(te^{iθ})) = ψ_µ(te^{iθ}) = G_µ(1/(te^{iθ}))/(te^{iθ}) − 1 for t ∈ R \ [β^{−1}, α^{−1}]. Since 1/t is not contained in the support of µ, the Stieltjes inversion formula tells us that the imaginary part of the right hand side goes to zero as θ goes to 0. Since ψ_{µ1} increases argument, the imaginary part of ω_1(te^{iθ}) must go to zero. The Schwarz reflection principle implies that ω_1 extends analytically across t.

As we saw in Corollary 5.6, we have that ω_1(z) = ψ_ν(z)/(1 + ψ_ν(z)) for a measure ν supported on [α, β]. Thus,

ω_1′(z) = (∫ t(1 − zt)^{−2} dν(t)) / (∫ (1 − zt)^{−1} dν(t))²,

so that lim_{λ↑∞} ω_1′(λ) = (∫ t^{−1} dν(t))^{−1}. We call this limit ω_1′(∞).

We now claim that ω_1(λ) − λω_1′(∞) → C < 0 as λ ↑ ∞. Indeed, writing 1/(1 − tλ) = λ^{−1}(λ^{−1} − t)^{−1} and tλ/(1 − tλ) = −1 + λ^{−1}(λ^{−1} − t)^{−1}, we have

ω_1(λ) − λω_1′(∞)
= (∫_α^β tλ/(1 − tλ) dν(t)) / (∫_α^β (1 − tλ)^{−1} dν(t)) − λ / ∫_α^β t^{−1} dν(t)
= [∫_α^β t^{−1} dν(t) ∫_α^β tλ/(1 − tλ) dν(t) − λ∫_α^β (1 − tλ)^{−1} dν(t)] / [∫_α^β t^{−1} dν(t) ∫_α^β (1 − tλ)^{−1} dν(t)]
= [−∫_α^β t^{−1} dν(t) + (λ^{−1}∫_α^β t^{−1} dν(t) − 1)∫_α^β (λ^{−1} − t)^{−1} dν(t)] / [λ^{−1}∫_α^β t^{−1} dν(t) ∫_α^β (λ^{−1} − t)^{−1} dν(t)].

Since ∫_α^β (λ^{−1} − t)^{−1} dν(t) = −∫ t^{−1} dν(t) − λ^{−1}∫ t^{−2} dν(t) + O(λ^{−2}) as λ ↑ ∞, the numerator above equals λ^{−1}(∫ t^{−2} dν(t) − (∫ t^{−1} dν(t))²) + O(λ^{−2}), while the denominator equals −λ^{−1}(∫ t^{−1} dν(t))² + O(λ^{−2}). Hence

ω_1(λ) − λω_1′(∞) → ((∫ t^{−1} dν(t))² − ∫ t^{−2} dν(t)) / (∫ t^{−1} dν(t))² =: C

as λ ↑ ∞. By the Cauchy–Schwarz inequality (equivalently, Jensen's inequality applied to the strictly convex function x ↦ x²), (∫ t^{−1} dν(t))² ≤ ∫ t^{−2} dν(t), with equality if and only if ν is a Dirac mass. Assuming that ν is not a Dirac mass, it follows that C is a strictly negative number (we may assume that ν is not a Dirac mass, since otherwise ω_1 is linear, µ is a dilation of µ1, and µ2 is a Dirac mass, in which case our theorem is trivially true).

Now, by Cauchy–Schwarz again, we have that ω_1′(z) ≥ ω_1′(∞) for real z ∉ [β^{−1}, α^{−1}]. Indeed,

ω_1′(z) = (∫ t(1 − zt)^{−2} dν(t)) / (∫ (1 − zt)^{−1} dν(t))² = ‖√t/(1 − zt)‖² / ⟨t^{−1/2}, √t/(1 − zt)⟩² ≥ ‖√t/(1 − zt)‖² / (‖t^{−1/2}‖² ‖√t/(1 − zt)‖²) = ω_1′(∞),

the norms and inner product being taken in L²(ν). Thus,

ω_1(α^{−1} + ǫ) = ω_1(λ) − ∫_{α^{−1}+ǫ}^λ ω_1′(t) dt ≤ ω_1(λ) − λω_1′(∞) + (α^{−1} + ǫ)ω_1′(∞),

which converges to (α^{−1} + ǫ)ω_1′(∞) + C as λ ↑ ∞. To complete our claim, note that

ω_1(β^{−1} − ǫ) = ω_1(0) + ∫_0^{β^{−1}−ǫ} ω_1′(t) dt ≥ ω_1′(∞)(β^{−1} − ǫ),

since ω_1(0) = 0. Thus, R+ \ [ω_1′(∞)β^{−1}, ω_1′(∞)α^{−1} + C] ⊆ ω_1(R+ \ [β^{−1}, α^{−1}]). Since ψ_{µ1} ∘ ω_1 = ψ_µ can be continued analytically to the right hand set, ψ_{µ1} has analytic continuation to R+ \ [ω_1′(∞)β^{−1}, ω_1′(∞)α^{−1} + C]. This implies that the support of µ1 is contained in ([ω_1′(∞)β^{−1}, ω_1′(∞)α^{−1} + C])^{−1} ⊆ ω_1′(∞)^{−1}[α, β], with equality if and only if one of the µ_i is a Dirac mass. The theorem follows.

Theorem 6.2. Let µ be a probability measure with supp(µ) ⊂ R+, different from δ_0. Let µ = µ_{1,k} ⊠ µ_{2,k} be a family of decompositions. There exists a sequence {λ_k} ⊂ R+ so that the families {µ_{1,k} ∘ D_{λ_k}}_{k∈N} and {µ_{2,k} ∘ D_{λ_k^{−1}}}_{k∈N} are tight. Furthermore, δ_0 is not in the weak closure of either of these families of measures.

Proof.
Let µ({0}) − 1 = −α < 0. Recall that ψ_µ maps the negative half line injectively onto (−α, 0). For each k, ψ_{µ_{i,k}} maps the negative half line injectively onto (µ_{i,k}({0}) − 1, 0), and µ_{i,k}({0}) ≤ µ({0}). Thus, for each k, there exists a unique real number λ_k so that ψ_{µ_{1,k} ∘ D_{λ_k}}(−1) = −α/2. Denote the new measure by ν_{1,k}. Dilate µ_{2,k} by D_{λ_k^{−1}} and denote the new measure by ν_{2,k}. Observe that µ = ν_{1,k} ⊠ ν_{2,k} for all k ∈ N.

Now, observe that −α/2 is in the domain of χ_{ν_{1,k}} and that |χ_{ν_{1,k}}(−α/2)| = 1 for all k ∈ N. By Lemma 5.2, if we can show that inf_{k∈N} |χ_{ν_{1,k}}(−β)| > 0 for all β ∈ (0, α/2), then {ν_{1,k}} is tight. Consider the following equation for t ∈ (0, α/2):

(6.1) (1 − t)/(−t) · χ_{ν_{1,k}}(−t) χ_{ν_{2,k}}(−t) = χ_µ(−t).

Assume that for some β ∈ (0, α/2) we have inf_{k∈N} |χ_{ν_{1,k}}(−β)| = 0. Our assumption that µ ≠ δ_0 implies that |χ_µ(−β)| > 0. Manipulating (6.1), this implies that the {χ_{ν_{2,k}}(−β)} are unbounded over k and negative. As χ′_{ν_{2,k}}(t) > 0, the {χ_{ν_{2,k}}(−α/2)} are then also unbounded over k. However, (6.1) and the normalization χ_{ν_{1,k}}(−α/2) ≡ −1 force χ_{ν_{2,k}}(−α/2) to be a fixed bounded quantity, a contradiction. Thus, {ν_{1,k}} is a tight family.

It is easily seen that {ν_{2,k}} is also a tight family. Indeed, χ_{ν_{1,k}}(−α/2) ≡ −1 implies, by (6.1), that

χ_{ν_{2,k}}(−α/2) ≡ (α/2)/(1 − α/2) · χ_µ(−α/2),

while for β ∈ (0, α/2),

|χ_{ν_{2,k}}(−β)| = β|χ_µ(−β)| / ((1 − β)|χ_{ν_{1,k}}(−β)|) ≥ −βχ_µ(−β)/(1 − β) > 0,

since |χ_{ν_{1,k}}(−β)| ≤ |χ_{ν_{1,k}}(−α/2)| = 1. By Lemma 5.2, both families are tight and δ_0 is not in the weak closure of either.
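In connection with the normalizing dilations used in the proof above, note that the logarithmic diameter of this section is invariant under dilation, immediately from its definition:

```latex
\mathrm{logdiam}(\mu \circ D_{\lambda})
 = \sup_{x,y \in \mathrm{supp}(\mu)} \bigl|\log(\lambda x) - \log(\lambda y)\bigr|
 = \sup_{x,y \in \mathrm{supp}(\mu)} \bigl|\log(x) - \log(y)\bigr|
 = \mathrm{logdiam}(\mu).
```

In particular, the bound of Theorem 6.1 is unaffected by the dilations D_{λ_k} of Theorem 6.2, and δ_c ⊠ µ has the same logarithmic diameter as µ.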
7. A Khintchine Decomposition for Multiplicative Free Convolution with Measures Supported on the Positive Half Line.
Theorem 7.1. Let µ be a probability measure with the property that, for any nontrivial decomposition µ = µ1 ⊠ µ2, neither µ1 nor µ2 is indecomposable. Then µ is ⊠-infinitely divisible.

Proof. Let α = 1 − µ({0}). We will show later that α = 1. Recall that S_µ, S_{µ1} and S_{µ2} are all defined on an open neighborhood of (−α, 0) for any decomposition µ = µ1 ⊠ µ2. We assume without loss of generality that S_µ(−β) = 1 for some β ∈ (0, α) (indeed, pick any β in this interval and then consider µ ⊠ δ_c where c = −(1 − β)χ_µ(−β)/β = S_µ(−β), so that S_{µ⊠δ_c}(−β) = 1).

We denote by M_β the set of all probability measures ν ∈ M_{R+} such that S_ν(−β) = 1 and µ = ν ⊠ ρ for a probability measure ρ ∈ M_{R+}. Observe that S_µ(−β) = S_ν(−β)S_ρ(−β) implies that ρ ∈ M_β. Further note that for any decomposition µ = ν′ ⊠ ρ′ there exists a real number c such that ν′ ⊠ δ_c, ρ′ ⊠ δ_{c^{−1}} ∈ M_β. Lastly, it is the content of Theorem 6.2 that M_β is weak∗ compact.

Fix γ ∈ (0, β). We claim that, given any ǫ > 0, there exists an element ν ∈ M_β such that 1 > S_ν(−γ) > 1 − ǫ. To show this, assume instead that there is a δ > 0 such that 1 − δ is the supremum of S_ν(−γ), ranging over all nontrivial elements in M_β. By compactness, we may pass to a cluster point and assume that we have a decomposition µ = µ1 ⊠ µ2 where S_{µ1}(−γ) takes on this supremum. Now, by assumption, we have a nontrivial decomposition µ1 = ν1 ⊠ ν2 where S_{ν_i}(−β) = 1 for i = 1, 2. Since S′_{ν_i} ≤ 0, this implies that both S_{ν_i}(−γ) < 1 (strictly, unless one of the ν_i were a Dirac mass, which we have assumed away). As their product satisfies S_{ν1}(−γ)S_{ν2}(−γ) = S_{µ1}(−γ) = 1 − δ, we have that S_{ν_i}(−γ) > 1 − δ for i = 1, 2. Thus, the decomposition µ = ν1 ⊠ (ν2 ⊠ µ2) violates the above supremum.

We next claim that S_ν(−γ) takes on all values of the interval [S_µ(−γ), 1] as ν ranges over M_β. Clearly, our compactness result implies that the range of S_ν(−γ) is closed. We assume, for the sake of contradiction, that there exist real numbers δ > 0 and λ > S_µ(−γ) such that S_ν(−γ) does not take on any values in the interval (λ − δ, λ) for ν ∈ M_β, and that this interval is maximal in this regard. Passing to cluster points, we assume that S_{µ1}(−γ) = λ for a decomposition µ = µ1 ⊠ µ2. Now, applying the previous claim to µ2 (which inherits the hypothesis of the theorem), pick a nontrivial decomposition µ2 = ν1 ⊠ ν2 with ν1, ν2 ∈ M_β and S_{ν1}(−γ) close enough to 1 that λS_{ν1}(−γ) ∈ (λ − δ, λ). Transferring this mass, the divisor µ1 ⊠ ν1 ∈ M_β satisfies S_{µ1⊠ν1}(−γ) = λS_{ν1}(−γ) ∈ (λ − δ, λ), and we obtain our contradiction.
By induction, there exists a decomposition µ = µ_{n,1} ⊠ · · · ⊠ µ_{n,n} such that S_{µ_{n,i}}(−β) = 1 and S_{µ_{n,i}}(−γ) = (S_µ(−γ))^{1/n} for all n ∈ N and i = 1, 2, . . . , n. Observe that this implies that S_{µ_{n,i}}(−t) → 1 uniformly for t ∈ (γ, β) as n → ∞ (S_{µ_{n,i}} being non-increasing on this interval). By Lemma 5.3, this implies that any subsequence of our array {µ_{n,i}}_{n∈N, i=1,2,...,n} converges to δ_1. Compactness implies that our array converges to δ_1 uniformly over n.

Lastly, note that this implies that our measures satisfy µ_{n,i}({0}) = 0. Indeed, observe that max_{i=1,2,...,n} µ_{n,i}({0}) → 0 as the array converges to δ_1. Since µ({0}) = max_{i=1,2,...,n} µ_{n,i}({0}), we must have no mass at 0 for µ or for any element in our array.

Thus, we may now invoke Theorem 5.7, which implies that our measure µ is ⊠-infinitely divisible.

Theorem 7.2. Let µ ∈ M_{R+} be different from δ_0. Then there exist measures µ_i with i = 0, 1, 2, . . . such that µ_0 is ⊠-infinitely divisible, µ_i is ⊠-indecomposable for i = 1, 2, . . ., and µ = µ_0 ⊠ µ_1 ⊠ µ_2 ⊠ · · · . This decomposition is not unique.

Proof.
We again assume without loss of generality that S_µ(−β) = 1 for some β ∈ (0, 1 − µ({0})). In what follows, all decompositions will be taken from elements in M_β.

Pick γ ∈ (0, β). Note that S_µ(−γ) ≤ 1, with equality only if µ = δ_1, in which case the theorem is trivially true. Now, let α_0 = inf{S_ν(−γ)}, where the infimum is taken over all indecomposable ν ∈ M_β. If α_0 = 1 then, by Theorem 7.1, our theorem holds. If not, let µ = µ_{0,1} ⊠ µ_1 with µ_1 ∈ M_β indecomposable satisfying S_{µ_1}(−γ) < √α_0.

At the n-th stage of this process, we start with a decomposition µ = µ_{0,n−1} ⊠ µ_1 ⊠ · · · ⊠ µ_{n−1}, where all divisors are elements of M_β and µ_i is indecomposable for i = 1, 2, . . . , n − 1. We let α_{n−1} = inf{S_ν(−γ)}, where the infimum is taken over all indecomposable ν ∈ M_β such that µ_{0,n−1} = ν ⊠ ρ for some ρ ∈ M_β (observe that µ_{0,n−1}, ν ∈ M_β implies that ρ ∈ M_β). If at any point α_n = 1 then, by Theorem 7.1, we are done. Thus, we assume that α_n < 1 for all n ∈ N. Let µ_{0,n−1} = µ_{0,n} ⊠ µ_n, where µ_n ∈ M_β is indecomposable and satisfies S_{µ_n}(−γ) < √α_{n−1}. At this point, we have a decomposition µ = µ_{0,n} ⊠ µ_1 ⊠ · · · ⊠ µ_n satisfying µ_{0,n}, µ_i ∈ M_β, with µ_i indecomposable and S_{µ_i}(−γ) < √α_{i−1} for all i = 1, 2, . . . , n. For n > m we write:

ν_n = µ_1 ⊠ · · · ⊠ µ_n
ν_{n,m} = µ_{m+1} ⊠ · · · ⊠ µ_n
ν_{∞,m} = lim_{n↑∞} µ_{m+1} ⊠ · · · ⊠ µ_n

We will show later that this last element actually converges to a measure in M_β.

Now, the argument given in the proof of Theorem 4.4 applies with the multiplicative functional ν ↦ S_ν(−γ) playing the role of Λ: the sequence {ν_n}_{n∈N} converges, {ν_{n,m}} converges to δ_1 as m ↑ ∞ (so that ν_{∞,m} is indeed a well defined element of M_β), and any cluster point µ_0 of {µ_{0,n}}_{n∈N} satisfies µ_{0,n} = µ_0 ⊠ ν_{∞,n+1} for all n ∈ N. Assume now that µ_0 has a nontrivial decomposition µ_0 = ρ ⊠ ν with ν ∈ M_β indecomposable, so that S_ν(−γ) < 1. Pick n large enough that α_n > S_ν(−γ). As µ_{0,n} = µ_0 ⊠ ν_{∞,n+1}, ν is an indecomposable divisor of µ_{0,n} with S_ν(−γ) below the infimum α_n, and we have a contradiction. Thus, µ_0 has no indecomposable divisors so that, by Theorem 7.1, our theorem holds.

8. Background and Terminology for Measures Supported on the Unit Circle. Let M_T be the set of all Borel probability measures supported on the unit circle. Let M∗ be the set of all Borel probability measures on C with nonzero first moment. For a measure µ ∈ M∗ ∩ M_T we make the following definition:

ψ_µ(z) = ∫_T zt/(1 − zt) dµ(t).

Observe that ψ_µ(0) = 0 and ψ′_µ(0) = ∫_T t dµ(t), so that our assumption of nonzero first moment implies that ψ_µ^{−1} = χ_µ is defined and analytic in a neighborhood of 0. We again define S_µ(z) = (1 + z)χ_µ(z)/z. Observe that S_µ(0) = 1/ψ′_µ(0), so that S_µ is also defined and analytic in a neighborhood of 0. Further note that

|ψ′_µ(0)| = |∫_T ζ dµ(ζ)| ≤ ∫_T |ζ| dµ(ζ) = 1,

which implies that |S_µ(0)| ≥ 1 for µ ∈ M∗ ∩ M_T.

We now record the following lemmas and theorems for use in proving our main results.
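As an illustration of the bound |S_µ(0)| ≥ 1 just derived (and of its equality case, recorded in the first lemma below), the transforms of a Dirac mass δ_α with α ∈ T can be computed exactly as on the positive half-line:

```latex
\psi_{\delta_\alpha}(z) = \frac{\alpha z}{1-\alpha z}, \qquad
\psi'_{\delta_\alpha}(0) = \alpha, \qquad
S_{\delta_\alpha}(0) = \frac{1}{\psi'_{\delta_\alpha}(0)} = \bar{\alpha},
\qquad \bigl|S_{\delta_\alpha}(0)\bigr| = 1 .
```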
These were first proven in [19], [9] and [7].

Lemma 8.1. Let µ ∈ M∗ ∩ M_T satisfy |S_µ(0)| = 1. Then µ = δ_α for some α ∈ T.

Lemma 8.2. Let µ_i ∈ M∗ ∩ M_T be such that the S_{µ_i}(z) converge uniformly in some neighborhood of 0 to a function S(z). Then there exists µ ∈ M∗ ∩ M_T such that S = S_µ.

Theorem 8.3. Consider µ ∈ M∗ ∩ M_T and let µ_i ∈ M_T for i ∈ N. If the µ_i converge to µ in the weak∗ topology, then µ_i ∈ M∗ ∩ M_T eventually and the functions S_{µ_i} converge to S_µ uniformly in some neighborhood of zero. Conversely, if µ_i ∈ M∗ ∩ M_T and the S_{µ_i} converge to S_µ uniformly in some neighborhood of zero, then the measures µ_i converge to µ in the weak∗ topology.

Theorem 8.4. Let c_n ∈ T be a sequence of numbers and {µ_{n,j}}_{n∈N, j=1,...,k_n} an array of probability measures in M_T such that

lim_{n↑∞} min_{j=1,...,k_n} µ_{n,j}({z : |z − 1| < ǫ}) = 1

for every ǫ > 0. If the measures δ_{c_n} ⊠ µ_{n,1} ⊠ · · · ⊠ µ_{n,k_n} have a weak limit µ, then µ is ⊠-infinitely divisible.

9. Main Results for Measures Supported on the Unit Circle. The last case considered is that of measures µ ∈ M_T ∩ M∗, where M_T is the set of probability measures supported on the unit circle and M∗ the set of probability measures with non-zero first moment. Observe that our decompositions will be supported on the unit circle, so that a family of decompositions µ = µ_{1,k} ⊠ µ_{2,k} is trivially tight.

Theorem 9.1. Let µ ∈ M_T ∩ M∗ have the property that, for any nontrivial decomposition µ = ν ⊠ ω with ν, ω ∈ M_T ∩ M∗, neither ν nor ω is indecomposable. Then µ is ⊠-infinitely divisible.

Proof. Let Λ : M_T → C be defined by Λ(ν) = S_ν(0). Observe that |Λ(µ)| ≥ 1, with equality if and only if µ is a Dirac mass situated on the circle, in which case the theorem is trivial. We may then assume that |Λ(µ)| = 1 + α > 1. In a manner analogous to Theorems 4.2 and 7.1, for every α > ǫ > 0 there exists a nontrivial decomposition µ = ν ⊠ ω such that |Λ(ν)| < 1 + ǫ.
Through a similar maximality argument, one can show that for every n ∈ N there exists a decomposition µ = µ_{n,1} ⊠ · · · ⊠ µ_{n,n} such that |Λ(µ_{n,i})| = |Λ(µ)|^{1/n} for all i = 1, 2, . . . , n. We forgo the proof due to its extreme similarity to the first two cases.

Now, observe that Λ(µ_{n,i} ⊠ δ_c) = Λ(µ_{n,i})/c for c ∈ T. Thus, we may assume that µ = δ_{c_n} ⊠ µ_{n,1} ⊠ · · · ⊠ µ_{n,n} for all n ∈ N, where we additionally assume that Λ(µ_{n,i}) = |Λ(µ)|^{1/n}.

Note that {µ_{n,j}}_{n∈N, j=1,2,...,n} forms a tight array, since all of our measures are compactly supported. Further observe that, by Theorem 8.3, any cluster point ν of this array satisfies Λ(ν) = 1. By Lemma 8.1, this implies that ν = δ_1. Tightness implies that our array converges to δ_1 uniformly over n. By Theorem 8.4, this implies ⊠-infinite divisibility.

We close with our Khintchine decomposition for measures in M_T. Several steps of the proof are indistinguishable from Theorem 7.2, so they are not presented in full detail.

Theorem 9.2. Let µ ∈ M_T ∩ M∗ be a probability measure. There exists a decomposition µ = µ_0 ⊠ µ_1 ⊠ µ_2 ⊠ · · · such that µ_i ∈ M_T ∩ M∗ for all i = 0, 1, 2, . . ., µ_0 is infinitely divisible, and µ_i is indecomposable for i = 1, 2, . . . . Such a decomposition need not be unique.

Proof. In a manner entirely analogous to the previous cases, for all n ∈ N we construct a decomposition

µ = µ_{0,n} ⊠ µ_1 ⊠ · · · ⊠ µ_n

with the following properties:
1. The measure µ_i ∈ M_T is indecomposable for all i ∈ N.
2. Let α_{i−1} = sup |Λ(ν)|, where the supremum is taken over all indecomposable measures ν ∈ M_T satisfying µ_{0,i−1} = ν ⊠ ρ for some ρ ∈ M_T. We have that Λ(µ_i) > √α_{i−1} ≥ 1 (in particular, we may assume that Λ(µ_i) is real).

We again define ν_n, ν_{n,m} and ν_{∞,m} as in the proof of Theorem 7.2. That is,

ν_n = µ_1 ⊠ · · · ⊠ µ_n
ν_{n,m} = µ_{m+1} ⊠ · · · ⊠ µ_n
ν_{∞,m} = lim_{n↑∞} µ_{m+1} ⊠ · · · ⊠ µ_n

Observe that tightness is trivial in this case since M_T is compact.
We thenhave that Λ( µ ) = Λ( µ ,n ) ∗ Λ( ν n ) = Λ( µ ,n ) ∗ Λ( ν m ) ∗ Λ( ν n,m ). Since Λ µ ,n is decreasing and bounded as n ↑ ∞ , this is a convergent sequence. Thisimplies that ν n,m represents the tail of a convergent product so that it goesto 0 as m ↑ ∞ (this implies that α i → { ν n,m } m 10. Applications. We begin by extending the class of ⊞ -indecomposablemeasures. Theorem . Let µ be a measure with the property that the left andright endpoints of the support of µ are Dirac masses. Then µ is indecompos-able. Proof. Assume that µ = µ ⊞ µ and that the support of µ has respectiveleft and right endpoints a and b . Recall that Theorem 2.7 states that µ ( { a } ) = µ ( { a } ) + µ ( { a } ) − µ ( { b } ) = µ ( { b } ) + µ ( { b } ) − a i , b i ∈ supp ( µ i ), and that these points satisfy a = a + a and b = b + b . Now, if a = b then µ ( { a } ) + µ ( { b } ) ≤ 1. Thus,0 < µ ( { a } ) + µ ( { b } ) = µ ( { a } ) + µ ( { a } ) + µ ( { b } ) + µ ( { b } ) − ≤ µ ( a ) + µ ( { b } ) − µ ( a ) + µ ( { b } ) > a = b . Translating our measures, wemay assume that a = b = 0. Thus, a = a and b = b . This implies thatdiam( supp ( µ )) ≥ diam( supp ( µ )). By Theorem 3.2, it follows that µ = δ so that µ is indecomposable. HINTCHINE DECOMPOSITION. Now, given a measure µ , it was proven by Nica and Speicher in [17] thatwe may associate to µ a semigroup of measures { µ t } t ≥ so that µ = µ and µ s + t = µ s ⊞ µ t . In particular, µ n = µ ⊞ · · · ⊞ µ , the n-fold free convolution.When µ is infinitely divisible, this family may be extended to t ∈ R + .It was shown in [4] that for µ = ( δ + δ − ) / 2, we have that µ t is a sumof two atoms concentrated at ± t and an absolutely continuous measureconcentrated on [ − √ t − , √ t − Corollary . For µ = ( δ + δ − ) / , the elements of the family ofmeasures { µ t } t ∈ [1 , are indecomposable. Observe that this family of examples also dashes any hope of uniquenessfor our Khintchine decomposition. 
Indeed, for µ and {µ_t}_{t≥1} as in the previous example, we have that, for s = 2 + ǫ with ǫ ∈ (0, 1), µ_s = µ_t ⊞ µ_{s−t} for all t ∈ (1, 1 + ǫ). This is an uncountable family of distinct decompositions of µ_s into a sum of indecomposable elements.

Note that even the infinitely divisible divisor in the Khintchine decomposition cannot be determined uniquely. Indeed, denote by µ the semicircular distribution with mean 0 and variance 1, an infinitely divisible measure. It was shown in [11] that there is a nontrivial decomposition µ = ν ⊞ ρ where neither ν nor ρ is infinitely divisible. Taking the Khintchine decompositions of ν and ρ and combining the respective infinitely divisible divisors, we obtain a decomposition µ = µ_0 ⊞ µ_1 ⊞ µ_2 ⊞ · · · such that µ_0 is infinitely divisible, µ_i is indecomposable for i ≥ 1, and µ_1 is nontrivial. This implies that µ_0 ≠ µ.

Lastly, it has come to the author's attention that these results have been addressed independently in [14]. They rightly point out the following improvement on Theorems 4.2 and 4.4: namely, the class of measures that satisfy the hypotheses of Theorem 4.2 consists precisely of the Dirac measures. For a simple justification of this fact, note that we have shown that such measures are necessarily infinitely divisible. It was shown in [10] that infinitely divisible measures may be decomposed into the free convolution of a semicircular measure and a free Poisson measure. Free Poisson measures have indecomposable divisors, almost by definition. As was shown in [11], semicircular measures also have indecomposable divisors. These facts taken together imply the above statement, so that Theorem 4.4 may be improved into a purely prime decomposition, with no infinitely divisible component.

Acknowledgements. I would like to thank my advisor, Hari Bercovici, for his help, his patience and his numerous suggestions. I would also like to thank the referee for his thoughtful recommendations.

References.

[1] N. I. Akhiezer and I. M. Glazman.
Theory of linear operators in Hilbert space. Dover Publications Inc., New York, 1993. Translated from the Russian and with a preface by Merlynd Nestell. Reprint of the 1961 and 1963 translations, two volumes bound as one.
[2] S. T. Belinschi. A note on regularity for free convolutions. Ann. Inst. H. Poincaré Probab. Statist., 42(5):635–648, 2006.
[3] S. T. Belinschi. The Lebesgue decomposition of the free additive convolution of two probability distributions. Probab. Theory Related Fields, 142(1-2):125–150, 2008.
[4] S. T. Belinschi and H. Bercovici. Atoms and regularity for measures in a partially defined free convolution semigroup. Math. Z., 248(4):665–674, 2004.
[5] S. T. Belinschi and H. Bercovici. Partially defined semigroups relative to multiplicative free convolution. Int. Math. Res. Not., (2):65–101, 2005.
[6] S. T. Belinschi and H. Bercovici. A new approach to subordination results in free probability. J. Anal. Math., 101:357–365, 2007.
[7] S. T. Belinschi and H. Bercovici. Hinčin's theorem for multiplicative free convolution. Canad. Math. Bull., 51(1):26–31, 2008.
[8] H. Bercovici and V. Pata. A free analogue of Hinčin's characterization of infinite divisibility. Proc. Amer. Math. Soc., 128(4):1011–1015, 2000.
[9] H. Bercovici and D. Voiculescu. Lévy–Hinčin type theorems for multiplicative and additive free convolution. Pacific J. Math., 153(2):217–248, 1992.
[10] H. Bercovici and D. Voiculescu. Free convolution of measures with unbounded support. Indiana Univ. Math. J., 42(3):733–773, 1993.
[11] H. Bercovici and D. Voiculescu. Superconvergence to the central limit and failure of the Cramér theorem for free random variables. Probab. Theory Related Fields, 103(2):215–222, 1995.
[12] H. Bercovici and J.-C. Wang. On freely indecomposable measures. Indiana Univ. Math. J., 57(6):2601–2610, 2008.
[13] P. Biane. Processes with free increments. Math. Z., 227(1):143–174, 1998.
[14] G. Chistyakov and F. Götze. The arithmetic of distributions in free probability theory. ArXiv Mathematics e-prints, Aug. 2005.
[15] Y. V. Linnik. Decomposition of probability distributions. Edited by S. J. Taylor. Dover Publications Inc., New York, 1964.
[16] H. Maassen. Addition of freely independent random variables. J. Funct. Anal., 106(2):409–438, 1992.
[17] A. Nica and R. Speicher. On the multiplication of free N-tuples of noncommutative random variables. Amer. J. Math., 118(4):799–837, 1996.
[18] D. Voiculescu. Addition of certain noncommuting random variables. J. Funct. Anal., 66(3):323–346, 1986.
[19] D. Voiculescu. Multiplication of certain noncommuting random variables. J. Operator Theory, 18(2):223–235, 1987.
[20] D. Voiculescu. The analogues of entropy and of Fisher's information measure in free probability theory. I. Comm. Math. Phys., 155(1):71–92, 1993.
[21] D. V. Voiculescu, K. J. Dykema, and A. Nica. Free random variables, volume 1 of