The Saxl Conjecture for Fourth Powers via the Semigroup Property
Sammy Luo and Mark Sellke
Abstract
The tensor square conjecture states that for $n \ge 10$, there is an irreducible representation $V$ of the symmetric group $S_n$ such that $V \otimes V$ contains every irreducible representation of $S_n$. Our main result is that for large enough $n$, there exists an irreducible representation $V$ such that $V^{\otimes 4}$ contains every irreducible representation. We also show that tensor squares of certain irreducible representations contain a $(1 - o(1))$-fraction of irreducible representations with respect to two natural probability distributions. Our main tool is the semigroup property, which allows us to break partitions down into smaller ones.
1 Introduction and Main Results
Much of the representation theory of the symmetric group $S_n$ is well understood. Its irreducible representations have known explicit descriptions. One poorly understood facet, however, is the decomposition of tensor products of its representations into irreducibles. This paper focuses on a conjecture related to these decompositions, which was introduced in [12].

Conjecture 1.1 (Tensor Square Conjecture). For every $n$ except $2$, $4$, and $9$, there exists an irreducible representation $V$ of the symmetric group $S_n$ such that the tensor square $V \otimes V$ contains every irreducible representation of $S_n$ as a summand with positive multiplicity.

We first remark that for any faithful representation $V$ of a finite group $G$, i.e. a representation such that each $g \in G$ acts differently on $V$, there is some $n$ such that the tensor power $V^{\otimes n}$ contains every irreducible representation of $G$ ([9] Ex. 2.37). In the case of $G = S_n$, the standard representation of $S_n$ contains a faithful and irreducible representation $V$ of dimension $n - 1$, so a sufficiently large tensor power $V^{\otimes k}$ contains every irreducible representation. However, the tensor square conjecture is a much stronger statement than this because it requires such a small exponent.

As is well known (e.g. [9]), there is an explicit correspondence between the set of irreducible representations of $S_n$ over $\mathbb{C}$ and the set $P_n$ of partitions $\lambda$ of $n$, i.e. sequences $\lambda = (\lambda_1, \lambda_2, \ldots)$ of non-negative integers with $\lambda_1 \ge \lambda_2 \ge \cdots$ and $\sum_{i \ge 1} \lambda_i = n$. By associating $\lambda$ to the Young diagram with $\lambda_i$ boxes in row $i$, we may equivalently correspond each irreducible representation of $S_n$ with a Young diagram with $n$ boxes. This correspondence allows the use of combinatorial tools in analyzing many aspects of the representation theory of $S_n$. Because these notions are equivalent for our purposes, we will freely denote by (e.g.) $\lambda$ both the partition or Young diagram corresponding to $\lambda$ and the associated irreducible representation of $S_n$.

In view of this correspondence, we may express the tensor square conjecture in terms of partitions.

Conjecture 1.1 (Tensor Square Conjecture). For every $n$ except $2$, $4$, and $9$, there exists a partition $\lambda \vdash n$ such that the tensor square $\lambda \otimes \lambda$ contains every irreducible representation of $S_n$ as a summand with positive multiplicity.

This conjecture can also be restated in terms of positivity of Kronecker coefficients $g^{\nu}_{\lambda\mu}$, the multiplicities of $\nu$ in the tensor product $\lambda \otimes \mu$: it asserts the existence of $\lambda$ such that $g^{\nu}_{\lambda\lambda} > 0$ for all $\nu$. Unlike many other coefficients arising in the representation theory of $S_n$, the Kronecker coefficients lack a known combinatorial interpretation. Indeed, finding one has been said to be "one of the last major open problems in the ordinary representation theory of the symmetric group" ([13]). The computation of Kronecker coefficients has also been shown to be computationally hard ([5]).

In [12], Pak, Panova, and Vallejo studied the tensor square conjecture and suggested two families of partitions which might satisfy the conjecture, the staircase and caret partitions. The conjecture that the staircase partition suffices is also known as the Saxl conjecture (see [12, 10]).
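As a quick concreteness check, the staircase and caret shapes just mentioned are easy to generate and size-check programmatically; a minimal sketch (the function names are ours, and the explicit part lists are those given in Definitions 1 and 2 below):

```python
def staircase(m):
    # the staircase partition (m, m-1, ..., 2, 1), a partition of m(m+1)/2
    return list(range(m, 0, -1))

def caret(m):
    # the caret partition (3m-1, 3m-3, ..., m+3, m+1, m, m-1, m-1, ..., 1, 1),
    # a partition of 3m^2
    upper = list(range(3 * m - 1, m, -2))   # 3m-1 down to m+1 in steps of 2
    lower = [m] + [r for i in range(m - 1, 0, -1) for r in (i, i)]
    return upper + lower

for m in range(1, 10):
    assert sum(staircase(m)) == m * (m + 1) // 2
    assert sum(caret(m)) == 3 * m * m
    # rows of a partition must be weakly decreasing
    for p in (staircase(m), caret(m)):
        assert all(p[i] >= p[i + 1] for i in range(len(p) - 1))
```

For example, `caret(2)` returns `[5, 3, 2, 1, 1]`, a partition of $12 = 3 \cdot 2^2$.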
Definition 1. For $m \ge 1$, the staircase partition $\varrho_m \vdash \binom{m+1}{2}$ is $\varrho_m = (m, m-1, \ldots, 2, 1)$.

Definition 2. For $m \ge 1$, the caret partition $\gamma_m \vdash 3m^2$ is $\gamma_m = (3m-1, 3m-3, \ldots, m+3, m+1, m, m-1, m-1, m-2, m-2, \ldots, 2, 2, 1, 1)$.

Conjecture 1.2 (Saxl Conjecture). For $n = \binom{m+1}{2}$, all partitions of $n$ are contained in $\varrho_m^{\otimes 2}$.

Conjecture 1.3. For $n = 3m^2$, all partitions of $n$ are contained in $\gamma_m^{\otimes 2}$.

Previous work made progress towards the tensor square conjecture and towards the Saxl conjecture in particular. Pak, Panova, and Vallejo used a lemma on nonzero character values to show that the tensor square of the staircase contains all hooks, partitions with two rows, and some partitions with three rows or with two rows plus an extra column ([12]). Similar results were shown for the tensor square of the caret shape. They also showed that the staircase partition $\varrho_k$ contains at least $3^{\lceil k/2 \rceil - 1}$ distinct partitions in its tensor square. This is noteworthy since the total number of partitions of $n = k(k+1)/2$ is also roughly on the order of $e^{ck}$ for some $c$.

Ikenmeyer [10] further generalized some of this progress by showing a result based on comparability of partitions to the staircase in dominance order. This result, which we will use heavily in our work, will be described in greater detail in Section 2.2.

Although preliminary evidence suggests that there could in fact be many shapes $\lambda$ that satisfy the tensor square conjecture for each $n$, several simple criteria are known. For example, $\lambda^{\otimes 2}$ contains the alternating representation $1^n$ if and only if $\lambda$ is identical to its conjugate $\lambda'$ [12], which means that only symmetric $\lambda$ can satisfy the full tensor square conjecture.

We obtain several results toward the tensor square conjecture. The primary result is the following.

Theorem 1.4.
For sufficiently large $n$, there exists $\lambda \vdash n$ such that $\lambda^{\otimes 4}$ contains all partitions of $n$.

The partitions $\lambda$ we use are staircases $\varrho_m$ when $n$ is a triangular number and slight adjustments of them when $n$ is not.

We prove Theorem 1.4 by combining two results that are of independent interest: we define a simple metric on the set of partitions of $n$ and show that the set of partitions appearing in $\lambda^{\otimes 2}$ is dense in an appropriate sense. We also show that all partitions close to the trivial representation are contained in $\lambda^{\otimes 2}$. Together, these results almost immediately imply that $\lambda^{\otimes 4}$ contains all partitions of $n$ (when $n$ is sufficiently large).

In addition, we prove some probabilistic results towards the Saxl conjecture.

Theorem 1.5.
Let $m \ge 1$. Then $\varrho_m^{\otimes 2}$ contains almost all partitions of $\binom{m+1}{2}$ in the uniform measure, in which all distinct partitions are given the same probability.

Theorem 1.6.
Let $m \ge 1$. Then $\varrho_m^{\otimes 2}$ contains almost all partitions of $\binom{m+1}{2}$ with respect to the Plancherel measure.

Remark.
We use the phrases "almost all" and "with high probability" throughout this paper to mean that a sequence of probabilities tends to 1 as the parameter $n$ or $m$ tends to infinity. For example, Theorem 1.5 states that the probability $p(m)$ that a uniformly random partition $\lambda \vdash \binom{m+1}{2}$ is contained in $\varrho_m^{\otimes 2}$ converges to 1 as $m$ grows large. In particular, this language implies nothing about the rate of convergence. The Plancherel measure will be discussed in Section 3.1.

Recall that irreducible representations of the symmetric group $S_n$ correspond to Young diagrams, or equivalently to partitions $\lambda \vdash n$ of $n$. We will use Young diagrams and partitions interchangeably, and we denote by $P_n$ the set of partitions of $n$. We will use multiple orientations on Young diagrams: as is conventional, we call the coordinates depicted below at left English, those in the middle French, and those at right Russian. We use $1_n$ to denote the trivial representation, $1^n$ the alternating representation, and $\lambda'$ the conjugate of $\lambda$.

[Figure: a Young diagram drawn in English, French, and Russian coordinates.]

Lemma 2.1 ([9, Section 4.1]). Let $\lambda \vdash n$. Then $\lambda' = \lambda \otimes 1^n$.

For simplicity, we define an indicator function for constituency.

Definition 3.
Let $c(\lambda, \mu, \nu)$ be the statement that $g^{\nu}_{\lambda,\mu} > 0$.

Lemma 2.2.
The function $c$ is symmetric in its three arguments.

Proof. Let $\chi_V : S_n \to \mathbb{C}$ denote the character function for a representation $V$. We have $g^{\nu}_{\lambda,\mu} = \langle \chi_\lambda \chi_\mu, \chi_\nu \rangle = \langle \chi_\lambda, \chi_\mu \chi_\nu \rangle = g^{\lambda}_{\mu,\nu}$, and similarly for other permutations. Here we use the facts that all representations of $S_n$ have real characters and that $\chi_{V \otimes W} = \chi_V \chi_W$.

Lemma 2.3. If $c(\lambda, \mu, \nu)$ then $c(\lambda', \mu', \nu)$.

Proof. By Lemma 2.1 we have $\lambda' \otimes \mu' = (\lambda \otimes 1^n) \otimes (\mu \otimes 1^n) = (\lambda \otimes \mu) \otimes (1^n \otimes 1^n) = \lambda \otimes \mu$, so the result follows by the definition of the Kronecker coefficient.

There is one more special representation that we will use repeatedly.

Definition 4.
The standard representation of $S_n$ is $\tau_n = \mathrm{Ind}_{S_{n-1}}^{S_n}(1)$. Equivalently, $\tau_n$ is the $n$-dimensional representation in which $S_n$ acts by permuting $n$ basis vectors $v_1, \ldots, v_n$ in the usual way.

It is well known that $\tau_n$ is the sum of the irreducible representations corresponding to the partitions $(n)$ and $(n-1, 1)$; the part corresponding to $(n)$ is spanned by the vector $v_1 + \cdots + v_n$, which all elements of $S_n$ fix. It is easy to see from the above description that the remaining part corresponding to $(n-1, 1)$ is a faithful representation, as claimed above.

It is known that for an irreducible representation $\lambda$, the tensor product $\tau_n \otimes \lambda$ is the formal sum (with multiplicity) of all partitions which can be formed by moving a single square in the Young diagram for $\lambda$ (including $\lambda$ itself) (Pieri's Rule, [9] Ex. 4.44). This fact will be used extensively later in the paper.

Finally, we recall the definition of the Durfee square.

Definition 5.
The Durfee length $d(\lambda)$ of a partition $\lambda$ is the largest integer $r$ with $\lambda_r \ge r$.

Definition 6. The Durfee square of a partition $\lambda$ is the square of side length $d(\lambda)$ with the principal diagonal as its diagonal (beginning in the upper-left corner in English coordinates), considered as a subset of $\lambda$ in the plane.

2.2 The Semigroup Property and Dominance Ordering

We extensively use the semigroup property, which was proved in [6]. To state this, we first define the horizontal sum of partitions, in which we add row lengths, or equivalently take the disjoint union of the multisets of column lengths.
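The equivalence just stated (adding row lengths versus taking the disjoint union of column-length multisets) can be sketched in a few lines of Python; all function names here are ours:

```python
from collections import Counter

def conjugate(lam):
    # column lengths of the Young diagram of lam
    return [sum(1 for row in lam if row > j) for j in range(lam[0])] if lam else []

def h_sum(lam, mu):
    # horizontal sum: add row lengths entrywise, padding the shorter with zeros
    k = max(len(lam), len(mu))
    pad = lambda p: p + [0] * (k - len(p))
    return [a + b for a, b in zip(pad(lam), pad(mu)) if a + b > 0]

lam, mu = [4, 2, 1], [3, 3]
s = h_sum(lam, mu)   # [7, 5, 1]
# adding row lengths == disjoint union of the multisets of column lengths
assert Counter(conjugate(s)) == Counter(conjugate(lam)) + Counter(conjugate(mu))
```

The vertical sum defined below is then simply `conjugate(h_sum(conjugate(lam), conjugate(mu)))`.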
Definition 7. The horizontal sum $\lambda +_H \mu$ of partitions $\lambda = (\lambda_1, \lambda_2, \ldots) \vdash n_1$ and $\mu = (\mu_1, \mu_2, \ldots) \vdash n_2$ is the partition $(\lambda_1 + \mu_1, \lambda_2 + \mu_2, \ldots) \vdash (n_1 + n_2)$.

We also define horizontal scalar multiplication by positive integers on partitions, simply by repeated addition.

Definition 8. For $k \ge 0$, define the horizontal scalar multiple $k \cdot_H \lambda$ by $k \cdot_H \lambda = \lambda +_H \lambda +_H \cdots +_H \lambda$, where we add $k$ copies of $\lambda$.

We also define vertical addition and scalar multiplication analogously, by adding column lengths instead.

Definition 9. We define the vertical sum $\lambda_1 +_V \lambda_2$ of $\lambda_1, \lambda_2$ to be $(\lambda_1' +_H \lambda_2')'$.

Definition 10. Define the vertical scalar multiple $k \cdot_V \lambda$ by $k \cdot_V \lambda = \lambda +_V \lambda +_V \cdots +_V \lambda$, where we add $k$ copies of $\lambda$.

We now state the semigroup property.

Theorem 2.4 (Semigroup Property, [6, Theorem 3.1]). If $c(\lambda_1, \lambda_2, \lambda_3)$ and $c(\mu_1, \mu_2, \mu_3)$, then $c(\lambda_1 +_H \mu_1, \lambda_2 +_H \mu_2, \lambda_3 +_H \mu_3)$.

We can use induction on Theorem 2.4 to extend the semigroup property to arbitrary numbers of partitions. However, we will not need this for the bulk of our paper, so we refer the reader to Appendix C.2. We now give a modified version of Theorem 2.4 using vertical sums.
Corollary 2.5. If $c(\lambda_1, \lambda_2, \lambda_3)$ and $c(\mu_1, \mu_2, \mu_3)$, then $c(\lambda_1 +_V \mu_1, \lambda_2 +_V \mu_2, \lambda_3 +_H \mu_3)$.

Proof.
By Lemma 2.3 we have $c(\lambda_1', \lambda_2', \lambda_3)$ and $c(\mu_1', \mu_2', \mu_3)$. Then the semigroup property yields $c(\lambda_1' +_H \mu_1', \lambda_2' +_H \mu_2', \lambda_3 +_H \mu_3)$. Applying Lemma 2.3 again yields the result.

In other words, in using the semigroup property we are allowed to use an even number of vertical additions in each step. It is not true that vertically adding all 3 partitions preserves constituency. For example, we have $c((1), (1), (1))$ for the trivial representation of $S_1$, but vertically adding this triple to itself would give that the alternating representation of $S_2$ is contained in its own tensor square. This tensor square is just the trivial representation, which, of course, does not contain the alternating representation.

We will also extensively use the following result from [10]. First we recall the notion of dominance ordering, which gives a partial ordering on partitions of $n$.

Definition 11. Let $\lambda, \mu \vdash n$. We say that $\lambda$ dominates $\mu$ (or $\lambda \succeq \mu$) if for all $k \ge 1$, $\sum_{i=1}^{k} \lambda_i \ge \sum_{i=1}^{k} \mu_i$.

Theorem 2.6 ([10, Thm 2.1]). Let $m \ge 1$. Then $\varrho_m \otimes \varrho_m$ contains all partitions $\lambda$ which are dominance-comparable to $\varrho_m$.

Remark.
We have also generalized Theorem 2.6 to arbitrary partitions with distinct row lengths (see Theorem B.1), which seems potentially useful for extending the applicability of the semigroup property.

The main method used throughout the paper will be to try to express triples $(\varrho_k, \varrho_k, \lambda)$ as sums of smaller triples, each of which satisfies constituency because of Theorem 2.6, and then conclude $c(\varrho_k, \varrho_k, \lambda)$ via the semigroup property. This method is powerful because small staircases can be added together to form larger staircases. This will be explained throughout the following sections, which contain overviews of the proofs of our main results.

In this section, we address a probabilistic weakening of the tensor square conjecture:
Question.
For large $m$, what is the probability that a random partition of $\binom{m+1}{2}$ is a constituent of the tensor square $\varrho_m^{\otimes 2}$?

To answer this question, we must first put a probability distribution on the set $P_n$ of partitions of $n$. The most obvious choice is the uniform distribution.

Definition 12. The uniform measure $U_n$ assigns probability $1/|P_n|$ to each distinct partition of $n$.

There is another natural family of probability distributions on $P_n$ we investigate, which is rooted in representation theory.

Definition 13. The Plancherel measure $M_n$ assigns to each $\lambda \vdash n$ probability $\dim(\lambda)^2 / n!$. Here $\dim(\lambda)$ is the dimension of $\lambda$ as a representation of $S_n$.

The value $\dim(\lambda)$ has the following famous combinatorial interpretation: it is the number of standard Young tableaux of shape $\lambda$, i.e. bijective assignments of $(1, 2, \ldots, n)$ to the boxes of $\lambda$ such that the numbers increase along each row and column [14].

3.2 Limit Shapes of Partition Measures

The uniform and Plancherel measures $U_n$ and $M_n$ give rise to different smooth limit shapes for large $n$; in each case, we may speak of the "typical shape" of a large random partition. Given a Young diagram $\lambda$ of size $n$, we may shrink it by a linear-scale factor of $\sqrt{n}$ so that it has area 2, and rotate it into Russian (diagonal) coordinates. This results in the graph of a function $f_\lambda$, and by defining $f_\lambda(x) = |x|$ past the boundary of the Young diagram, we get a function defined on the whole real line which satisfies

$f_\lambda(x) \ge |x|$,  (1)

and

$\int_{-\infty}^{\infty} (f_\lambda(x) - |x|)\, dx = 2$,  (2)

and

$|f_\lambda(x) - f_\lambda(y)| \le |x - y|$.  (3)

Definition 14.
The set $CY$ of continuous Young diagrams is the set of functions $\mathbb{R} \to \mathbb{R}$ satisfying (1), (2), and (3).

We also define a family $SY$ of continuous Young diagrams in French coordinates, this time with area 1. Given $f_1, f_2 \in CY$, we change them into $F_1, F_2$ in French coordinates by simply rotating and then reflecting, and also dilating by a linear scale factor of $1/\sqrt{2}$. Taking the càdlàg version, we obtain a non-increasing function $(0, \infty) \to [0, \infty)$.

Definition 15. Let $SY$ be the set of functions $(0, \infty) \to [0, \infty)$ which are non-increasing, càdlàg, and have total integral 1.

Definition 16. For $f \in CY$, the straightening $F = S(f) \in SY$ is the right-continuous function $\mathbb{R}^+ \to \mathbb{R}^+$ given by rotation of the graph of $f$ followed by reflection over the x-axis, and then dilation by a factor of $1/\sqrt{2}$.

Definition 17. For $\lambda \vdash n$, define $F_\lambda = S(f_\lambda)$.

On $CY$ we will use the supremum norm $d(f, g) = \|f - g\|_\infty$. We define the metric on $SY$ to agree with $d$ under the canonical bijection $S$. This makes $d$ on $SY$ simply twice the Lévy metric ([3]).

Using the canonical map $P_n \to CY$ given by $\lambda \mapsto f_\lambda$, we may push forward the measures $M_n, U_n$ to finitely supported measures on $CY$. This allows us to state the limit shape results precisely.

Theorem 3.1 ([16]). The pushforwards of the measures $M_n$ converge in probability to the delta measure on the function

$f_M(x) = \begin{cases} \frac{2}{\pi}\left(x \arcsin\left(\frac{x}{2}\right) + \sqrt{4 - x^2}\right) & |x| \le 2, \\ |x| & |x| \ge 2. \end{cases}$

Theorem 3.2 ([15]). The pushforwards of the measures $U_n$ converge in probability to the delta measure on the function

$f_U(x) = \frac{1}{b} \log\left(e^{-xb} + e^{xb}\right)$, where $b = \frac{\pi}{\sqrt{24}}$.

So under both uniform and Plancherel measure, for large $n$, almost all Young diagrams look like a smooth limiting curve if we zoom out to a constant-size scale.

[Figure: the Plancherel limit shape and the uniform limit shape.]

The limit shape of $U_n$ may be more elegantly described in French coordinates, where it is the graph of $e^{-\pi x/\sqrt{6}} + e^{-\pi y/\sqrt{6}} = 1$.

We will extend the definitions of length and height of Young diagrams to the continuous Young diagrams. For ordinary Young diagrams $\lambda \vdash n$, the length of $\lambda$ is $\lambda_1 = -\sqrt{n}\,\sup(\{x \mid f_\lambda(x) = -x\})$ and the height is $\lambda_1' = \sqrt{n}\,\inf(\{x \mid f_\lambda(x) = x\})$, given by the lowest spots on the rotated axes that the function meets. Defining the normalized length and height by $\ell(\lambda) := \lambda_1/\sqrt{n}$, $h(\lambda) := \lambda_1'/\sqrt{n}$, we may extend the rescaled length and height to $CY$ and hence also to $SY$:

Definition 18.
For $f \in CY$, define its (possibly infinite) length and height as

$\ell(f) = -\sup(\{x \mid f(x) = -x\})$, $\quad h(f) = \inf(\{x \mid f(x) = x\})$.

Definition 19. For $F \in SY$, let $\ell(F) = \inf(\{x \mid F(x) = 0\})$.

Definition 20. For $F \in SY$, let $h(F) = \lim_{t \to 0^+} F(t)$.

Note that these definitions respect the bijection $S$ because of the area difference between the two types of continuous Young diagrams.

We define $C_f$ to be the region between $f \in CY$ and the graph of $y = |x|$, and $S_F$ the corresponding notion for $F \in SY$.

Definition 21. For $f \in CY$, let $C_f = \{(x, y) \mid f(x) \ge y \ge |x|\}$.

Definition 22.
For $F \in SY$, let $S_F = \{(x, y) \in (\mathbb{R}^+)^2 \mid y \le F(x)\}$.

We now extend the dominance ordering to continuous Young diagrams.

Definition 23. For $F \in SY$, define $H_F(t) = \int_0^t F(x)\, dx$.

Definition 24. For $F_1, F_2 \in SY$ we say that $F_1 \succeq F_2$ if they satisfy $H_{F_1}(t) \le H_{F_2}(t)$ for all $t$.

Definition 25. For $f_1, f_2 \in CY$ we say that $f_1 \succeq f_2$ if $S(f_1) \succeq S(f_2)$.

It is clear that these dominance orders are extensions of the ordinary dominance order. That is, if $\lambda, \mu \vdash n$ then $\lambda \succeq \mu$ if and only if $f_\lambda \succeq f_\mu$.

We also define $\varrho_\infty$.

Definition 26. Let $\varrho_\infty$ be the continuous Young diagram which is an isosceles right triangle, so that $\lim_{k \to \infty} f_{\varrho_k} = \varrho_\infty$.

We now establish a few simple lemmas on the geometry of continuous Young diagrams.
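The discrete dominance order of Definition 11 and its continuous analogue via $H_F$ (Definitions 23–25) can be compared mechanically on examples; a small sketch (helper names are ours), using unnormalized integer column profiles in place of the rescaled elements of $SY$:

```python
def conjugate(lam):
    # column lengths of the Young diagram of lam
    return [sum(1 for row in lam if row > j) for j in range(lam[0])] if lam else []

def dominates(lam, mu):
    # lam >= mu in dominance: every prefix sum of rows of lam is at least mu's
    k = max(len(lam), len(mu))
    a = b = 0
    for i in range(k):
        a += lam[i] if i < len(lam) else 0
        b += mu[i] if i < len(mu) else 0
        if a < b:
            return False
    return True

def H(lam, t):
    # discrete analogue of H_F(t): number of boxes in the first t columns
    return sum(conjugate(lam)[:t])

lam, mu = [4, 2, 1, 1], [3, 3, 1, 1]
assert dominates(lam, mu)
# dominance corresponds to the reversed comparison of column prefix sums,
# mirroring H_{F_1}(t) <= H_{F_2}(t) in Definition 24
T = max(lam[0], mu[0])
assert all(H(lam, t) <= H(mu, t) for t in range(T + 1))
```

The reversal of the inequality reflects the fact that conjugation is an anti-isomorphism of the dominance order.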
Definition 27. For $A$ a subset of $\mathbb{R}^2$, let $A^\varepsilon = \{x \in \mathbb{R}^2 \mid d(x, A) \le \varepsilon\}$ and $A_\varepsilon = ((A^C)^\varepsilon)^C$, where $A^C$ is the complement of $A$.

Definition 28. For $A \subseteq \mathbb{R}^2$, we denote by $m(A)$ the Lebesgue measure of $A$ (assuming it exists).

Lemma 3.3.
The functions $\ell, h : CY \to \mathbb{R}$ are lower semi-continuous with respect to the norm $d$.

Proof. We show the result for $h$, the other case being exactly the same. Suppose $h(f) > A$. We show that for $g$ sufficiently close to $f$, we have $h(g) > A$. Indeed, for some $x > A$ we have $f(x) - |x| = c > 0$. For $\|f - g\|_\infty < c$, we have $g(x) - |x| > 0$, implying $h(g) > A$ as desired.

Lemma 3.4.
For fixed $F \in SY$, if $S_F$ is a bounded subset of the plane then we have

$\lim_{\varepsilon \to 0^+} m((S_F)^\varepsilon) = \lim_{\varepsilon \to 0^+} m((S_F)_\varepsilon) = m(S_F) = 1.$

Proof. It is clear that we have $m(S_F) = 1$. We proceed with the remaining claims. We note that the closure $\overline{S_F}$ and the interior $\mathring{S}_F$ have equal measure. Indeed, $\overline{S_F} - \mathring{S}_F$ consists of a countable number of vertical lines (corresponding to the discontinuities of $F$) and the graph of $F$, each of which has measure 0 by Fubini's theorem. Now, as $\varepsilon$ approaches 0, the sets $(S_F)^\varepsilon$ intersect to $\overline{S_F}$. Since $S_F$ is bounded, these sets converge in measure to $\overline{S_F}$ by the dominated convergence theorem. Similarly, as $\varepsilon$ approaches 0 the sets $(S_F)_\varepsilon$ converge in measure to $\mathring{S}_F$. Since we have $m(S_F) = m(\overline{S_F}) = m(\mathring{S}_F)$, the lemma follows.

Lemma 3.5.
Given $f_1, f_2 \in CY$, assume that $f_1(x) = f_2(x) > |x|$ has at most one solution in $x$ and that $\ell(f_1) > \ell(f_2)$, $h(f_1) < h(f_2)$. Then $f_1(x) = f_2(x) > |x|$ has a unique solution and $f_1 \succeq f_2$. Moreover, there exists $\varepsilon > 0$ such that the following holds: for all continuous Young diagrams $g_1, g_2$ with $\|g_i - f_i\|_\infty \le \varepsilon$ for $i \in \{1, 2\}$ and also $|\ell(f_2) - \ell(g_2)| \le \varepsilon$ and $|h(f_1) - h(g_1)| \le \varepsilon$, we have $g_1 \succeq g_2$.

Proof of Lemma 3.5. The following pictures will probably be helpful aids for understanding the proof. They correspond to Russian and French coordinates, respectively, with the blue curve as $f_1$ and $F_1 = S(f_1)$ and the green line as $f_2$ and $F_2 = S(f_2)$.

[Figure: the continuous Young diagrams drawn in Russian coordinates and in French coordinates.]

First we establish that $f_1(x) = f_2(x) > |x|$ has a solution. This is simple: we have $f_1(-\ell(f_2)) > f_2(-\ell(f_2)) = \ell(f_2)$ and $f_2(h(f_1)) > f_1(h(f_1)) = h(f_1)$, so we may apply the intermediate value theorem to find a real number $c \in (-\ell(f_2), h(f_1))$ for which $f_1(c) = f_2(c)$. By the Lipschitz condition on $CY$, $c$ clearly satisfies $f_1(c) = f_2(c) > |c|$.

Consider the continuous function $f_3(x) = f_1(x) - f_2(x)$. We have that the equation $f_1(x) = f_2(x) > |x|$ has a unique solution $c$, and also that $\{x \mid f_1(x) = f_2(x) = |x|\} = (-\infty, -\ell(f_1)] \cup [h(f_2), \infty)$. Therefore, $f_3^{-1}(0) = (-\infty, -\ell(f_1)] \cup \{c\} \cup [h(f_2), \infty)$. We have again that $f_1(-\ell(f_2)) > f_2(-\ell(f_2))$ and $f_2(h(f_1)) > f_1(h(f_1))$, which yield $f_3(-\ell(f_2)) > 0$ and $f_3(h(f_1)) < 0$. By continuity, we therefore see that $f_3(x) > 0$ for $x \in (-\ell(f_1), c)$ and $f_3(x) < 0$ for $x \in (c, h(f_2))$.

Let us straighten this picture into French coordinates, with $F_1 = S(f_1)$, $F_2 = S(f_2)$, and $(c, f_1(c))$ becoming $(k, F_1(k))$. Then our last deduction translates into the fact that $F_2 - F_1$ is positive on $(0, k)$, 0 at $k$, negative on $(k, \ell(F_1))$, and 0 on $[\ell(F_1), \infty)$. Since both functions have integral 1, it is clear that the function $I(t) = H_{F_2}(t) - H_{F_1}(t)$ is 0 at 0, positive on $(0, \ell(F_1))$, and 0 on $[\ell(F_1), \infty)$. Therefore $f_1 \succeq f_2$.

We now establish $g_1 \succeq g_2$ for $g_i$ satisfying the conditions in the lemma statement for small $\varepsilon$. Define $G_1, G_2 \in SY$ as $S(g_1), S(g_2)$, and $J(t) = H_{G_2}(t) - H_{G_1}(t)$. We show $J(t) \ge 0$ for all $t$ when $\varepsilon$ is small enough. First, the condition $|\ell(f_2) - \ell(g_2)| \le \varepsilon$ combined with Lemma 3.3 ensures that for small enough $\varepsilon$ we have $\ell(G_1) > \ell(G_2)$, so $J(t)$ is positive near $\ell(G_1)$. Similarly, the condition $|h(f_1) - h(g_1)| \le \varepsilon$ combined with Lemma 3.3 ensures that the values of $G_2$ near 0 are larger than those of $G_1$, i.e. that $G_2(x) - G_1(x) > 0$ for $x$ small. So for $\varepsilon$ small, $J(t)$ is positive near 0.

Now we need only show that $J(t) \ge 0$ for $t \in [\alpha, \ell(F_1) - \alpha]$ for some $\alpha > 0$. Note that because $I(t) > 0$ on $[\alpha, \ell(F_1) - \alpha]$, by compactness there exists $\delta > 0$ such that $I(t) > \delta$ for $t \in [\alpha, \ell(F_1) - \alpha]$. Now, we claim that $(S_{F_2})_{\varepsilon\sqrt{2}} \subseteq S_{G_2}$. Indeed, if $(x, y) \in (S_{F_2})_{\varepsilon\sqrt{2}}$ then

$(x + \varepsilon, y + \varepsilon) \in S_{F_2} \iff y + \varepsilon \le F_2(x + \varepsilon) \implies y \le G_2(x)$,

as desired. Therefore, by Lemma 3.4, picking $\varepsilon$ small forces $S_{F_2}, S_{G_2}$ (and likewise $S_{F_1}, S_{G_1}$) to be arbitrarily close in measure. We have

$|J(t) - I(t)| \le \sum_{i=1}^{2} m\big((S_{F_i} \,\Delta\, S_{G_i}) \cap ([0, t] \times \mathbb{R})\big) \le \sum_{i=1}^{2} m(S_{F_i} \,\Delta\, S_{G_i})$,

so by picking $\varepsilon$ small we force $|J(t) - I(t)|$ to be uniformly small. Since $I(t)$ is bounded away from 0 on the interval $[\alpha, \ell(F_1) - \alpha]$, for small $\varepsilon$ we have $J(t) > 0$ there, and hence $J(t) \ge 0$ for all $t$, concluding the proof.

Lemma 3.6.
For $F \in SY$ let $H_F(t) = \int_0^t F(x)\, dx$. For every $\varepsilon > 0$ there exists $\delta$ such that the following holds: if $G \in SY$ satisfies $d(F, G) \le \delta$ then $\|H_F - H_G\|_\infty \le \varepsilon$.

Proof of Lemma 3.6. Fix $\varepsilon$ as in the statement. By exhaustion take $a, b$ such that $\int_a^b F(t)\, dt \ge 1 - \varepsilon/4$. Now pick $\delta$ such that if $d(F, G) \le \delta$ then $|F(x) - G(x)| \le \varepsilon/(4(b - a))$ for $x \in [a, b]$. Then $\int_a^b G(t)\, dt \ge 1 - \varepsilon/2$, and so $\int_0^a G(t)\, dt \le \varepsilon/2$ and $\int_b^\infty G(t)\, dt \le \varepsilon/2$, because $\int_0^\infty G(t)\, dt = 1$. Therefore, for $x \notin [a, b]$ the desired $|H_F(x) - H_G(x)| \le \varepsilon$ holds. For $x \in [a, b]$ we have $|H_F(x) - H_G(x)| \le |H_F(a) - H_G(a)| + \left|\int_a^x (F(t) - G(t))\, dt\right| \le \varepsilon/2 + \varepsilon/4 \le \varepsilon$.

For both $U_n$ and $M_n$, we now know the overall shape of a generic partition $\lambda \vdash n$. Our strategy is the following: we will decompose these partitions into pieces, handle each piece using Theorem 2.6 on dominance, and combine these pieces using the semigroup property. We will need a way to combine smaller staircases into larger ones. For this section, we will use the following identities.

Proposition 3.7.
We have the identities

$(\varrho_k +_H \varrho_{k-1}) +_V (\varrho_k +_H \varrho_k) = \varrho_{2k}$,
$(\varrho_{k+1} +_H \varrho_k) +_V (\varrho_k +_H \varrho_k) = \varrho_{2k+1}$.

[Figure: visual depictions of the decompositions of $\varrho_4$ and $\varrho_5$, i.e. both cases with $k = 2$.]

This means that we can break a large staircase of size $n$ into four staircases of size roughly $n/4$. Supposing for convenience that $m = 2k$ is even, by the above proposition, to show $c(\varrho_m, \varrho_m, \lambda)$ for some $\lambda$, it suffices to write $\lambda$ as $\lambda = \lambda_1 +_H \lambda_2 +_H \lambda_3 +_H \lambda_4$ where $c(\varrho_{k-1}, \varrho_{k-1}, \lambda_1)$ and $c(\varrho_k, \varrho_k, \lambda_i)$ for $i \in \{2, 3, 4\}$. For in such a case, the semigroup property gives

$c(\varrho_{k-1} +_H \varrho_k, \varrho_{k-1} +_H \varrho_k, \lambda_1 +_H \lambda_2)$, $\quad c(\varrho_k +_H \varrho_k, \varrho_k +_H \varrho_k, \lambda_3 +_H \lambda_4)$
$\implies c\big((\varrho_{k-1} +_H \varrho_k) +_V (\varrho_k +_H \varrho_k), (\varrho_{k-1} +_H \varrho_k) +_V (\varrho_k +_H \varrho_k), (\lambda_1 +_H \lambda_2) +_H (\lambda_3 +_H \lambda_4)\big)$
$\implies c(\varrho_{2k}, \varrho_{2k}, \lambda)$.

Perhaps surprisingly, this method suffices to show constituency in $\varrho_m^{\otimes 2}$ for almost all partitions in both the uniform and Plancherel cases. In each case, we can break the limit shape into four pieces of approximately equal area. For the uniform shape, we use vertical cuts as depicted at right. For the Plancherel limit shape, we evenly distribute the columns among the four smaller pieces, so that each smaller piece has approximately the same shape.

In each case, the dominance comparability of the smaller pieces with a staircase partition follows by geometric reasoning on the limit shapes. What is a bit more subtle is ensuring that these decompositions can be done precisely: to apply the semigroup property, each of our four pieces must have size exactly equal to that of a certain staircase partition.
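The identities of Proposition 3.7 are easy to verify mechanically, using the fact from Section 2.2 that the horizontal sum adds row lengths while the vertical sum takes the disjoint union of row multisets; a short self-contained check (function names are ours):

```python
def staircase(k):
    # the staircase partition (k, k-1, ..., 1); empty for k = 0
    return list(range(k, 0, -1))

def h_sum(lam, mu):
    # horizontal sum: entrywise row sums, padding the shorter with zeros
    n = max(len(lam), len(mu))
    pad = lambda p: p + [0] * (n - len(p))
    return [a + b for a, b in zip(pad(lam), pad(mu)) if a + b > 0]

def v_sum(lam, mu):
    # vertical sum: disjoint union of the row multisets, sorted decreasingly
    return sorted(lam + mu, reverse=True)

for k in range(1, 20):
    inner = h_sum(staircase(k), staircase(k))   # (2k, 2k-2, ..., 2)
    assert v_sum(h_sum(staircase(k), staircase(k - 1)), inner) == staircase(2 * k)
    assert v_sum(h_sum(staircase(k + 1), staircase(k)), inner) == staircase(2 * k + 1)
```

For instance, with $k = 2$: the odd rows $(3, 1)$ and the even rows $(4, 2)$ interleave to give $\varrho_4 = (4, 3, 2, 1)$.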
For this point, our strategy is to divide almost all of the large partition into our four pieces but reserve some very short columns for the end of the process. We distribute these short columns to the pieces such that each piece has exactly the correct size, without affecting their overall shapes significantly. Justifying all of these details requires a bit of care, and we do this in the following section.

Theorem 1.5.
Let m ≥ . Then (cid:37) ⊗ m contains almost all partitions of (cid:0) m +12 (cid:1) in the uniformmeasure, in which all distinct partitions are given the same probability.Proof. Our plan is to use the semigroup property to add together smaller cases that canbe proven by dominance ordering.We use Proposition 5.4 in the k = 2 case, to give the following: Suppose we havepartitions λ , λ , λ , λ which are dominance comparable to (cid:37) (cid:98) m +12 (cid:99) , (cid:37) (cid:98) m (cid:99) , (cid:37) (cid:98) m (cid:99) , (cid:37) (cid:98) m − (cid:99) , re-spectively. Repeated application of the semigroup property yields c (cid:16) (cid:37) (cid:98) m +12 (cid:99) + H (cid:37) (cid:98) m (cid:99) , (cid:37) (cid:98) m +12 (cid:99) + H (cid:37) (cid:98) m (cid:99) , λ + H λ (cid:17) , c (cid:16) (cid:37) (cid:98) m (cid:99) + H (cid:37) (cid:98) m − (cid:99) , (cid:37) (cid:98) m (cid:99) + H (cid:37) (cid:98) m − (cid:99) , λ + H λ (cid:17) and hence c ( (cid:37) m , (cid:37) m , λ + H λ + H λ + H λ ).It now suffices to take a U n -typical partition λ and show that it can be written as λ = λ + H λ + H λ + H λ , where λ , λ , λ , λ are dominance-comparable to the correspondingly sized staircases.We will do so by partitioning the columns into four sets corresponding to λ , · · · , λ such that each is dominance-comparable to (cid:37) (cid:98) m +12 (cid:99) or (cid:37) (cid:98) m − (cid:99) . We give an algorithm thatworks for almost all partitions. On a macroscopic scale, we essentially will split the limit15hape into 4 pieces of equal area by cutting it up vertically, as depicted below. We willthen use some columns of length 1 to ensure that each λ i has exactly the correct total size,without interfering significantly with their overall shapes. We will denote by µ , . . . , µ the 4 straightened continuous Young diagrams formed from the limit shape in this way. 
A figure below depicts the decomposition.

[Figure: Decomposition of the Uniform Limit Shape by Column Size]

Here the purple diagonal line represents the rescaled staircase partition $\varrho_{\lfloor m/2 \rfloor}$, shifted right so that its corner is at the bottom of the green line (the line separating $\mu^2$ from $\mu^3$). We now briefly explain why the hypotheses of Lemma 3.5 hold for each piece against a $\frac14$-area $\varrho_\infty$. The exact equation for the curve of the limit shape boundary is
$$e^{-\pi x/\sqrt 6} + e^{-\pi y/\sqrt 6} = 1.$$
A convexity argument shows that on this curve, $x + y$ is minimized at $x = y = \frac{\sqrt 6}{\pi}\log 2$. Therefore, to ensure that the purple line does not intersect the curve, we need to verify that the $x$-coordinate of the green line is less than $\frac{2\sqrt 6}{\pi}\log 2 - \sqrt{1/2} \approx 0.37$. Indeed, the green $x$-coordinate can easily be checked to be approximately $0.33$, which is smaller, as desired. Therefore, the boundaries of $\varrho_\infty$ and $\mu^3$ intersect only once, including at the endpoints, so Lemma 3.5 ensures dominance comparability. It is clear that dominance for $\mu^3$ implies the same for $\mu^1, \mu^2$. For $\mu^4$, it is easy to check where the red line meets the blue boundary curve, and that the height there is below $\sqrt{1/2}$, so $\varrho_\infty$ has larger height than $\mu^4$. So it remains to check that the boundary curves of $\mu^4, \varrho_\infty$ intersect at most once. This is true because the boundary of $\mu^4$ consists only of points $(x, y)$ with $x > y$, and it is easily seen that the derivative of the boundary curve lies in the interval $(-1, 0)$ on this region, so it cannot intersect a line of slope $-1$ twice.

We will now decompose $\lambda$ by greedily partitioning the columns into four sets corresponding to $\lambda^1, \ldots, \lambda^4$ as follows (writing $m = 2k$ for simplicity; the odd case is analogous). We fix some small $\varepsilon > 0$ and take $n$ to be large. Order the column lengths $\lambda^t_1 \ge \lambda^t_2 \ge \cdots$, let $n_1$ be the largest integer such that $\sum_{i=1}^{n_1} \lambda^t_i \le \binom{k+1}{2}$, and assign to $\lambda^1$ the columns $\lambda^t_1, \ldots, \lambda^t_{n_1}$. By Lemma 3.6, $n_1$ is, with high probability, very close to the first vertical line in the diagram. Because it is also true w.h.p. that $\lambda$ is close to the limit shape in the metric on $\mathcal{SY}$, it is true w.h.p. that no 2 adjacent columns $\lambda^t_m, \lambda^t_{m+1}$ ($m > n_1$) differ by more than $\varepsilon\sqrt n$. This is simply because to the right of the orange line, the limit shape graph is uniformly continuous. Therefore, w.h.p. we may add one more column $\lambda^t_m$ to $\lambda^1$ so that $0 \le \binom{k+1}{2} - |\lambda^1| \le \varepsilon\sqrt n$, just by taking the largest column size $\lambda^t_m$ preserving the property $0 \le \binom{k+1}{2} - |\lambda^1|$.

We similarly form $\lambda^2, \lambda^3$ by greedily adding columns, and then (if necessary) adding one more column to give total size in the interval $\left[\binom{k+1}{2} - i\varepsilon\sqrt n,\ \binom{k+1}{2}\right]$. (We have a factor of $i \in \{2, 3\}$ because the fact that we used out-of-place columns for previous $\lambda^j$ could increase the gap-size between available columns for subsequent $\lambda^i$.) We then greedily fill $\lambda^4$ with the remaining columns which are not of size 1. Note that $n_2$ and $n_3$ correspond almost exactly to the vertical boundaries between the $\mu^i$, again by Lemma 3.6.

We now use the pieces of size 1 to smooth the exact sizes of the $\lambda^i$. This is possible because of the following proposition.

Proposition 4.1 ([7, Thm 2.1]). Let $X_1(\lambda)$ be the number of columns of size 1 contained in a partition $\lambda$. For each $v \ge 0$, if $\lambda$ is a uniformly random partition of $n$,
$$\lim_{n\to\infty} \mathbb{P}\left(\frac{\pi}{\sqrt{6n}}\, X_1 \le v\right) = 1 - e^{-v}.$$

Because of this proposition, by picking $\varepsilon$ small, we have w.h.p. at least $6\varepsilon\sqrt n$ columns of size 1, enough to distribute to $\lambda^1, \lambda^2, \lambda^3$ in order to give them the correct sizes. We checked earlier that the condition for Lemma 3.5 was satisfied for each $\mu^i$, so by also picking $\varepsilon$ small enough so as to use Lemma 3.5, we also have that dominance is preserved when we do this, so $\lambda^1, \lambda^2, \lambda^3$ are taken care of.

We now just add all remaining parts of size 1 to $\lambda^4$. Each other $\lambda^i$ is the correct total size, so having used all the columns of $\lambda$, we see that $\lambda^4$ is also the correct size. Because $\mu^4$ dominates the infinite staircase $\varrho_\infty$, by Lemma 3.5 again, with $\varepsilon$ sufficiently small the dominance will be preserved. (Note that because $\lambda^4$ dominates the corresponding staircase instead of being dominated, as was the case for the other $\lambda^i$, we don't need to worry about adding too many singleton columns to $\lambda^4$.)

Theorem 1.6.
Let $m$ be sufficiently large. Then $\varrho_m \otimes \varrho_m$ contains almost all partitions of $\binom{m+1}{2}$ with respect to the Plancherel measure.

Proof of Theorem 1.6. For this proof, we assume the following technical result, which will be proven in a later section.
Definition 29.
Let $\beta$ be a positive real number. A partition $\lambda$ of $n$ is called $\beta$-sum-flexible if it satisfies the following property: when its column lengths are sorted in increasing order $a_1 \le \cdots \le a_m$, we have $a_1 = 1$, and for all $1 \le k \le m-1$ we have
$$a_{k+1} \le \left\lceil \beta \sum_{j=1}^{k} a_j \right\rceil.$$

Theorem 4.2.
For any $\beta > 0$, let $P(n, \beta)$ denote the probability that a (Plancherel) random partition of $n$ is $\beta$-sum-flexible. Then we have $\lim_{n\to\infty} P(n, \beta) = 1$ for all $\beta$.

To show Theorem 1.6, we use the same summation identities for staircase partitions as above. Again, for simplicity, we discuss only the $m = 2k$ case, the other case being nearly identical. Similarly to the uniform case, we will take an $M_n$-typical partition $\lambda$ and write it as
$$\lambda = \lambda^1 +_H \lambda^2 +_H \lambda^3 +_H \lambda^4,$$
where $\lambda^1, \lambda^2, \lambda^3$ are dominance-comparable to $\varrho_k$ and $\lambda^4$ is comparable to $\varrho_{k-1}$. Unlike in the above, we can make each piece roughly the same shape. We will do so by grouping most of the column lengths $\lambda'_1 \ge \lambda'_2 \ge \cdots$ into sets of four consecutive sizes, and dividing them cyclically among the $\lambda^i$. We will use Theorem 4.2 to distribute the smallest columns, in such a way that $|\lambda^1| = |\lambda^2| = |\lambda^3| = \binom{k+1}{2}$ and $|\lambda^4| = \binom{k}{2}$. Similarly to the uniform case, define $\mu$ to be the Plancherel limit shape, but compressed horizontally by a factor of 4 (since each $\lambda^i$ receives every fourth column) and rescaled to have area $1/2$.

We first argue that $\mu, \varrho_\infty$ satisfy the hypotheses of Lemma 3.5. We have $\ell(\varrho_\infty) = h(\varrho_\infty) = 1$, $\ell(\mu) = \sqrt 2/2$, $h(\mu) = 2\sqrt 2$, so it remains to check that their boundaries intersect only once off of the axes. In Russian coordinates, extend the boundary of $\mu$ along the axes to a convex function $f$ defined on all of $\mathbb{R}$ by setting $f(x) = |x|$ outside the boundary curve of $\mu$. Since $f$ is convex, it can only meet any line twice. The boundary of $\varrho_\infty$ is a line, and one of its intersection points with $f$ lies on the axes, so off of the axes the boundaries of $\mu, \varrho_\infty$ intersect at most once, and they meet the conditions of Lemma 3.5.

Fix a minuscule $\varepsilon > 0$. We order the column lengths $\lambda^t_1 \ge \lambda^t_2 \ge \cdots$. Take all columns with size at least $\varepsilon n^{1/2}$ and add column $\lambda^t_i$ to the partition $\lambda^k$ with $i \equiv k \pmod 4$. Because each partition's total size corresponds essentially to a Riemann integral of the limit shape, each partial partition now has size $n(\frac14 - \delta + o(1))$, where $\delta$ is the area of the limit shape covered by columns of size less than $\varepsilon$ after rescaling [1]. This means that each partial partition is smaller than the size of the corresponding $\lambda^i$, which all have size $n(\frac14 - o(1))$. By Theorem 4.2, we may distribute the remaining columns to give each partition the correct overall size. We claim that these remaining small parts occupy an infinitesimal area in the rescaled diagram: they fit into a square of arbitrarily small side length, for $\varepsilon$ sufficiently small. To verify this, we need that the longest row of $\lambda$ is of size $(2 + o(1))\sqrt n$ with high probability, i.e. the same length as predicted by the length of the limit shape. This is indeed true; see [1]. This implies that the resulting shapes are very close to $\mu$ in the metric on $\mathcal{SY}$, meaning that, assuming we picked a sufficiently small $\varepsilon$ value originally, we may apply Lemma 3.5 to conclude that we have dominance.

In summary, we have decomposed a generic large partition as a horizontal sum of 4 almost-equally-shaped partitions, each comparable to their size of staircase via the dominance ordering. Hence, this generic large partition is a constituent of $\varrho_m^{\otimes 2}$ by the semigroup property, and so we have established Theorem 1.6.

In this section, we consider a different weakening of the tensor square conjecture:
Question.
What is the smallest integer $f(m)$ such that $\varrho_m^{\otimes f(m)}$ contains every partition of $n = \frac{m(m+1)}{2}$?

The conjecture is that $f(m) = 2$; we seek here to find any good upper bound on $f(m)$. It may seem natural to attempt to use the semigroup property directly for this question; we can in fact define higher-length Kronecker coefficients via constituency in longer tensor products, and the semigroup property still holds for these longer sequences in the same way (see Appendix C.2). However, we take a different approach, by instead allowing for additional factors of the standard representation $\tau_n$, and then replacing these factors with factors of $\varrho_m$ to answer the above question.

5.1 Blockwise Distances

The square-moving interpretation of tensoring a representation with $\tau_n$ motivates the following definition:

Definition 30.
The blockwise distance $\Delta(\lambda, \mu)$ between two partitions $\lambda, \mu$ of $n$ is the smallest number of single blocks that need to be moved to form $\lambda$ from $\mu$. Equivalently, the blockwise distance is the smallest $\Delta$ such that $\lambda \otimes \tau_n^{\otimes \Delta}$ contains $\mu$.

To motivate our approach in this section, consider the graph of Young diagrams of size $n$ in which $\lambda$ and $\mu$ are adjacent if and only if they differ by the movement of exactly 1 block. Then $\lambda \otimes \tau_n$ is the formal sum of $\lambda$ (possibly many times) and all of its neighbors (once each). If we are only concerned (as we are) with which representations appear at all, we see that tensoring with $\tau_n$ corresponds simply to "spreading out" along this graph. This leads us to another weakening of the tensor square conjecture:

Definition 31.
Define $H(m)$ to be the minimum non-negative integer $\ell$ such that $\varrho_m^{\otimes 2} \otimes \tau_n^{\otimes \ell}$ contains all partitions of $n$ as constituents, where $n = \frac{m(m+1)}{2}$.

Question.
How quickly does $H(m)$ grow with $m$?

If $H(m)$ is small, then the partitions contained in $\varrho_m^{\otimes 2}$ are "dense" in the graph described above. In this section we find an upper bound for the number of standard representation factors required, and then translate this into a bound on $f(m)$. We also define a slightly more general distance which allows differently sized partitions to be compared.

Definition 32.
The generalized blockwise distance $\Delta(\lambda, \mu)$ between two partitions $\lambda \vdash n_1$, $\mu \vdash n_2$ is the smallest number of single blocks that need to be added, removed, and/or moved to form $\lambda$ from $\mu$.

Remark.
When $\lambda, \mu$ are partitions of the same $n$, the generalized distance $\Delta(\lambda, \mu)$ is the same as the regular blockwise distance.

First we verify that horizontally adding partitions interacts simply with blockwise distances.

Proposition 5.1. If $\lambda^1, \mu^1, \lambda^2, \mu^2$ are integer partitions, then
$$\Delta(\lambda^1, \lambda^2) + \Delta(\mu^1, \mu^2) \ge \Delta(\lambda^1 +_H \mu^1,\ \lambda^2 +_H \mu^2).$$

Proof. Consider a sequence of $\Delta(\lambda^1, \lambda^2)$ operations on the columns of $\lambda^1$ that can be done to result in $\lambda^2$, and consider the analogous sequence for transforming $\mu^1$ to $\mu^2$. Since horizontal addition simply combines the multisets of column sizes, it suffices to show that the same two sequences of moves can be performed separately on the sets of columns of $\lambda^1 +_H \mu^1$ corresponding to $\lambda^1$ and $\mu^1$, respectively, to achieve the same transformation as before for each of the two parts. This follows because the basic operations, when performed on the columns of $\lambda^1 +_H \mu^1$ corresponding to $\lambda^1$, do not affect those corresponding to $\mu^1$, and vice versa; when the relative sizes of two columns would change, we can simply reorder the columns without affecting the result.

Note that $\Delta(\lambda, \mu) \le n - 1$ for any $\lambda, \mu \vdash n$. As a demonstration of what using standard representations can give us, we give the following pair of weak but instructive results:

Proposition 5.2.
Let $n = \frac{m(m+1)}{2}$. If $\lambda \vdash n$ is such that $\lambda$ is contained in $\tau_n^{\otimes m}$, then $\lambda$ is contained in $\varrho_m^{\otimes 2}$.

Proof. The irreducible components of $\tau_n^{\otimes m}$ are precisely those that correspond to partitions $\lambda$ of $n$ whose blockwise distance from $(n)$ is at most $m$. If this blockwise distance is at most $m - 1$, then $\lambda$, minus its top row, is contained within $\varrho_{m-1}$, and so $\lambda$ is comparable to $\varrho_m$ in dominance order. Thus we get $c(\varrho_m, \varrho_m, \lambda)$. If the distance is $m$, the only way for $\lambda$ not to be comparable to $\varrho_m$ is if $\lambda = (n - m, 1, 1, \cdots, 1)$; but then we get $c(\varrho_m, \varrho_m, \lambda)$ anyway by the semigroup property, because $\lambda$ is a hook ([10]).

Proposition 5.3.
Let $n = \frac{m(m+1)}{2}$. All $\lambda \vdash n$ are contained in $\varrho_m^{\otimes 2\lceil (m+1)/2\rceil}$.

Proof. All $\lambda \vdash n$ are contained in $\tau_n^{\otimes (n-1)}$ and thus in $\tau_n^{\otimes m\lceil (m+1)/2\rceil}$. So for each $\lambda$, there exist irreducible representations $\mu^1, \cdots, \mu^{\lceil (m+1)/2\rceil}$, each contained in $\tau_n^{\otimes m}$, whose tensor product contains $\lambda$. Thus $\lambda$ is in the tensor product of $\lceil (m+1)/2\rceil$ irreducible representations each contained in $\varrho_m^{\otimes 2}$, and thus $\lambda$ is contained in $\varrho_m^{\otimes 2\lceil (m+1)/2\rceil}$.

Remark.
This already shows that $f(m) = O(\sqrt n)$. As we will soon see, we can do much better.

As seen previously, staircases can be added to form larger staircases. The most straightforward way follows, and is a generalization of the identities used in the previous section (which we recover when $k = 2$). We use the symbols $\Sigma_H, \Sigma_V$ for iterated applications of $+_H, +_V$.

Proposition 5.4.
For any $n, k$ we have:
$$\varrho_n = \overset{k-1}{\underset{j=0}{\Sigma_V}} \left( \overset{k-1}{\underset{i=0}{\Sigma_H}}\ \varrho_{\lfloor (n+i-j)/k \rfloor} \right).$$

Proof. Recall Hermite's identity, which states that for any $x \in \mathbb{R}$ we have
$$\sum_{i=0}^{k-1} \left\lfloor x + \frac{i}{k} \right\rfloor = \lfloor kx \rfloor.$$
Thus, the $j$th term in the vertical sum is a partition whose largest row has size
$$\sum_{i=0}^{k-1} \left\lfloor \frac{n+i-j}{k} \right\rfloor = n - j.$$
Since the $k$ summands are all staircases with lengths differing by at most one, the other rows in the sum have size $n-j-k, n-j-2k, \cdots$. Thus the vertical sum creates a partition with row sizes $n, n-1, \cdots, 1$, which is what we want.

For example, for $k = 2$ we recover the identities we used in the previous section:
$$(\varrho_k +_H \varrho_{k-1}) +_V (\varrho_k +_H \varrho_k) = \varrho_{2k}, \qquad (\varrho_{k+1} +_H \varrho_k) +_V (\varrho_k +_H \varrho_k) = \varrho_{2k+1}.$$

As was visually demonstrated before, these identities may be understood as beginning with a square grid of staircases $\varrho_{\lfloor (n+i-j)/k\rfloor}$ and horizontally adding the rows, then vertically adding the resulting sums. Of course, we could also first add vertically by column, and then horizontally by row.

This square grid of staircase partitions can also be added in an entirely different order to produce the same result. Instead of adding all the staircases in each row together, we can first add an $i$-by-$i$ square, and then add a length-$i$ column and a length-$(i+1)$ row to get an $(i+1)$-by-$(i+1)$ square. However, some slightly messy rearrangement of the pieces is needed, and the following proposition is the result.

Proposition 5.5.
For any $n, k$ we have:

1. $\displaystyle \left(\varrho_n +_H \left((k-1)\times_V \varrho_{\lfloor n/(k-1)\rfloor}\right)\right) +_V \left( \overset{k-1}{\underset{i=0}{\Sigma_H}}\ \varrho_{\left\lfloor (n + \lfloor n/(k-1)\rfloor - k + 1 + i)/k \right\rfloor} \right) = \varrho_{n + \lfloor n/(k-1)\rfloor}$,

2. $\displaystyle \left(\varrho_n +_H \left((k-1)\times_V \varrho_{\lfloor n/(k-1)\rfloor}\right)\right) +_V \left( \overset{k-1}{\underset{i=0}{\Sigma_H}}\ \varrho_{\left\lfloor (n + \lfloor n/(k-1)\rfloor + 1 + i)/k \right\rfloor} \right) = \varrho_{n + \lfloor n/(k-1)\rfloor + 1}$,

where $(k-1)\times_V \varrho_q$ denotes the vertical sum of $k-1$ copies of $\varrho_q$.

[Figure: the case $k = 4$, $n = 7$, where the pieces combine to make a $\varrho_9$.]

Using the above, we can construct a decomposition of any large staircase into a staircase of about $(i/k)^2$ times its area and $k^2 - i^2$ staircases of about $1/k^2$ times its area. This may be done by repeatedly applying Proposition 5.5. Here is our main result, stated again.
Theorem 1.4.
For sufficiently large $n$, there exists $\lambda \vdash n$ such that $\lambda^{\otimes 4}$ contains all partitions of $n$.

For this overview, we will focus on the case of a triangular number $n = \frac{m(m+1)}{2}$, using for $\lambda$ the staircase $\varrho_m$. The general case will augment most steps simply by adding the appropriate number of extra blocks in the form of a trivial representation. The detailed proof can be found in Section 6. The primary ingredient is the following theorem.

Theorem 5.6.
For all $m$, $\varrho_m^{\otimes 2} \otimes \tau_n^{\otimes O(m)}$ contains all partitions of $n = \binom{m+1}{2}$.

Equivalently, every partition of $n$ is contained in $\varrho_m^{\otimes 2} \otimes \tau_n^{\otimes \lfloor c\sqrt n\rfloor}$ for an absolute constant $c$. We defined earlier $H(m)$ to be the minimum number of tensor factors of $\tau_n$ needed so that $\varrho_m^{\otimes 2} \otimes \tau_n^{\otimes H(m)}$ contains every partition, so Theorem 5.6 states that $H(m) = O(m)$. For convenience, we define a maximal function for the function $H$.

Definition 33.
For $m \ge 1$, $M(m) = \max\{H(1), H(2), \ldots, H(m)\}$.

It is clear that $M(m) = O(m) \iff H(m) = O(m)$, and so we will show the former.

Proof Outline for Theorem 5.6.
We use Proposition 5.5 to decompose $\varrho_m$ as
$$\varrho_m = \left(\varrho_x +_H (3 \times_V \varrho_y)\right) +_V \left( \overset{3}{\underset{i=0}{\Sigma_H}}\ \varrho_{z_i} \right),$$
where up to $O(1)$ error, we have $x \approx \frac{3m}{4}$, $y \approx z_i \approx \frac{m}{4}$. Suppose that we have eight partitions $\lambda_0, \ldots, \lambda_7$, each contained in the tensor square of one of the above staircases. Then repeatedly combining the partitions and using the semigroup property gives us
$$c\left(\varrho_m,\ \varrho_m,\ \overset{7}{\underset{i=0}{\Sigma_H}}\ \lambda_i\right).$$
So, we only need to show how to decompose an arbitrary partition $\mu$ as a horizontal sum of many smaller partitions comparable to staircases, using standard representations to move a few blocks around as necessary. We will plan for all but two of these smaller partitions to be dominance-comparable to the staircase. These will then be the only two smaller partitions that need additional tweaking with standard representations.

The idea is the following: we cut off a large chunk $A$ of $\mu$, of about $\frac{7}{16}$ of its size. The remainder $\lambda$ (consisting of the larger end of the columns) is modified recursively to be contained in the tensor square of the staircase of size about $\frac{9}{16}n$; $A$ itself is broken into 7 pieces which can be modified to be suitable $\lambda_i$. This breaking is done contiguously, so that columns in a piece are either all at least as tall or all at least as short as columns in a given other piece. If all columns in one of these pieces are either very tall or very short, we can conclude dominance comparability with the staircase. By further breaking down the smaller pieces, we can ensure using this criterion that most of the small parts in our decomposition are dominance comparable to the corresponding staircases. The subadditivity of blockwise distance allows us to derive a recursion for $M(m)$ from this decomposition, and we do not have recursive terms corresponding to most of these smaller pieces. The end result is the following recurrence:
$$M(m) \le M\left(\frac{3m}{4} + O(1)\right) + M\left(\frac{m}{8} + O(1)\right) + cm + O(1).$$
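To see numerically why a recurrence of this shape has only linearly growing solutions, here is a quick Python check. The constants below (a for the $O(1)$ terms, c for the linear term, and the base-case bound) are illustrative placeholders of our own, not values from the paper; what matters is that $\frac34 + \frac18 = \frac78 < 1$, so the linear term dominates and the fixed point $C = 8c$ of $C = \frac78 C + c$ governs the growth rate.

```python
import functools

a, c = 5, 1  # placeholder constants for the O(1) and linear terms

@functools.lru_cache(maxsize=None)
def M_bound(m):
    """Worst-case solution of M(m) <= M(3m/4 + a) + M(m/8 + a) + c*m + a."""
    if m <= 8 * a:
        return 100 * (a + c)  # arbitrary base-case bound
    return M_bound(3 * m // 4 + a) + M_bound(m // 8 + a) + c * m + a

# The ratio M_bound(m)/m stays bounded, i.e. M_bound grows linearly.
ratios = [M_bound(m) / m for m in range(10**3, 10**5, 10**3)]
assert max(ratios) < 100 * (8 * c)
```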
Solving this recurrence by strong induction gives $M(m) \le Cm$ for some constant $C$ (see Section 6.2 for an explicit $C$ which works for large $n$). This proves that $M(m) = O(m)$, and hence Theorem 5.6.

To conclude Theorem 1.4 it remains to show that for any $c$, for large enough $n$, any irreducible representation in $\tau_n^{\otimes \lfloor c\sqrt n\rfloor}$ is contained in $\varrho_m^{\otimes 2}$. In the next section, we explain our results in more detail and fill in this last step to complete the proof of Theorem 1.4, even in the non-triangular number cases.

6 Detailed Proof of Theorem 1.4

We follow the same plan as in the proof overview, giving more detail and showing the extensions to non-triangular values of $n$. First, we rigorize some of our earlier work with staircase sum identities. Here is a result we used in the overview, which will now be proven.
Proposition 5.5.
For any $n, k$ we have:

1. $\displaystyle \left(\varrho_n +_H \left((k-1)\times_V \varrho_{\lfloor n/(k-1)\rfloor}\right)\right) +_V \left( \overset{k-1}{\underset{i=0}{\Sigma_H}}\ \varrho_{\left\lfloor (n + \lfloor n/(k-1)\rfloor - k + 1 + i)/k \right\rfloor} \right) = \varrho_{n + \lfloor n/(k-1)\rfloor}$,

2. $\displaystyle \left(\varrho_n +_H \left((k-1)\times_V \varrho_{\lfloor n/(k-1)\rfloor}\right)\right) +_V \left( \overset{k-1}{\underset{i=0}{\Sigma_H}}\ \varrho_{\left\lfloor (n + \lfloor n/(k-1)\rfloor + 1 + i)/k \right\rfloor} \right) = \varrho_{n + \lfloor n/(k-1)\rfloor + 1}$.
We simply check each equality by evaluating the left sides. We begin with 1. We have $\varrho_n = (n, n-1, \ldots, 1)$, and $(k-1)\times_V \varrho_{\lfloor n/(k-1)\rfloor} = \left(\lfloor \tfrac{n}{k-1}\rfloor, \lfloor \tfrac{n}{k-1}\rfloor, \ldots, 1\right)$, where each distinct value is repeated $k-1$ times. Hence
$$\varrho_n +_H \left((k-1)\times_V \varrho_{\lfloor n/(k-1)\rfloor}\right) = \left(n + \left\lfloor \tfrac{n}{k-1}\right\rfloor,\ \ldots,\ n + \left\lfloor \tfrac{n}{k-1}\right\rfloor - k + 2,\ n + \left\lfloor \tfrac{n}{k-1}\right\rfloor - k,\ \ldots\right),$$
where the omitted row lengths are precisely the values $n + \lfloor \frac{n}{k-1}\rfloor + 1 - jk$ for positive integral $j$. Again using the fact that vertical addition is simply disjoint union of row lengths, it suffices to check that
$$\overset{k-1}{\underset{i=0}{\Sigma_H}}\ \varrho_{\left\lfloor (n + \lfloor n/(k-1)\rfloor - k + 1 + i)/k \right\rfloor}$$
consists of precisely these row lengths. That the largest row lengths match follows from Hermite's identity. Because the $k$ numbers $\left\lfloor (n + \lfloor n/(k-1)\rfloor - k + 1 + i)/k \right\rfloor$ differ pairwise by at most 1, the row lengths of this horizontal sum will decrease by $k$ until reaching 0, which is exactly the correct behavior.

The proof of 2. is identical, except now the omitted row lengths are those of the form $n + \lfloor \frac{n}{k-1}\rfloor + 1 - jk$ for non-negative integers $j$.

It is easy to see that the function $f(n) = n + \lfloor \frac{n}{k-1}\rfloor$ attains all integer values except those congruent to $-1$ modulo $k$. Such values are attained by $f(n) + 1$, so Proposition 5.5 suffices to break up any staircase into smaller pieces.

We now use the above proposition to construct a decomposition of any large staircase into a staircase of about $(i/k)^2$ times its area and $k^2 - i^2$ staircases of about $1/k^2$ times its area, for any fixed $i < k$.

Definition 34.
Define a $k$-layer decomposition of $\varrho_m$ as a decomposition of $\varrho_m$ into $2k$ staircases which have sizes given by the left-hand side of equation 1. of Proposition 5.5, where $m = n + \lfloor \frac{n}{k-1}\rfloor$, and which sum to $\varrho_m$ in the way indicated by that equation. The piece $\varrho_n$ in this decomposition is called the core.

For $1 \le i \le k-1$, define a $(k, i)$-layer decomposition of $\varrho_m$ recursively as follows: a $(k, k-1)$-layer decomposition of $\varrho_m$ is a $k$-layer decomposition of $\varrho_m$. For $i < k-1$, a $(k, i)$-layer decomposition of $\varrho_m$ is the result of taking a $(k, i+1)$-layer decomposition of $\varrho_m$, and further decomposing its core through a $(k-1, i)$-layer decomposition.

Thus, a $(k, i)$-layer decomposition is a decomposition of $\varrho_m$ into $k^2 - i^2 + 1$ smaller staircases. We extend the definition of the core to these decompositions and introduce a related term for the remaining pieces.

Definition 35. We call the large part of a $(k, i)$-layer decomposition of $\varrho_m$ the core, and the other $k^2 - i^2$ parts the flakes.

It is clear that, for fixed $(i, k)$, the flakes formed differ by only $O(1)$ in length. In fact, as long as $2i \le k$, by using Proposition 5.5 carefully we can do better.

Definition 36.
We call a $(k, i)$-layer decomposition of $\varrho_m$ into parts where all flakes pairwise differ in length by at most 1 a smooth layer decomposition.

Proposition 6.1.
For any $(m, k, i)$ with $2i \le k$, there is a smooth $(k, i)$-layer decomposition of $\varrho_m$.

Proof. We first examine, based on the value of $j$, which flake lengths can arise in a $(k, k-1)$ decomposition. If $0 \le j \le k-2$, we may write $n = t(k-1) + j$, which means the flake lengths are $\lfloor \frac{n}{k-1}\rfloor = t$ and
$$\left\lfloor \frac{n + \lfloor n/(k-1)\rfloor - k + 1 + i}{k} \right\rfloor = \left\lfloor \frac{t(k-1) + j + t - k + 1 + i}{k} \right\rfloor = \left\lfloor \frac{(t-1)k + j + 1 + i}{k} \right\rfloor.$$
Because $i \le k-1$, we have $(t-1)k \le (t-1)k + j + 1 + i < (t+1)k$, so these numbers range over the set $\{t-1, t\}$.

If $1 \le j \le k-1$, we may instead write $n = t(k-1) + j - 1$, which makes the flake lengths of part 2. equal to $\lfloor \frac{n}{k-1}\rfloor = t$ and
$$\left\lfloor \frac{n + \lfloor n/(k-1)\rfloor + 1 + i}{k} \right\rfloor = \left\lfloor \frac{t(k-1) + j - 1 + t + 1 + i}{k} \right\rfloor = t + \left\lfloor \frac{j + i}{k} \right\rfloor.$$
This clearly ranges over $\{t, t+1\}$.

Now, assume first that $j < k - i$. Then we may use 1. of Proposition 5.5 $i$ times, having after $\ell$ iterations a core of length $t(k - \ell) + j$ and all flakes of lengths in $\{t-1, t\}$. Because $j < k - i$, we always have $j \le k - \ell - 2$ for $\ell \le i - 1$, so we can complete the $i$ iterations in this way.

If $j \ge k - i$, then as $2i \le k$ we have $j \ge i$. Then we may instead use only part 2. of the proposition, which after $\ell$ iterations leaves a core of length $t(k-\ell) + j - \ell$. Because $j \ge i$, this is strictly greater than $t(k-\ell)$ for all $\ell < i$, so we can again run this process to completion, with all flakes of length in $\{t, t+1\}$. In either case, we are done.

We now prove Theorem 5.6, restated with an explicit asymptotic constant.
Theorem 5.6 (with explicit constant). For all $m$, $\varrho_m^{\otimes 2} \otimes \tau_n^{\otimes (1184m + O(1))}$ contains all partitions of $n = \binom{m+1}{2}$.

Proof. We use Proposition 5.5 to write $\varrho_m$ as a $k$-layer decomposition
$$\varrho_m = \left(\varrho_x +_H ((k-1)\times_V \varrho_y)\right) +_V \left(\overset{k-1}{\underset{i=0}{\Sigma_H}}\ \varrho_{z_i}\right),$$
where up to $O(1)$ error, we have $x \approx \frac{(k-1)m}{k}$, $y \approx z_i \approx \frac{m}{k}$. Now, suppose that we have partitions $\lambda_0, \ldots, \lambda_{2k-1}$ such that
$$c(\varrho_x, \varrho_x, \lambda_0), \qquad c(\varrho_y, \varrho_y, \lambda_j) \text{ for } 1 \le j \le k-1, \qquad c(\varrho_{z_i}, \varrho_{z_i}, \lambda_{k+i}) \text{ for } 0 \le i \le k-1.$$
Repeated application of the semigroup property gives
$$c\left((k-1)\times_V \varrho_y,\ (k-1)\times_V \varrho_y,\ \overset{k-1}{\underset{i=1}{\Sigma_H}}\ \lambda_i\right), \qquad c\left(\overset{k-1}{\underset{j=0}{\Sigma_H}}\ \varrho_{z_j},\ \overset{k-1}{\underset{j=0}{\Sigma_H}}\ \varrho_{z_j},\ \overset{2k-1}{\underset{i=k}{\Sigma_H}}\ \lambda_i\right),$$
which in turn imply
$$c\left(\varrho_x +_H ((k-1)\times_V \varrho_y),\ \varrho_x +_H ((k-1)\times_V \varrho_y),\ \overset{k-1}{\underset{i=0}{\Sigma_H}}\ \lambda_i\right)$$
and so
$$c\left(\varrho_m,\ \varrho_m,\ \overset{2k-1}{\underset{i=0}{\Sigma_H}}\ \lambda_i\right).$$
Thus, we only need to show how to decompose an arbitrary partition as a horizontal sum of many smaller partitions.
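The staircase identities of Proposition 5.5 underlying this decomposition can be checked mechanically for small parameters. In the Python sketch below (our own helper names, not the paper's), `h_add` realizes $+_H$ by adding row lengths pointwise and `v_add` realizes $+_V$ by merging row multisets; both parts of the proposition are then verified for a range of $(n, k)$.

```python
def staircase(k):
    """rho_k = (k, k-1, ..., 1); empty for k <= 0."""
    return list(range(k, 0, -1))

def h_add(lam, mu):
    """Horizontal sum +_H: add row lengths pointwise."""
    rows = max(len(lam), len(mu))
    get = lambda p, i: p[i] if i < len(p) else 0
    return [get(lam, i) + get(mu, i) for i in range(rows)]

def v_add(lam, mu):
    """Vertical sum +_V: merge the row multisets."""
    return sorted(lam + mu, reverse=True)

# Check both identities of Proposition 5.5 for a range of (n, k).
for k in range(2, 7):
    for n in range(1, 41):
        q = n // (k - 1)
        stack = []                         # (k-1) vertical copies of rho_q
        for _ in range(k - 1):
            stack = v_add(stack, staircase(q))
        left = h_add(staircase(n), stack)  # rho_n +_H ((k-1) x_V rho_q)
        for shift, target in [(-k + 1, n + q), (1, n + q + 1)]:
            flakes = []                    # Sigma_H of the k small staircases
            for i in range(k):
                flakes = h_add(flakes, staircase((n + q + shift + i) // k))
            assert v_add(left, flakes) == staircase(target)
```

The check passes because Hermite's identity forces the top rows of the flake sum to match the largest omitted row, after which both decrease in steps of $k$.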
Lemma 6.2.
Suppose $\lambda \vdash n$, where $n = \frac{k(k+1)}{2}$, and the columns of $\lambda$ have sizes $c_1 \ge c_2 \ge \cdots \ge c_l$. Then if either $c_l \ge k$ or $c_1 \le \lfloor k/2 \rfloor + 1$, we have $c(\varrho_k, \varrho_k, \lambda)$.

Proof. First consider the case where $c_l \ge k$. Then, for $1 \le j \le l$, we have $\sum_{i=1}^{j} c_i \ge kj \ge k + (k-1) + \cdots + (k - (j-1))$, and for $j > l$, $\sum_{i=1}^{j} c_i = n$. So $\lambda$ is clearly dominance-comparable to the staircase.

Now consider the case where $c_1 \le \lfloor k/2\rfloor + 1$. Now for any $j < k$, the leftmost $j$ columns of $\varrho_k$ have average size $\frac{k + (k-j+1)}{2} \ge \frac{k+2}{2}$, which is at least the average size of the first $j$ columns of $\lambda$. For $j \ge k$, the sizes of the leftmost $j$ columns of $\varrho_k$ sum to $n$. So $\lambda$ and $\varrho_k$ are again comparable in the dominance order. Thus, in either case, $c(\varrho_k, \varrho_k, \lambda)$ follows by Theorem 2.6.

We now introduce a lemma which allows us to break up a partition $\mu$ with bounded height into partitions contained in staircase tensor squares. The reader may wish to note that only the first part of the lemma is needed to conclude that $O(\sqrt n)$ standard representations suffice to give every representation, by taking (e.g.) $(k, i) = (7, 5)$ instead of $(k, i) = (4, 1)$. Recall that $M(m) = \max\{H(1), H(2), \ldots, H(m)\}$, where $H(m)$ was the minimal non-negative integer $\ell$ such that $\varrho_m^{\otimes 2} \otimes \tau_n^{\otimes \ell}$ contains all partitions of $n$ as constituents.

Lemma 6.3.
Consider a partition $\mu$ and staircases $\varrho_{s_1}, \ldots, \varrho_{s_r}$ with $s_i \in \{b-1, b\}$ for all $i$. Assume that the height of the largest column $c_1$ satisfies $c_1 \le C$, and also that $0 \le \left(\sum_i |\varrho_{s_i}|\right) - |\mu| \le C$. Then there exists a partition $\hat\mu$ such that $c\left(\Sigma_H{}_{i=1}^{r}\ \varrho_{s_i},\ \Sigma_H{}_{i=1}^{r}\ \varrho_{s_i},\ \hat\mu\right)$, and the generalized blockwise distance $\Delta = \Delta(\hat\mu, \mu)$ between $\hat\mu$ and $\mu$ satisfies

(1) $\Delta \le (4r - 3)C + M(b)$, and

(2) $\Delta \le (4r + 9)C + M(\lceil b/2 \rceil)$.

Proof.
We break up $\mu$ by its column lengths $c_1 \ge c_2 \ge \cdots \ge c_h$. We will construct partitions $\zeta_1, \ldots, \zeta_r$ which horizontally sum to $\hat\mu \approx \mu$. We begin by constructing partitions $\zeta^*_i$, and then adjust each $\zeta^*_i$ to form $\zeta_i$.

To form the $\zeta^*_i$, we preliminarily use the greedy algorithm: assign to $\zeta^*_1$ as many columns from $\mu$ as possible subject to $|\zeta^*_1| \le |\varrho_{s_1}|$, starting from the smallest column $c_h$; then continue similarly with $\zeta^*_2$, and so on through $\zeta^*_r$. Note that some columns of $\mu$ may be included in none of the $\zeta^*_i$. The assumptions $c_1 \le C$ and $0 \le \sum_i |\varrho_{s_i}| - |\mu| \le C$ guarantee that for each $i$, we have
$$|\zeta^*_i| \le |\varrho_{s_i}| \le |\zeta^*_i| + C.$$

Now we modify each $\zeta^*_i$ based on the height conditions of Lemma 6.2. For each $i$, set $X_i$ to be the height of the smallest column of $\zeta^*_i$ and $Y_i$ to be the height of the largest. Clearly we have $X_1 \le Y_1 \le X_2 \le \cdots \le X_r \le Y_r$. We leave alone those $\zeta^*_i$ with $Y_i \le \lceil b/2 \rceil$; such $\zeta^*_i$ can easily be modified to satisfy dominance by Lemma 6.2, as we will see later.

We have some remaining set $S$ of $i$ such that $Y_i > \lceil b/2 \rceil$. Now, because we have $X_{i+1} \ge Y_i$, at most 1 value of $i \in S$ satisfies $X_i \le \lceil b/2 \rceil$, namely, only the minimal value $i_0$ of $S$. Therefore, for all $i \in S \setminus \{i_0\}$, we have $X_i > \lceil b/2 \rceil$.

For each $i \in S \setminus \{i_0\}$ we break $\varrho_{s_i}$ into a sum of 4 smaller staircases $\varrho_{s_{i,1}}, \ldots, \varrho_{s_{i,4}}$ by setting $k = 2$ in Proposition 5.4. We also use the same greedy algorithm as before to extract from $\zeta^*_i$ four partitions $\zeta^*_{i,1}, \ldots, \zeta^*_{i,4}$ such that $|\varrho_{s_{i,j}}| - C \le |\zeta^*_{i,j}| \le |\varrho_{s_{i,j}}|$ and the $\zeta^*_{i,j}$ use distinct columns of $\zeta^*_i$. Because $s_i \le b$, we have $s_{i,j} \le \lceil b/2 \rceil$. For all $i \in S \setminus \{i_0\}$ we have $X_i > \lceil b/2 \rceil$, and the smallest column in any $\zeta^*_{i,j}$ is certainly at least as large as $X_i$. This means that the $\zeta^*_{i,j}$ now satisfy the other size condition in Lemma 6.2; by breaking down our partitions, we have preserved the "tallness" of the $\zeta^*_i$ in the $\zeta^*_{i,j}$ while decreasing the total size, making them sufficiently "relatively tall" to apply Lemma 6.2.

Now we finish part (1). We will add squares to the small partitions $\zeta^*_i, \zeta^*_{i,j}$ while preserving the respective height conditions of Lemma 6.2. Let $R$ be the set of partitions consisting of the $\zeta^*_i$ for $i \notin S$ and the $\zeta^*_{i,j}$ for $i \in S \setminus \{i_0\}$. We know that for each partition in $R$, we can bring its total size to the size of the corresponding staircase $\varrho_{s_i}$ or $\varrho_{s_{i,j}}$ by adding at most $C$ squares.

We consider $\mu$ as the disjoint union of the column-length multisets of the partitions in $R$, of $\zeta^*_{i_0}$, and of the remaining leftover columns. We make block moves to bring each partition $\zeta^*_i$ or $\zeta^*_{i,j}$ in $R$ to be of size $|\varrho_{s_i}|$ or $|\varrho_{s_{i,j}}|$, while preserving the respective condition from Lemma 6.2. Note that because we have $|\zeta^*_i| \le |\varrho_{s_i}|$ and $|\zeta^*_{i,j}| \le |\varrho_{s_{i,j}}|$, we can achieve the first goal with all block moves being the addition of a new block.

We show that we can add new blocks freely without destroying our height conditions. For $i \notin S$, we need to avoid increasing the height of $\zeta^*_i$, and we do this by adding additional blocks only as new columns of length 1. The resulting modifications of the $\zeta^*_i$ are our desired $\zeta_i$. For $i \in S \setminus \{i_0\}$, we need to avoid decreasing the minimum column height of $\zeta^*_{i,j}$, and we do this by adding blocks only as rows of length 1, or equivalently by increasing the longest column length. The resulting modifications of the $\zeta^*_{i,j}$ are our $\zeta_{i,j}$.

We clearly have $|R| \le 4r - 4$. Recall that we are considering $\mu$ as a disjoint union of the partitions of $R$, as well as $\zeta^*_{i_0}$ and some leftover columns. We claim that by the above procedure we can perform at most $(4r-4)C$ block moves on $\mu$ to modify all partitions $\zeta^*_i, \zeta^*_{i,j}$ in $R$ into $\zeta_i, \zeta_{i,j}$ without affecting $\zeta^*_{i_0}$. This is simple: we repeatedly move a block from the leftover columns onto $\zeta^*_i$ or $\zeta^*_{i,j}$ in the manner described above. If the leftover columns run out, we instead create a brand new block to add to the partition being modified, so this procedure carries out the desired function.

Now we modify $\zeta^*_{i_0}$. We can add at most $C$ blocks, from the remainder of the leftover columns or out of nowhere, to $\zeta^*_{i_0}$. This forms $\zeta^{**}_{i_0}$ with $|\zeta^{**}_{i_0}| = |\varrho_{s_{i_0}}|$. By the definition of $M$, we can then modify at most $M(s_{i_0})$ blocks in order to reach a partition $\zeta_{i_0}$ appearing in $\varrho_{s_{i_0}}^{\otimes 2}$.

Because we assumed $0 \le \left(\sum_i |\varrho_{s_i}|\right) - |\mu|$, if we used a leftover-column block whenever possible, we are now completely out of leftover column blocks, which means that our block moves have resulted in only the partitions $\zeta_i, \zeta_{i_0}, \zeta_{i,j}$. In all, we have made at most $(4r-3)C$ block moves, as well as an additional $M(s_{i_0}) \le M(b)$. We now let $\hat\mu$ be the partition formed from the horizontal sum of the resulting $\zeta_i$, $\zeta_{i,j}$, and $\zeta_{i_0}$. Because we consumed all of the leftover squares, $\hat\mu$ is precisely a horizontal sum of partitions comparable to the staircases $\varrho_{s_i}$. Using the semigroup property, first to combine the modifications $\zeta_{i,j}$ and then to combine all the $\zeta_i$, we have that $\hat\mu$ is a constituent of the tensor square of $\Sigma_H{}_{i=1}^{r}\ \varrho_{s_i}$. We made at most $(4r-3)C + M(b)$ block moves in transforming $\mu$ into $\hat\mu$.
Thus, part (1) is proved.

The modification to prove part (2) is simple: we go one step further and split ϱ_{s_{i₀}} into 4 staircases, and split ζ*_{i₀} as with the other ζ*_i. We now repeat the argument above on these 4 parts ζ*_{i₀,j}: some parts have a small maximum height, and of those that do not, at most 1 fails to have a sufficiently small maximum height after another level of subdivision. The modification of the expression in the conclusion results from the subdivision of the exceptional piece i₀ into smaller parts: there are an additional 12 pieces which need C block moves each, but on the other hand the size of the exceptional piece is halved.

We now get on with the main proof of Theorem 5.6. Fix an arbitrary partition µ ⊢ n. Recall from Proposition 5.5 that we may write ϱ_m as

ϱ_m = (ϱ_x +_H ((k − 1) ·_V ϱ_y)) +_V (Σ^V_{i=0..k−1} ϱ_{z_i}),

where, up to O(1) error, we have x ≈ (k − 1)m/k and y ≈ z_i ≈ m/k. By Proposition 6.1 we may take y and the z_i pairwise differing by at most 1. The idea is to split µ into a large part corresponding to ϱ_x and smaller parts corresponding to the other terms, then make some minor modifications to each part via block moves and apply the semigroup property. We will set k = 4, so x ≈ 3m/4 and y ≈ z_i ≈ m/4.

We first split µ according to its Durfee square. Specifically, we may partition the blocks of µ into 3 pieces: the Durfee square D, the blocks T₁ to the right of D, and the blocks T₂ below D. WLOG, we may assume |T₁| ≥ |T₂|, as otherwise we could conjugate µ and start over; since the staircases are symmetric, this doesn't affect anything. Let D have side length d.

If the columns of µ are c₁ ≥ c₂ ≥ · · · ≥ c_l, consider the smallest j such that Σ_{i=1..j} c_i ≥ |ϱ_x|. Since |T₁| ≥ |T₂|, we have that Σ_{i=1..d} c_i = |D| + |T₂| ≤ (n + d²)/2 < |ϱ_x| ≈ (9/16)n.
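The Durfee-square split just described is easy to compute from the row lengths; a small illustrative sketch in English coordinates (helper names ours):

```python
def durfee_side(p):
    """Side length d of the Durfee square: the largest d with p[d-1] >= d."""
    d = 0
    while d < len(p) and p[d] >= d + 1:
        d += 1
    return d

def durfee_split(p):
    """Split |p| as |D| + |T1| + |T2|: the Durfee square, the blocks to
    its right, and the blocks below it."""
    d = durfee_side(p)
    D = d * d
    T1 = sum(p[i] - d for i in range(d))       # overhang to the right of D
    T2 = sum(p[i] for i in range(d, len(p)))   # rows below D
    return d, D, T1, T2
```

For µ = (5, 4, 2, 1, 1) this gives d = 2, |D| = 4, |T₁| = 5, |T₂| = 4.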
So j > d , andthus n ≥ jc j > d c j , yielding c j < nd .Furthermore, if d = | D | ≤ n , then j > d , so c j ≤ d ≤ √ √ n by definition of theDurfee square. So either c j ≤ √ √ n , or d > √ √ n and c j < nd < √ n. So, c j < √ n in either case.We now split off the region of the partition corresponding to the first j columns. Let theportion split off be λ , and let the remaining portion of the partition be A . By definitionof j , we have 0 ≤ (cid:18) | (cid:37) y | + (cid:80) i =0 | (cid:37) z i | (cid:19) − | A | ≤ c j ≤ √ n .To finish, we need to split A into smaller staircase-sized partitions in such a way thatmakes the total number of block movements needed small. To this end we apply Lemma 6.3with r = 2 k −
1, where s = · · · = s k − = y and s k + i = z i for 0 ≤ i ≤ k −
1. Here we set k = 4. Since the largest column involved has size µ (cid:48) < √ n , we can set C = 4 √ n andget c (cid:32) r Σ H i =1 (cid:37) s i , r Σ H i =1 (cid:37) s i , (cid:98) µ (cid:33) for some (cid:98) µ such that ∆( A, (cid:98) µ ) ≤ C + M ( (cid:100) y (cid:101) ) . Since | A | ≤ | (cid:98) µ | , µ = λ + H A can betransformed into λ ∗ + H (cid:98) µ by at most 37 C + M ( (cid:100) y (cid:101) ) block moves. Because blockwisedistance between equally sized partitions is simply the number of blocks which need to bemoved to go from one to the other, this means that τ ⊗ (37 C + M ( (cid:100) y (cid:101) )) n ⊗ ( λ ∗ + H (cid:98) µ )contains our original partition µ , where λ ∗ is some partition of size | (cid:37) x | .By definition, there is a partition λ ∗∗ of size | (cid:37) x | such that c ( (cid:37) x , (cid:37) x , λ ∗∗ ) and ∆( λ ∗ , λ ∗∗ ) ≤ M ( x ). So, by subadditivity of blockwise distance, as well as the layer decomposition for (cid:37) m , we have M ( m ) ≤ M ( x ) + 37 C + M (cid:16)(cid:108) y (cid:109)(cid:17) = M (cid:18) m O (1) (cid:19) + M (cid:16) m O (1) (cid:17) + 148 m + O (1) . as C = 4 √ n = 4 m + O (1). It is easy to see by strong induction that this recurrencegives M ( m ) ≤ (148 + o (1))(1 − − ) m = (1184 + o (1)) m. .3 Generalization to All n We now carry out the comparatively simple procedure of extending Theorem 5.6 to non-triangular values of n . Since staircases can only have triangular sizes, we make a simplemodification to the partitions (cid:37) m to give them the correct size. Definition 37.
The irregular staircase ξ_n of size n = m(m+1)/2 + k, where 0 ≤ k ≤ m, is the partition ξ_n = ϱ_m +_H (k), where (k) denotes the one-row partition of k.

So, an irregular staircase is a staircase with a trivial representation horizontally added to give it the desired total size. Note that (m, k) are uniquely determined by n in the above.
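Definition 37's construction is easy to code. A sketch (helper names ours; we read the horizontally added trivial representation as extending the first row by k, which matches the row-wise horizontal sum convention):

```python
def staircase(m):
    """The staircase partition rho_m = (m, m-1, ..., 1)."""
    return tuple(range(m, 0, -1))

def irregular_staircase(n):
    """xi_n = rho_m +_H (k): the largest staircase of size <= n, with the
    leftover k blocks appended to the first row (0 <= k <= m)."""
    m = 0
    while (m + 1) * (m + 2) // 2 <= n:
        m += 1
    k = n - m * (m + 1) // 2
    rho = list(staircase(m))
    rho[0] += k
    return tuple(rho)
```

For instance, ξ₆ = (3, 2, 1) = ϱ₃ itself, while ξ₈ = (5, 2, 1).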
Corollary 6.4. For all n, ξ_n^{⊗2} ⊗ τ_n^{⊗(1185+o(1))√(2n)} contains all partitions of n.

Proof. We use Theorem 5.6. Given n, take the largest m such that m(m+1)/2 ≤ n. Let k = n − m(m+1)/2, so k ≤ m. Then ξ_n = ϱ_m +_H (k). We show that an arbitrary partition µ of n may be transformed into a partition µ̂ of n − k horizontally summed with (k), using at most m block-movements. As k ≤ m, it suffices to transform µ into a partition with at least m columns of length 1. But this is trivial: repeatedly remove a block from a column of length at least 2, and move the block to create a new column of length 1. If we can make m such moves, we are done. If not, then when we cannot make any more moves the current partition is a horizontal strip, and we are again done.

Now that this is shown, we have that µ is contained in (µ̂ +_H (k)) ⊗ τ_n^{⊗m}. By Theorem 5.6 we have that µ̂ is contained in ϱ_m^{⊗2} ⊗ τ_{n−k}^{⊗(1184+o(1))m}. Thus, there exists ν such that c(ϱ_m, ϱ_m, ν) with blockwise distance ∆(ν, µ̂) ≤ (1184 + o(1))m. The semigroup property gives

c(ξ_n, ξ_n, ν +_H (k)).

We have ∆(ν +_H (k), µ̂ +_H (k)) ≤ (1184 + o(1))m by Proposition 5.1. Since also ∆(µ̂ +_H (k), µ) ≤ m, we conclude that ∆(µ, ν +_H (k)) ≤ (1185 + o(1))m. Since µ was arbitrary, we have shown that ξ_n^{⊗2} ⊗ τ_n^{⊗(1185+o(1))m} contains all partitions of n.

From √(2n) = m + O(1) we conclude the desired result.

6.4 Replacing the Standard Representations

Lemma 6.5.
For any ℓ > 0, for large n = m(m+1)/2, if λ is an irreducible representation of S_n such that λ is a component of τ_n^{⊗⌊ℓm⌋}, then λ is a component of ϱ_m^{⊗2}.

Proof. The irreducible components of τ_n^{⊗⌊ℓm⌋} are precisely those that correspond to partitions λ of n whose blockwise distance from the one-row partition (n) is at most ⌊ℓm⌋.

We use Proposition 5.5 with k = ℓ₀, for an ℓ₀ depending on ℓ. This decomposes ϱ_m into a large piece of side length about m − m/ℓ₀, as well as about 2k − 1 small staircases of side length about m/ℓ₀ and area about (m/ℓ₀)²/2.

Each column of λ has size at most ℓm + 1, and there are at least n − 2ℓm columns of height 1. So, as long as ℓ₀ is large enough relative to ℓ, we can use each of the 2k − 1 small staircase pieces to absorb the tallest columns of λ. Letting c₁ ≥ c₂ ≥ . . . be the column lengths, we have (2k)(c_{2k}) ≤ Σ_{i=1..2k} c_i ≤ ℓm + 2k = O(m). Therefore, the remaining columns are all of size O(m). By Lemma 6.2, we conclude that this remaining large part of λ is dominance comparable to the corresponding-size staircase. The semigroup property now yields the lemma.

We can also extend Lemma 6.5 to irregular staircases.

Lemma 6.6.
For any ℓ > 0, for large enough n, if λ is an irreducible representation of S_n such that λ is a component of τ_n^{⊗⌊ℓ√n⌋}, then λ is a component of ξ_n^{⊗2}, the tensor square of the irregular staircase.

Proof. The irreducible components of τ_n^{⊗⌊ℓ√n⌋} are precisely those that correspond to partitions λ of n whose blockwise distance from the one-row partition (n) is at most ⌊ℓ√n⌋. So, there are at least n − 2ℓ√n columns of height 1. Again take the largest m such that m(m+1)/2 ≤ n, and let k = n − m(m+1)/2. For large enough n, we can remove k columns of size 1 and leave a partition λ₀ of n₀ = m(m+1)/2 that has distance at most ℓ√n ≤ 1.5ℓm from the one-row partition of size n₀. By Lemma 6.5, for large enough n, all such λ₀ are contained in ϱ_m^{⊗2}, and so by the semigroup property, after adding the trivial representation back, we must have that ξ_n^{⊗2} contains λ.

We can finally prove Theorem 1.4 on tensor fourth powers, with the explicit choice λ = ξ_n.

Theorem 1.4. For sufficiently large n, the tensor fourth power ξ_n^{⊗4} contains all partitions of n.

Proof. Corollary 6.4 implies that, for sufficiently large n and a suitable constant ℓ, ξ_n^{⊗2} ⊗ τ_n^{⊗⌊ℓ√n⌋} contains all partitions of n. So for any µ ⊢ n, ξ_n^{⊗2} ⊗ ν contains µ for some ν contained in τ_n^{⊗⌊ℓ√n⌋}. Lemma 6.6 thus implies that µ is contained in ξ_n^{⊗4}, as claimed.

We have shown that staircase tensor squares ϱ_m^{⊗2} contain almost all partitions with respect to 2 natural probability distributions, and that there are partitions λ ⊢ n for all large n such that λ^{⊗4} contains all partitions of n.

Remark.
The argument used to show that tensor 4th powers contain every representation proceeds by first showing that every partition λ is near another partition contained in the tensor square ϱ^{⊗2}, and then showing that the nearness can be absorbed into another ϱ^{⊗2} factor. To prove the full tensor square conjecture using our semigroup methods, one would need to remove all of the standard representations, which means each semigroup property application would need to be exactly correct. As a result, improving the exponent from 4 to 2 seems more difficult. Intuitively, our value 4 is really 2⌈1 + ε⌉.

Remark.
Our results in this paper focused on the staircase partitions, but many of the arguments can be adapted for partitions which can be similarly broken down into staircase pieces. For instance, the caret partition γ_k mentioned in the introduction may be expressed as

γ_k = (ϱ_k +_H ϱ_{k−1}) +_V ϱ_{k−1}.

By breaking down the ϱ_k into 4 pieces, we have broken γ_k into 6 approximately equal staircases. As a result, the proof of Theorem 1.6 on Plancherel-random partitions applies to γ_k^{⊗2} as well.
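The caret decomposition above can be sketched concretely, assuming (as in this remark) that +_H adds row lengths index by index and +_V is its conjugate operation (all helper names ours):

```python
def conjugate(p):
    """Transpose of a Young diagram, as a partition."""
    if not p:
        return ()
    return tuple(sum(1 for r in p if r > i) for i in range(p[0]))

def h_sum(a, b):
    """Horizontal sum: add row lengths index by index."""
    k = max(len(a), len(b))
    a = list(a) + [0] * (k - len(a))
    b = list(b) + [0] * (k - len(b))
    return tuple(x + y for x, y in zip(a, b))

def v_sum(a, b):
    """Vertical sum: conjugate of the horizontal sum of conjugates."""
    return conjugate(h_sum(conjugate(a), conjugate(b)))

def staircase(m):
    return tuple(range(m, 0, -1))

def caret(k):
    """gamma_k = (rho_k +_H rho_{k-1}) +_V rho_{k-1}, size k(3k-1)/2."""
    return v_sum(h_sum(staircase(k), staircase(k - 1)), staircase(k - 1))
```

Under these conventions, caret(3) yields (5, 3, 2, 1, 1), a partition of 12 = 3·8/2.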
Remark. Intuitively, the rectangular Young diagrams should be the most difficult to deal with using the semigroup property: if λ is rectangular and λ = λ₁ +_H λ₂, both λ₁ and λ₂ must be rectangular. Therefore, we have very little freedom in applying the semigroup property. It is, however, easy to show that rectangles are constituents of the tensor cubes ϱ_m^{⊗3}; see Appendix C.2 for details. Further exploration could yield more insight on how difficult general partitions are to fit into a tensor cube.

Remark.
The question of whether the semigroup property, combined with combinatorial arguments involving dominance and symmetry, suffices to prove the full tensor square conjecture remains open. Regardless, checking for Kronecker coefficient positivity inductively using the semigroup property seems much faster than directly computing Kronecker coefficients. Of course, the semigroup property may fail to detect positive Kronecker coefficients.

Remark.
We have, using a computer to implement the semigroup property in conjunction with Theorem 2.6 on dominance ordering, verified the Saxl conjecture up to ϱ₈. These two facts suffice for all cases except the 6 by 6 square in ϱ₈^{⊗2}. This case follows from the semigroup property using the additional fact that c(λ, λ, λ) holds for every symmetric partition λ ([2]), but our construction seems rather ad hoc. We explain it here using some helpful visuals. In the diagram below, each color corresponds to an application of the semigroup property, beginning with the red squares (which satisfy constituency by the theorem mentioned above). Thus, the 3 depicted partitions yield a positive Kronecker coefficient.

[Figure: A Triple with Positive Kronecker Coefficient]

We now add the rectangle to itself, and add the other 2 partitions to each other, giving c(ϱ₈, ϱ₈, µ) for µ the 6 by 6 square.

The authors thank their MIT SPUR 2015 mentor, Dongkwan Kim, for his valuable feedback and advice. They also thank Nathan Harman for suggesting this problem, and professors Alexei Borodin, David Jerison, and Ankur Moitra for their helpful input and advice. They thank Mitchell Lee for help with editing, and finally thank the MIT SPUR program for the opportunity to conduct this research project.
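For very small cases, positivity checks like these can be reproduced from scratch without the semigroup property. The sketch below (our own brute-force fallback, not the authors' search code) computes characters by the Murnaghan–Nakayama rule and sums them to get Kronecker coefficients; it is only feasible for tiny n, but it suffices to confirm, for example, the Saxl statement for ϱ₃ = (3, 2, 1) in S₆:

```python
from functools import lru_cache
from fractions import Fraction
from collections import Counter
from math import factorial

def partitions(n, cap=None):
    """All partitions of n (parts at most cap), as tuples."""
    cap = n if cap is None else cap
    if n == 0:
        yield ()
        return
    for first in range(min(n, cap), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

@lru_cache(maxsize=None)
def chi(lam, rho):
    """Character chi^lam at cycle type rho (Murnaghan-Nakayama rule)."""
    lam = tuple(x for x in lam if x > 0)
    if not lam:
        return 1 if not rho else 0
    k, rest, total = rho[0], rho[1:], 0
    for i in range(len(lam)):            # remove a border strip over rows i..j
        for j in range(i, len(lam)):
            nu = list(lam)
            for t in range(i, j):
                nu[t] = lam[t + 1] - 1
            nu[j] = lam[i] - k + (j - i)
            if nu[j] < 0:
                continue
            if all(nu[t] >= nu[t + 1] for t in range(len(nu) - 1)):
                total += (-1) ** (j - i) * chi(tuple(nu), rest)
    return total

def z(rho):
    """Centralizer order of the conjugacy class of cycle type rho."""
    out = 1
    for part, mult in Counter(rho).items():
        out *= part ** mult * factorial(mult)
    return out

def kronecker(lam, mu, nu):
    """g(lam, mu, nu) = sum over classes of chi*chi*chi / z."""
    g = sum(Fraction(chi(lam, r) * chi(mu, r) * chi(nu, r), z(r))
            for r in partitions(sum(lam)))
    assert g.denominator == 1
    return int(g)

# Saxl-type check for the smallest interesting staircase:
assert all(kronecker((3, 2, 1), (3, 2, 1), mu) > 0 for mu in partitions(6))
```

This direct character sum scales factorially, which is exactly why the paper's inductive semigroup-property checks are the practical route for anything like ϱ₈.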
A Appendix: Technical Lemma on β-Sum Flexibility

A.1 Overview of the Proof
We conclude by proving the key technical result used in the proof of Theorem 1.6. This result is restated below.

Theorem 4.2.
For any β > 0, let P(n, β) denote the probability that a (Plancherel) random partition of n is β-sum flexible. Then we have lim_{n→∞} P(n, β) = 1 for all β.

We will estimate the typical value of the maximum height among the smallest n^α columns, for 1/3 < α ≤ 1/2. These bounds will enable us to conclude Theorem 4.2. We first recall from, e.g., [4] that, for a random partition of n, the number λ₁ − λ₂ of columns of size 1 is Θ(n^{1/6}) with high probability. In fact, (λ₁ − λ₂)/n^{1/6} converges weakly to a non-trivial limiting density.

This result tells us that for any ε there exists δ such that, for large n, we will have at least δn^{1/6} parts of size 1 with probability at least 1 − ε. This means that the condition in Theorem 4.2 is safe for all columns of size at most βδn^{1/6}. If the sum of all these columns is of size Θ(n^α), the condition in Theorem 4.2 is then safe for all columns of size at most cn^α, for some c. Our plan is to "bootstrap" in this manner up to Θ(n^{1/2}). If we can achieve the condition up to this point, we will have proved (4), because for pieces of size Θ(n^{1/2}) the limit shape easily implies Theorem 4.2.

We actually estimate a closely related quantity, for which explicit formulae exist. For a partition λ, we follow [4] in denoting by D(λ) the set {λ_i − i}. We will also work with the poissonized Plancherel measure M_θ instead of the ordinary Plancherel measure M_n. At the end, we will depoissonize to recover information about the measures M_n. The use of poissonized Plancherel measure is that useful exact formulae describe the behavior of a random partition.

Definition 38.
For θ ∈ R⁺, the poissonized Plancherel measure M_θ is a probability distribution over all partitions λ, with

M_θ(λ) = e^{−θ} θ^{|λ|} (dim(λ)/|λ|!)².

Conceptually, we pick an M_θ partition by first taking n to be Poisson with mean θ, and then picking a Plancherel-random λ ⊢ n.

The formulas to follow will allow us to estimate the mean and variance µ_{θ,w}, σ²_{θ,w} of T(λ, w) = |D(λ) ∩ Z_{≥w}|, where λ is taken from the poissonized Plancherel measure with mean θ. Because the set D(λ) corresponds to the vertical border edges of the Young diagram, this approximately gives the height of the w-th smallest column. By setting w = 2√n − Kn^α, knowledge of these quantities will allow us to understand roughly the value of the maximum-size column among the first Θ(n^α).

We now list the formulas and bounds we will use for these computations.
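The two-step description of M_θ (Poisson size, then Plancherel shape) can be sampled directly; the Plancherel part is the shape of the RSK insertion tableau of a uniform random permutation. A small sketch (code and names ours, assuming only the standard RSK correspondence; the Poisson draw is inverse-CDF, adequate for moderate θ):

```python
import math
import random
from bisect import bisect_left

def plancherel_sample(n, rng=random):
    """Plancherel-random partition of n: the shape of the RSK insertion
    tableau of a uniformly random permutation of {0,...,n-1}."""
    perm = list(range(n))
    rng.shuffle(perm)
    tableau = []                         # rows of the insertion tableau
    for x in perm:
        for row in tableau:
            j = bisect_left(row, x)      # smallest entry greater than x
            if j == len(row):
                row.append(x)
                x = None
                break
            row[j], x = x, row[j]        # bump and continue to next row
        if x is not None:
            tableau.append([x])
    return tuple(len(row) for row in tableau)

def poissonized_plancherel_sample(theta, rng=random):
    """M_theta sample: draw n ~ Poisson(theta), then Plancherel on n."""
    u, k, p = rng.random(), 0, math.exp(-theta)  # fine for moderate theta
    s = p
    while u > s:
        k += 1
        p *= theta / k
        s += p
    return plancherel_sample(k, rng)
```

Empirically, such samples exhibit the √n limit-shape behavior invoked throughout this appendix.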
A.2 Formulas for Poissonized Plancherel Measure

Definition 39. The Bessel function of the first kind J_ν is defined as

J_ν(x) = Σ_{m=0}^{∞} (−1)^m (x/2)^{2m+ν} / (m! Γ(m + ν + 1)).

Definition 40. The Airy function Ai is defined as

Ai(x) = (1/π) ∫_0^∞ cos(u³/3 + xu) du.

Definition 41. For a partition λ = (λ₁, λ₂, . . .), define D(λ) to be the set {λ_i − i}.

Definition 42. Define the function J(x, y; θ) as

J(x, y; θ) = Σ_{s=1}^{∞} J_{x+s}(2√θ) J_{y+s}(2√θ).

Lemma A.1 ([4, Thm 2, Prop 2.9]). Let λ be chosen according to M_θ. Then for any finite X = {x₁, x₂, . . . , x_s} ⊆ Z, the probability that D(λ) ⊇ X is

M_θ({λ | D(λ) ⊇ X}) = det [J(x_i, x_j; θ)]_{1≤i,j≤s}.

Lemma A.2 ([11]). J_ν(x) ≤ x^{−1/3} and J_ν(x) ≤ ν^{−1/3}.

Lemma A.3 ([4, Lemma 4.4]). For x ∈ R we have

|n^{1/6} J_{2n^{1/2} + xn^{1/6}}(2n^{1/2}) − Ai(x)| = O(n^{−1/6}), n → ∞,

where additionally the implicit constant in O(n^{−1/6}) is uniform for x in any compact set.

Lemma A.4 ([4, Prop 4.3]). For fixed a ∈ R,

lim_{n→∞} Σ_{k=2√n + an^{1/6}}^{∞} J(k, k; n) = ∫_0^∞ t (Ai(a + t))² dt.

Specializing to a = 0 gives

Lemma A.5. lim_{n→∞} Σ_{k=2√n}^{∞} J(k, k; n) = ∫_0^∞ t (Ai(t))² dt.

Lemma A.6 ([4, Lemma 4.5]). There exist C₁, C₂, C₃, ε > 0 such that for any A > 0, s > 0, we have

|J_{r + Ar^{1/3} + s}(r)| ≤ C₁ r^{−1/3} exp(−C₂(A^{3/2} + sA^{1/2} r^{−1/3})), s ≤ εr,
|J_{r + Ar^{1/3} + s}(r)| ≤ exp(−C₃(r + s)), s ≥ εr,

for all r ≫ 0.

Setting r = 2n^{1/2} and A = 2^{−1/3} in Lemma A.6, squaring, and adjusting the values C_i as needed, we have the following for all s, for some fixed values C_i, ε, and large enough n.

Lemma A.7.
There exist C₁, C₂, C₃, ε > 0 such that for sufficiently large n, for any s > 0, we have

(J_{2n^{1/2} + n^{1/6} + s}(2n^{1/2}))² ≤ C₁ n^{−1/3} exp(−C₂(1 + sn^{−1/6})), s ≤ εn^{1/2},
(J_{2n^{1/2} + n^{1/6} + s}(2n^{1/2}))² ≤ exp(−C₃(2n^{1/2} + s)), s ≥ εn^{1/2}.

A.3 Proof of Theorem 4.2
We proceed as described in Section A.1. First, direct application of Lemma A.1 in the case |X| = 1, combined with linearity of expectation, yields that

µ_{θ,w} = Σ_{y=w}^{∞} J(y, y; θ) = Σ_{y=w}^{∞} Σ_{s=1}^{∞} (J_{y+s}(2√θ))² = Σ_{r=1}^{∞} r (J_{w+r}(2√θ))²,

where the last equality is a simple sum rearrangement. Now let θ = n, w = 2n^{1/2} − Kn^α, 1/3 < α ≤ 1/2, as alluded to before. We will now establish the bounds on this expectation.

Lemma A.8. µ_{n,(2n^{1/2} − Kn^α)} = Ω(n^{α − 1/6}).

Proof. Taking n sufficiently large, we have that

µ_{n,(2n^{1/2} − Kn^α)} = Σ_{r=1}^{∞} r (J_{2n^{1/2} − Kn^α + r}(2√n))²
≥ Σ_{r=Kn^α − n^{1/6}}^{Kn^α + n^{1/6}} r (J_{2n^{1/2} − Kn^α + r}(2√n))²
≥ Σ_{j=−n^{1/6}}^{n^{1/6}} (Kn^α/2) (J_{2n^{1/2} + j}(2√n))²
= (Kn^α/2) Σ_{j=−n^{1/6}}^{n^{1/6}} (J_{2n^{1/2} + j}(2√n))².

Since Ai is nonzero on [−1,
1] and is continuous, the uniformity in Lemma A.3 implies that each squared term in this last sum is of size Θ(n^{−1/3}). Since we have Θ(n^{1/6}) terms in the sum, we conclude that for large n,

µ_{n,(2n^{1/2} − Kn^α)} ≥ (Kn^α/2) Σ_{j=−n^{1/6}}^{n^{1/6}} (J_{2n^{1/2} + j}(2√n))² = Θ(n^{α − 1/6}).

Lemma A.9. µ_{n,(2n^{1/2} − Kn^α)} = O(n^{2α − 1/3}). Further, given fixed α, the implicit constant in O(n^{2α − 1/3}) is O(K²).

Proof.
Using Lemma A.5, we see that

µ_{n,(2n^{1/2} − Kn^α)} = Σ_{y=2n^{1/2} − Kn^α}^{∞} J(y, y; n)
= Σ_{y=2n^{1/2} − Kn^α}^{2n^{1/2}} J(y, y; n) + Σ_{y=2n^{1/2}}^{∞} J(y, y; n)
= Σ_{y=2n^{1/2} − Kn^α}^{2n^{1/2}} J(y, y; n) + O(1).

Expanding out the definition of J(y, y; n), we have that

Σ_{y=2n^{1/2} − Kn^α}^{2n^{1/2}} J(y, y; n) = Σ_{y=2n^{1/2} − Kn^α}^{2n^{1/2}} Σ_{s=1}^{∞} (J_{y+s}(2n^{1/2}))²
= (Σ_{ℓ=1}^{Kn^α} ℓ (J_{2n^{1/2} − Kn^α + ℓ}(2n^{1/2}))²) + Kn^α Σ_{m=2n^{1/2}}^{∞} (J_m(2n^{1/2}))²,

where the last equality follows from another simple regrouping of terms. By Lemma A.2, for any value of ℓ we have J_{2n^{1/2} − Kn^α + ℓ}(2n^{1/2}) ≤ (2n^{1/2})^{−1/3} ≤ n^{−1/6}. Thus, we have that the first sum is of appropriate size:

Σ_{ℓ=1}^{Kn^α} ℓ (J_{2n^{1/2} − Kn^α + ℓ}(2n^{1/2}))² ≤ (Σ_{ℓ=1}^{Kn^α} ℓ) n^{−1/3} ≤ K² n^{2α − 1/3}.

To establish Lemma A.9, it remains to show that the latter term

Kn^α Σ_{m=2n^{1/2}}^{∞} (J_m(2n^{1/2}))²
is of appropriate size. First, note that from Lemma A.2 again,

Kn^α Σ_{m=2n^{1/2}}^{2n^{1/2} + n^{1/6}} (J_m(2n^{1/2}))² ≤ Kn^α · n^{1/6} · n^{−1/3} = Kn^{α − 1/6} = O(n^{2α − 1/3}),

where the last equality follows from the assumption α > 1/3. We are left with upper-bounding the sum

Kn^α Σ_{m=2n^{1/2} + n^{1/6}}^{∞} (J_m(2n^{1/2}))².

Now, using Lemma A.7, take suitable constants C₁, C₂, C₃, ε, and let n be large enough, so that we have for all s

(J_{2n^{1/2} + n^{1/6} + s}(2n^{1/2}))² ≤ C₁ n^{−1/3} exp(−C₂(1 + sn^{−1/6})), s ≤ εn^{1/2},
(J_{2n^{1/2} + n^{1/6} + s}(2n^{1/2}))² ≤ exp(−C₃(2n^{1/2} + s)), s ≥ εn^{1/2}.

We claim that this remaining sum is also O(n^{2α − 1/3}), or more precisely that

Σ_{m=2n^{1/2} + n^{1/6}}^{∞} (J_m(2n^{1/2}))² = Σ_{m=2n^{1/2} + n^{1/6}}^{(2+ε)n^{1/2} + n^{1/6}} (J_m(2n^{1/2}))² + Σ_{m=(2+ε)n^{1/2} + n^{1/6}}^{∞} (J_m(2n^{1/2}))² = O(n^{−1/6}).

For the first of these 2 sums, we have

Σ_{m=2n^{1/2} + n^{1/6}}^{(2+ε)n^{1/2} + n^{1/6}} (J_m(2n^{1/2}))² = Σ_{s=0}^{εn^{1/2}} (J_{2n^{1/2} + n^{1/6} + s}(2n^{1/2}))²
≤ Σ_{s=0}^{εn^{1/2}} C₁ n^{−1/3} exp(−C₂(1 + sn^{−1/6}))
≤ C₁ n^{−1/3} exp(−C₂) Σ_{s=0}^{∞} (exp(−C₂ n^{−1/6}))^s
= C₁ n^{−1/3} exp(−C₂) / (1 − exp(−C₂ n^{−1/6}))
= Θ(C₁ n^{−1/3} exp(−C₂) · C₂^{−1} n^{1/6}) = Θ((C₁/C₂) exp(−C₂) n^{−1/6}) = O(n^{−1/6}).

For the second, we have

Σ_{m=(2+ε)n^{1/2} + n^{1/6}}^{∞} (J_m(2n^{1/2}))² = Σ_{s=εn^{1/2}}^{∞} (J_{2n^{1/2} + n^{1/6} + s}(2n^{1/2}))²
≤ Σ_{s=εn^{1/2}}^{∞} exp(−C₃(2n^{1/2} + s)) ≤ Σ_{s=0}^{∞} exp(−C₃(2n^{1/2} + s))
= exp(−2C₃ n^{1/2}) / (1 − exp(−C₃)) = exp(−Θ(n^{1/2})) = O(n^{−1/6}).

Note that only the first part of our bounding affects the eventual constant in the O(n^{2α − 1/3}) of the lemma statement, because the other terms are O(n^{α − 1/6}) = o(n^{2α − 1/3}). So, combining these separate bounds, we have established the lemma.

Now we verify that T(λ, 2n^{1/2} − Kn^α) is concentrated around its mean.

Lemma A.10.
If λ is distributed according to the poissonized measure M_n, then T = T(λ, 2n^{1/2} − Kn^α) is concentrated near its mean µ = µ_{n,(2n^{1/2} − Kn^α)}, in the sense that for all ε > 0,

lim_{n→∞} P[|T − µ| > εµ] = 0.

Proof.
To show the lemma, we estimate the variance σ² of T(λ, 2n^{1/2} − Kn^α). Perhaps surprisingly, this is very easy. We will show that

σ² = σ²_{n,(2n^{1/2} − Kn^α)} ≤ µ = µ_{n,(2n^{1/2} − Kn^α)},

which suffices by the Chebyshev inequality, since µ grows to infinity with n by Lemma A.8. To show this, we note the following general fact (Lemma A.11): if X = Σ_{j=0}^{∞} x_j is a finite-expectation sum of Bernoulli (0 or 1) random variables with pairwise non-positive covariances, then σ²_X ≤ µ_X.

In our case, this lemma easily implies the result: when λ is distributed according to the poissonized measure M_n we have

T(λ, 2n^{1/2} − Kn^α) = Σ_{w=2n^{1/2} − Kn^α}^{∞} I(w ∈ D(λ)),

where the indicator variables I are 0 or 1 according to the truth value of their argument. We need only check that Cov(I(x ∈ D(λ)), I(y ∈ D(λ))) ≤ 0 for x ≠ y. We have

Cov(I(x ∈ D(λ)), I(y ∈ D(λ))) = P[{x, y} ⊆ D(λ)] − P[x ∈ D(λ)] P[y ∈ D(λ)].

Using Lemma A.1, this is

det [J(x, x; n), J(x, y; n); J(y, x; n), J(y, y; n)] − J(x, x; n) J(y, y; n) = −J(x, y; n) J(y, x; n) = −J(x, y; n)² ≤ 0,

since J(x, y; n) is symmetric in x and y. Therefore, the covariances are all non-positive, so it only remains to verify Lemma A.11.

Lemma A.11. If X = Σ_{j=0}^{∞} x_j is a finite-expectation sum of Bernoulli variables with pairwise non-positive covariances, then σ²_X ≤ µ_X.

Proof.
To prove this, we simply compute σ²_X = E[X²] − E[X]². The condition on covariances is equivalent to E[x_i x_j] ≤ E[x_i] E[x_j] for all i ≠ j. Because E[X] = Σ_{i=0}^{∞} E[x_i] is finite, its square is finite, so we have

(E[X])² = (Σ_{i=0}^{∞} (E[x_i])²) + 2 Σ_{i>j≥0} E[x_i] E[x_j] < ∞.

Using E[x_i²] = E[x_i] and E[x_i x_j] ≤ E[x_i] E[x_j], we find that

(E[X])² + µ_X = (Σ_{i=0}^{∞} ((E[x_i])² + E[x_i])) + 2 Σ_{i>j≥0} E[x_i] E[x_j] ≥ (Σ_{i=0}^{∞} E[x_i²]) + 2 Σ_{i>j≥0} E[x_i x_j] = E[X²].

We now know that the bounds in Lemmas A.8, A.9 apply to almost all partitions λ, not only "average" partitions: for some constants B₁(K, α), B₂(K, α), where B₂ = O(K²) for fixed α, if λ is distributed according to the poissonized measure M_n we have

B₁(K, α) n^{α − 1/6} ≤ T(λ, 2n^{1/2} − Kn^α) ≤ B₂(K, α) n^{2α − 1/3}

with probability 1 − o(1) as n → ∞. We now depoissonize the result. The key observation, as described in [8], is that the plain Plancherel measures M_n may be viewed as samplings at time n of a growth process on partitions, in which we begin with an empty partition and add additional blocks randomly at each step. It is clear that adding a block to λ cannot decrease the value of T(λ, w). So because we can embed these Plancherel measures into a growth process, the tail probabilities are monotone: for ℓ < n we have

P[T(λ^{(ℓ)}) < B₁(K, α) n^{α − 1/6}] ≥ P[T(λ^{(n)}) < B₁(K, α) n^{α − 1/6}],

and for ℓ > n we have

P[T(λ^{(ℓ)}) > B₂(K, α) n^{2α − 1/3}] ≥ P[T(λ^{(n)}) > B₂(K, α) n^{2α − 1/3}].

We claim that, for n large, the probability of a Poisson variable with mean n to be less than n converges to 1/2. Indeed, this is an immediate result of the Central Limit Theorem for independent, identically distributed random variables, applied to a sum of n Poisson random variables with mean 1.
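Incidentally, Lemma A.11 is easy to sanity-check exactly on a standard negatively correlated family: the membership indicators of a uniformly random k-subset (our example, not the paper's). Enumerating all subsets gives exact hypergeometric moments:

```python
from fractions import Fraction
from itertools import combinations

def subset_count_moments(N, k, S):
    """Exact mean and variance of X = |R ∩ S| over all k-subsets R of
    {0,...,N-1}; the indicators 1[i in R] are Bernoulli variables with
    pairwise negative covariance, so Lemma A.11 predicts var <= mean."""
    counts = [len(S & set(R)) for R in combinations(range(N), k)]
    m = Fraction(sum(counts), len(counts))
    var = Fraction(sum(c * c for c in counts), len(counts)) - m * m
    return m, var
```

For N = 6, k = 3, S = {0, 1, 2} this returns mean 3/2 and variance 9/20, and indeed 9/20 ≤ 3/2.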
Therefore,

limsup_{n→∞} M_n({λ | T(λ, 2n^{1/2} − Kn^α) < B₁(K, α) n^{α − 1/6}}) ≤ 2 lim_{n→∞} M_θ({λ | T(λ, 2n^{1/2} − Kn^α) < B₁(K, α) n^{α − 1/6}})|_{θ=n} = 0,

and similarly for the upper bound; a low probability of deviation for the poissonized Plancherel measure directly implies the same bound (up to a factor of 2) for the standard Plancherel measure. So, we have

B₁(K, α) n^{α − 1/6} ≤ T(λ, 2n^{1/2} − Kn^α) ≤ B₂(K, α) n^{2α − 1/3}

with probability 1 − o(1) for λ distributed according to the measure M_n.

Now we establish the connection between T(λ, 2n^{1/2} − Kn^α) and the column sizes. It will be valuable to visualize a Young diagram (in English coordinates) as a block-walk beginning from the far right of the top border line: the walk traces the border of the diagram, its vertical edges occur at the values λ_i − i (up to shift), and every left-move corresponds to the end of a column. With this in mind, T(λ, 2n^{1/2} − Kn^α) is essentially the height of the largest column formed so far after roughly Kn^α moves from the starting point. The idea is that our upper bound lets us move far out without fearing that our largest column is too big, while our lower bound tells us that many of the columns we have formed are large, implying that the sum of their sizes is large. The only caveat is that if we are moving vertically, T measures the current vertical distance moved, but may not exactly measure a column height; we may be in the middle of a column. However, we will circumvent this issue by using multiple values of K simultaneously, and showing that in between there must be many horizontal moves.

Pick an ε >
0. We will give a bootstrapping argument that establishes the condition of result (4) for the parameter β with probability 1 − 6ε. First, because the number (λ₁ − λ₂)/n^{1/6} of size-1 columns, suitably rescaled, converges in distribution to a limiting density, there exists δ₁ such that (for large n) with probability at least 1 − ε there are at least (δ₁/β) n^{1/6} parts of size 1; the condition in result (4) is then safe for all columns of size at most δ₁ n^{1/6}. Using Lemmas A.8 and A.9, since 1/4 − 1/6 = 1/12 and 2(1/4) − 1/3 = 1/6, we may pick δ₂, and then δ₃, such that for large n, with probability at least 1 − 2ε,

T(λ, 2n^{1/2} − δ₂ n^{1/4}) ≥ δ₃ n^{1/12}  and  T(λ, 2n^{1/2} − 2δ₂ n^{1/4}) ≤ δ₁ n^{1/6}

(for the second inequality, we use the fact that the constant factor in Lemma A.9 is O(K²)). Now, for large n, we have δ₂ n^{1/4} − δ₁ n^{1/6} ≥ (δ₂/2) n^{1/4}. This implies that D(λ) cannot contain more than δ₁ n^{1/6} of the interval of integers [2n^{1/2} − 2δ₂ n^{1/4}, 2n^{1/2} − δ₂ n^{1/4}], as this would violate the latter inequality above. In view of our block-walking model, this is equivalent to at least (δ₂/2) n^{1/4} horizontal moves being made between the steps at depth δ₂ n^{1/4} and depth 2δ₂ n^{1/4}, meaning that at least (δ₂/2) n^{1/4} columns in this range have height at least δ₃ n^{1/12} (this lower bound on the height comes from the first inequality above). Because the largest column among these is smaller than δ₁ n^{1/6}, the condition in result (4) already holds for all of these columns. Therefore, by bootstrapping, it holds for all columns with size at most β times the sum of their sizes, or at most

β (δ₃ n^{1/12})((δ₂/2) n^{1/4}) = (βδ₂δ₃/2) n^{1/3} = δ₄ n^{1/3}.

Now we repeat the argument once more. Again using Lemma A.9, we pick δ₅ such that with probability 1 − ε,

T(λ, 2n^{1/2} − 2δ₅ n^{1/3}) ≤ (δ₅/2) n^{1/3}

and δ₅ ≤ δ₄. The first is possible because the constant in Lemma A.9 is O(K²), implying that it is o(K) for K small. Given δ₅, we can then pick δ₆ such that with probability 1 − ε,

T(λ, 2n^{1/2} − δ₅ n^{1/3}) ≥ δ₆ n^{1/6}.

The first inequality above implies that of the δ₅ n^{1/3} block moves from the step at depth δ₅ n^{1/3} to the step at depth 2δ₅ n^{1/3}, we may make at most (δ₅/2) n^{1/3} vertical moves. Hence, we make at least (δ₅/2) n^{1/3} horizontal moves in this span. By the definition of δ₆, we make a column of size at least δ₆ n^{1/6} each time; moreover, each of these columns has size at most (δ₅/2) n^{1/3} ≤ δ₄ n^{1/3}, so each is already covered, giving a usable total column size sum of at least (δ₅δ₆/2) n^{1/2}.

Now we are almost done. Our bootstrapping has shown the condition of Theorem 4.2 for columns of size at most β(δ₅δ₆/2) n^{1/2} = δ₇ n^{1/2}, with probability at least 1 − 5ε. Bootstrapping once more, we can now sum up all the column lengths of size at most δ₇ n^{1/2}. At scale Θ(n^{1/2}), we can simply apply the limit shape theorem to conclude that for almost all partitions, the sum of such column lengths is Θ(n), a constant fraction of the total size, because we are collecting a positive-area amount of the limit shape. Because almost all partitions have all columns of size O(√n), we have bootstrapped to completion with error probability 6ε, and so the result is proved.

B Appendix: Generalization of Dominance Result
We extensively used Theorem 2.6, stating that if ϱ_m, λ are dominance comparable, then c(ϱ_m, ϱ_m, λ). We present a generalization.

Theorem B.1.
For partitions µ, ν ⊢ n, if µ has distinct row lengths and ν ⪰ µ, then c(µ, µ, ν).

This is strictly more general than Theorem 2.6: in the case µ = ϱ_m we obtain c(µ, µ, ν) for all ν ⪰ µ, and since the staircase satisfies µ = µ′, conjugating gives c(µ, µ, ν′) as well, so we have recovered both cases of Theorem 2.6.
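Both hypotheses of Theorem B.1 are mechanical to test, and the column-distinct fillings of ν with content µ used at the end of this proof can be built greedily. A sketch (code and names ours; the greedy Gale–Ryser-style assignment below places each label into the columns with the most remaining room, and succeeds exactly when ν ⪰ µ):

```python
def dominates(nu, mu):
    """nu dominates mu: prefix sums of nu weakly exceed those of mu."""
    assert sum(nu) == sum(mu), "dominance compares partitions of equal size"
    k = max(len(nu), len(mu))
    a = list(nu) + [0] * (k - len(nu))
    b = list(mu) + [0] * (k - len(mu))
    s = t = 0
    for x, y in zip(a, b):
        s, t = s + x, t + y
        if s < t:
            return False
    return True

def has_distinct_rows(mu):
    return len(set(mu)) == len(mu)

def column_distinct_filling(nu, mu):
    """Filling of shape nu with mu[k-1] copies of label k and distinct
    labels in every column, or None if impossible."""
    heights = [sum(1 for r in nu if r > j) for j in range(nu[0])]  # nu'
    cols = [[] for _ in heights]
    for label, count in enumerate(mu, start=1):
        # columns with free cells, fullest-last (most room first)
        free = sorted((j for j in range(len(cols)) if len(cols[j]) < heights[j]),
                      key=lambda j: len(cols[j]) - heights[j])
        if len(free) < count:
            return None
        for j in free[:count]:
            cols[j].append(label)
    # convert the filled columns back into rows of shape nu
    return [[cols[j][i] for j in range(len(cols)) if len(cols[j]) > i]
            for i in range(len(nu))]
```

For example, ν = (4, 2) dominates µ = (3, 3) and a filling exists, while swapping the roles fails: labels 1 need 4 distinct columns and ν′ = (2, 2, 2) has only 3.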
Proof. The proof of this generalization requires only a slight modification of the proof of the original result in [10]. We quote from there the following definition.
Definition 43.
Let d = |λ| = |µ| = |ν|. A Young hypergraph H of type (λ, µ, ν) is a hypergraph with d vertices such that:

1. There are three layers of hyperedges E_λ, E_µ, E_ν.
2. Each of E_λ, E_µ, E_ν contains each vertex in exactly 1 hyperedge.
3. There is a bijection between the vertices of H and the boxes of λ such that 2 vertices lie in a common hyperedge of E_λ iff the corresponding boxes in λ lie in the same column. Analogously for E_µ and µ, and for E_ν and ν.

Given a Young hypergraph of type (λ, µ, ν), we consider the ways to label its vertices with positive integers. In particular, we have the following definitions.

Definition 44.
We call a labelling of the vertex set of a Young hypergraph of type (λ, µ, ν) λ-permuting if for each hyperedge e_λ of E_λ with |e_λ| = k, the labels of the vertices of e_λ are a permutation of {1, 2, . . . , k}. We define µ-permuting and ν-permuting analogously.

Definition 45.
We call a labelling of the vertex set of a Young hypergraph of type (λ, µ, ν) λ-distinct if for each hyperedge e_λ of E_λ with |e_λ| = k, the labels of the vertices of e_λ consist of distinct numbers. We define µ-distinct and ν-distinct analogously.

Definition 46.
We call a labelling of the vertex set of a (λ, µ, ν) Young hypergraph H perfect if it is λ-permuting, µ-permuting, and ν-distinct.

Then the finish of the proof of [10, 2.1] (see Section 5 there) amounts to the following lemma.

Lemma B.2.
If there exists a Young hypergraph such that there is exactly 1 perfect labelling of its vertices, then c(λ, µ, ν) holds.

The proof in [10] uses as Young hypergraphs for ϱ_m the rows and columns of the Ferrers diagram of ϱ_m, which works because ϱ_m is symmetric. We now adapt this to an arbitrary µ with distinct row lengths; for any ν ⪰ µ we show there exists a perfect Young hypergraph of type (µ, µ, ν). We use the Ferrers diagram of µ as the vertex set. For the first hypergraph H₁ corresponding to µ, we simply take the columns of µ as the hyperedges. Note that in English coordinates this amounts to, for each hyperedge, greedily taking the left-most vertex in each available row. For our second hypergraph H₂ corresponding to µ, we greedily take vertices from the right of each row instead. For example, when µ = ϱ_m this amounts to taking the diagonals as hyperedges. It is easy to see that H₂ is also a hypergraph of type µ. We claim that H₁ and H₂ already limit the possible vertex labellings to a unique one, namely the labelling which assigns to each vertex its row number.

Lemma B.3.
There is only one vertex labelling of the Ferrers diagram of µ which is µ-permuting with respect to both H_1 and H_2. This labelling assigns to each vertex its row number.

Proof. We first make an easy observation: for any vertex v of our hypergraph, the hyperedge of H_2 containing v also contains a vertex in each higher row, all of which are strictly to the right of v. This is clear because the rows of µ have distinct lengths.

Now we inductively show that any such labelling must be the row numbering. We induct on columns, starting from the furthest right. Clearly the rightmost column's unique vertex is labelled 1, because it is contained in a size 1 hyperedge of H_1.

For each subsequent column, assume that all columns to the right are labelled by row numbers. We do a further induction within the column, starting from the bottom vertex. Call the column under consideration C, and say its vertices are v_1 in row 1, and so on down to v_k in row k. For a vertex v_i in C, assume that v_j is labelled j for all j > i; we show that v_i is labelled i.

To do so, note that by our initial observation, the hyperedge of H_2 containing v_i has a vertex in each of the rows 1, . . . , i − 1, each strictly to the right of v_i; by the outer induction these are already labelled 1, . . . , i − 1, so v_i must be labelled at least i. However, its hyperedge in H_1 is simply C, which already contains all labels greater than i. Therefore the only choice for v_i is the label i.

This completes the induction. Clearly the described labelling indeed satisfies the given conditions, so the lemma is proved.

Now the combination of the above lemmas implies that if we can find a perfect (µ, µ, ν) Young hypergraph extending H_1 and H_2 above with the above vertex labelling, then we have c(µ, µ, ν). In fact, we can do so for precisely those ν which dominate µ. It is clear that finding an E_ν hypergraph which yields a perfect Young hypergraph is equivalent to finding a filling of ν with content µ such that each column has distinct entries. (By such a filling of ν with content µ we mean a labelling of the boxes of ν such that the number of boxes labelled k equals the size µ_k of the k-th row of µ.) By [10][4.1], the existence of such a filling is equivalent to ν ⪰ µ, so the proof of Theorem B.1 is complete.

C Appendix: The Generalized Semigroup Property and Tensor Cubes
C.1 The Semigroup Property for Many Partitions
We first generalize c(·, ·, ·) to longer sequences of representations. Definition 47.
For k a positive integer, let c(λ_1, λ_2, . . . , λ_k) denote the assertion that λ_1 is a constituent of λ_2 ⊗ λ_3 ⊗ · · · ⊗ λ_k. As in the k = 3 case, c is symmetric in its arguments because it simply asserts the positivity of

(1/n!) Σ_{σ ∈ S_n} χ_{λ_1}(σ) χ_{λ_2}(σ) · · · χ_{λ_k}(σ).

We now show, using induction, that the semigroup property still applies for longer sequences.

Lemma C.1. If c(λ_1, . . . , λ_k) and c(µ_1, . . . , µ_k), then also c(λ_1 +_H µ_1, . . . , λ_k +_H µ_k).

Proof.
First, the result is trivially true for k ≤ 2, since c(λ_1) holds iff λ_1 is trivial and c(λ_1, λ_2) holds iff λ_1 = λ_2, and the case k = 3 is the usual semigroup property. For k ≥ 4 we induct on k: if c(λ_1, . . . , λ_k) holds, then expanding the product χ_{λ_{k−1}} χ_{λ_k} into irreducible characters shows that there exists a partition α such that c(λ_1, λ_2, . . . , λ_{k−2}, α) and c(λ_{k−1}, λ_k, α). Similarly there exists β such that c(µ_1, . . . , µ_{k−2}, β) and c(µ_{k−1}, µ_k, β). We now conclude using the inductive hypothesis that

c(λ_1 +_H µ_1, . . . , λ_{k−2} +_H µ_{k−2}, α +_H β) and c(λ_{k−1} +_H µ_{k−1}, λ_k +_H µ_k, α +_H β).

Together these imply c(λ_1 +_H µ_1, . . . , λ_k +_H µ_k), as desired.

As in the k = 3 case, we may vertically add any even number of the partitions in applying the semigroup property: this is because conjugating an even number of the partitions does not change the truth of c(λ_1, . . . , λ_k).

C.2 Rectangles Appear in ϱ_m^{⊗3}

As mentioned in the remarks, rectangles are difficult to control using the semigroup property because they can only be broken up into smaller rectangles. This suggests that rectangles should be the hardest case for the Saxl conjecture. In this section, we show that despite this, rectangles appear in the tensor cube of the staircase. In fact, the proof is a fairly simple induction.
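The shape operations used below can be made concrete with a short computational sketch. The Python helpers here are our own (not notation from the paper); they implement horizontal addition +_H (adding row lengths coordinatewise) and vertical addition +_V (stacking rows), and verify the geometrically obvious staircase identity invoked at the end of this section.

```python
# Partitions are weakly decreasing lists of row lengths.

def h_add(lam, mu):
    # Horizontal addition +_H: add row lengths coordinatewise,
    # padding the shorter partition with empty rows.
    n = max(len(lam), len(mu))
    lam = lam + [0] * (n - len(lam))
    mu = mu + [0] * (n - len(mu))
    return [r + s for r, s in zip(lam, mu)]

def v_add(lam, mu):
    # Vertical addition +_V: put the rows of both partitions together
    # and re-sort them into weakly decreasing order.
    return sorted(lam + mu, reverse=True)

def staircase(k):
    # The staircase partition rho_k = (k, k - 1, ..., 1).
    return list(range(k, 0, -1))

def rectangle(a, b):
    # The rectangle partition R(a, b): b rows of length a.
    return [a] * b

# Check (R(x, y) +_H rho_y) +_V rho_x = rho_{x + y} for small x, y.
for x in range(1, 8):
    for y in range(1, 8):
        lhs = v_add(h_add(rectangle(x, y), staircase(y)), staircase(x))
        assert lhs == staircase(x + y)
```

Note that h_add assumes both inputs are weakly decreasing, so the coordinatewise sum is again a partition; conjugation transposes a partition and interchanges the two operations.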
Definition 48.
The rectangle partition R(a, b) is the rectangular Young diagram (a, a, . . . , a) with b rows. Theorem C.2.
Any rectangular partition λ = R(a, b) of size m(m+1)/2 is a constituent of the tensor cube ϱ_m^{⊗3}. Equivalently, if ab = m(m+1)/2, then c(ϱ_m, ϱ_m, ϱ_m, R(a, b)).

Proof. We induct on m. Assume WLOG that a ≥ b. If a ≥ m, then b ≤ (m+1)/2 and by Lemma 6.2, λ and ϱ_m are dominance comparable. Thus, the result follows in this case. Thus, we may assume that (m+1)/2 < a < m. Then the quantities 2a − m − 1 and 2(m − a) are positive.

Lemma C.3. b + 2a − 2m − 1 ≥ 0 and a(b + 2a − 2m − 1) = |ϱ_{2a−m−1}|.

Proof. Since 2ab = m² + m, we have 0 ≤ (2a − m)(2a − m − 1) = m² + m + 4a² − 4am − 2a = 2ab + 4a² − 4am − 2a = 2a(b + 2a − 2m − 1). As a ≥ 1, it follows that b + 2a − 2m − 1 ≥ 0, and dividing by 2 gives a(b + 2a − 2m − 1) = (2a − m)(2a − m − 1)/2 = |ϱ_{2a−m−1}|.

Lemma C.4. If µ = R(2a − m, 2(m − a) + 1), then c(µ, µ, µ, µ).

Proof.
In fact this holds for any µ at all: we have k(µ, µ, µ, µ) = ⟨µ^{⊗2}, µ^{⊗2}⟩ > 0.

Lemma C.5. c(ϱ_{2(m−a)}, ϱ_{2(m−a)}, ϱ_{2(m−a)}, R(m − a, 2(m − a) + 1)).

Proof.
This follows from the inductive hypothesis, because the rectangle and the staircase clearly contain the same total number of blocks (and 2(m − a) < m since a > m/2).
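As a numerical sanity check on this block-counting (our own aside, not part of the proof), the size identities behind Lemmas C.3 and C.5, as we have stated them, can be verified for all admissible rectangles with small m:

```python
# Verify the block counts used in the induction: for every rectangle
# R(a, b) with ab = m(m+1)/2 and (m+1)/2 < a < m, the pieces
# R(m-a, 2(m-a)+1) and R(a, b+2a-2m-1) have the same number of boxes
# as the staircases rho_{2(m-a)} and rho_{2a-m-1}, respectively.

def staircase_size(k):
    # |rho_k| = k(k+1)/2
    return k * (k + 1) // 2

for m in range(2, 60):
    total = staircase_size(m)  # |rho_m|
    for a in range(1, total + 1):
        if total % a != 0 or not (m + 1 < 2 * a < 2 * m):
            continue
        b = total // a
        # Lemma C.5 bookkeeping: |R(m-a, 2(m-a)+1)| = |rho_{2(m-a)}|.
        assert (m - a) * (2 * (m - a) + 1) == staircase_size(2 * (m - a))
        # Lemma C.3 bookkeeping: the leftover rectangle is genuine and
        # matches the staircase rho_{2a-m-1} in size.
        assert b + 2 * a - 2 * m - 1 >= 0
        assert a * (b + 2 * a - 2 * m - 1) == staircase_size(2 * a - m - 1)
```

For example, m = 8 gives total 36, and the divisor a = 6, b = 6 falls in the admissible range; both size checks pass there.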
Lemma C.6. c(ϱ_{2a−m−1}, ϱ_{2a−m−1}, ϱ_{2a−m−1}, R(a, b + 2a − 2m − 1)).

Proof.
This follows from the inductive hypothesis and Lemma C.3, since 2a − m − 1 < m.

Now note that

(R(2a − m, 2(m − a) + 1) +_H R(m − a, 2(m − a) + 1)) +_V R(a, b + 2a − 2m − 1) = R(a, 2(m − a) + 1) +_V R(a, b + 2a − 2m − 1) = R(a, b),

while

(R(2a − m, 2(m − a) + 1) +_H ϱ_{2(m−a)}) +_V ϱ_{2a−m−1} = ϱ_m.

This latter identity is a rewriting of the geometrically obvious

(R(x, y) +_H ϱ_y) +_V ϱ_x = ϱ_{x+y}.

Lemma C.1 applied to these lemmas and identities gives the result; because 4 is even, it is permissible to vertically add all 4 partitions when using the semigroup property. Geometrically, we are simply combining shapes as below.

[Figure: Geometric proof that rectangles are contained in ϱ_m^{⊗3}]

References

[1]
Baik, J., Deift, P., and Johansson, K.
On the distribution of the length of the longest increasing subsequence of random permutations. J. Amer. Math. Soc. 12, 4 (1999), 1119–1178. [2]
Bessenrodt, C., and Behns, C.
On the Durfee size of Kronecker products of characters of the symmetric group and its double covers. J. Algebra 280, 1 (2004), 132–144. [3]
Billingsley, P.
Probability and Measure, vol. 939. John Wiley & Sons, 2012. [4]
Borodin, A., Okounkov, A., and Olshanski, G.
Asymptotics of Plancherel measures for symmetric groups. J. Amer. Math. Soc. 13, 3 (2000), 481–515 (electronic). [5]
Bürgisser, P., and Ikenmeyer, C. The complexity of computing Kronecker coefficients. In FPSAC 2008, Discrete Math. Theor. Comput. Sci. Proc., AJ. Assoc. Discrete Math. Theor. Comput. Sci., Nancy, 2008, pp. 357–368. [6]
Christandl, M., Harrow, A. W., and Mitchison, G.
Nonzero Kronecker coefficients and what they tell us about spectra. Comm. Math. Phys. 270, 3 (2007), 575–585. [7]
Fristedt, B.
The structure of random partitions of large integers.
Trans. Amer. Math. Soc. 337, 2 (1993), 703–735. [8]
Fulman, J.
Stein’s method and Plancherel measure of the symmetric group.
Trans. Amer. Math. Soc. 357, 2 (2005), 555–570. [9]
Fulton, W., and Harris, J.
Representation theory, vol. 129. Springer Science & Business Media, 1991. [10]
Ikenmeyer, C.
The Saxl conjecture and the dominance order.
Discrete Math. 338, 11 (2015), 1970–1975. [11]
Krasikov, I.
Uniform bounds for Bessel functions.
J. Appl. Anal. 12, 1 (2006), 83–91. [12]
Pak, I., Panova, G., and Vallejo, E.
Kronecker products, characters, partitions, and the tensor square conjectures. arXiv preprint arXiv:1304.0738 (2013). [13]
Regev, A.
Kronecker multiplicities in the (k, ℓ) hook are polynomially bounded. Israel J. Math. 200, 1 (2014), 39–48. [14]
Stanley, R. P.
Enumerative combinatorics. Vol. 2, vol. 62 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 1999. With a foreword by Gian-Carlo Rota and appendix 1 by Sergey Fomin. [15]
Vershik, A. M.
Statistical mechanics of combinatorial partitions, and their limitconfigurations.
Funktsional. Anal. i Prilozhen. 30, 2 (1996), 19–39, 96. [16]

Veršik, A. M., and Kerov, S. V.

Asymptotic behavior of the Plancherel measure of the symmetric group and the limit form of Young tableaux.