Longest increasing subsequences, Plancherel-type measure and the Hecke insertion algorithm
HUGH THOMAS AND ALEXANDER YONG

ABSTRACT. We define and study the Plancherel-Hecke probability measure on Young diagrams; the Hecke algorithm of [Buch-Kresch-Shimozono-Tamvakis-Yong '06] is interpreted as a polynomial-time exact sampling algorithm for this measure. Using the results of [Thomas-Yong '07] on jeu de taquin for increasing tableaux, a symmetry property of the Hecke algorithm is proved, in terms of longest strictly increasing/decreasing subsequences of words. This parallels classical theorems of [Schensted '61] and of [Knuth '70], respectively, on the Schensted and Robinson-Schensted-Knuth algorithms. We investigate, and conjecture about, the limit typical shape of the measure, in analogy with work of [Vershik-Kerov '77], [Logan-Shepp '77] and others on the "longest increasing subsequence problem" for permutations. We also include a related extension of [Aldous-Diaconis '99] on patience sorting. Together, these results provide a new rationale for the study of increasing tableau combinatorics, distinct from the original algebraic-geometric ones concerning K-theoretic Schubert calculus.
Date: March 29, 2008.

1. INTRODUCTION AND MAIN RESULTS
1.1. Overview.
Let W_{n,q} denote the set of words of length n generated using the alphabet {1, 2, ..., q}. Let LIS(w) denote the length of the longest strictly increasing subsequence of w = w_1 w_2 ... w_n, i.e., the largest ℓ for which there are indices i_1 < i_2 < ... < i_ℓ with w_{i_1} < w_{i_2} < ... < w_{i_ℓ}. Similarly, we consider the length of the longest strictly decreasing subsequence LDS(w) of w. Our main goal is to introduce and study a discrete probability measure on Young diagrams, in connection with the study of the distributions of LIS and
LDS on uniform random words. An additional goal is to provide a novel motivation for the K-theoretic Schubert calculus combinatorics of [BuKrShTaYo06, ThYo07]. There are analogies with the study of LIS and
LDS in the permutation case, i.e., when w is chosen uniformly at random from the symmetric group S_n. The latter topic has attracted considerable attention; we refer the reader to the surveys [AlDi99, St06] and the references therein. In the permutation case, random Young diagrams are distributed according to the Plancherel measure (on irreducible representations) of S_n. This discrete probability measure is the push-forward of the uniform distribution on S_n, under the Robinson-Schensted correspondence. Schensted [Sc61] established that this correspondence encodes
LIS(w) and LDS(w) symmetrically in the shape λ associated to w. In [VeKe77, LoSh77], these ideas are applied to determine the asymptotics of the expectation of LIS over S_n (solving the old "longest increasing sequences problem"), via a study of the "limit typical shape" under the Plancherel measure. As a continuation of this theme, we apply the Hecke (insertion) algorithm of [BuKrShTaYo06] to define the Young diagram Heckeshape(w) for each w ∈ W_{n,q}; using this we define the Plancherel-Hecke measure. Our belief that this measure should actually be worthy of analysis was initially guided by our theorem that
Hecke symmetrically encodes
LIS(w) and LDS(w) for w ∈ W_{n,q}, a generalization of Schensted's theorem. During the course of our investigation, we found that many other aspects of the Plancherel-Hecke measure (conjecturally) also resemble those of the Plancherel measure. This paper records these results, both theoretical and computational, as a justification for further study. Briefly, this is how the two aforementioned measures compare: Let q = Θ(n^α). Sending q, n → ∞, we conjecture that for α > 1/2, our measure is concentrated around the limit typical shape under Plancherel measure. This Plancherel curve plays an important role in [VeKe77, LoSh77]. On the other hand, for α < 1/2 we conjecture the measure is concentrated near the "staircase shape". In particular, a "phase transition" is suggested at α = 1/2. As we tune α, a symmetric deformation of the Plancherel curve occurs. In view of the above mentioned result on the Hecke algorithm, this transition phenomenon is further evidenced by computations (with contributions by O. Zeitouni) of the expectation of
LIS and
LDS as α varies; see Section 5 and the Appendix.

FIGURE 1. An increasing tableau in INC((4, 2, 1), 5) and a set-valued standard Young tableau in SsetT((4, 2, 1), 11)

There have been earlier extensions of the permutation case to W_{n,q}. The limit distribution of the length of the longest weakly increasing/decreasing subsequence (LwIS/LwDS) on W_{n,q} was found in work of [TrWi01], following the breakthrough [BaDeJo99] on the limit distribution of LIS on S_n. See also the more recent work [HoLi06]. However, analogous understanding of the distribution of LIS and
LDS on W_{n,q} appears to be less developed; see, e.g., [Bi01, BoOl07, TrWi01] for contributions. As a point of comparison and contrast with our approach, previous work on LIS, LDS and W_{n,q} utilizes the combinatorics of the Robinson-Schensted-Knuth correspondence, which asymmetrically encodes
LwIS and LDS. We offer an alternative viewpoint on the relationship between Young diagrams and LIS, LDS. New questions and conjectures are raised, stemming from the Coxeter-theoretic viewpoint of [BuKrShTaYo06] (which in turn generalizes ideas of [EdGr87]). This text expresses our desire to point out a natural link between the probabilistic combinatorics of
LIS, LDS and the combinatorial algebraic geometry of K-theoretic Schubert calculus. In particular, we apply and further develop the jeu de taquin for increasing tableaux from [ThYo07], thereby giving another perspective on that work, distinct from the original one. In summary, we believe that the availability of these two disparate interpretations for [BuKrShTaYo06, ThYo07] provides something atypical to recommend K-theoretic tableau combinatorics, among the large array of interesting generalizations of the classical Young tableau and symmetric function theories known today.

1.2. Plancherel-Hecke measure.
We identify a partition λ = (λ_1 ≥ λ_2 ≥ ... ≥ λ_k > 0) with its Young diagram (in English notation); set |λ| := Σ_i λ_i. Let Y denote the set of all Young diagrams. A filling of a shape λ with a subset of the labels {1, 2, ..., q} is an increasing tableau if it is strictly increasing in both rows and columns. Let INC(λ, q) be the set of all increasing tableaux of shape λ. We also need set-valued tableaux [Bu02a], which are fillings of λ assigning to each box a nonempty subset of {1, 2, ..., n} such that the largest entry of a box is smaller than the smallest entry in the boxes directly to the right of it, and directly below it. We call a set-valued tableau standard if each label is used precisely once. Let SsetT(λ, n) denote the set of all standard set-valued tableaux. See Figure 1.

The Plancherel measure on Y assigns to λ the probability (f^λ)²/n!, where f^λ := e_λ(|λ|) is the number of standard Young tableaux of shape λ. Let d_λ(q) := #INC(λ, q) and e_λ(n) := #SsetT(λ, n).

Definition 1.1. The Plancherel-Hecke probability measure µ_{n,q} on Y is defined by letting λ_{n,q} be a random (non-uniform) Young diagram with distribution Prob(λ_{n,q} = λ) := d_λ(q) e_λ(n)/q^n.

Proposition 1.2.
The Plancherel-Hecke measure is well-defined as a probability distribution; i.e., the following identity holds:
(1) q^n = Σ_λ d_λ(q) e_λ(n),
where
(2) |λ| ≤ min(n, q(q+1)/2) and λ ⊆ (q, q−1, q−2, ..., 3, 2, 1).
There is an exact polynomial-time sampling algorithm Heckeshape: W_{n,q} → Y, terminating in O(nq²) operations, that induces µ_{n,q} from the uniform distribution on W_{n,q}.

The core technical result of this paper is a generalization of the aforementioned theorem of Schensted [Sc61]:
Theorem 1.3.
Heckeshape simultaneously and symmetrically encodes LIS(w) and LDS(w) as the size of the first row and column of Heckeshape(w), respectively.

Theorem 1.3 is obtained by establishing another new result, connecting
Heckeshape to the "K-infusion" operation defined in [ThYo07]. This latter result is an analogue of the classical fact that connects the Robinson-Schensted correspondence to the (ordinary) jeu de taquin rectification procedure. We prove Proposition 1.2 in Section 2, after recalling the Hecke algorithm of Buch-Kresch-Shimozono-Tamvakis-Yong [BuKrShTaYo06] (originally constructed to study degeneracy loci of vector bundles). Heckeshape(w) is the Young diagram associated to w under Hecke. The proof of Theorem 1.3 is given in Section 4.
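Identity (1) can be checked by brute force in small cases, directly from the definitions of d_λ(q) and e_λ(n) above. The Python sketch below is ours (the function names are hypothetical, not from the authors' software); it enumerates increasing tableaux and standard set-valued tableaux naively:

```python
from itertools import product

def partitions_in_staircase(q, max_size):
    """Nonempty partitions contained in (q, q-1, ..., 1) with at most max_size boxes."""
    out = []
    def build(rows, size):
        if rows:
            out.append(tuple(rows))
        i = len(rows) + 1                       # 1-based index of the next row
        cap = min(rows[-1] if rows else q, q - i + 1, max_size - size)
        for length in range(1, cap + 1):
            build(rows + [length], size + length)
    build([], 0)
    return out

def d(shape, q):
    """d_lambda(q): count increasing tableaux of the given shape, entries in {1,...,q}."""
    boxes = [(r, c) for r, row in enumerate(shape) for c in range(row)]
    def fill(i, vals):
        if i == len(boxes):
            return 1
        r, c = boxes[i]
        total = 0
        for v in range(1, q + 1):
            if c > 0 and vals[(r, c - 1)] >= v:   # strictly increasing along rows
                continue
            if r > 0 and vals[(r - 1, c)] >= v:   # strictly increasing along columns
                continue
            vals[(r, c)] = v
            total += fill(i + 1, vals)
            del vals[(r, c)]
        return total
    return fill(0, {})

def e(shape, n):
    """e_lambda(n): count standard set-valued tableaux with labels 1..n, each used once."""
    boxes = [(r, c) for r, row in enumerate(shape) for c in range(row)]
    idx = {b: i for i, b in enumerate(boxes)}
    count = 0
    for assign in product(range(len(boxes)), repeat=n):
        sets = [set() for _ in boxes]
        for label, b in enumerate(assign, start=1):
            sets[b].add(label)
        if any(not s for s in sets):              # every box must be nonempty
            continue
        ok = True
        for (r, c), i in idx.items():
            for nb in ((r, c + 1), (r + 1, c)):   # right and below neighbors
                if nb in idx and max(sets[i]) >= min(sets[idx[nb]]):
                    ok = False
        if ok:
            count += 1
    return count

def check_identity(n, q):
    """Verify q^n = sum over lambda of d_lambda(q) * e_lambda(n)."""
    total = sum(d(lam, q) * e(lam, n) for lam in partitions_in_staircase(q, n))
    return total == q ** n
```

For instance, check_identity(4, 3) confirms 81 = 3^4; the single largest term in the sum is d_{(2,1)}(3) · e_{(2,1)}(4) = 5 · 8 = 40, so the shape (2,1) carries nearly half of the measure.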
Example 1.4. We illustrate the identity (1) for n = 4 and q = 3. There are nine partitions λ satisfying (2). These are (1), (2), (1,1), (2,1), (3), (1,1,1), (3,1), (2,1,1), (2,2). Then (1) reads
81 = 3^4 = 3·1 + 3·3 + 3·3 + 5·8 + 1·3 + 1·3 + 2·3 + 2·3 + 1·2,
where the products on the righthand side of the equality are listed in order corresponding to the above partitions. Thus, the "typical shape" is (2,1), possessing nearly 50% of the distribution. The remainder of the above results will be illustrated in Section 2, after we define Heckeshape.

Theorem 1.3 has some immediate consequences, familiar from the permutation case.

Corollary 1.5. Under the uniform measure on W_{n,q}, we have
(3) E(LIS) = (1/q^n) Σ_λ λ_1 d_λ(q) e_λ(n).
In addition,
Prob(LIS = ℓ) = (1/q^n) Σ_{λ : λ_1 = ℓ} d_λ(q) e_λ(n).

Two other consequences will be given in Section 3. One gives a "Coxeter-theoretic" generalization of the widely known Erdős-Szekeres theorem [ErSz35]. Another expands upon the discussion of patience sorting given in [AlDi99].

1.3.
Remarks on Proposition 1.2 and Theorem 1.3.
In our experiments, Heckeshape was reasonably efficient as a sampling algorithm. For example, when n ≤ 10,000, sampling one Young diagram takes on the order of seconds to minutes on current technology. For larger n, we could sample one Young diagram when n = 50,000 in several hours on the same technology. A sample when n = 200,000 took about one and a half days. The memory demands were modest. In view of the apparent "concentration" suggested below, one sample was enough to be of interest for our purposes, when n is large.

There are classical antecedents of Theorem 1.3. As stated earlier, Schensted [Sc61] proved the analogous conclusion about the shape coming from the Robinson-Schensted correspondence for a permutation w. In contrast, Knuth [Kn70] proved that the first row of the Robinson-Schensted-Knuth algorithm (RSK) encodes the length of the longest weakly increasing subsequence (
LwIS) of a word w. What is perhaps less well-known is that RSK also encodes LDS(w) as the length of the first column of the shape it associates to w. However, unlike Heckeshape, it is asymmetric: LIS(w) is not encoded by the length of the first row (as LIS(w) ≠ LwIS(w) in general). Thus, the symmetry of Heckeshape makes it natural to analyze, as it seems desirable to simultaneously capture the statistics of
LIS and
LDS. However, we do not have handy formulas for
Prob(λ), d_λ(q) or e_λ(n), such as the hook-length formula for f^λ. (In the permutation case, the hook-length formula plays a crucial role, see [VeKe77, LoSh77].) In small examples large prime factors appear, showing that such a formula is unlikely. This issue is closely related to the open question of finding "good" determinantal expressions for Grothendieck polynomials [LaSc82]. That being said, special cases exhibit connections to work of Stanley [St96] on polygonal dissections, and of [FoGr98] on generalized Littlewood-Richardson rules (further discussion may appear elsewhere). Thus, the enumerative combinatorics of these numbers might be of interest in their own right. It is not difficult to give recursions to calculate d_λ(q) and e_λ(n) that are useful in moderately large cases. Are there efficient (possibly randomized or approximate) counting algorithms?

Objectively, the lack of simple formulas to compute Prob(λ) makes it trickier to apply standard approaches directly; this is an admitted defect of our setting. Nevertheless, we believe the framework of problems described here is tractable. In addition to the results below, in the Appendix one gains useful and nontrivial information about the Plancherel-Hecke measure by exploiting the related work of [Bi01]. In this way, the techniques of [LoSh77, VeKe77] can be applied to the present context. (Software is available at the authors' websites.)

1.4. Analysis of µ_{n,q} and the limit typical shape. We organize our analysis by first setting
(4) q = f(n) ∈ Θ(n^α), where 0 ≤ α ≤ 1,
and considering the limit behavior of µ_{n,q} when q → ∞, as n → ∞. (The case α = 0 is trivial.) As is explained below, we conjecture that there is a critical value of α, denoted α_critical := 1/2: the behavior of µ_{n,q} is qualitatively different in the intervals α ∈ (α_critical, 1] and α ∈ (0, α_critical). At α = α_critical = 1/2, further refinement of the analysis is needed, as we transition from one state to the other.

In the permutation case, to study the Plancherel measure, it is useful to consider the most likely, or "typical" shape. There, three facts are true. First, in the large limit (and after rescaling), a well-defined typical shape exists. Second, the expectation of the LIS and
LDS of a large random permutation is encoded respectively in the length of the first row and column of the limit shape. Third, the Plancherel measure is concentrated near the typical shape. We conjecture that analogues of all three of the aforementioned features also hold for the Plancherel-Hecke measure.

To be more precise, let the typical shape Λ_{n,q} be the shape λ (contained in (q, q−1, ..., 2, 1)) maximizing Prob(λ). This Young diagram Λ_{n,q} can be interpreted as a step-function. It can furthermore be associated to a piecewise linear approximation f_{n,q}: [0, ∞) → R_{≥0}. Finally, rescale by
f̂_{n,q}(x) := (1/√n) f_{n,q}(x·√n) if α ≥ α_critical = 1/2, and f̂_{n,q}(x) := (1/q) f_{n,q}(x·q) otherwise.

Conjecture 1.6. (I)
For any 0 ≤ α ≤ 1, there is a unique continuous function Λ ∈ C([0, ∞) → R_{≥0}) such that for any ǫ > 0,
Prob( sup_{x ∈ R_{≥0}} |f̂_{n,q} − Λ| > ǫ ) → 0 as n → ∞.
We call this Λ the limit typical shape.
(II) A "phase transition" occurs at α_critical = 1/2: For 0 ≤ α < α_critical = 1/2, Λ is the line
(5) y = 1 − x, for 0 ≤ x ≤ 1.
For α_critical = 1/2 < α ≤ 1, Λ is the Plancherel curve, which is parametrically given by
(6) x = y + 2 cos θ, y = (2/π)(sin θ − θ cos θ), for 0 ≤ θ ≤ π.
(The curves defined by (5) and (6) are declared to be identically 0 beyond their x-intercepts.)
(III) For α = α_critical = 1/2: there is a constant C > 0 such that if q = k√n + lower order terms, then if k < C, Λ is given by (5). Otherwise, Λ is given by a deformation of (6) which is symmetric across the line y = x. In either case, the x and y intercepts are at β(k), where 0 ≤ β(k) ≤ 2, E(LIS) ≈ β(k)√n and explicitly,
β(k) = k if 0 ≤ k ≤ 1, and β(k) = 2 − 1/k if k > 1.

We have reasonable support for the cases (I) and (II) of Conjecture 1.6.
Heuristically, part (II) of the conjecture says that when α is large, and thus q is "close" to n, a random word is "close to" being a random permutation. For a permutation, Schensted and Hecke behave the same. Hence the Plancherel and Plancherel-Hecke measures ought to be maximized on the same shape. When α is small, the limit typical shape is a rescaling of the "staircase shape" which plays a distinguished role in the Edelman-Greene algorithm [EdGr87] and the Hecke algorithm; see Theorem 1.8 and its proof.

Conjecture 1.6(III) is more speculative, since we did not have as much computational evidence for the shape of Λ. There appears to be a continuous "flattening" of the Plancherel curve to a line as we tune k from ∞ to 0. Our data was insufficient to rule out the possibility that Λ is simply a rescaling of the Plancherel curve by a factor of β(k).

Problem 1.7. Explicitly describe the deformation of Λ when α = α_critical = 1/2, as k varies.

Our best estimate is that 1 ≤ C ≤ 2 (probably just C = 1). However, the values of β(k) for k relatively small can be experimentally estimated. The table below was based on Monte Carlo estimates of E(LIS) for n = 50,000, 100,000 and 200,000, and the estimates were stable throughout this range. They closely agree with the conjecture for β(k) given above.

β(k) estimates, for increasing values of k: 0.25, 0.50, 0.74, 0.86, 0.94

TABLE 1. Estimates of β(k) for the α = α_critical = 1/2 case

Notice that since Prob(λ) = Prob(λ′), where λ′ is the conjugate shape of λ, we know that if Λ exists and is unique, then Λ is symmetric. This is consistent with the limit curves we predict. We prove in Section 5 that:

Theorem 1.8.
Conjecture 1.6 is true for 0 ≤ α < 1/2. More precisely, in this range, a random shape λ under µ_{n,q} satisfies λ_i = q − i + 1 almost surely, as n, q → ∞.

The proof of this theorem depends on the analysis of a certain random walk on the symmetric group. Our analysis is not sharp enough to extend to the case α = α_critical = 1/2, although a refinement might be possible. Empirically, one finds that the first row and column of Λ_{n,q} are approximations of E(LIS) and E(LDS) that improve as n → ∞. Therefore, it makes sense to study the asymptotics of E(LIS) and E(LDS) as a means to understand the characteristics of Λ_{n,q}. From this point of view, the following result supports the phase transition phenomena asserted in Conjecture 1.6.

Theorem 1.9 (With O. Zeitouni). If 0 ≤ α < α_critical = 1/2 then lim_{n→∞} E(LIS)/q = 1, whereas if α_critical = 1/2 < α ≤ 1 then E(LIS) ≈ 2√n. The same statements hold when E(LDS) replaces E(LIS).

The proof for α < α_critical = 1/2 is a variation on the approach we use to prove Theorem 1.8. We also conjectured the answer for α > α_critical = 1/2; after showing O. Zeitouni our guess during an early stage in the project, he communicated to us a proof, and kindly allowed us to reproduce his argument here. In private communication, E. Rains offered a proof that E(LIS) ≈ q in the α = α_critical = 1/2 and k ≤ 1 case. Afterward, in the appendix for this paper, O. Zeitouni and the second author present a simple proof that E(LIS) ≈ β(k)√n, for all k, in the α = α_critical = 1/2 case, thereby closing the gap in Theorem 1.9. The proof builds on work of [Bi01] (see further discussion in Section 1.5). These results further support the belief that C = 1 in Conjecture 1.6(III). In addition, we have the following conjecture about the fluctuation of LIS and
LDS.

Conjecture 1.10. Let σ(LIS) denote the standard deviation of LIS. For 0 ≤ α < α_critical = 1/2, lim_{n→∞} σ(LIS) = 0, whereas if α_critical = 1/2 < α ≤ 1 then σ(LIS) = O(n^{1/6}). The same statements hold for LDS.

Note that Theorem 1.8 implies Conjecture 1.10 holds for 0 ≤ α < 1/2. Tables 2 and 3 give numerical evidence for Conjecture 1.10 and are consistent with Theorem 1.9.

TABLE 2. n = 50,000 with 1,000 Monte Carlo trials: estimates of E(LIS) and σ(LIS) as α varies

TABLE 3. Estimates of E(LIS) and σ(LIS) as α varies, at a larger value of n

µ_{n,q} appears "concentrated" near Λ_{n,q}, i.e., the probability of sampling a random shape differing, in the sup-norm, from Λ after rescaling, by some fixed ǫ > 0, goes to 0 as n, q → ∞. See Figure 2: already at n =
5,000 we see that two random samples are visibly "close" to one another, and are similar in shape to the third curve which is an approximation of the Plancherel curve. At larger n the curves appear undeniably to be rescalings of one another, with a nearly constant rescaling factor in our experiments. Naturally, as α gets larger, the empirical convergence of the curves occurs faster.

FIGURE 2. Two samples at n = 5,000 compared with an empirical approximation of the Plancherel curve; conjecturally as n → ∞, the sample curves converge to one another

1.5. Further comparisons with the literature.
As mentioned earlier, the limit distribution of LIS on permutations, and that of LwIS on words, is well understood. The study of LIS on W_{n,q} was considered, e.g., in [TrWi01]. In addition, the study of the distribution of LIS, in the critical case α_critical = 1/2, is implicit in [Bi01, BoOl07]. In [Bi01], an alternative measure on Young diagrams is studied: Schur-Weyl duality implies that one has the decomposition
(C^q)^{⊗n} ≅ ⊕_λ S^λ ⊗ V^λ,
where here S^λ is the S_n irreducible Specht module and V^λ is the GL_q(C) irreducible Schur module. Now taking dimensions one defines a probability measure that assigns to λ the likelihood (dim S^λ · dim V^λ)/q^n. Biane explicitly determines the rescaled limit typical shape in this context. Combinatorially, Biane's measure arises from the RSK algorithm. Since we know that RSK(w) encodes the LIS(w) in the first column (by reading w backwards), one expects, by analogy with [LoSh77, VeKe77], that a certain rescaling of the first column of Biane's limit shape is the β(k) of Conjecture 1.6. However, to justify this conclusion rigorously one needs more work. Further, the fluctuations around Biane's curve have been studied, in a special case, by Borodin-Olshanski [BoOl07].

Hecke was originally developed in [BuKrShTaYo06] as a generalization of the
Edelman-Greene correspondence, which bijects Coxeter reduced words in the symmetric group to pairs of tableaux [EdGr87]. Our proof of Theorem 1.3 implies that this algorithm encodes the LIS of such words, although the study of LIS of reduced words appears unmotivated. On the other hand, the Coxeter-theoretic viewpoint on words will be useful in our analysis of
LIS, LDS and Λ.

1.6. Summary and organization. In Section 2 we recall the Hecke algorithm and give an additional example of the results of Section 1.2. We then prove Proposition 1.2. In Section 3, we include two consequences of Theorem 1.3. We split our remaining proofs according to the main flavor of technique used: in Section 4 we explain the increasing tableau theory we need from [ThYo07] and prove Theorem 1.3. In Section 5, we utilize probabilistic-combinatorial techniques, combined with our main results, to prove Theorems 1.8 and 1.9.

2. THE Hecke ALGORITHM

2.1. The 0-Hecke monoid. We need to recall some notions used in [BuKrShTaYo06]. The 0-Hecke monoid H is the quotient of the free monoid of all finite words in the alphabet {1, 2, ..., q} by the relations
(7) i i ≡ i for all i,
(8) i j i ≡ j i j for all i, j,
(9) i j ≡ j i for |i − j| ≥ 2.
There is a bijection between H and the symmetric group S_{q+1}. Given any word a ∈ H there is a unique permutation π ∈ S_{q+1} such that a ≡ b for any reduced word b of π; see, e.g., the textbook [BjBr05] for basic Coxeter theory for the symmetric group. In this case, we write W(a) = π and say that a is a Hecke word for π. Indeed, the reduced words for π are precisely the Hecke words for π that are of the minimum length ℓ(π), the Coxeter length of π (after identifying the label i with the simple reflection s_i = (i, i+1)). Given an additional permutation ρ with Hecke word b, the Hecke product of π and ρ is defined as the permutation π · ρ = W(ab).

The (row reading) word of a tableau T, denoted word(T), is obtained by reading the rows of the tableau from left to right, starting with the bottom row, followed by the row above it, etc. We also define W(T) := W(word(T)).

The Hecke algorithm defined in [BuKrShTaYo06] identifies pairs (w, i) of words w = w_1 w_2 ... w_n, i = i_1 i_2 ... i_n, where w is a Hecke word and i satisfies i_1 ≤ i_2 ≤ ... ≤ i_n and i_j < i_{j+1} whenever w_j ≤ w_{j+1}, with pairs of tableaux (P, Q) of the same shape, where P is an increasing tableau such that word(P) ≡ w and the content (i.e., multiset of labels) of Q matches the content of i. We refer to the P-tableau as the insertion tableau and the Q-tableau as the recording tableau. (We point out that the "column" convention in [BuKrShTaYo06] differs slightly from the "row" one used here.)

2.2.
Description of
Hecke and
Heckeshape. The following description of Hecke was originally given in [BuKrShTaYo06]:

Description of Hecke: In this algorithm, one inserts an integer x into an increasing tableau T. We denote this by T ← x. The output is a triple (U, c, α) where U is a modification of T (possibly T = U), c is a corner of U and α ∈ {0, 1} is a parameter. Initially, we attempt to insert x into the first row of T, and an output integer is possibly created which is inserted into the next row and so on, until no output integer is created. We refer to this final insertion as the terminating step, and the previous insertions as bumping steps.

Suppose R is a row that we are attempting to insert x into. If x is larger than or equal to all the entries of R, then no output integer is generated and the algorithm terminates: if adjoining x to the end of R results in an increasing tableau U, then set α = 1 and c to be the new corner added. Otherwise end with the present U, without modification; α = 0 and c is the corner that is at the end of the column containing the rightmost box of R. On the other hand, if R contains boxes strictly larger than x, let y be the smallest such box. If replacing y with x results in an increasing tableau, then do so. In either case, y is the output integer to be inserted into the next row.

Inserting a word w using this algorithm terminates with an increasing tableau
P = ((((∅ ← w_1) ← w_2) ← ...) ← w_n).
The Q tableau is obtained by placing each i_j in the c-corner resulting from the insertion of w_j. □

We also have the following reverse insertion algorithm
Hecke^{-1}.

Description of Hecke^{-1}: Let Z be an increasing tableau, c a corner of Z, and α ∈ {0, 1}. Reverse insertion applied to the triple (Z, c, α) produces a pair (Y, x) of an increasing tableau Y and a positive integer x as follows. Let y be the integer in the cell c of Z. If α = 1, remove y. In any case, reverse insert y into the row above the corner c. Whenever a value y is reverse inserted into a row R, let x be the largest entry of R such that x < y. If replacing y with x results in an increasing tableau, then this is done. In any case, the integer x is passed up. If R is not the top row, this means that x is reverse inserted into the row above R; otherwise x becomes the final output value, along with the modified tableau.

We now complete the description of Hecke. Locate the bottom-most corner with the largest label in the Q tableau, and remove the label. If it was the only entry in its corner, remove the corner and set α = 1. Otherwise set α = 0. Set c to be this corner. Then reverse insert (P, c, α). Repeat until all the entries of Q (and P) have been removed. □

Hecke is a generalization of the Robinson-Schensted correspondence in the sense that it agrees with that correspondence whenever w is a permutation in S_n. In that case the P and Q tableaux are both standard Young tableaux. In this paper, we are only concerned with the case i_j = j. Therefore, we also set Hecke(w) := Hecke(w, 123...n) and define Heckeshape: W_{n,q} → Y by setting Heckeshape(w) to be the common shape of P and Q under Hecke(w). (An alternative description of this map is given in Theorem 4.2 in Section 4.)

Example 2.3. Starting from a word w and inserting its letters one at a time, Hecke produces a sequence of increasing tableaux. For the word of this example (intermediate steps omitted), Heckeshape(w) = (4, 3, 2, 1, 1); indeed, the length of the first row of this shape equals LIS(w) = 4, whereas the length of the first column equals LDS(w) = 5.

2.3. Proof of Proposition 1.2.
The claim that µ_{n,q} is a probability distribution follows if Hecke extends to provide a bijection between W_{n,q} and
Γ_{n,q} := ⋃_λ INC(λ, q) × SsetT(λ, n),
where λ satisfies (2). Associate to each word w ∈ W_{n,q} the pair (w, 123...n). Clearly Hecke injectively maps these pairs into Γ_{n,q}. To prove surjectivity, let (P, Q) ∈ Γ_{n,q}. Then under Hecke^{-1}, (P, Q) corresponds to some pair (w, i). Now i = 123...n since that is the only possible sequence that can arise from a standard tableau Q. Also, since W(w) = W(word(P)), w must use some subset of {1, 2, ..., q}. Thus w ∈ W_{n,q}. Hence Hecke maps W_{n,q} onto Γ_{n,q}. The claim (2) is then clear from the properties of Hecke.

Finally, from the above discussion it is immediate that Heckeshape is a sampling algorithm for µ_{n,q}. The bottleneck of the algorithm is the insertion process (a random uniform word w ∈ W_{n,q} can be generated in O(n log q) time). By (2) we know that each of the n insertions demands at most q(q+1)/2 operations. Hence O(nq²) operations are needed. This completes the proof of Proposition 1.2. □
3. SOME FURTHER CONSEQUENCES OF THEOREM 1.3

3.1. A generalization of the Erdős-Szekeres theorem. The following classic result is due to Erdős-Szekeres [ErSz35]:
Theorem 3.1.
Let a, b ≥ 1. If w ∈ S_{ab+1} then LIS(w) > a or LDS(w) > b.

It is known that this result can be readily deduced from Schensted's results, see, e.g., [St06, Section 2]. Theorem 1.3 similarly leads to an extension of Theorem 3.1 that relates
LIS and
LDS to Coxeter length.
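Theorem 3.1 is easy to confirm exhaustively for small a and b; a minimal sketch (our naming, not the authors' software):

```python
from itertools import permutations

def lis(w):
    """Longest strictly increasing subsequence via O(n^2) dynamic programming."""
    best = [1] * len(w)
    for i in range(len(w)):
        for j in range(i):
            if w[j] < w[i]:
                best[i] = max(best[i], best[j] + 1)
    return max(best, default=0)

def lds(w):
    return lis([-x for x in w])

def erdos_szekeres_holds(a, b):
    """Check LIS(w) > a or LDS(w) > b for every permutation w of 1..ab+1."""
    return all(lis(w) > a or lds(w) > b
               for w in permutations(range(1, a * b + 2)))
```

The bound ab + 1 is sharp: in S_{ab} there are permutations with LIS ≤ a and LDS ≤ b, e.g., (2, 1, 4, 3) for a = b = 2.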
Proposition 3.2. Let w ∈ W_{n,q}. Suppose 1 ≤ a, b < q and
(10) ℓ(W(w)) > Σ_{i=1}^{a} min(b, q − i), or equivalently ℓ(W(w)) > Σ_{j=1}^{b} min(a, q − j);
then LIS(w) > a or LDS(w) > b; recall W(w) is the permutation identified with w.

Proof. If LIS(w) ≤ a and LDS(w) ≤ b then by Proposition 1.2 and Theorem 1.3, we have:
Heckeshape(w) ⊆ (a × b) ∩ (q, q − 1, ..., 3, 2, 1).
Thus
|Heckeshape(w)| ≤ Σ_{i=1}^{a} min(b, q − i) = Σ_{j=1}^{b} min(a, q − j).
Since ℓ(W(w)) ≤ |Heckeshape(w)|, the result then follows. □

Example 3.3. If q = 5 and a = b = 3, then if ℓ(W(w)) > 8 then LIS(w) > 3 or LDS(w) > 3. This inequality is tight in the sense that the bound cannot be reduced: consider w = 32143254. This is already a reduced word of Coxeter length 8, viewed as an element of S_6, and LIS(w) = LDS(w) = 3.

Proposition 3.2 generalizes Theorem 3.1 because if w ∈ S_{ab+1} is viewed as a Hecke word, we have ℓ(W(w)) = ab + 1 (any word where all letters are distinct is automatically reduced). Then set q = ab + 1 and thus (10) is satisfied.

3.2. Patience sorting for decks with repeated values.
In [AlDi99], the Schensted correspondence was connected to the one-person (solitaire) card game patience sorting. We include a generalization of this connection, which in particular is a refinement of the LIS claim of Theorem 1.3.

In this game, a deck of cards labeled 1, 2, ..., n is shuffled and the cards are turned up one at a time and dealt into piles on the table: a lower card may be placed on top of a higher card, or put into a new pile to the right of the existing piles. The goal of the game is to finish with as few piles as possible.

For example, if n = 10 and the deck is shuffled in the order 8, 2, 6, 3, 1, 4, 7, 10, 9, 5, then the top card 8 is dealt onto the table. The 2 can either be placed to the right of the 8 or on top of it – suppose we chose the latter scenario. Next the 6 must be placed to the right of the pile containing the 8 and 2, starting a new pile. At this stage, we have the piles (8 2), (6), each pile listed from bottom card to top card. The greedy strategy is to always place the new card in the leftmost pile possible. If we complete the game using this strategy, we would obtain, successively:

(8 2), (6 3) → (8 2 1), (6 3) → (8 2 1), (6 3), (4) → (8 2 1), (6 3), (4), (7) → (8 2 1), (6 3), (4), (7), (10) → (8 2 1), (6 3), (4), (7), (10 9) → (8 2 1), (6 3), (4), (7 5), (10 9).
It is easy to prove that the top cards increase from left to right throughout the game; Mallows [Ma73] and later independently Hammersley [Ham72, p. 362] observed that the number of piles at the end equals LIS(w), where w ∈ S_n is the permutation defining the shuffled deck. Finally, Aldous-Diaconis note that the first row of the insertion tableau under Robinson-Schensted agrees with the top cards.

Aldous-Diaconis [AlDi99, Section 2.4] consider two variants of patience sorting where the deck has repeated entries, i.e., where all cards of the same rank (e.g., all Jacks) are equal. The two rules they consider are “ties forbidden” and “ties allowed”, depending on whether or not a Jack can be placed on top of another Jack. They provide an analysis of the former case, relating it to the Robinson-Schensted-Knuth correspondence. Playing patience (using the greedy strategy) on the same shuffled deck with ties forbidden and with ties allowed can produce different final pile configurations.
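The rules above are short to implement. The following Python sketch (ours; function and variable names are hypothetical, not from the paper) plays greedy patience sorting under either tie rule; with ties allowed, the number of piles matches the length of the longest strictly increasing subsequence, in line with Proposition 3.4 below:

```python
def patience_piles(word, ties_allowed=True):
    """Greedy patience sorting: put each card on the leftmost legal pile."""
    piles = []  # each pile is a list; its last entry is the visible top card
    for card in word:
        for pile in piles:
            top = pile[-1]
            if card < top or (ties_allowed and card == top):
                pile.append(card)
                break
        else:  # no legal pile: start a new pile on the right
            piles.append([card])
    return piles

def lis_strict(word):
    """Length of the longest strictly increasing subsequence (O(n^2) DP)."""
    best = []  # best[i] = length of the longest strict LIS ending at position i
    for i, x in enumerate(word):
        best.append(1 + max([best[j] for j in range(i) if word[j] < x], default=0))
    return max(best, default=0)

w = [2, 1, 4, 3, 5, 5]               # a word with a repeated value
piles = patience_piles(w, ties_allowed=True)
tops = [p[-1] for p in piles]
assert len(piles) == lis_strict(w) == 3
assert tops == sorted(tops)          # top cards increase left to right
```

Running the same word with `ties_allowed=False` creates an extra pile for the repeated 5, illustrating the difference between the two rules.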
Proposition 3.4.
Assume patience sorting is played with ties allowed, on a deck of n cards with q distinct types of cards (viewed as a word w ∈ W_{n,q}). Then:

(I) The top cards of each pile at the termination of the game, using the greedy strategy, as read from left to right, agree with the top row of the insertion tableau of Hecke(w).
(II) The optimal strategy (minimizing the number of piles created) is the greedy strategy, and LIS(w) piles are created.

Proof. The proof of (I) is an easy induction, comparing the description of Hecke with the “ties allowed” rules of patience sorting. For (II), by (I) and Theorem 1.3 we know the greedy strategy terminates with LIS(w) piles. The proof of optimality is the same mutatis mutandis as the one for the original variant, see [AlDi99, Lemma 1]. □

Briefly, probabilistic and statistical analysis on the “ties allowed” case of patience sorting is possible, in analogy to the work of [AlDi99]. Below we have tabulated the results of a Monte Carlo simulation with 100,000 trials on a standard 52-card deck. Typically, the number of piles is between 8 and 10. The average number of piles is about 9.2 which, naturally, is less than the average number of piles when the deck is totally ordered, as reported by Aldous-Diaconis. So a deck ordering is “lucky” if the number of piles is less than 8, which occurs only about 3% of the time.

number of piles |  6 |    7 |     8 |     9 |    10 |   11 |   12 | 13
frequency       | 82 | 2993 | 20336 | 39039 | 27843 | 8489 | 1166 | 52

TABLE
4. Monte Carlo simulation for a standard 52-card deck with 100,000 trials. The average number of piles is about 9.2.

Figure 3 shows that there is a definite shape describing the mean pile sizes as α varies. Questions about the structure of this shape may be interpreted as enriched questions related to LIS. One can analyze such questions using the dichotomy of Section 1.4; we do not pursue this here.

4. INCREASING TABLEAU THEORY AND THE PROOF OF THEOREM 1.3

We first show how the first row of λ = Heckeshape(w) computes LIS(w). Let r(w, t) be the largest index such that the longest strictly increasing subsequence ending at w_{r(w,t)} has length t. For example, if w = 21324 then r(w, 1) = 2, r(w, 2) = 4 and r(w, 3) = 5. We now in fact prove the following claim, which is stronger than the LIS assertion of Theorem 1.3 (cf. [St99, Prop. 7.23.10]):
Proposition 4.1.
Suppose w ∈ W_{n,q}, s = LIS(w) and P is the insertion tableau of Hecke(w). Then the first row of P is given by w_{r(w,1)}, w_{r(w,2)}, . . . , w_{r(w,s)}.

Proof. By induction on n. The base case n = 1 is trivial.

Suppose that the claim holds for w° := w_1 w_2 · · · w_{n−1}. Thus if P° is the insertion tableau of Hecke(w°) then by induction, the first row is given by w°_{r(w°,1)}, w°_{r(w°,2)}, . . . , w°_{r(w°,s°)}, where s° = LIS(w°).

We consider the possibilities of what happens as we insert w_n into the first row of P°. We prove the desired conclusion holds for the case that w°_{r(w°,t)} < w_n for some maximally chosen t < s°; other cases are similar. So after inserting w_n, the first row of P is

w°_{r(w°,1)}, w°_{r(w°,2)}, . . . , w°_{r(w°,t)}, w_n, w°_{r(w°,t+2)}, . . . , w°_{r(w°,s°)}.

The assumption shows that the longest increasing subsequence ω in w using w_n is of length at least t + 1, since we can adjoin w_n to the length t subsequence ending at w°_{r(w°,t)}. On the other hand, ω is of length at most t + 1 since if it were, say, of length t + 2, there must be a length t + 1 increasing subsequence in w° ending at w_r with w_r < w_n and r < r(w°, t + 1). But this is a contradiction of the definition of r(w°, t + 1), since we would then have a length t + 2 increasing subsequence ending at w°_{r(w°,t+1)} (≥ w_n).

Since we have just shown w_n = w_{r(w,t+1)}, it is now clear that w_{r(w,h)} = w°_{r(w°,h)} for h ≠ t + 1. Thus, the first row of P satisfies the desired claim, and the induction follows. □

FIGURE 3. Simulation of mean pile sizes (Monte Carlo). The x-axis indicates the position of the pile, counting from the left.

In order to prove that the first column of λ computes LDS(w), we need to draw a connection to [ThYo07], where we developed a K-theoretic jeu de taquin theory. Rather than repeat the setup in full here, for brevity, we refer the reader to that paper for the complete background on K-rectification and K-infusion used below. Although what follows also constitutes a proof of our LIS claims, we felt that including the direct proof via the stronger claim of Proposition 4.1 was worthwhile. However, a similarly direct proof of our
LDS claim seems harder.

Let γ_n = (n, n − 1, n − 2, . . . , 3, 2, 1) be the staircase shape. Also, let λ_perm(n) = γ_n/γ_{n−1} be the permutation shape consisting of n single boxes arranged along an antidiagonal. Given w ∈ W_{n,q}, define T_w ∈ INC(λ_perm(n)) to be the tableau where w_1, w_2, . . . , w_n is arranged from southwest to northeast. Also let S ∈ SYT(γ_{n−1}) be the superstandard Young tableau, i.e., the one whose first row is labeled 1, 2, . . . , n − 1, the second row is labeled by n, n + 1, n + 2, . . . , 2n − 3, etc. This latter tableau determines a particular K-rectification of T_w, which by definition is K-infusion(S, T_w); see [ThYo07, Section 3]. (An important subtlety in K-theoretic jeu de taquin is that K-rectification depends on the order in which it is performed, unlike the rectification of classical jeu de taquin. However, the order defined by S is particularly nice.)

The following result is an analogue of a classical result linking the Robinson-Schensted algorithm to the (ordinary) rectification of T_w:

Theorem 4.2.
Let w ∈ W_{n,q}. Then K-infusion(S, T_w) is the insertion tableau of Hecke(w).

Proof. We induct on n. The base cases n = 1, 2 are easy. We may assume that the steps of the K-infusion that are defined by the “inner” labels n, n + 1, n + 2, . . . , n(n − 1)/2 result in a skew shape of the form P° ⋆ w_n, as depicted below:

(11) [a depiction of the mixed tableau: the remaining underlined labels 1, 2, . . . , n − 1 together with the non-underlined entries P°_{i,j} of P° ⋆ w_n]

The induction hypothesis is that P° is the insertion tableau obtained by Hecke inserting w_1 w_2 · · · w_{n−1}. (In the depiction of P° from (11), some of the boxes with labels P°_{i,j} may be empty.) The non-underlined labels occupy the boxes of P° ⋆ w_n, whereas the underlined labels dictate the remaining steps to perform to complete the K-infusion computation (these steps are recalled below). Hence it remains to show that the tableau obtained by the Hecke-insertion P° ← w_n is the same as carrying out the K-infusion indicated by (11), i.e., the operation

(12) K-infusion(U, P° ⋆ w_n), where U is the tableau of the remaining underlined labels 1, 2, . . . , n − 1.

To do this, we first develop a technical fact. In [ThYo07, Section 1.1], we defined the procedure switch, which we restate now (in a more convenient form). Let Mixedtab(α, p, q) be the set of mixed tableaux, which, by definition, are tableaux of shape α, each of whose boxes is filled with an entry from one of two alphabets, {1, . . . , p} (underlined) and {1, . . . , q}, such that, within each row or column, the entries for each alphabet appear at most once. (No increasingness condition is demanded.) We also include the null tableau ∅, as a special element of Mixedtab(α, p, q).

Define an operator switch(i, j) : Mixedtab(α, p, q) → Mixedtab(α, p, q) as follows. Given ∅ ≠ T ∈ Mixedtab(α, p, q), consider the subshape S of T consisting of boxes whose entry is either i or j. For each non-singleton connected component of S, interchange the i's and the j's. If this results in a (non-null) mixed tableau, then the result is that tableau. Otherwise the result is ∅. By definition switch(i, j)(∅) = ∅.

Example 4.3. Let α = (
4, 3, 1) and p = q = 3. Then, for a suitable mixed tableau T ∈ Mixedtab(α, p, q), one may compute two different switch operations applied to it, such as switch(1, 2)(T) and switch(3, 1)(T); on the other hand, a switch(1, 2) computation can also result in ∅.

The following lemma is easy to verify from the definitions:
Lemma 4.4. If i ≠ j and r ≠ s then the operators switch(i, r) and switch(j, s) commute, i.e., switch(i, r) switch(j, s) ≡ switch(j, s) switch(i, r) is a relation in the algebra generated by switch operators on Mixedtab(α, p, q).

The procedure described in [ThYo07, Section 3] for computing K-infusion(A, B) is to consider the entries of A as being underlined, where the maximum entry of A is p, say, and the entries of B as not underlined, where the maximum entry of B is q. Now perform the following sequence of switch operations, from left to right, as indexed by:

(13) (p, 1), (p, 2), . . . , (p, q), (p − 1, 1), . . . , (p − 1, q), . . . , (1, 1), . . . , (1, q).

We refer to this sequence of pairs (interchangeably, the corresponding sequence of switch operators) as the standard switch sequence.

The technical fact we need is that K-infusion can in fact be computed differently: a switch sequence is called viable if it is a “shuffling” of (13), in the following sense:

• every (i, j) occurs exactly once, for 1 ≤ i ≤ p and 1 ≤ j ≤ q;
• for any 1 ≤ i ≤ p, the pairs (i, 1), . . . , (i, q) occur in that relative order; and
• for any 1 ≤ j ≤ q, the pairs (p, j), . . . , (1, j) occur in that relative order.

This definition is explained by the proof of the following proposition:

Proposition 4.5. Any viable switch sequence can be used to calculate K-infusion.

Proof. It is straightforward to show that one can obtain any viable switch sequence from the standard switch sequence (13) by repeated applications of the commutation relation of Lemma 4.4. □
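The viability conditions are purely order-theoretic, so they can be checked mechanically. A small Python sketch (ours; names hypothetical) generating the standard switch sequence (13) and testing the three bullet-point conditions:

```python
from itertools import product

def standard_switch_sequence(p, q):
    """The sequence (13): (p,1),...,(p,q),(p-1,1),...,(1,q)."""
    return [(i, j) for i in range(p, 0, -1) for j in range(1, q + 1)]

def is_viable(seq, p, q):
    """Check the three conditions defining a viable switch sequence."""
    if sorted(seq) != sorted(product(range(1, p + 1), range(1, q + 1))):
        return False  # some (i, j) missing or repeated
    pos = {pair: t for t, pair in enumerate(seq)}
    # for each i, the pairs (i,1),...,(i,q) occur in that relative order
    if any(pos[(i, j)] > pos[(i, j + 1)]
           for i in range(1, p + 1) for j in range(1, q)):
        return False
    # for each j, the pairs (p,j),...,(1,j) occur in that relative order
    if any(pos[(i, j)] < pos[(i + 1, j)]
           for i in range(1, p) for j in range(1, q + 1)):
        return False
    return True

assert is_viable(standard_switch_sequence(3, 6), 3, 6)
# a genuine shuffle of the standard sequence is also viable:
assert is_viable([(2, 1), (1, 1), (2, 2), (1, 2)], 2, 2)
```

This checker is only a sanity test of the combinatorial definition; it does not perform the switch operations themselves.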
Thus, in view of Proposition 4.5, to complete the induction it suffices to construct a viable switch sequence whose result is the same as P° ← w_n. (A caution: in [ThYo07] it was shown that the standard switch sequence necessarily maintains increasingness along rows and columns of the members of each alphabet. We will not prove that a viable switch sequence also achieves this during the intermediate steps of a K-infusion. However, this does not play a logical role in how we apply Proposition 4.5 below.)

Set y_1 := w_n, and for i > 1, let y_i be the number which is inserted in row i according to Hecke insertion of w_n into P°. During a bumping step of Hecke insertion, let z_i be the smallest number already in row i which is greater than y_i.

We say that a mixed tableau, obtained after some number of switch operations applied to P° ⋆ w_n, is in row i normal form if:

• the i-th row either still contains y_i (which has not yet moved from its initial position in column k) together with underlined labels, or consists of underlined labels alone, depending, respectively, on whether the Hecke insertion P° ← w_n terminates at row i or after, or strictly earlier (here the i-th row of P° has length t); and
• all non-underlined symbols in rows i + 1 and below have not moved from their initial positions.

Having explained our general strategy, what remains is some tedious but straightforward case analysis to describe the viable switch sequence we use. Our initial mixed tableau is in row 1 normal form. Now, suppose we have arrived, after some sequence of applications of (A), (B) and (C) below, at a mixed tableau in row i normal form. There are three possibilities for a bumping step of Hecke insertion of y_i.

(A) y_i is inserted, bumping z_i: Consider the switch sequence that moves y_i to the left along row i: specifically, using

(14) switch(k − 1, y_i), switch(k − 2, y_i), switch(k − 3, y_i), . . . ,

until it is directly above the z_i in row i + 1, and then, starting from the right, swap each box in row i having an underlined label with the one directly below, which has a non-underlined label (this can always be done since no label numerically equal to y_i appears among the latter boxes, by assumption). The result is therefore in row i + 1 normal form since z_i doesn't move in this process, it is the unique box with a non-underlined label in row i + 1, and y_{i+1} = z_i, as demanded.

Note that the non-underlined labels of the i-th row of the mixed tableau we obtain after this process are the same as the i-th row of P°, with z_i replaced by y_i, as desired.

Example 4.6. Take i = 1. We begin by moving y_i above z_i = y_{i+1}, and conclude with switch(3, 5), switch(2, 2) and then finally switch(1, 1), resulting in a mixed tableau in row i + 1 (= 2) normal form. Moreover, the non-underlined labels in row i = 1 of this mixed tableau agree with the first row after inserting y_i, as desired.

(B) y_i is not inserted because of a horizontal violation: (That is, a label numerically equal to y_i already appears in the row that we are Hecke inserting into.) Proceed as in (A) by moving y_i to the left, until it is directly above z_i, i.e., apply (14). Now, “locally” the situation involves the column containing z_i and the column to its left: for some underlined label t, either only rows i and i + 1 are involved (when t does not appear in row i − 1), or row i − 1 is involved as well (when it does). To the right of the column containing z_i, we swap boxes with underlined and non-underlined labels, as in (A). Then we perform the transformation

(15) a local interchange of the y_i's and the underlined t's in these two columns.

After this transformation, we complete by swapping, right to left, the boxes to the left of the y_i, as in (A). The result is in the demanded row i + 1 normal form.

Note that unlike (A), row i still has a box c with an underlined label in it. However, when we work on the row i + 1 normal form, the reader can check that our descriptions will force that box c to be filled by z_i: e.g., if we apply case (A) next, this will occur when we execute the switches (14); if we apply case (B) next, this will occur when we execute either the switches (14) or (15); whereas if we apply (C) next, the replacement will occur during the switches (16) below. Hence, in the end, we find that the non-underlined labels in row i are the same as the i-th row of the Hecke insertion of w_n into P°, as desired.

Example 4.7. Take i = 1. Moving the relevant label in row 1 to the left, as in (A), and then performing the remaining swaps, one arrives at a mixed tableau in row i + 1 = 2 normal form.
The mild complication of this case, as suggested above, is that row 1 is not yet the same as the first row of the insertion tableau of Hecke. However, as we begin to work on this row 2 normal form, we start by using switch(2, 3) (beginning to move the inserted label in row 2 to the left); hence row 1 then does agree with Hecke insertion.

(C) y_i will not be inserted because of a vertical violation (and a horizontal violation does not occur): While Hecke inserting y_i into row i of P°, z_i is directly below a label (numerically equal to) y_i that is in row i − 1. This prevents one from replacing z_i by the y_i we are inserting. (Note that this case can only occur if i > 1.) Moreover, since there is no horizontal violation, the number immediately to the left of z_i, say b, satisfies b < y_i. Notice, in order for a vertical violation to occur, the previous step of Hecke inserting y_{i−1} must have been an instance of case (B) or (C) (since we are Hecke inserting a number into row i which also appears in row i − 1). Hence the box directly above and to the left of the y_i contains an underlined label t.

Now we begin by switching all the boxes with underlined labels in row i to the right of y_i with the non-underlined labels directly below them. The remaining switches, which involve the rows i − 1 to i + 1, in the column of y_i and the column to its immediate left, are given as follows:

(16) a two-step local interchange of y_i, b, and the underlined labels t within those rows and columns.

As in (B), we finish by completing a sequence of swaps involving the columns to the left of the b. The result of this process is a tableau in row i + 1 normal form.

As at the conclusion of (B), row i of the resulting row i + 1 normal form mixed tableau still contains an underlined label. As in that case, this label will be switched with z_i, as desired, during the forthcoming switches.

Example 4.8. To give an example of (C), the previous insertion must have been of type (B) or (C), so consider the following example:
After the first step, an insertion of type (B), we reach row 2 normal form. As compelled by the conditions of a viable switch sequence, we perform the switch operations in the prescribed relative order, and as a result, as described above, we get to row 3 normal form, and then to the final result.

Again, the resulting tableau agrees with the Hecke insertion P° ← w_n. Note that after each of (A), (B) and (C), when we reach row i + 1 normal form, z_{i+1} is necessarily in a column weakly to the left of y_{i+1}, because P° is an increasing tableau and y_{i+1} together with the rows strictly below row i + 1 are, at this point, still unaltered. This observation guarantees that the switches as described above can actually be executed, i.e., the above descriptions are well-defined.

Using similar analysis one can give switch sequences for the terminating steps of Hecke insertion, such that one maintains row i normal form for all i ≤ ℓ + 1, where ℓ is the number of rows of P°. We leave the straightforward details to the reader.

These constructions then show, by induction on the number of rows ℓ, that we have a sequence of switch operations transforming P° ⋆ w_n into P° ← w_n.

Conclusion of the proof of Theorem 4.2: by the fact that P° is an increasing tableau, and by the definition of normal form, it is easy to see that the sequence of switch operations used forms a viable sequence, after suitable insertions of any trivial switch(i, r) operators (that is to say, switch operations which do not have any effect on the tableau). We can therefore apply Proposition 4.5 as we claimed earlier. □

Example 4.9. Continuing Example 4.8, the switch sequence we obtain, by following the descriptions of the cases (B) and (C) that are needed, is: (
3, 2), (2, 3), (2, 4), (2, 5), (2, 6), (1, 6).

This is not quite a viable sequence: although our constructions guarantee that it satisfies the second and third conditions to be a viable sequence, it fails the first, since, e.g., (3, 1) doesn't appear in the sequence, since this switch is never needed. However, clearly we can simply insert this trivial switch, along with the others that are missing, giving the viable sequence:

(3, 1), (3, 2), (3, 3), (3, 4), (3, 5), (3, 6), (2, 1), (2, 2), (2, 3), (2, 4), (2, 5), (2, 6), (1, 1), (1, 2), (1, 3), (1, 4), (1, 5), (1, 6).

The action of this viable sequence on the original mixed tableau is therefore the same as that of the original switch sequence (whose pairs are the ones displayed first above). This viable sequence also happens to be the standard switch sequence, although it needn't be in general. Hence K-infusion(S, T_w) agrees with P° ← w_n in this example, in agreement with Theorem 4.2.

In [ThYo07, Theorem 6.1] we showed that the first row of K-infusion(R, T_w) has length LIS(w), for any increasing tableau R of shape γ_{n−1}. So by Theorem 4.2, the first row of Heckeshape(w), i.e., of the shape of K-infusion(S, T_w), has length LIS(w). By symmetry, [ThYo07, Theorem 6.1] also implies that the first column of K-infusion(R, T_w) has length LDS(w). Hence the LDS claim follows. This completes the proof of Theorem 1.3. □
Given w = w_1 w_2 · · · w_n, define rev(w) = w_n w_{n−1} · · · w_1. The following symmetry statement is immediate from Theorem 1.9, since LIS(w) = LDS(rev(w)):

Corollary 4.10. Let λ = Heckeshape(w) and µ = Heckeshape(rev(w)). Then λ = µ′ and µ = λ′, where λ′ and µ′ are the conjugate shapes of λ and µ respectively.

A warning is needed: unlike in the Robinson-Schensted correspondence setting, with Hecke one cannot conclude that the insertion tableaux associated to w = w_1 w_2 · · · w_n and rev(w) = w_n w_{n−1} · · · w_1 differ only by a reflection across the main diagonal; a small counterexample exists. (In [Sc61], the symmetry property of the Robinson-Schensted correspondence was applied to prove the LDS claim in the classical version of Theorem 1.3.)
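The identity LIS(w) = LDS(rev(w)) invoked above is easy to confirm exhaustively on small words; a Python sketch (ours; names hypothetical):

```python
import itertools

def lis_strict(w):
    """Longest strictly increasing subsequence length, by O(n^2) DP."""
    best = []
    for i, x in enumerate(w):
        best.append(1 + max([best[j] for j in range(i) if w[j] < x], default=0))
    return max(best, default=0)

def lds_strict(w):
    # a strictly decreasing subsequence of w is a strictly
    # increasing subsequence of the negated word
    return lis_strict([-x for x in w])

# check LIS(w) = LDS(rev(w)) for every word in W_{4,3}
for w in itertools.product(range(1, 4), repeat=4):
    assert lis_strict(w) == lds_strict(w[::-1])
```

The same brute-force loop, combined with an implementation of Hecke insertion, would also let one search for the counterexamples to tableau-level symmetry mentioned above.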
Problem 4.11.
Give an explicit description of
Hecke(rev(w)) in terms of Hecke(w).

Finally, Greene [Gr74] has given an explanation of the other rows of the shape λ associated to a permutation w ∈ S_n under the Robinson-Schensted correspondence: λ_1 + · · · + λ_i equals the maximal size of a union of i disjoint increasing subsequences of w. However, we could not find any extension of Greene's theorem in the Hecke context. The naive tries do not work: since |λ| ≤ n, the simplest case to analyze is when |λ| = n. A five-letter example word w corresponds to λ = (3, 2); this shows that it is not valid to merely replace “increasing” by “strictly increasing” in Greene's theorem, since that would predict λ = (3, 1, 1).

5. PROBABILISTIC COMBINATORICS AND PROOFS OF THEOREMS 1.8 AND 1.9

Proof of Theorem 1.8:
Let w_0 ∈ S_{q+1} be the permutation of maximal Coxeter length, i.e., w_0(i) = q + 2 − i. Hence ℓ(w_0) = q(q + 1)/2; w_0 is the unique permutation in S_{q+1} with this length. We need the following lemma, which characterizes when Heckeshape(w) is maximized.

Lemma 5.1. For w ∈ W_{n,q}, Heckeshape(w) = (q, q − 1, . . . , 3, 2, 1) if and only if W(w) = w_0.

Proof. First suppose W(w) = w_0. Under Hecke(w), the insertion tableau P satisfies W(word(P)) = w_0. Hence the shape of P has at least q(q + 1)/2 boxes and so by Proposition 1.2 it must be (q, q − 1, . . . , 3, 2, 1).

Conversely, if Heckeshape(w) = (q, q − 1, . . . , 3, 2, 1), then note that there is a unique increasing filling P of that shape (using 1, 2, . . . , q in the first row, 2, 3, . . . , q in the second row, etc). Then it is well-known that W(word(P)) = w_0. □

In view of Lemma 5.1, the Theorem will follow if we can show that

(17) W(w) = w_0 almost surely, as n → ∞.

(We conjecture this to be true whenever 0 ≤ α < α_critical = 1/2. This would imply Conjecture 1.6 for this entire range.)

Set w(k) := w_1 · · · w_k. Then either ℓ(W(w(k) w_{k+1})) = ℓ(W(w(k))) + 1 or ℓ(W(w(k))), depending on whether the simple reflection W(w_{k+1}) (say equal to s_t = (t  t+1)) is an ascent of W(w(k)) or not. (An ascent occurs at a position t for a permutation π if π(t) < π(t + 1).)

Provided that π ≠ w_0, π has at least one ascent. Thus when W(w(k)) ≠ w_0, the probability that w_{k+1}'s introduction increases the Coxeter length is at least 1/q.

Let E_k = the event that ℓ(W(w(k))) < q(q + 1)/2. Related to this, let Y_i ∈ {0, 1} be Bernoulli distributed with parameter 1/q. Set Z_k := Y_1 + · · · + Y_k. Clearly,

(18) Prob(E_k) ≤ Prob(Z_k < q(q + 1)/2).

We now show that when k is of order q^{3+ǫ} for ǫ > 0, the right-hand side of the inequality (18) goes to zero as q → ∞. This is a simple application of (a special case of) Bennett's large deviation inequality, see, e.g., [DeZe02, Cor. 2.4.7]: suppose the X_i are independent, mean zero random variables with |X_i| ≤ 1. Set S_k = ∑_{i=1}^{k} X_i. Then for y ≥ 0 we have

(19) Prob(k^{−1/2} S_k ≥ y) ≤ e^{−y²/2}.

To apply this to our setting, let X_i = −Y_i + 1/q. Hence S_k = −Z_k + k/q. Then with a = o(k/q):

Prob(Z_k < a) = Prob(−S_k < a − k/q)
             = Prob(S_k > k/q − a)
             ≤ Prob(S_k > k/(2q))            (for q large, since a = o(k/q))
             = Prob(k^{−1/2} S_k > √k/(2q))
             ≤ e^{−k/(8q²)} → 0 as q → ∞.

Taking a = q(q + 1)/2, which is indeed o(k/q) when k is of order q^{3+ǫ}, the result then follows. □

In the above argument, we interpreted w ∈ W_{n,q} as a random walk in S_{q+1} that begins at the identity and works its way up in the weak Bruhat order to w_0. At each step the probability of going up is at least 1/q (as we have used), but is larger in general. However, since this probability varies, even for permutations with the same Coxeter length, a more refined analysis is needed to push the argument we have used further, up towards α_critical = 1/2.

Proof of Theorem 1.9:
For 0 ≤ α < α_critical = 1/2 we will apply an argument similar to that for Theorem 1.8. Given u ∈ W_{n,q}, let

m(u) = max{t ≥ 0 : 1, 2, . . . , t is a subsequence of u}.

Let w(k) = w_1 · · · w_k and set E_k = the event that m(w(k)) < q. Provided E_k occurs, then m(w(k) w_{k+1}) = m(w(k)) + 1 with probability 1/q, and is equal to m(w(k)) otherwise. Let {Y_i} and Z_k = Y_1 + · · · + Y_k be discrete random variables, where Y_i is Bernoulli distributed with parameter 1/q. Now, Prob(E_k) = Prob(Z_k < q). Thus it will be enough to show that when k is of order q^{2+ǫ} for ǫ > 0, then Prob(Z_k < q) → 0 as q → ∞. This is another application of the large deviation inequality (19).

For α_critical = 1/2 < α ≤ 1 we use a proof provided for us by O. Zeitouni: E(LwIS), the expected length of the longest weakly increasing subsequence of w ∈ W_{n,q} (with α > α_critical = 1/2), is known to satisfy E(LwIS) ≈ 2√n; see [Joh01, Theorem 1.7]. The argument shows that the difference between the LIS and LwIS of w is typically small.

Let LwIS_{a,b} be the random variable for the value of LwIS(w) of a random uniform word w ∈ W_{⌊a⌋,⌊b⌋}, where ⌊a⌋ is the integer part of a, etc. Similarly define LIS_{a,b}, where LIS replaces LwIS. Fix ǫ > 0 and let L_0 = L_0(ǫ) be large enough such that

(20) inf_{L>L_0} lim_{n→∞} Prob( LwIS_{L²(1−ǫ), qL/√n} > 2(1 − ǫ)L ) > 1 − ǫ.

We need a “graphical” representation of a word in W_{n,q}: consider a q × n rectangle subdivided into unit squares. In each of the n columns, one places a single “dot” in one of the q rows. The set of such configurations is in obvious bijection with words in W_{n,q}. Given L, draw √n/L smaller rectangles of dimension (qL/√n) × (L√n) along an antidiagonal inside the q × n rectangle, as depicted in Figure 4 below.

FIGURE 4. The α > α_critical = 1/2 case of the proof of Theorem 1.9.

Label the i-th southwest-most box B_i. Let N_i be the random variable giving the number of dots in B_i. Notice that the N_i's are independent. Also, the dots inside B_i define a word, and we can speak of LIS(B_i) and LwIS(B_i), the length of the longest strictly (respectively, weakly) increasing subsequence of that word.

Say that B_i is good if the following conditions simultaneously hold:
(a) N_i ≥ L²(1 − ǫ);
(b) LwIS(B_i) ≥ 2(1 − ǫ)L; and
(c) no two dots in B_i have the same height (hence LwIS(B_i) = LIS(B_i)).

Now, we have

E(N_i) = L√n × ( (qL/√n) × (1/q) ) = L²,

and we claim that for L_1 sufficiently large, for L ≥ L_1, and for all n large, we have:

Prob(N_i ≤ L²(1 − ǫ)) ≤ ǫ.

The proof is a standard argument: let Y_k be the indicator random variable which evaluates to 1 if column k has a dot that lies in the box B_i that occupies that column (hence with probability L/√n), and evaluates to 0 otherwise. Hence

Prob(N_i ≤ L²(1 − ǫ)) = Prob( ∑_{k=1}^{L√n} Y_k ≤ L²(1 − ǫ) )
                     = Prob( ∑_{k=1}^{L√n} (Y_k − E(Y_k)) < L²(1 − ǫ) − L² )
                     = Prob( ∑_{k=1}^{L√n} (Y_k − E(Y_k)) < −L²ǫ )
                     ≤ ( L√n · (L/√n) ) / (L²ǫ)² = 1/(L²ǫ²),

where the last line is an application of Chebyshev's inequality. Now take L sufficiently large (bigger than some L_1) so that 1/(L²ǫ²) ≤ ǫ.

Assuming L ≥ L_0, if the event (a) occurs, then with probability at least 1 − ǫ, when n is large, (b) holds, because of the definition of L_0 using (20).

The probability of the event (c) not occurring is bounded above (using a union bound) by

(number of rows) × Prob(two dots share a given height) ≤ (qL/√n) × (L√n)² × (1/q²) = L³√n/q → 0,

because √n = o(q) (as α > 1/2).

So, Prob(B_i is good) ≥ 1 − 3ǫ for L > max(L_0, L_1) and n large. Another standard argument with Chebyshev's inequality shows that, with high probability, say at least 1 − ǫ, and for n large, the number of good boxes is at least

(√n/L)(1 − ǫ)(1 − 3ǫ) ≥ (√n/L)(1 − 4ǫ).

Hence, with that probability, for w ∈ W_{n,q},

LIS_{n,q} ≥ (√n/L)(1 − 4ǫ) × 2(1 − ǫ)L ≥ 2√n(1 − 5ǫ),

since strictly increasing subsequences drawn from distinct good boxes concatenate (the boxes occupy disjoint, increasing ranges of rows and columns). Since

2√n(1 − 5ǫ) ≤ E(LIS_{n,q}) ≤ E(LwIS_{n,q}) ≈ 2√n,

the α > α_critical = 1/2 case follows by taking ǫ → 0, completing the proof of the theorem. □
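The 0-Hecke random walk used in the proof of Theorem 1.8 can be simulated directly: the Demazure product W(w) is computed letter by letter, applying a simple reflection only when it sits at an ascent. A minimal Python sketch (ours; function names hypothetical):

```python
def demazure_product(word, q):
    """0-Hecke (Demazure) product W(w) in S_{q+1}: apply s_t only at ascents."""
    pi = list(range(1, q + 2))          # identity permutation, one-line notation
    for t in word:                       # letters t in {1, ..., q}
        if pi[t - 1] < pi[t]:            # ascent at position t: go up in weak order
            pi[t - 1], pi[t] = pi[t], pi[t - 1]
    return tuple(pi)

def coxeter_length(pi):
    """Number of inversions = Coxeter length."""
    return sum(1 for i in range(len(pi)) for j in range(i + 1, len(pi))
               if pi[i] > pi[j])

q = 3
w0 = tuple(range(q + 1, 0, -1))          # longest element of S_{q+1}
assert coxeter_length(w0) == q * (q + 1) // 2
assert demazure_product([1, 2, 1, 3, 2, 1], q) == w0   # a reduced word for w0
assert demazure_product([1, 2, 3] * 3, q) == w0        # extra letters are absorbed
```

Sampling many uniform random words w and recording how often `demazure_product(w, q)` equals `w0` gives empirical estimates of the probability in (17).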
6. APPENDIX (BY A. YONG AND O. ZEITOUNI)

The goal of this appendix is to present a proof of the following result:

Theorem 6.1. Let q = k√n + lower order terms. Then E(LIS) ≈ β(k)√n, where

β(k) = { k          if 0 ≤ k ≤ 1,
         2 − 1/k    if k > 1.

However, in order to prove this statement, we need to work with another variant of Plancherel measure, utilized, e.g., by [Bi01] and alluded to in Section 1.5 of the main text. Our approach parallels the one developed in [LoSh77, VeKe77] to prove E(LIS) ≈ 2√n in the permutation case, by utilizing work of [Bi01].

6.1. Preliminaries. A semistandard Young tableau of shape λ ∈ Y with labels from {
1, 2, . . . , q} is a filling of the Young shape λ with these labels so that the entries weakly increase along rows, and strictly increase along columns. For example, if λ = (2, 1) and q = 2 there are two such tableaux:

1 1        1 2
2    and   2

Let g_λ(q) denote the number of such tableaux.

Define the Plancherel-RSK measure ν_{n,q} on the set Y_n of Young diagrams λ with n boxes, by declaring that a random Young shape λ_{n,q} occurs with probability

Prob(λ_{n,q} = λ) = f_λ g_λ(q) / q^n.

We make no claims of originality in this definition. Indeed, this is the same measure studied in, e.g., [Bi01], although there the measure is defined in terms of dimensions of irreducible S_n- and GL_q(C)-modules associated to λ; the equivalence is well-known. The fact that ν_{n,q} is in fact a probability distribution follows from either Schur-Weyl duality, as in Section 1.5, or by the RSK algorithm, see, e.g., [St99, Section 7.11].

A crucial advantage of ν_{n,q} for the purposes of understanding E(LIS), in comparison to Plancherel-Hecke measure, is that both f_λ and g_λ(q) have simple multiplicative formulas. This makes it more readily analyzed using ideas of [LoSh77, VeKe77], which we modify to the present setting.

Given a box u ∈ λ, define the hook-length associated to u to be H(u) := A(u) + L(u) + 1, where A(u) is the number of boxes strictly to the right of u, and in the same row, and L(u) is the number of cells strictly below u and in the same column. Then we have [St99, Chapter 7] the hook-length formula and hook-content formula, respectively:

f_λ = n! / ∏_{u∈λ} H(u)   and   g_λ(q) = ∏_{u∈λ} (q + C(u)) / H(u),

where in the second formula C(u) is the content of u, the column index of u minus the row index of u. So for example, if λ = (4, 3, 2), the contents are given by

 0  1  2  3
−1  0  1
−2 −1  0

6.2. Plancherel-RSK as a Markov measure. Young's lattice is the poset structure on Y where λ ≤ µ if the shape of λ is contained in the shape of µ. We write λ → µ to denote a covering relation in this poset, i.e., where µ is obtained from λ by adding a single box at a corner. Define a Markov process on Y with the transition probabilities

Prob(λ → µ) = g_µ(q) / (q g_λ(q)).

We need the following lemma, which in particular shows that Plancherel-RSK measure is a Markov measure with the above transition probabilities.
Lemma 6.2.
(I) ∑_{µ : λ→µ} Prob(λ → µ) = 1.
(II) ∑_{λ : λ→µ} Prob(λ → µ) ν_{n,q}(λ) = ν_{n+1,q}(µ).

Proof. The claim (I) is equivalent to

∑_{µ : λ→µ} g_µ(q) = q g_λ(q).

This follows from the following Pieri rule for Schur polynomials:

∑_{µ : λ→µ} s_µ(x_1, . . . , x_q) = s_{(1)}(x_1, . . . , x_q) · s_λ(x_1, . . . , x_q).

See [St99, Theorem 7.15.7]. Here s_λ(x_1, . . . , x_q) = ∑_T x^T is the Schur polynomial, where the sum is over all semistandard Young tableaux T of shape λ with entries from {1, 2, . . . , q}, x^T = x_1^{i_1} x_2^{i_2} · · · x_q^{i_q}, and i_j is the number of j's used in T. In particular, (I) is immediate from g_λ(q) = s_λ(1, 1, . . . , 1).

For (II), the claim is

∑_{λ : λ→µ} ( g_µ(q) / (q g_λ(q)) ) · ( f_λ g_λ(q) / q^n ) = f_µ g_µ(q) / q^{n+1},

that is, ∑_{λ : λ→µ} f_λ = f_µ, which is well-known (and straightforward from the definitions). □

6.3. Conclusion of Proof of Theorem 6.1.
Work of Biane [Bi01, Theorem 3] describes the typical shape under Plancherel-RSK after the rescaling f(u) ↦ (1/√n) f(√n u). Biane's theorem implies

(21) E(LIS) ≥ β(k)√n − o(√n),

but not

(22) E(LIS) ≤ β(k)√n + o(√n).

Briefly, we explicate how his work applies to our situation (the reader is directed to the original source for details): Biane works with the coordinate axes rotated 45 degrees counterclockwise, as in, e.g., [VeKe77, VeKe85]. His aforementioned theorem states that if {f_n}_{n=1}^∞ is any sequence of (rescaled and rotated) Young diagrams, then for any ǫ > 0 we have

(23) lim_{n→∞} Prob( sup_{u ∈ R} |f_n(u) − P(u)| > ǫ ) = 0,

where the probability is computed with respect to ν_{n,q}. Also, P is Biane's limit shape, which has the property that it meets the line y = x at a distance β(k) from the origin. In other words, the "first column" C of P satisfies

(24) C = β(k).

For each n, let C_n be the length of the first column of f_n. From (23) it follows that for any ǫ > 0,

(25) lim_{n→∞} Prob(C_n < C − ǫ) = 0.

Moreover, since it is known that for any w ∈ W_{n,q} the first column of the Young diagram associated to RSK(w) equals LDS(w), (21) follows immediately from (24) and (25) combined.

Note that the above argument does not also prove (22), since (23) does not rule out the possibility that {f_n}_{n=1}^∞ consists of Young diagrams with "tails" along the y = x axis that both "lengthen" and "thin out" as n → ∞. Therefore, it remains to verify (22).

To do this, we modify an argument found in [VeKe85], which establishes the analogous assertion in the permutation case: consider the set Y^∞ of all sequences of Young diagrams

λ• = (λ^(1), λ^(2), λ^(3), ..., λ^(i), ...), where λ^(i) → λ^(i+1) for i ≥ 1.

For a Young diagram λ, let λ↓ denote the diagram obtained by adding a single box to λ, in the first column. For each integer i ≥ 1, define the indicator function ψ_i : Y^∞ → {0, 1} by setting ψ_i(λ•) = 1 if λ^(i) = (λ^(i−1))↓, and setting ψ_i(λ•) = 0 otherwise.

Studying the expectation of ψ_i, we have E(ψ_i) = ∑_λ ν_{i−1,q}(λ) · Prob(λ → λ↓), and thus:

E(ψ_i)² = ( ∑_λ ν_{i−1,q}(λ) · Prob(λ → λ↓) )²
 ≤ ∑_λ ν_{i−1,q}(λ) · Prob(λ → λ↓)²   (Cauchy-Schwarz inequality)
 = ∑_λ [ ν_{i−1,q}(λ) · f^{λ↓} g_{λ↓}(q) / (q f^λ g_λ(q)) ] · [ (1/q) · ( g_{λ↓}(q)/g_λ(q) ) / ( f^{λ↓}/f^λ ) ]
 = ∑_λ ν_{i,q}(λ↓) · (1/q) · ( g_{λ↓}(q)/f^{λ↓} ) / ( g_λ(q)/f^λ ),

where we have just used

ν_{i,q}(λ↓) = ν_{i−1,q}(λ) · ( f^{λ↓} g_{λ↓}(q) / (f^λ g_λ(q)) ) · (1/q).
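The classical fact invoked above, that the first column of the Young diagram RSK(w) has length LDS(w), can be checked exhaustively on small alphabets. The following Python sketch (function names are ours) implements Schensted row insertion for words: an entry bumps the leftmost entry strictly greater than it, so rows are weakly increasing and columns are strictly increasing.

```python
from bisect import bisect_right
from itertools import product

def rsk_shape(word):
    """Shape of the insertion tableau of word under row insertion."""
    rows = []
    for x in word:
        for row in rows:
            pos = bisect_right(row, x)   # leftmost entry strictly > x
            if pos == len(row):
                row.append(x)            # x fits at the end of this row
                x = None
                break
            row[pos], x = x, row[pos]    # bump, and insert into next row
        if x is not None:
            rows.append([x])             # bumped out of the last row
    return [len(r) for r in rows]

def lds(word):
    """Longest strictly decreasing subsequence, by quadratic DP."""
    best = [1] * len(word)
    for i in range(len(word)):
        for j in range(i):
            if word[j] > word[i]:
                best[i] = max(best[i], best[j] + 1)
    return max(best, default=0)

# The first column of RSK(w), i.e., the number of rows of the shape,
# equals LDS(w) for every word on a small alphabet:
q, n = 3, 6
for w in product(range(1, q + 1), repeat=n):
    assert len(rsk_shape(w)) == lds(w)
```

(The first row of the shape likewise records the longest weakly increasing subsequence; only the column statement is needed here.)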
Let L(λ) := g_λ(q)/f^λ. Note that by the hook-length and hook-content formulas we have

L(λ↓)/L(λ) = [ ∏_{u ∈ λ↓} (q + C(u)) / i! ] / [ ∏_{u ∈ λ} (q + C(u)) / (i−1)! ] = (q − λ′_1)/i,

where λ′_1 is the length of the first column of λ (the box added in the first column has content −λ′_1). Summarizing, we have

(26) E(ψ_i)² ≤ ∑_λ ν_{i,q}(λ↓) · (q − λ′_1)/(qi) ≤ (q − γ_i)/(qi),

where γ_i denotes the expectation of (λ^(i))′_1, i.e., the expected length of the first column of a random shape with i boxes, drawn under the Plancherel-RSK measure.

Notice also that since ψ_i is an indicator random variable, we have

(27) E(ψ_i²) = E(ψ_i) = γ_i − γ_{i−1}.

Therefore, combining (26) and (27) we obtain, by the Cauchy-Schwarz inequality, the following difference inequality:

(28) γ_i − γ_{i−1} ≤ √( (q − γ_i)/(qi) ).

We claim that γ_n ≤ β(k)√n + O(1). To prove this, note the following facts about γ_i:
(a) γ_{i+1} ≥ γ_i; and
(b) γ_i ≤ q.

Now define a linear interpolation: for t ∈ [i/q, (i+1)/q], set

(29) β_t = γ_i/q + q(t − i/q)(γ_{i+1}/q − γ_i/q).

Note that for such t, √(1 − γ_{i+1}/q) ≤ √(1 − β_t). Therefore, combining (28) and (29) we obtain

(30) (d/dt) β_t = γ_{i+1} − γ_i ≤ (1/√(qt)) √(1 − β_t),   β_0 = 0.

Since γ_i, γ_{i+1} ≤ q we have β_t ≤ 1, hence the above differential inequality is equivalent to

−(d/dt) 2√(1 − β_t) ≤ 1/√(qt).

Hence it follows that

√(1 − β_t) ≥ √(1 − β_0) − √(t/q) = 1 − √(t/q).

That is, β_t ≤ 2√(t/q) − t/q. Now we care about t = n/q = √n/k. We always have β_{n/q} ≤ 1 (trivially), but for k > 1 we have the better inequality β_{n/q} ≤ 2/k − 1/k². Therefore,

γ_n = qβ_{n/q} ≤ q for k ≤ 1,   and   γ_n = qβ_{n/q} ≤ (2 − 1/k)√n + O(1) for k > 1.

The result then follows. □

ACKNOWLEDGMENTS
HT was supported by an NSERC Discovery Grant. AY was supported by NSF grant DMS-0601010 and a U. Minnesota DTC grant during Spring 2007; he also utilized the resources of the Fields Institute, and of Algorithmics Incorporated, in Toronto, while a visitor. We would like to thank Ofer Zeitouni for allowing us to include his proof for the α > α_critical case of Theorem 1.9 and for his extensive help during this project. We also thank Alexander Barvinok, Nantel Bergeron, Alexei Borodin, Sergey Fomin, Christian Houdré, Nicolas Lanchier, Igor Pak, Eric Rains, Mark Shimozono, Richard Stanley, Dennis Stanton, Craig Tracy and Alexander Woo for helpful correspondence.

REFERENCES

[AlDi99] D. Aldous and P. Diaconis,
Longest increasing subsequences: from patience sorting to the Baik-Deift-Johansson theorem, Bull. Amer. Math. Soc. (1999), 413–432.
[BaDeJo99] J. Baik, P. Deift and K. Johansson, On the distribution of the length of the longest increasing subsequence of random permutations, J. Amer. Math. Soc. (1999), 1119–1178.
[Bi01] P. Biane, Approximate factorization and concentration for characters of symmetric groups, Inter. Math. Res. Notices (2001), no. 4, 179–192.
[BjBr05] A. Björner and F. Brenti, Combinatorics of Coxeter Groups, Graduate Texts in Mathematics, Springer, New York, 2005.
[BoOkOl00] A. Borodin, A. Okounkov and G. Olshanski, Asymptotics of Plancherel measures for symmetric groups, J. Amer. Math. Soc. (2000), 481–515.
[BoOl07] A. Borodin and G. Olshanski, Asymptotics of Plancherel-type random partitions, J. Algebra (2007), 40–60.
[Bu02a] A. Buch, A Littlewood-Richardson rule for the K-theory of Grassmannians, Acta Math. (2002), 37–78.
[BuKrShTaYo06] A. Buch, A. Kresch, M. Shimozono, H. Tamvakis and A. Yong, Stable Grothendieck polynomials and K-theoretic factor sequences, Math. Ann., to appear, 2008. math.CO/0601514.
[DeZe02] A. Dembo and O. Zeitouni, Large deviations and applications, Handbook of stochastic analysis and applications, 351–416, Statist. Textbooks Monogr., Dekker, New York, 2002.
[EdGr87] P. Edelman and C. Greene, Balanced tableaux, Adv. Math. (1987), no. 1, 42–99.
[ErSz35] P. Erdős and G. Szekeres, A combinatorial problem in geometry, Compositio Math. (1935), 463–470.
[FoGr98] S. Fomin and C. Greene, Noncommutative Schur functions and their applications, Discrete Math. (1998), no. 1-3, 179–200. Selected papers in honor of Adriano Garsia (Taormina, 1994).
[Gr74] C. Greene, An extension of Schensted's theorem, Adv. Math. (1974), 254–265.
[Ham72] J. M. Hammersley, A few seedlings of research, in Proc. Sixth Berkeley Symp. Math. Statist. and Probability, Volume 1, pp. 345–394, University of California Press, 1972.
[HoLi06] C. Houdré and T. Litherland, On the longest increasing subsequence for finite and countable alphabets, preprint arXiv:math/0612364.
[Joh01] K. Johansson, Discrete orthogonal polynomial ensembles and the Plancherel measure, Ann. Math. (2) (2001), no. 1, 259–296.
[Kn70] D. E. Knuth, Permutations, matrices and generalized Young tableaux, Pacific J. Math. (1970), 709–727.
[LaSc82] A. Lascoux and M.-P. Schützenberger, Structure de Hopf de l'anneau de cohomologie et de l'anneau de Grothendieck d'une variété de drapeaux, C. R. Acad. Sci. Paris Sér. I Math. (1982), no. 11, 629–633.
[LoSh77] B. F. Logan and L. A. Shepp, A variational problem for random Young tableaux, Adv. Math. (1977), 206–222.
[Ma73] C. L. Mallows, Patience sorting, Bull. Inst. Math. Appl. (1973), 216–224.
[Sc61] C. Schensted, Longest increasing and decreasing subsequences, Canad. J. Math. (1961), 179–191.
[St06] R. P. Stanley, Increasing and decreasing subsequences and their variants, Proceedings of the International Congress of Mathematicians, Madrid, Spain, 2006.
[St99] R. P. Stanley, Enumerative Combinatorics, Volume 2 (with an appendix by S. Fomin), Cambridge University Press, 1999.
[St96] R. P. Stanley, Polygon dissections and standard Young tableaux, J. Combin. Theory Ser. A (1996), 175–177.
[ThYo07] H. Thomas and A. Yong, A jeu de taquin theory for increasing tableaux, with applications to K-theoretic Schubert calculus, preprint arXiv:math.CO/0705.2915.
[TrWi01] C. Tracy and H. Widom, On the distributions of the lengths of longest monotone subsequences in random words, Probab. Theory Relat. Fields (2001), 350–380.
[VeKe77] A. M. Vershik and S. V. Kerov, Asymptotic behavior of the Plancherel measure of the symmetric group and the limit form of Young tableaux, Dokl. Akad. Nauk SSSR (1977), 1024–1027; English translation: Soviet Math. Dokl. (1977), 527–531.
[VeKe85] A. M. Vershik and S. V. Kerov, Asymptotic behavior of the maximum and generic dimensions of irreducible representations of the symmetric group, Funktsional. Anal. i Prilozhen. (1985), no. 1, 25–36.

DEPARTMENT OF MATHEMATICS AND STATISTICS, UNIVERSITY OF NEW BRUNSWICK, FREDERICTON, NEW BRUNSWICK, E3B 5A3, CANADA
E-mail address: [email protected]

DEPARTMENT OF MATHEMATICS, UNIVERSITY OF MINNESOTA, MINNEAPOLIS, MN 55455, USA
E-mail address: [email protected]