On the sum of ordered spacings
Lolian Shtembari and Allen Caldwell
Max Planck Institute for Physics, Munich
July 2020
Abstract
We provide the analytic forms of the distributions for the sum of ordered spacings. We do this both for the case where the boundaries are included in the calculation of the spacings and the case where they are excluded. Both the probability densities as well as their cumulatives are provided. These results will have useful applications in the physical sciences and possibly elsewhere.
The use of spacings between ordered real-valued numbers is of interest in many areas of science. In particular, either unnaturally small or large spacings can be a signal of an interesting effect. As particle physicists, we are interested in the appearance of unexpected clustering of values, indicating the presence of a new process, or large gaps between the ordered values, allowing us to set upper limits on the normalization of a distribution.

Order statistics have been studied in great depth in the statistics community, but the work is poorly known in the physics community. This has led to the rediscovery of results long known to the statistics community. An example of the use of spacings between values in the particle physics community is presented by Yellin [1], where the author proposes a method to set a limit on the interaction rate of putative dark matter particles using the size of gaps in the observed energy spectrum of recorded interactions. In this context, a large gap in the energy spectrum implies an upper limit on the interaction strength. The 'Yellin method' has been used by many groups in reporting their results (see for example [2]).

As is normally done when using order statistics, a known distribution with some quantity described as a function of some variable is converted to the uniform distribution $U \in [0, 1]$ via the cumulative of the distribution of interest. The powerful results that can be derived for order statistics on the unit interval can then be applied to the task at hand. The maximal spacing distribution given by Yellin was previously known for at least 75 years [3].

In the following we will introduce some of the main concepts at the root of order statistics. We will then introduce new statistics formed from the sum of ordered spacings. We believe these statistics can be of use in scientific applications and, as far as we know, have not been presented to date. For a more comprehensive and detailed review of the results that have been achieved in the field of order statistics, we recommend the books [4, 5].

Notation and Known Results
Let $\{X_1, X_2, ..., X_N\}$ be a sequence of iid random variables with a uniform distribution in the range $[0, 1]$. The $k$-th order statistic is defined as the $k$-th smallest value of the sample. This means that given the sequence $\{X_1, X_2, ..., X_N\}$ we obtain the order statistics $\{X_{(1)}, X_{(2)}, ..., X_{(N)}\}$:

  X_{(k)} := \text{the $k$-th smallest of } \{X_1, X_2, ..., X_N\} \quad \text{for } k = 1, 2, ..., N    (1)

The distribution of these ordered samples is simply a Beta distribution:

  X_{(k)} \sim \mathrm{Beta}(k, N+1-k)    (2)

and the distribution of the difference of two ordered values is:

  X_{(k)} - X_{(j)} \sim \mathrm{Beta}(k-j, N+1-(k-j))    (3)

Given $\{X_{(1)}, X_{(2)}, ..., X_{(N)}\}$ we are interested in the spacings between the ordered values. To begin with, we will consider an extended set of ordered values, namely including the boundaries of the range of the $X_{(i)}$ themselves: 0 and 1. We define $X_{(0)} = 0$ and $X_{(N+1)} = 1$. We define a spacing $G_i$ as the distance between neighboring values:

  G_i = X_{(i)} - X_{(i-1)} \quad \text{for } i = 1, 2, ..., N+1    (4)

The distribution of any of these is given by Eq. (2) with $k = 1$:

  p(G_i = x) = N(1-x)^{N-1}    (5)

  p(G_i \le x) = 1 - (1-x)^N    (6)

Just as we ordered the set of initial values $\{X_1, X_2, ..., X_N\}$ in order to obtain $\{X_{(0)}, X_{(1)}, ..., X_{(N+1)}\}$, we can order the set $\{G_1, G_2, ..., G_{N+1}\}$ and obtain $\{G_{(1)}, G_{(2)}, ..., G_{(N+1)}\}$, which can be interpreted as the order statistics of the spacings.

The distribution of the smallest spacing $G_{(1)}$, due to R. Fisher [6], is known:

  p(G_{(1)} = x \mid N, 1) = N(N+1)\,[1-(N+1)x]^{N-1}    (7)

as is the general distribution of $G_{(k)}$ given $N$ uniform samples in an interval of length 1 [7]:

  p(G_{(k)} = x \mid N, 1) = N(N+1)\binom{N}{k-1}\sum_{i=1}^{k}(-1)^{k-i}\binom{k-1}{i-1}\,[1-(N+2-i)x]^{N-1}\, H\!\left(x, 0, \frac{1}{N+2-i}\right)    (8)

where $H(x, a, b) = 1$ if $a \le x \le b$ and 0 otherwise. Limits of this distribution have also been evaluated for different combinations of $k$ and $N$ [8, 9].

Given this new set of ordered uniform spacings, an interesting quantity is the sum of the first $k$ smallest or largest $G_{(i)}$. We will derive the probability distributions for these quantities in the following. We denote the sum of the first $k$ minima as:

  s_k = \sum_{i=1}^{k} G_{(i)}    (9)

and the sum of the first $k$ maxima as:

  S_k = \sum_{i=1}^{k} G_{(N+2-i)}    (10)

k = 1

If $k = 1$ then $s_1 = G_{(1)}$, so the distribution of $s_1$ is the same as that of the smallest spacing:

  p(s_1 = s) = (N+1)N\,[1-(N+1)s]^{N-1}    (11)

  p(s_1 \le s) = 1 - [1-(N+1)s]^{N}    (12)

k = 2

In order to get the distribution of $s_2$, it is useful to consider the joint distribution of $(G_{(1)}, s_2)$. Using the product rule:

  p(G_{(1)} = x, s_2 = s \mid N, 1) = p(G_{(1)} = x \mid N, 1)\cdot p(s_2 = s \mid N, 1, G_{(1)} = x)    (13)

In order to derive an expression for $p(s_2 \mid G_{(1)})$ we can consider that, once we have chosen the length of the smallest spacing, by definition all the other spacings need to be longer than or equal to this minimum length. We can then proceed to subtract $G_{(1)}$ from the length of all the other spacings:

  G_{(i)} - G_{(1)} = G^*_{(i-1)} \quad \text{for } i = 2, ..., N+1    (14)

This operation leaves us with a reduced set of spacings (since subtracting $G_{(1)}$ from itself results in 0, we simply discard this element) where the reduced spacings are still sorted in increasing order:

  \{G_{(1)}, ..., G_{(N+1)}\} \rightarrow \{G^*_{(1)}, ..., G^*_{(N)}\}    (15)

and they sum up to:

  \sum_{i=1}^{N} G^*_{(i)} = 1 - (N+1)\,G_{(1)}    (16)

The set $\{G^*_{(1)}, ..., G^*_{(N)}\}$ can be interpreted as the ordered uniform spacings determined by sampling $N-1$ values in an interval of length $1-(N+1)G_{(1)}$. Given this rearrangement, we can express the sum of $k$ minima using this new set of spacings:

  s^*_{k-1} = \sum_{i=1}^{k-1} G^*_{(i)} = \sum_{i=2}^{k}\left(G_{(i)} - G_{(1)}\right) = s_k - k\,G_{(1)}    (17)

This allows us to rewrite the conditional distribution of $s_2$ as:

  p(s_2 = s \mid N, 1, G_{(1)} = x) = p(s^*_1 = s - 2x \mid N-1, 1-(N+1)x) = \frac{1}{1-(N+1)x}\,N(N-1)\left[1 - \frac{N(s-2x)}{1-(N+1)x}\right]^{N-2}    (18)

Putting Eqs. (11), (13) and (18) together we obtain:

  p(G_{(1)} = x, s_2 = s) = N^2(N+1)(N-1)\,[1-(N+1)x]^{N-2}\left[1-\frac{N(s-2x)}{1-(N+1)x}\right]^{N-2}\left[1-(N+1)x\right] \cdot \frac{1}{1-(N+1)x} = N^2(N+1)(N-1)\,[1+(N-1)x-Ns]^{N-2}    (19)

The support of $s_2$ is $[0, 2/(N+1)]$ and the support of $s^*_1$ (with $N-1$ values) is $[0, 1/N]$; thus the joint distribution is bound within a triangle, as shown in Fig. 1.

Figure 1: Support of the joint distribution $p(G_{(1)}, s_2)$: the triangle with vertices $(0,0)$, $(1/(N+1), 2/(N+1))$ and $(0, 1/N)$.

Marginalizing over $G_{(1)}$ we get the distribution of $s_2$:

  p(s_2 = s) = \int_0^{s/2} N^2(N+1)(N-1)\,[1+(N-1)x-Ns]^{N-2}\,dx \quad \text{for } 0 \le s \le \tfrac{1}{N}

             = \int_{\frac{Ns-1}{N-1}}^{s/2} N^2(N+1)(N-1)\,[1+(N-1)x-Ns]^{N-2}\,dx \quad \text{for } \tfrac{1}{N} \le s \le \tfrac{2}{N+1}

which gives:

  p(s_2 = s) = \frac{N^2(N+1)}{N-1}\left(\left[1-\frac{N+1}{2}\,s\right]^{N-1} - \left[1-Ns\right]^{N-1}\right) \quad \text{for } 0 \le s \le \tfrac{1}{N}

             = \frac{N^2(N+1)}{N-1}\left[1-\frac{N+1}{2}\,s\right]^{N-1} \quad \text{for } \tfrac{1}{N} \le s \le \tfrac{2}{N+1}    (20)
Generic k

So far we have explicitly derived the distributions for the sum of the first two smallest spacings. We are led to make a hypothesis regarding the general distribution of $s_k$:

  p(s_k = s \mid N, 1) = A(k, N)\sum_{i=1}^{k} a(i, k)\left[1-\frac{N+2-i}{k+1-i}\,s\right]^{N-1} H\!\left(s, 0, \frac{k+1-i}{N+2-i}\right)    (21)

where the coefficients $A(k, N)$ and $a(i, k)$ are given by:

  A(k, N) = \frac{N\,(N+1)!\,(N+1-k)^{1-k}}{(N+1-k)!}    (22)

  a(i, k) = \frac{(-1)^{i-1}\,(k+1-i)^{k-2}}{(k-i)!\,(i-1)!}    (23)

This hypothesis reproduces the explicit results derived for the first two smallest spacings, and a proof by induction is presented in the Appendix. The cumulative density function is:

  p(s_k \le s \mid N, 1) = A(k, N)\sum_{i=1}^{k}\int_0^{\min\left(s,\, \frac{k+1-i}{N+2-i}\right)} a(i, k)\left[1-\frac{N+2-i}{k+1-i}\,x\right]^{N-1} dx

  = \frac{A(k, N)}{N}\sum_{i=1}^{k}\frac{a(i, k)\,(k+1-i)}{N+2-i}\left(1-\left[1-\frac{N+2-i}{k+1-i}\,s\right]^{N} H\!\left(s, 0, \frac{k+1-i}{N+2-i}\right)\right)    (24)

The distributions that we have derived are plotted in Fig. 2 for different choices of $N$ and $k$.

Since the sum of all the spacings is 1, knowing the sum of the first $k$ smallest spacings allows us to know the value of the sum of the largest $(N+1-k)$ spacings, i.e., we have:

  S_k = \sum_{i=1}^{k} G_{(N+2-i)} = 1 - \sum_{i=1}^{N+1-k} G_{(i)} = 1 - s_{N+1-k}    (25)
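Because Eqs. (21)-(24) are rational in $s$, they can be tested exactly. The sketch below (our own check, not part of the original text) evaluates the cumulative of Eq. (24) with exact rational arithmetic at the upper end of the support, $s = k/(N+1)$, where it must equal 1:

```python
# Exact normalization check of Eqs. (21)-(24) using rational arithmetic.
from fractions import Fraction
from math import factorial

def A(k, N):
    """Eq. (22)."""
    return Fraction(N * factorial(N + 1), factorial(N + 1 - k)) * Fraction(N + 1 - k) ** (1 - k)

def a(i, k):
    """Eq. (23)."""
    return Fraction((-1) ** (i - 1), factorial(k - i) * factorial(i - 1)) * Fraction(k + 1 - i) ** (k - 2)

def cdf_sk(s, k, N):
    """Eq. (24): P(s_k <= s), boundaries included."""
    tot = Fraction(0)
    for i in range(1, k + 1):
        sup = Fraction(k + 1 - i, N + 2 - i)      # upper support of term i
        brk = (1 - Fraction(N + 2 - i, k + 1 - i) * s) ** N if s <= sup else 0
        tot += a(i, k) * Fraction(k + 1 - i, N + 2 - i) * (1 - brk)
    return A(k, N) / N * tot

for N in range(2, 8):
    for k in range(1, N + 1):
        assert cdf_sk(Fraction(k, N + 1), k, N) == 1
        assert cdf_sk(Fraction(0), k, N) == 0
print("Eq. (24) normalizes exactly for all tested N, k")
```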
Figure 2: Left: probability distributions for $s_k$, the sum of the $k$ smallest ordered uniform spacings, for different combinations of $N$ and $k$. Right: the cumulative probability distribution for $s_k$ for the same choices of $N$ and $k$.

This implies that:

  p(S_k = s) = p(s_{N+1-k} = 1-s)    (26)

  p(S_k = s \mid N, 1) = A(N+1-k, N)\sum_{i=1}^{N+1-k} a(i, N+1-k)\left[\frac{s(N+2-i)-k}{N+2-k-i}\right]^{N-1} H\!\left(s, \frac{k}{N+2-i}, 1\right)    (27)

  p(S_k \ge s \mid N, 1) = \frac{A(N+1-k, N)}{N}\sum_{i=1}^{N+1-k}\frac{a(i, N+1-k)\,(N+2-k-i)}{N+2-i}\left(1-\left[\frac{s(N+2-i)-k}{N+2-k-i}\right]^{N} H\!\left(s, \frac{k}{N+2-i}, 1\right)\right)    (28)

where the coefficients $A$ and $a$ are the same as in Eqs. (22)-(23). The distributions that we have derived are plotted in Fig. 3 for different choices of $N$ and $k$.

So far we have included the boundaries, 0 and 1, in the list of values $X_i$, considering them as de-facto data. This means that in all previous derivations we considered an effective population of $N+2$ values, where $X_{(N+1)} - X_{(0)} = 1$ and the remaining $N$ values determine the spacings.
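The reflection of Eq. (26) can itself be verified exactly: Eq. (27) evaluated at a rational point $s$ must reproduce Eq. (21) at $1-s$. A small self-contained check (our own illustration):

```python
# Exact cross-check of Eq. (27) against Eq. (21) through S_k = 1 - s_{N+1-k}.
from fractions import Fraction
from math import factorial

def A(k, N):
    return Fraction(N * factorial(N + 1), factorial(N + 1 - k)) * Fraction(N + 1 - k) ** (1 - k)

def a(i, k):
    return Fraction((-1) ** (i - 1), factorial(k - i) * factorial(i - 1)) * Fraction(k + 1 - i) ** (k - 2)

def pdf_sk(s, k, N):
    """Eq. (21): density of the sum of the k smallest spacings."""
    tot = Fraction(0)
    for i in range(1, k + 1):
        if 0 <= s <= Fraction(k + 1 - i, N + 2 - i):
            tot += a(i, k) * (1 - Fraction(N + 2 - i, k + 1 - i) * s) ** (N - 1)
    return A(k, N) * tot

def pdf_Sk(s, k, N):
    """Eq. (27): density of the sum of the k largest spacings."""
    m = N + 1 - k
    tot = Fraction(0)
    for i in range(1, m + 1):
        if Fraction(k, N + 2 - i) <= s <= 1:
            tot += a(i, m) * ((s * (N + 2 - i) - k) / (N + 2 - k - i)) ** (N - 1)
    return A(m, N) * tot

for N in (4, 6, 9):
    for k in (1, 2, 3):
        for s in (Fraction(1, 3), Fraction(1, 2), Fraction(7, 10)):
            assert pdf_Sk(s, k, N) == pdf_sk(1 - s, N + 1 - k, N)
print("Eq. (27) agrees exactly with the reflection of Eq. (21)")
```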
Figure 3: Left: probability distributions for $S_k$, the sum of the $k$ largest ordered uniform spacings, for different combinations of $N$ and $k$. Right: the cumulative probability distribution for $S_k$ for the same choices of $N$ and $k$.

This approach introduces some possibly unwelcome artifacts in the analysis of spacings data. E.g., what happens if we shift all the inner $N$ values slightly towards one of the boundaries? If we were to perform this change, then:

  \{X_{(1)}, ..., X_{(N)}\} \rightarrow \{X_{(1)} \pm \epsilon, ..., X_{(N)} \pm \epsilon\} \implies G_1 \rightarrow G_1 \pm \epsilon,\; G_{N+1} \rightarrow G_{N+1} \mp \epsilon    (29)

Depending on the application, it is possible that one is interested only in the spacings between the inner $N$ values, without considering how close this set is to either boundary. We therefore would want the distributions that do not include the boundaries.

In this scenario we will derive the spacing statistics of only the inner spacings $\{G_2, ..., G_N\}$. In this new scenario, $X_{(1)}$ and $X_{(N)}$ become the new boundaries, just as $X_{(0)} = 0$ and $X_{(N+1)} = 1$ were the boundaries in the previous case:

  \{X_{(1)}, ..., X_{(N)}\} \rightarrow \{X^f_{(0)}, ..., X^f_{(N-1)}\} \rightarrow \{G^f_1, ..., G^f_{N-1}\} \rightarrow \{G^f_{(1)}, ..., G^f_{(N-1)}\}    (30)

This means that, given $N$ values in the no-boundary scenario, we have an effective population of $N-2$ inner values and an interval of length $X_{(N)} - X_{(1)} = X^f_{(N-1)} - X^f_{(0)} = \mu$. Given $\mu$, we can reuse the distribution of any quantity we have studied in the presence of the boundaries, decreasing the number of spacings from $N+1$ to $N-1$ and rescaling the interval length to $\mu$ by means of the change of variable rule. Looking at $\mu$, we notice that it is none other than the spacing between the extremes of the ordered values, and its distribution is given by Eq. (3): $\mu \sim \mathrm{Beta}(N-1, 2)$.

Given $N$ values, a quantity of interest $A$ and its distribution with boundaries $p_{w.b.}(A = x \mid N, \mu)$, the distribution of $A$ without boundaries is:

  p_{n.b.}(A = x \mid N, 1) = \int_0^1 p(\mu)\, p_{w.b.}(A = x \mid N-2, \mu)\, d\mu = \int_0^1 N(N-1)\,\mu^{N-2}(1-\mu)\cdot\frac{1}{\mu}\, p_{w.b.}\!\left(A = \frac{x}{\mu} \,\Big|\, N-2, 1\right) d\mu    (31)

k-th smallest spacing

The distribution of the $k$-th ordered uniform spacing is given in Eq. (8). Using Eq. (31) we can get the distribution of the $k$-th ordered uniform spacing without boundaries:

  p_{n.b.}(G_{(k)} = x \mid N, 1) = N(N-1)^2(N-2)\binom{N-2}{k-1}\sum_{i=1}^{k}(-1)^{k-i}\binom{k-1}{i-1}\int_{(N-i)x}^{1}(1-\mu)\,\left[\mu-(N-i)x\right]^{N-3} d\mu

  = N(N-1)\binom{N-2}{k-1}\sum_{i=1}^{k}(-1)^{k-i}\binom{k-1}{i-1}\,[1-(N-i)x]^{N-1}\, H\!\left(x, 0, \frac{1}{N-i}\right)    (32)

  p_{n.b.}(G_{(k)} \le x \mid N, 1) = (N-1)\binom{N-2}{k-1}\sum_{i=1}^{k}\frac{(-1)^{k-i}}{N-i}\binom{k-1}{i-1}\left(1-[1-(N-i)x]^{N}\, H\!\left(x, 0, \frac{1}{N-i}\right)\right)    (33)

Examples of the resulting distributions are shown in Fig. 4.

Sum of the k smallest spacings

For the sum of the first $k$ uniform ordered spacings we get:

  p_{n.b.}(s_k = s \mid N, 1) = N(N-1)\,A(k, N-2)\sum_{i=1}^{k} a(i, k)\int_{\frac{N-i}{k+1-i}s}^{1}(1-\mu)\left[\mu-\frac{N-i}{k+1-i}\,s\right]^{N-3} d\mu

  = \frac{N}{N-2}\,A(k, N-2)\sum_{i=1}^{k} a(i, k)\left[1-\frac{N-i}{k+1-i}\,s\right]^{N-1} H\!\left(s, 0, \frac{k+1-i}{N-i}\right)    (34)
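The change of variable behind Eqs. (30)-(31) can be made concrete with a short simulation sketch (our own illustration): the inner spacings of $N$ uniform values are exactly $\mu$ times the with-boundary spacings of the $N-2$ inner points rescaled onto the unit interval.

```python
# Structural check of the rescaling behind Eqs. (30)-(31): inner spacings of
# N uniform values equal mu times the with-boundary spacings of the N-2 inner
# points mapped onto [0, 1].
import random

random.seed(7)
N = 9
x = sorted(random.random() for _ in range(N))
mu = x[-1] - x[0]                       # mu = X_(N) - X_(1), distributed as Beta(N-1, 2) by Eq. (3)

inner = [b - a for a, b in zip(x, x[1:])]            # N-1 spacings, no boundaries

y = [(v - x[0]) / mu for v in x[1:-1]]               # N-2 inner points rescaled to (0, 1)
wb = [b - a for a, b in zip([0.0] + y, y + [1.0])]   # their N-1 spacings, with boundaries

assert all(abs(g - mu * w) < 1e-12 for g, w in zip(inner, wb))
assert abs(sum(inner) - mu) < 1e-12
print("inner spacings = mu x with-boundary spacings of the rescaled sample")
```

This is exactly why every no-boundary distribution above is a with-boundary distribution with $N-2$ values, rescaled by $\mu$ and averaged over $p(\mu)$.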
Figure 4: Left: probability distributions for $G_{(k)}$, the $k$-th smallest spacing, for different combinations of $N$ and $k$, for the case where the boundaries are not included in the definition of the spacings. Right: the cumulative probability distribution for $G_{(k)}$ for the same choices of $N$ and $k$.

  p_{n.b.}(s_k \le s \mid N, 1) = \frac{A(k, N-2)}{N-2}\sum_{i=1}^{k}\frac{a(i, k)\,(k+1-i)}{N-i}\left(1-\left[1-\frac{N-i}{k+1-i}\,s\right]^{N} H\!\left(s, 0, \frac{k+1-i}{N-i}\right)\right)    (35)

Our expressions are displayed in Fig. 5.

Sum of the k largest spacings

For the sum of the last $k$ uniform ordered spacings we get:

  p_{n.b.}(S_k = s \mid N, 1) = N(N-1)\,A(N-1-k, N-2)\sum_{i=1}^{N-1-k} a(i, N-1-k)\int_{s}^{\min\left(1,\, \frac{N-i}{k}s\right)}(1-\mu)\left[\frac{s(N-i)-k\mu}{N-k-i}\right]^{N-3} d\mu

  = \frac{N\,A(N-1-k, N-2)}{k^2(N-2)}\sum_{i=1}^{N-1-k} a(i, N-1-k)\,(N-k-i)\left(s^{N-2}\left[k(N-1)-s\left(N(k+1)-2k-i\right)\right] + \frac{\left[s(N-i)-k\right]^{N-1}}{(N-k-i)^{N-2}}\, H\!\left(s, \frac{k}{N-i}, 1\right)\right)    (36)
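Eq. (36) is easy to get wrong in transcription, so a numerical normalization check is useful. The sketch below (our own check, with arbitrary test values of $N$ and $k$) integrates the density over $[0, 1]$:

```python
# Numerical normalization check of Eq. (36), the no-boundary density of S_k.
from math import factorial

def A(k, N):
    """Eq. (22), in floating point."""
    return N * factorial(N + 1) * (N + 1 - k) ** (1 - k) / factorial(N + 1 - k)

def a(i, k):
    """Eq. (23), in floating point."""
    return (-1) ** (i - 1) * (k + 1 - i) ** (k - 2) / (factorial(k - i) * factorial(i - 1))

def pdf_Sk_nb(s, k, N):
    """Eq. (36): density of the sum of the k largest spacings, no boundaries."""
    m = N - 1 - k
    tot = 0.0
    for i in range(1, m + 1):
        poly = s ** (N - 2) * (k * (N - 1) - s * (N * (k + 1) - 2 * k - i))
        tail = 0.0
        if s >= k / (N - i):
            tail = (s * (N - i) - k) ** (N - 1) / (N - k - i) ** (N - 2)
        tot += a(i, m) * (N - k - i) * (poly + tail)
    return N * A(m, N - 2) / (k ** 2 * (N - 2)) * tot

areas = {}
steps = 200_000
h = 1.0 / steps
for N, k in [(4, 1), (5, 2), (6, 3)]:
    grid = [pdf_Sk_nb(j * h, k, N) for j in range(steps + 1)]
    areas[(N, k)] = h * (sum(grid) - 0.5 * (grid[0] + grid[-1]))

print(areas)   # each value should be close to 1
```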
Figure 5: Left: probability distributions for $s_k$, the sum of the $k$ smallest ordered uniform spacings, for different combinations of $N$ and $k$, for the case where the boundaries are not included in the definition of the spacings. Right: the cumulative probability distribution for $s_k$ for the same choices of $N$ and $k$.

  p_{n.b.}(S_k \le s \mid N, 1) = \frac{N\,A(N-1-k, N-2)}{k^2(N-2)}\sum_{i=1}^{N-1-k} a(i, N-1-k)\,(N-k-i)\left(s^{N-1}\left[k-\frac{s\left(N(k+1)-2k-i\right)}{N}\right] + \frac{\left[s(N-i)-k\right]^{N}}{N(N-i)\,(N-k-i)^{N-2}}\, H\!\left(s, \frac{k}{N-i}, 1\right)\right)    (37)

Examples of these are displayed in Fig. 6.

We have derived the probability distributions of the sums of either the smallest or the largest $k$ ordered uniform spacings. We did this both for the case where the boundaries are included in the analysis and when only the observed values are used. These quantities can be very useful when analysing a sequence of results in a particle physics context. They can either give an indication for the presence of an unexpected source via a clustering of values, or they can be used to set an upper limit on the normalization of a spectrum.

We note that in many experimental scenarios the number of observed values is itself a random variable. The distributions we have derived can then be convoluted with the expected distribution for the random number of observed values. We are currently developing novel test statistics for this purpose for use in particle physics data analysis.
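As an illustration of the convolution just mentioned (our own sketch: a Poisson-distributed number of observations, truncated to $N \ge k$, is an assumption for this example and is not prescribed by the text), the cumulative of $s_k$ can be averaged over the distribution of $N$:

```python
# Sketch: marginalizing the cumulative of s_k (Eq. 24) over a fluctuating
# number of observed values N, here assumed Poisson and truncated to N >= k.
from math import exp, factorial

def A(k, N):
    return N * factorial(N + 1) * (N + 1 - k) ** (1 - k) / factorial(N + 1 - k)

def a(i, k):
    return (-1) ** (i - 1) * (k + 1 - i) ** (k - 2) / (factorial(k - i) * factorial(i - 1))

def cdf_sk(s, k, N):
    """Eq. (24): P(s_k <= s), boundaries included."""
    tot = 0.0
    for i in range(1, k + 1):
        sup = (k + 1 - i) / (N + 2 - i)
        brk = (1 - (N + 2 - i) / (k + 1 - i) * s) ** N if s <= sup else 0.0
        tot += a(i, k) * (k + 1 - i) / (N + 2 - i) * (1 - brk)
    return A(k, N) / N * tot

def marginal_cdf(s, k, lam, nmax=60):
    """P(s_k <= s) with N ~ Poisson(lam), conditioned on N >= k."""
    w = [exp(-lam) * lam ** n / factorial(n) for n in range(k, nmax)]
    z = sum(w)
    return sum(wi * cdf_sk(s, k, n) for wi, n in zip(w, range(k, nmax))) / z

p = marginal_cdf(0.2, 2, 8.0)
print(p)                                   # a proper probability in (0, 1)
```

The hypothetical rate parameter `lam` and the truncation are illustrative choices; the point is only that the closed forms above can be mixed over any model for $N$.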
Figure 6: Left: probability distributions for $S_k$, the sum of the $k$ largest ordered uniform spacings, for different combinations of $N$ and $k$, for the case where the boundaries are not included in the definition of the spacings. Right: the cumulative probability distribution for $S_k$ for the same choices of $N$ and $k$.

References

[1] S. Yellin. "Finding an upper limit in the presence of an unknown background". In: Physical Review D 66 (2002), p. 032005.
[2] A. H. Abdelhameed et al. (CRESST Collaboration). "First results from the CRESST-III low-mass dark matter program". In: Physical Review D 100.10 (2019), p. 102002.
[3] R. Pincus. "Distribution of the Maximal Gap in a Sample and its Application for Outlier Detection". In: Rasch D., Tiku M.L. (eds), Robustness of Statistical Methods and Nonparametric Statistics. Theory and Decision Library.
[4] H. A. David and H. N. Nagaraja. Order Statistics. Wiley, 2003.
[5] B. C. Arnold, N. Balakrishnan and H. N. Nagaraja. A First Course in Order Statistics. Society for Industrial and Applied Mathematics, 2008.
[6] R. A. Fisher. "Tests of significance in harmonic analysis". In: Proc. R. Soc. Lond. A 125 (1929), pp. 54-59.
[7] W. Feller. An Introduction to Probability Theory and Its Applications. Vol. 2. Wiley, 1966.
[8] L. Holst. "On the Lengths of the Pieces of a Stick Broken at Random". In: Journal of Applied Probability 17 (1980), pp. 623-634.
[9] In: Statistical Papers.

Appendix
In the previous sections we have shown that Eq. (21) is valid for $s_1$ and $s_2$. We now show its validity going from $k-1$ to $k$ via induction. We start from the joint distribution of $s_k$ and $G_{(1)}$:

  p(s_k = s, G_{(1)} = x \mid N, 1) = p(G_{(1)} = x \mid N, 1)\, p(s^*_{k-1} = s - kx \mid N-1, 1-(N+1)x)

  = p(G_{(1)} = x \mid N, 1)\,\frac{1}{1-(N+1)x}\, p\!\left(s^*_{k-1} = \frac{s-kx}{1-(N+1)x} \,\Big|\, N-1, 1\right)

  = N(N+1)\,[1-(N+1)x]^{N-1}\,\frac{A(k-1, N-1)}{1-(N+1)x}\sum_{i=1}^{k-1} a(i, k-1)\left[1-\frac{N+1-i}{k-i}\cdot\frac{s-kx}{1-(N+1)x}\right]^{N-2} H\!\left(\frac{s-kx}{1-(N+1)x}, 0, \frac{k-i}{N+1-i}\right)

  = N(N+1)\,A(k-1, N-1)\sum_{i=1}^{k-1} a(i, k-1)\left[1 + x\,\frac{i(N+1-k)}{k-i} - s\,\frac{N+1-i}{k-i}\right]^{N-2} H\!\left(\frac{s-kx}{1-(N+1)x}, 0, \frac{k-i}{N+1-i}\right)    (38)

Marginalizing over $G_{(1)}$ we have:

  p(s_k = s \mid N, 1) = \int_0^{s/k} p(s_k = s, G_{(1)} = x \mid N, 1)\, dx

  = N(N+1)\,A(k-1, N-1)\sum_{i=1}^{k-1}\int_{\max\left(0,\, \frac{s(N+1-i)-k+i}{i(N+1-k)}\right)}^{s/k} a(i, k-1)\left[1 + x\,\frac{i(N+1-k)}{k-i} - s\,\frac{N+1-i}{k-i}\right]^{N-2} dx

  = \frac{N(N+1)}{(N-1)(N+1-k)}\,A(k-1, N-1)\sum_{i=1}^{k-1} a(i, k-1)\,\frac{k-i}{i}\left(\left[1-\frac{N+1}{k}\,s\right]^{N-1} - \left[1-\frac{N+1-i}{k-i}\,s\right]^{N-1} H\!\left(s, 0, \frac{k-i}{N+1-i}\right)\right)

  = \frac{N(N+1)}{(N-1)(N+1-k)}\,A(k-1, N-1)\left(\left(\sum_{i=1}^{k-1} a(i, k-1)\,\frac{k-i}{i}\right)\left[1-\frac{N+1}{k}\,s\right]^{N-1} - \sum_{i=2}^{k} a(i-1, k-1)\,\frac{k+1-i}{i-1}\left[1-\frac{N+2-i}{k+1-i}\,s\right]^{N-1} H\!\left(s, 0, \frac{k+1-i}{N+2-i}\right)\right)    (39)

Looking back at Eq. (22) we notice that:

  \frac{N(N+1)}{(N-1)(N+1-k)}\,A(k-1, N-1) = \frac{N(N+1)}{(N-1)(N+1-k)}\cdot\frac{(N-1)\,N!\,(N+1-k)^{2-k}}{(N+1-k)!} = \frac{N\,(N+1)!\,(N+1-k)^{1-k}}{(N+1-k)!} = A(k, N)    (40)

and

  -a(i-1, k-1)\cdot\frac{k+1-i}{i-1} = -\frac{(-1)^{i-2}\,(k+1-i)^{k-3}}{(k-i)!\,(i-2)!}\cdot\frac{k+1-i}{i-1} = \frac{(-1)^{i-1}\,(k+1-i)^{k-2}}{(k-i)!\,(i-1)!} = a(i, k) \quad \text{for } 2 \le i \le k    (41)

The result of Eq. (41) implies a recursion formula for the coefficients, $a(i, k) = f[a(i-1, k-1)]$, which connects $a(i, k)$ to $a(1, k+1-i)$:

  a(i, k) = \frac{(-1)^{i-1}\,(k+1-i)^{i-1}}{(i-1)!}\,a(1, k+1-i)    (42)

so that:

  \sum_{i=1}^{k-1} a(i, k-1)\,\frac{k-i}{i} = \sum_{i=1}^{k-1}(-1)^{i-1}\,\frac{(k-i)^{i}}{i!}\,a(1, k-i)    (43)

In order for Eq. (39) to satisfy our hypothesis, we need that:

  \sum_{i=1}^{k-1} a(i, k-1)\,\frac{k-i}{i} = a(1, k)    (44)

Putting together Eq. (43) and Eq. (44) we find a recursion rule for the coefficients $a(1, k)$. Repeatedly applying this recursion collapses the alternating sum of Eq. (43) term by term until only the trivial coefficient remains:

  \sum_{i=1}^{k-1}(-1)^{i-1}\,\frac{(k-i)^{i}}{i!}\,a(1, k-i) = \cdots = \frac{k^{k-2}}{(k-1)!}\,a(1, 1) = a(1, k)

which completes the induction.
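The two coefficient identities that carry the induction, Eqs. (41) and (44), can be verified exactly with rational arithmetic (our own check, not part of the original proof):

```python
# Exact verification of the coefficient identities used in the induction:
# the recursion of Eq. (41) and the sum rule of Eq. (44).
from fractions import Fraction
from math import factorial

def a(i, k):
    """Eq. (23)."""
    return Fraction((-1) ** (i - 1), factorial(k - i) * factorial(i - 1)) * Fraction(k + 1 - i) ** (k - 2)

for k in range(2, 12):
    # Eq. (41): a(i, k) = -a(i-1, k-1) * (k+1-i) / (i-1)
    for i in range(2, k + 1):
        assert a(i, k) == -a(i - 1, k - 1) * Fraction(k + 1 - i, i - 1)
    # Eq. (44): sum_{i=1}^{k-1} a(i, k-1) (k-i)/i = a(1, k)
    assert sum(a(i, k - 1) * Fraction(k - i, i) for i in range(1, k)) == a(1, k)

print("coefficient recursion (41) and sum rule (44) hold exactly for k < 12")
```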